Planet GNU

Aggregation of development blogs from the GNU Project

August 12, 2020

Christopher Allan Webber

Terminal Phase in Linux Magazine (Polish edition)

Terminal Phase featured in Polish version of Linux Magazine

Hey look at that! My terminal-space-shooter-game Terminal Phase made an appearance in the Polish version of Linux Magazine. I had no idea, but Michal Majchrzak both tipped me off to it and took the pictures. (Thank you!)

I don't know Polish but I can see some references to Konami and SHMUP (shoot-em-up game). The screenshot they have isn't the one I published, so I guess the author got it running too... I hope they had fun!

Apparently it appeared in the June 2020 edition:

June 2020 edition of Polish Magazine

I guess because print media coverage is rarer these days, it somehow feels cooler to get covered in it?

I wonder if I can find a copy somewhere!

12 August, 2020 05:54PM by Christopher Lemmer Webber

August 07, 2020

FSF Blogs

The FSF's approach to using online videos for advocacy

A consistent bit of feedback we hear from both current and potential free software supporters is: do better at using video to communicate the importance of free software philosophy. If we aim to make free software a "kitchen table" issue, it is imperative we reach new audiences and make our points clearly, in formats that successfully engage people with limited time, across a diverse set of learning styles. From a technical perspective, this means reaching them where they are -- or more specifically -- on whatever device they are using at the moment.

Unfortunately, many commonly used devices, such as the iPhone, do not support the video and audio formats we prefer to use in the world of free software. Apple's iron grip on the device prevents all but technically advanced users from installing the software necessary to play these formats, among them Ogg Vorbis, FLAC, and WebM. The Free Software Foundation (FSF) and other free software activists advocate for these formats due to the danger posed by software patents, a pernicious legal invention that casts a dark cloud over all software development. Software patents make it possible for patent holders who can state their case well enough to bring claims against any piece of free software. This alone puts developers at risk. One doesn't even need to hold a valid patent to threaten action: if the developer lacks the funds to defend themselves, an absurd patent claim can be just as dangerous.

In contrast to this, some authors of common formats choose to freely license any potential patent claims along with all other aspects of their project. Groups like these are intentionally helping to create the world we want to live in, and they are worthy of our support.

While we must continue campaigning against Apple and other companies for their support of software patents and insistence on trying to control users, we can't do that nearly as effectively if users of those platforms can't hear us. If we support only the formats listed above, and not also codecs such as Advanced Video Coding (commonly called H.264), we run the risk of reaching only those who already know about free software.

To make it possible for users new to free software to watch the videos we make about free software, we've set up a "fallback" system for our embedded video player. Formats like WebM and Ogg Theora are preferred, but if these are not supported by the device, a file encoded in H.264 is played instead. Thus, without signing any agreement to "buy" or otherwise attain any supposed patent license, we can make sure that these users can access our materials. Ideally, these videos will motivate them to move to a device or operating system that respects their freedom. This brings one more person into the "free world," moving us closer to eliminating software patents and proprietary software altogether.

Although formats preferred by the free software community are now much more widespread than they were when we launched the PlayOgg campaign, most media sharing sites require you to use nonfree software (often in the form of JavaScript) or agree to ethically unacceptable terms of service (often claiming to prohibit you from writing your own software similar to that running the service). True to our principles, the FSF self-hosts all of its media, supports decentralization and federation, and never requires nonfree JavaScript. The FSF will continue campaigning against software patents, and will always ensure that our materials can be viewed by systems that exclusively run free software.

07 August, 2020 06:49PM

libredwg @ Savannah

libredwg-0.11 released

New features:
  * new programs dwgfilter, dxfwrite.
    dwgfilter allows custom jq queries.
    dxfwrite allows version conversions, which dwgwrite does not yet support.
  * Can now read all 2004+ section types: added AppInfo, FileDepList,
    Template, ObjFreeSpace;
    and as blob: RevHistory, Security, AppInfoHistory.
    The AcDsPrototype_1b datastore is not fully supported yet, so we cannot
    yet reliably read the new ACIS r2013+ SAB blobs stored there, but we
    extract them from the AcDs blob by brute force.
  * Added new string types: T and T16, TU16, T32 (for those sections)
  * Convert ACIS BinaryFile v2 SAB to old encrypted ACIS SAT v1 data, needed
    to convert pre-r2013 ACIS v2 entities to DXF.
  * Added support for many object/entity types:

    Now stable: ACSH_BOOLEAN_CLASS ACSH_BOX_CLASS ACSH_CYLINDER_CLASS
    ACSH_FILLET_CLASS ACSH_SPHERE_CLASS ACSH_WEDGE_CLASS LIGHT MESH
    CELLSTYLEMAP DETAILVIEWSTYLE DYNAMICBLOCKPURGEPREVENTER INDEX
    GEODATA LAYERFILTER MULTILEADER PLOTSETTINGS SECTION_MANAGER
    SECTIONOBJECT SECTIONVIEWSTYLE VBA_PROJECT VISUALSTYLE.
    and some Dynblocks: BLOCKGRIPLOCATIONCOMPONENT BLOCKBASEPOINTPARAMETER
    BLOCKFLIPACTION BLOCKFLIPPARAMETER BLOCKFLIPGRIP BLOCKLINEARGRIP
    BLOCKMOVEACTION BLOCKROTATEACTION BLOCKSCALEACTION
    BLOCKVISIBILITYGRIP

    New unstable: ACSH_BREP_CLASS ACSH_CHAMFER_CLASS ACSH_CONE_CLASS
    ACSH_PYRAMID_CLASS ACSH_TORUS_CLASS ARC_DIMENSION ASSOCACTION
    ASSOCBLENDSURFACEACTIONBODY ASSOCEXTENDSURFACEACTIONBODY
    ASSOCEXTRUDEDSURFACEACTIONBODY ASSOCFILLETSURFACEACTIONBODY
    ASSOCGEOMDEPENDENCY ASSOCLOFTEDSURFACEACTIONBODY ASSOCNETWORK
    ASSOCNETWORKSURFACEACTIONBODY ASSOCOFFSETSURFACEACTIONBODY
    ASSOCPATCHSURFACEACTIONBODY ASSOCREVOLVEDSURFACEACTIONBODY
    ASSOCTRIMSURFACEACTIONBODY ASSOCVALUEDEPENDENCY BACKGROUND
    BLOCKLINEARPARAMETER BLOCKLOOKUPGRIP BLOCKROTATIONPARAMETER
    BLOCKXYPARAMETER BLOCKVISIBILITYPARAMETER HELIX
    LARGE_RADIAL_DIMENSION LIGHTLIST MATERIAL MENTALRAYRENDERSETTINGS
    RAPIDRTRENDERSETTINGS RENDERSETTINGS SECTION_SETTINGS
    SPATIAL_INDEX SUN TABLESTYLE.

    Fixed PROXY_OBJECT, PROXY_ENTITY.
    Demoted to Unstable: SPATIAL_INDEX
    Demoted to Debugging: PERSSUBENTMANAGER DIMASSOC

    Note: unstable objects are not preserved via DXF conversion; only
    the external import is supported.

    Add most Constraint (ASSOC*) and DYNBLOCK objects (BLOCK*).
    Debugging classes added (needs --with-debug option):

    ACMECOMMANDHISTORY ACMESCOPE ACMESTATEMGR ACSH_EXTRUSION_CLASS
    ACSH_HISTORY_CLASS ACSH_LOFT_CLASS ACSH_REVOLVE_CLASS
    ACSH_SWEEP_CLASS ALDIMOBJECTCONTEXTDATA ALIGNMENTPARAMETERENTITY
    ANGDIMOBJECTCONTEXTDATA ANNOTSCALEOBJECTCONTEXTDATA
    ASSOC3POINTANGULARDIMACTIONBODY ASSOCACTIONPARAM
    ASSOCARRAYACTIONBODY ASSOCARRAYMODIFYACTIONBODY
    ASSOCARRAYMODIFYPARAMETERS ASSOCARRAYPATHPARAMETERS
    ASSOCARRAYPOLARPARAMETERS ASSOCARRAYRECTANGULARPARAMETERS
    ASSOCASMBODYACTIONPARAM ASSOCCOMPOUNDACTIONPARAM
    ASSOCDIMDEPENDENCYBODY ASSOCEDGEACTIONPARAM
    ASSOCEDGECHAMFERACTIONBODY ASSOCEDGEFILLETACTIONBODY
    ASSOCFACEACTIONPARAM ASSOCMLEADERACTIONBODY ASSOCOBJECTACTIONPARAM
    ASSOCORDINATEDIMACTIONBODY ASSOCOSNAPPOINTREFACTIONPARAM
    ASSOCPATHACTIONPARAM ASSOCPOINTREFACTIONPARAM
    ASSOCRESTOREENTITYSTATEACTIONBODY ASSOCROTATEDDIMACTIONBODY
    ASSOCSWEPTSURFACEACTIONBODY ASSOCVARIABLE ASSOCVERTEXACTIONPARAM
    ATEXT BASEPOINTPARAMETERENTITY BLKREFOBJECTCONTEXTDATA
    BLOCKALIGNEDCONSTRAINTPARAMETER BLOCKALIGNMENTGRIP
    BLOCKALIGNMENTPARAMETER BLOCKANGULARCONSTRAINTPARAMETER
    BLOCKARRAYACTION BLOCKDIAMETRICCONSTRAINTPARAMETER
    BLOCKHORIZONTALCONSTRAINTPARAMETER BLOCKLINEARCONSTRAINTPARAMETER
    BLOCKLOOKUPACTION BLOCKLOOKUPPARAMETER BLOCKPARAMDEPENDENCYBODY
    BLOCKPOINTPARAMETER BLOCKPOLARGRIP BLOCKPOLARPARAMETER
    BLOCKPOLARSTRETCHACTION BLOCKPROPERTIESTABLE
    BLOCKPROPERTIESTABLEGRIP BLOCKRADIALCONSTRAINTPARAMETER
    BLOCKREPRESENTATION BLOCKROTATIONGRIP BLOCKSTRETCHACTION
    BLOCKUSERPARAMETER BLOCKVERTICALCONSTRAINTPARAMETER BLOCKXYGRIP
    CONTEXTDATAMANAGER CSACDOCUMENTOPTIONS CURVEPATH DATALINK
    DATATABLE DMDIMOBJECTCONTEXTDATA DYNAMICBLOCKPROXYNODE
    EXTRUDEDSURFACE FCFOBJECTCONTEXTDATA FLIPPARAMETERENTITY
    GEOPOSITIONMARKER LAYOUTPRINTCONFIG LEADEROBJECTCONTEXTDATA
    LINEARPARAMETERENTITY LOFTEDSURFACE MLEADEROBJECTCONTEXTDATA
    MOTIONPATH MPOLYGON MTEXTATTRIBUTEOBJECTCONTEXTDATA
    MTEXTOBJECTCONTEXTDATA NAVISWORKSMODEL NURBSURFACE
    ORDDIMOBJECTCONTEXTDATA PERSUBENTMGR PLANESURFACE
    POINTPARAMETERENTITY POINTPATH RADIMLGOBJECTCONTEXTDATA
    RADIMOBJECTCONTEXTDATA RENDERENTRY RENDERENVIRONMENT RENDERGLOBAL
    REVOLVEDSURFACE ROTATIONPARAMETERENTITY RTEXT SUNSTUDY
    SWEPTSURFACE TABLE TABLECONTENT TEXTOBJECTCONTEXTDATA
    TVDEVICEPROPERTIES VISIBILITYGRIPENTITY VISIBILITYPARAMETERENTITY
    XYPARAMETERENTITY

  * Started support for writing r2004+ format DWGs (which also includes r2010,
    r2013 and r2018, but not r2007), but this does not fully work yet.
  * Added all remaining Dwg_Version types: R_1_3 for AC1.3, R_2_4 for AC1001, and
    AC1013 for R_13c3.
  * The header can now be compiled with C++ compilers, as needed for some bindings.
    Re-arranged nested structs and names, added malloc casts, and handled reserved
    C++ keywords such as "this" and "template".
    Started with the Gambas bindings; Gambas is a Visual Basic clone for Unix.
  * DXF and JSON importers now create PLACEHOLDER objects for unsupported
    objects.
  * 3DSOLID now has material properties and revisionguid fields.
  * Many parts of the API are now auto-generated/updated: dwg.i, dwg_api.c, dwg_api.h,
    unions and setup in dwg.h
  * Added geojsonhint or gjv linter support. Fixed all violations (esp. point arrays
    and POLYLINE_2D). Added a Feature id (the handle).
  * Added support for GeoJSON RFC 7946: write closed polygons, re-ordered by the
    right-hand rule.
  * new API functions:
    dwg_ctrl_table, dwg_handle_name, dwg_find_dicthandle_objname, dwg_variable_dict,
    dwg_next_entity, get_next_owned_block_entity, dwg_section_name,
    dwg_version_type, dwg_version_as, dwg_errstrings,  dwg_rgb_palette,
    dwg_find_color_index.
  * new dynapi functions: dwg_dynapi_subclass_value, dwg_dynapi_subclass_field,
    dwg_dynapi_fields_size.
    (BTW. the dynapi proved to be a godsend for the json importer)

API breaking changes:
  * Renamed dwg_section_type to dwg_section_wtype, added a new dwg_section_type
    for ASCII names.
  * Removed all extra null_handle fields, and add the missing handle fields.
  * Renamed all dwg_add_OBJECT functions to dwg_setup_OBJECT. They didn't add the
    objects, they just set up the internal structures.
  * Renamed VPORT_ENTITY_HEADER to VX_TABLE_RECORD and VPORT_ENTITY_CONTROL to VX_CONTROL.
    Also section enum SECTION_VPORT_ENTITY to SECTION_VX and dwg->vport_entity_control likewise.
  * Hundreds of field renames due to harmonization efforts with the more generic
    JSON importer. Note that some deprecated dwg_api accessor functions were also
    renamed accordingly, but not all.

    For the stable objects:
    TEXT,ATTRIB,ATTDEF,SHAPE,STYLE: oblique_ang => oblique_angle,
    TEXT,ATTRIB,ATTDEF,SHAPE,MTEXT,UNDERLAY,TABLE,...: insertion_pt => ins_pt,
    DIMENSION_* _13_pt => xline1_pt, _14_pt => xline2_pt,
      ext_line_rotation => oblique_angle
    DIMENSION_ANG2LN _16_pt => xline1start_pt, _14_pt => xline2start_pt,
      _13_pt => xline1end_pt, first_arc_pt => xline2end_pt
    VIEW,VIEWPORT: view_direction => VIEWDIR (as it overrides this header),
      view_twist => twist_angle,
      view_height => VIEWSIZE,
      snap_angle => SNAPANG,
      view_center => VIEWCTR,
      snap_base => SNAPBASE,
      snap_spacing => SNAPUNIT,
      grid_spacing => GRIDUNIT,
      ucs_per_viewport => UCSVP,
      ucs_origin => ucsorg,
      ucs_x_axis => ucsxdir,
      ucs_y_axis => ucsydir,
      ucs_ortho_view_type => UCSORTHOVIEW
    OLEFRAME.data_length => data_size,
    LEADER.offset_to_block_ins_pt => inspt_offset
    TOLERANCE.text_string => text_value
    STYLE.vertical => is_vertical, shape_file => is_shape, fixed_height => text_size,
      extref => xref
    DICTIONARYVAR.intval => schema, str => strvalue

    COMMON_TABLE_FIELDS: xrefref => is_xref_ref, xrefindex_plus1 => is_xref_resolved,
      xrefdep => is_xref_dep. new common xref HANDLE field (was null_handle in many objects)

    LAYER got a new visualstyle handle.
    LTYPE.dashes got a new style handle and text field.
    LTYPE has no styles H* anymore, moved to dashes.
    LTYPE.text_area_is_present => has_strings_area, extref_handle => xref.
    VIEW, VIEWPORT:
      height => VIEWSIZE, width => view_width, center => VIEWCTR, target => view_target,
      direction => VIEWDIR, front_clip => front_clip_z, back_clip => back_clip_z,
      pspace_flag => is_pspace,
      origin => ucsorg, x_direction => ucsxdir, y_direction => ucsydir,
      elevation => ucs_elevation, orthographic_view_type => UCSORTHOVIEW,
      camera_plottable => is_camera_plottable

    UCS got a new orthopts array, and the renames as above.
    DIMSTYLE got a new flag0. flag is computed from that.
    VPORT_ENTITY_HEADER flag1 => is_on, vport_entity => viewport, xref_handle => xref,
      new prev_entity handle.
    MLINESTYLE index/ltype union changed to separate lt_index, lt_ltype fields.
      They were overwriting each other on version conversions.
    MLINESTYLE.desc => description, data_length => data_size.
    HATCH booleans got a is_ prefix.
    MTEXT.annotative => is_annotative.
    MTEXT.drawing_dir => flow_dir.
    XRECORD.num_databytes => xdata_size
    MLEADERSTYLE text_frame => is_text_frame, is_new_format removed.
      changed => is_changed.
    DICTIONARYWDFLT got new format_flags and data_handle fields.
    SCALE.has_unit_scale => is_unit_scale
    SORTENTSTABLE.dictionary => block_owner

    Type changes in stable objects:
    SPLINE.fit_pts are now ordinary BITCODE_3DPOINT*
    SPLINE.color is BL, scale is now 3BD
  * Changed truecolor attributes in GeoJSON to use a # prefix.

Major bugfixes:
  * Fixed converting ASCII from and to Unicode strings when converting across
    versions. Embed Unicode as \U+XXXX into ASCII, and decode it back to Unicode.
    Honor dat,dwg->from_version and use the new FIELD_T as a separate type. (#185)
  * Invalid numbers (read failures) are converted to 0.0 in the released version.
  * Fixed wrong CMC color reads and writes: check the method, look up the index,
    and support a CMTC truecolor-only type.
  * Fixed EED writes by writing to separate streams and merging them at the end,
    with proper size calculation.
  * All remaining assertions are now protected. (GH #187)

Minor bugfixes:
  * Fixed uncompressed section overflows, some found by fuzzing (GH #183), some
    with the new sections.
  * Normalize extrusion vectors.
  * Fixed bit_write_BT, the thickness vector pre-R2000.
  * Added many overflow checks, due to extensive fuzzing campaigns.
  * Fixed wrong Julian date conversions with TIMEBLL types.
  * Fixed keyword conflicts with the bindings: no next, from, or self field names.
  * Many more, see the ChangeLog.

Other newsworthy changes:
  * Harmonized 2004 section search with the better 2007 variant. Added a new
    section and info fixedtype field.
  * Added unit-tests for all supported objects.
  * Added src/classes.c defining the class stability (stable, unstable, debugging, unhandled).
  * Now needs -lm, the system math library, in all cases.
  * Got a complete collection of old DWGs to cross-check against. Many new object types
    could be stabilized because of this. Many thanks to Michal Josef Špaček.
  * CMC color got 2 new fields: raw (EMC only), method (the first rgb byte).
  * Many DXF re-ordering fixes.

Notes: The new constraint and dynblock objects are still missing a major refactor into
separate impl subclasses, and subent and curve support.

Here are the compressed sources:
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.11.tar.gz  (16.0MB)
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.11.tar.xz   (7.9MB)

Here are the GPG detached signatures[*]:
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.11.tar.gz.sig
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.11.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are more binaries:
  https://github.com/LibreDWG/libredwg/releases/tag/0.11

Here are the SHA256 checksums:
6b48304c50320b1ee7fbfea63c3b1437bbc3d4cd1b0ecb7fecd5f00ed4f4bdc8  libredwg-0.11.tar.gz
c25bbab29e7814663a203c38df8cbcce850d0b003a7132cf3f849429468ca7ed  libredwg-0.11.tar.xz

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify libredwg-0.11.tar.gz.sig

If that command fails because you don't have the required public key, then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

07 August, 2020 10:49AM by Reini Urban

FSF Blogs

The University of Costumed Heroes: A video from the FSF

This video is the second in a series of animated videos created by the Free Software Foundation (FSF), and this one is themed around our campaign against the use of proprietary remote education software.

We must reverse the trend of forsaking young people's freedom, which has been accelerating as corporations try to capitalize on the need to establish new remote education practices. Free software not only protects the freedoms of your child or grandchild by allowing people to study the source code for any malicious functionalities, it also communicates important values like autonomy, sharing, social responsibility, and collaboration.

Support our work

To further help us bring attention to, and start a conversation with, institutions that are endangering students' futures and jeopardizing their education by relying on proprietary software, please show your support for free software in education and for this video by promoting it.

If you enjoy this video, consider becoming an FSF associate member or donating to the FSF to help us create more videos like this to help spread free software awareness.


Download the video:

More information about the different formats the FSF chooses to use.

Subtitles and translations

Help us translate the video into many different languages so we can share it across the globe! Translation drafts and the how-to explanation can be found on the LibrePlanet wiki. Once you have finalized a translation, email campaigns@fsf.org and we will publish it.

Subtitle files: English, Español

Embed

Embed The University of Costumed Heroes on your site or blog with this code:

<iframe src="https://static.fsf.org/nosvn/videos/fsf-heroes/" id="fsf-heroes-video" scrolling="no" style="overflow: hidden; margin: 0; border: 0 none; display: block; width: 100%; height: 67vw; max-height: 550px;"></iframe>

Video credits:

The University of Costumed Heroes by the Free Software Foundation
LENGTH: 02:33
PRODUCER & DIRECTOR: Brad Burkhart
STORY: Douglas J. Eboch
ANIMATOR: Zygis Luksas

The University of Costumed Heroes by the Free Software Foundation Copyright © 2020 is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.

07 August, 2020 05:55AM

August 05, 2020

FSF News

Geoffrey Knauth elected Free Software Foundation president; Odile Bénassy joins the board

BOSTON, Massachusetts, USA -- Wednesday, August 5th, 2020 -- The Free Software Foundation (FSF) today announced the addition of a new director to its board, and the election of a new president.

Long-time free software activist and developer Odile Bénassy, known especially for her work promoting free software in France, was elected to the FSF's board of directors. Geoffrey Knauth, who has served on the FSF's board for over twenty years, was elected president.

On her election, Bénassy said, "I'm happy and proud to accept FSF's invitation to be part of the board. I want to help keep steady the principles of free software, and the philosophical values around it. Free software counts among what the world badly needs nowadays."

Knauth welcomed Bénassy, saying, "I am delighted that Odile Bénassy has agreed to become a director of the FSF, FSF's first director from Europe. Odile is a mathematics educator, researcher, software engineer, and leader of the GNU Edu project. She has been advocating for and developing free software for more than twenty years."

FSF's executive director, John Sullivan, added, "Being on the FSF's board of directors means first and foremost standing as a guardian for free software and the associated user freedoms. With such a long track record, Odile has shown herself to be someone FSF members and supporters can count on. I'm really looking forward to working with her, and I'm excited to see all the ways she'll help the FSF be better and stronger."

Describing his approach to his new position as president, Knauth posted a statement which begins, "The FSF board chose me at this moment as a servant leader to help the community focus on our shared dedication to protect and grow software that respects our freedoms. It is also important to protect and grow the diverse membership of the community. It is through our diversity of backgrounds and opinions that we have creativity, perspective, intellectual strength and rigor."

Photos: Geoff Knauth and Odile Bénassy

The full list of FSF board members and officers can be found at https://www.fsf.org/about/staff-and-board.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

John Sullivan
Executive Director
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

Geoffrey Knauth Photo Copyright ©2020 Geoffrey Knauth and used with permission. Odile Bénassy Photo Copyright ©2020 Odile Bénassy and used with permission.

Updated 2020-08-07: Knauth has served on the FSF board for over twenty years, not thirty.

05 August, 2020 08:20PM

FSF Blogs

Statement from FSF's new president, Geoffrey Knauth

The FSF Board chose me at this moment as a servant leader to help the community focus on our shared dedication to protect and grow software that respects our freedoms. It is also important to protect and grow the diverse membership of the community. It is through our diversity of backgrounds and opinions that we have creativity, perspective, intellectual strength, and rigor.

It is the community that has selflessly built the impressive collection of free software the world now enjoys. The community must be given credit for this achievement. The free software movement may have started with Richard Stallman's passion and lifelong commitment, and we all are grateful to that spark of imagination that gave us high purpose. At the same time, we are all aware that this community has grown large over the years. That's a very good thing.

It requires renewed focus to achieve our goals. We must remember what unites us and why we came to free software in the first place. What inspired us in the past? What will keep us inspired, and what will inspire new generations of free software developers? We must be kind to each other and respect each other when our good faith arguments differ, in order to produce the best solutions together. I pledge to support honest dialog and emerging leaders in the quest to secure the future for free software for generations to come, and not to alter the tenets of the free software vision.

I have been an active supporter and contributor from the moment the GNU Manifesto appeared, and by accident of time and space, I was lucky to witness the birth of a movement truly great and wonderful. To be honest, at the time my first thought was, "What a noble idea, but one person cannot do all this." Then I saw how over time, many good people from literally every corner of the planet gave of themselves to make free software a reality. It is you who are important, it is you who joined the effort to help the world see the virtues of free software, the dedication of its thousands of contributors and volunteers, the high quality of free software used every day around the world, and its sheer endurance and ability to find itself in widespread use even by those who were once fierce opponents to free software. Take that to heart, let's keep it going. Tell it to your children, and let's make sure your children have the freedoms you have achieved, and more.

05 August, 2020 08:17PM

August 04, 2020

Help the FSF tech team empower software users

Illustration of 2 people working on a computer

The Free Software Foundation (FSF) tech team is the four-person cornerstone of the primary infrastructure of the FSF and the GNU Project, providing the backbone for hundreds of free software projects, and they epitomize the hard work, creativity, and can-do attitude that characterize the free software movement. They’re pretty modest about it, but I think they deserve some serious credit: it’s only because of their everyday efforts (with the help of volunteers all over the world) that the FSF can boast that we can host our own services entirely on free software, and help other people to become freer every day. It’s also largely to their credit that the FSF staff were able to shift to mostly remote work this spring with barely a blip in our operations.

You can read a summary of their work over the last six months in the most recent issue of the Free Software Foundation Bulletin, but I wanted to give you a few highlights:

  • This March, the novel coronavirus swept in and caused the shutdown of nearly all in-person activities at the most inopportune time in the FSF’s yearly schedule: the week of the 2020 LibrePlanet conference. After deep discussion, the decision was made to take LibrePlanet online-only on Monday, March 9th; the conference was due to begin on Saturday, March 14th. You can see all of the details of how the conference ultimately ran on our blog, and you can watch the session videos on our MediaGoblin page. However, the thing that I want to emphasize here is that the tech team successfully ran an entire conference online, which they had never done before, and made it all run smoothly with only five days to prepare, and every piece of software used was free software. Like I said, they’re modest.

  • Next, the tech team set about addressing how proprietary remote communication tools used for staying in touch and for education are becoming a dangerous fact of everyday life. Having used Jitsi Meet as one part of the livestreaming process for LibrePlanet, they created a Jitsi Meet instance that FSF associate members can use for work and play. They can invite anyone to connect with them in a freedom-respecting video chat room. Not only does this instance enable you to chat with the people you care about without the abuses of proprietary software, but it also makes it easier than ever to demonstrate the advantages of free software to everyone you know!

  • Finally, we’re so proud that FSF Web developer Michael McMahon spearheaded the HACKERS and HOSPITALS project on the LibrePlanet wiki, enabling the hacker community to share resources and connect with activists who have been manufacturing an astounding variety of desperately-needed medical and protective equipment. Only free software gives hackers and makers the complete flexibility and freedom they need (and deserve!) in order to meet immediate needs, and Michael and many others have risen to the occasion admirably. You can read a dedicated article on HACKERS and HOSPITALS in the new issue of the Free Software Foundation Bulletin.

If you’re finding these accomplishments as exciting as we do, we hope you’re now motivated to chip in by becoming an associate member of the FSF! At this writing, we are only 13 members away from our goal of 200. The farther we surpass this goal, the more our tech team can achieve!

The value of a membership goes far beyond the dollars and cents needed to help us weather the challenges of this year: a membership is a vote of confidence that helps us launch new initiatives and puts weight behind our campaigns, licensing, and technical work. Plus, membership comes with plenty of benefits, including merchandise discounts, a bootable membership card, and the newest member perk: access to our Jitsi Meet videoconferencing server.

We don’t know what the future will bring in many ways, but we know that we can count on the ingenuity and hard work of the FSF tech team -- and so can you. Thank you so much for supporting their efforts!

Illustration Copyright © 2020 Free Software Foundation, Inc., by Raghavendra Kamath, licensed under Creative Commons Attribution 4.0 International license.

04 August, 2020 05:55PM

July 30, 2020

July GNU Spotlight with Mike Gerwitz: 22 new releases!

For announcements of most new GNU releases, subscribe to the info-gnu mailing list: https://lists.gnu.org/mailman/listinfo/info-gnu.

To download: nearly all GNU software is available from https://ftp.gnu.org/gnu/, or preferably one of its mirrors from https://www.gnu.org/prep/ftp.html. You can use the URL https://ftpmirror.gnu.org/ to be automatically redirected to a (hopefully) nearby and up-to-date mirror.

A number of GNU packages, as well as the GNU operating system as a whole, are looking for maintainers and other assistance: please see https://www.gnu.org/server/takeaction.html#unmaint if you'd like to help. The general page on how to help GNU is at https://www.gnu.org/help/help.html.

If you have a working or partly working program that you'd like to offer to the GNU project as a GNU package, see https://www.gnu.org/help/evaluation.html.

As always, please feel free to write to us at maintainers@gnu.org with any GNUish questions or suggestions for future installments.

30 July, 2020 09:23PM

July 24, 2020

bison @ Savannah

Bison 3.7 released

I am very happy to announce the release of Bison 3.7, whose main novelty,
contributed by Vincent Imbimbo, is the generation of counterexamples for
conflicts.  For instance on a grammar featuring the infamous "dangling else"
problem, "bison -Wcounterexamples" now gives:

    $ bison -Wcounterexamples else.y
    else.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
    else.y: warning: shift/reduce conflict on token "else" [-Wcounterexamples]
      Example: "if" exp "then" "if" exp "then" exp • "else" exp
      Shift derivation
        exp
        ↳ "if" exp "then" exp
                          ↳ "if" exp "then" exp • "else" exp
      Reduce derivation
        exp
        ↳ "if" exp "then" exp                     "else" exp
                          ↳ "if" exp "then" exp •

which actually proves that the grammar is ambiguous by exhibiting a text
sample with two derivations (corresponding to two parse trees).  When Bison
is installed with text styling enabled, the example is actually shown twice,
with colors demonstrating the ambiguity (see
https://www.gnu.org/software/bison/manual/html_node/Counterexamples.html).

Joshua Watt contributed the option "--file-prefix-map OLD=NEW" to make builds
reproducible.

There are many other changes (hyperlinks in the diagnostics, reproducible
handling of string aliases with escapes, improvements in the push parsers,
etc.); please see the NEWS below for more details.

Cheers!

       Akim

PS/ The experimental back-end for the D programming language is still
looking for active support from the D community.

==================================================================

Here are the compressed sources:
  https://ftp.gnu.org/gnu/bison/bison-3.7.tar.gz   (5.1MB)
  https://ftp.gnu.org/gnu/bison/bison-3.7.tar.lz   (3.1MB)
  https://ftp.gnu.org/gnu/bison/bison-3.7.tar.xz   (3.1MB)

Here are the GPG detached signatures[*]:
  https://ftp.gnu.org/gnu/bison/bison-3.7.tar.gz.sig
  https://ftp.gnu.org/gnu/bison/bison-3.7.tar.lz.sig
  https://ftp.gnu.org/gnu/bison/bison-3.7.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify bison-3.7.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 0DDCAA3278D5264E

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.69
  Automake 1.16.2
  Flex 2.6.4
  Gettext 0.19.8.1
  Gnulib v0.1-3644-gac34618e8

==================================================================

GNU Bison is a general-purpose parser generator that converts an annotated
context-free grammar into a deterministic LR or generalized LR (GLR) parser
employing LALR(1) parser tables.  Bison can also generate IELR(1) or
canonical LR(1) parser tables.  Once you are proficient with Bison, you can
use it to develop a wide range of language parsers, from those used in
simple desk calculators to complex programming languages.

Bison is upward compatible with Yacc: all properly-written Yacc grammars
work with Bison with no change.  Anyone familiar with Yacc should be able to
use Bison with little trouble.  You need to be fluent in C, C++ or Java
programming in order to use Bison.

Bison and the parsers it generates are portable; they do not require any
specific compiler.

GNU Bison's home page is https://gnu.org/software/bison/.

==================================================================

* Noteworthy changes in release 3.7 (2020-07-23) [stable]

** Deprecated features

  The YYPRINT macro, which works only with yacc.c and only for tokens, was
  obsoleted long ago by %printer, introduced in Bison 1.50 (November 2002).
  It is deprecated and its support will be removed eventually.

  In conformance with the recommendations of the Graphviz team, in the next
  version of Bison the option `--graph` will generate a *.gv file by default,
  instead of *.dot.  This transition started in Bison 3.4.

** New features

*** Counterexample Generation

  Contributed by Vincent Imbimbo.

  When given `-Wcounterexamples`/`-Wcex`, bison will now output
  counterexamples for conflicts.

**** Unifying Counterexamples

  Unifying counterexamples are strings which can be parsed in two ways due
  to the conflict.  For example on a grammar that contains the usual
  "dangling else" ambiguity:

    $ bison else.y
    else.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
    else.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples

    $ bison else.y -Wcex
    else.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
    else.y: warning: shift/reduce conflict on token "else" [-Wcounterexamples]
      Example: "if" exp "then" "if" exp "then" exp • "else" exp
      Shift derivation
        exp
        ↳ "if" exp "then" exp
                          ↳ "if" exp "then" exp • "else" exp
      Example: "if" exp "then" "if" exp "then" exp • "else" exp
      Reduce derivation
        exp
        ↳ "if" exp "then" exp                     "else" exp
                          ↳ "if" exp "then" exp •

  When text styling is enabled, colors are used in the examples and the
  derivations to highlight the structure of both analyses.  In this case,

    "if" exp "then" [ "if" exp "then" exp • ] "else" exp

  vs.

    "if" exp "then" [ "if" exp "then" exp • "else" exp ]


  The counterexamples are "focused", in two different ways.  First, they do
  not clutter the output with all the derivations from the start symbol;
  rather, they start at the "conflicted nonterminal" and go straight to the
  point.  Second, they don't "expand" nonterminal symbols uselessly.

**** Nonunifying Counterexamples

  In the case of the dangling else, Bison found an example that can be
  parsed in two ways (therefore proving that the grammar is ambiguous).
  When it cannot find such an example, it instead generates two examples
  that are the same up until the dot:

    $ bison foo.y
    foo.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
    foo.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
    foo.y:4.4-7: warning: rule useless in parser due to conflicts [-Wother]
        4 | a: expr
          |    ^~~~

    $ bison -Wcex foo.y
    foo.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
    foo.y: warning: shift/reduce conflict on token ID [-Wcounterexamples]
      First example: expr • ID ',' ID $end
      Shift derivation
        $accept
        ↳ s                      $end
          ↳ a                 ID
            ↳ expr
              ↳ expr • ID ','
      Second example: expr • ID $end
      Reduce derivation
        $accept
        ↳ s             $end
          ↳ a        ID
            ↳ expr •
    foo.y:4.4-7: warning: rule useless in parser due to conflicts [-Wother]
        4 | a: expr
          |    ^~~~

  In these cases, the parser usually doesn't have enough lookahead to
  differentiate the two given examples.

**** Reports

  Counterexamples are also included in the report when given
  `--report=counterexamples`/`-rcex` (or `--report=all`), with more
  technical details:

    State 7

      1 exp: "if" exp "then" exp •  [$end, "then", "else"]
      2    | "if" exp "then" exp • "else" exp

      "else"  shift, and go to state 8

      "else"    [reduce using rule 1 (exp)]
      $default  reduce using rule 1 (exp)

      shift/reduce conflict on token "else":
          1 exp: "if" exp "then" exp •
          2 exp: "if" exp "then" exp • "else" exp
        Example: "if" exp "then" "if" exp "then" exp • "else" exp
        Shift derivation
          exp
          ↳ "if" exp "then" exp
                            ↳ "if" exp "then" exp • "else" exp
        Example: "if" exp "then" "if" exp "then" exp • "else" exp
        Reduce derivation
          exp
          ↳ "if" exp "then" exp                     "else" exp
                            ↳ "if" exp "then" exp •

*** File prefix mapping

  Contributed by Joshua Watt.

  Bison learned a new argument, `--file-prefix-map OLD=NEW`.  Any file path
  in the output (specifically `#line` directives and `#ifdef` header guards)
  that begins with the prefix OLD will have it replaced with the prefix NEW,
  similar to the `-ffile-prefix-map` in GCC.  This option can be used to
  make bison output reproducible.
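
  For instance, a rough sketch (the source path is made up for illustration);
  mapping the absolute build directory to "." keeps the `#line` directives
  identical no matter where the tree was checked out:

    $ bison --file-prefix-map /home/alice/src=. -o parser.c /home/alice/src/parser.y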

** Changes

*** Diagnostics

  When text styling is enabled and the terminal supports it, the warnings
  now include hyperlinks to the documentation.

*** Relocatable installation

  When installed to be relocatable (via `configure --enable-relocatable`),
  bison will now also look for a relocated m4.
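
  For instance, a minimal sketch (the installation prefix here is arbitrary):

    $ ./configure --enable-relocatable --prefix=/opt/bison
    $ make && make install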

*** C++ file names

  The `filename_type` %define variable was renamed `api.filename.type`.
  Instead of

    %define filename_type "symbol"

  write

    %define api.filename.type {symbol}

  (Or let `bison --update` do it for you).

  It now defaults to `const std::string` instead of `std::string`.

*** Deprecated %define variable names

  The following variables have been renamed for consistency.  Backward
  compatibility is ensured, but upgrading is recommended.

    filename_type       -> api.filename.type
    package             -> api.package

*** Push parsers no longer clear their state when parsing is finished

  Previously push parsers cleared their state when parsing was finished (on
  success and on failure).  This made it impossible to check whether there
  were parse errors, since `yynerrs` was also reset.  This can be especially
  troublesome when used for autocompletion, since a parser with error
  recovery would suggest (irrelevant) expected tokens even if parsing had
  failed.

  Now the parser state can be examined when parsing is finished.  The parser
  state is reset when starting a new parse.

** Documentation

*** Examples

  The bistromathic demonstrates %param and how to quote sources in the error
  messages:

    > 123 456
    1.5-7: syntax error: expected end of file or + or - or * or / or ^ before number
        1 | 123 456
          |     ^~~

** Bug fixes

*** Include the generated header (yacc.c)

  Historically, when --defines was used, bison generated a header and pasted
  an exact copy of it into the generated parser implementation file.  Since
  Bison 3.4 it is possible to specify that the header should be `#include`d,
  and how.  For instance

    %define api.header.include {"parse.h"}

  or

    %define api.header.include {<parser/parse.h>}

  Now api.header.include defaults to `"header-basename"`, as was intended in
  Bison 3.4, where `header-basename` is the basename of the generated
  header.  This is disabled when the generated header is `y.tab.h`, to
  comply with Automake's ylwrap.

*** String aliases are faithfully propagated

  Bison used to interpret user strings (i.e., decoding backslash escapes)
  when reading them, and to escape them (i.e., issue non-printable
  characters as backslash escapes, taking the locale into account) when
  outputting them.  As a consequence non-ASCII strings (say in UTF-8) ended
  up "ciphered" as sequences of backslash escapes.  This happened not only
  in the generated sources (where the compiler will reinterpret them), but
  also in all the generated reports (text, xml, html, dot, etc.).  Reports
  were therefore not readable when string aliases were not pure ASCII.
  Worse yet: the output depended on the user's locale.

  Now Bison faithfully treats the string aliases exactly the way the user
  spelled them.  This fixes all the aforementioned problems.  However, now,
  string aliases semantically equivalent but syntactically different (e.g.,
  "A", "\x41", "\101") are considered to be different.

*** Crash when generating IELR

  An old, well hidden, bug in the generation of IELR parsers was fixed.

24 July, 2020 04:40AM by Akim Demaille

July 23, 2020

GNU Guix

Improve Internationalization Support for the Guix Data Service

The first half of my Outreachy internship is already over and I am really excited to share my experience. Over the past weeks I’ve had the opportunity to work on the Guix Data Service, watch myself change, and accomplish way more than I thought I would.

The Guix Data Service processes, stores and provides data about Guix over time. It provides a complementary interface to Guix itself by having a web interface and API to browse and access the data.

The work I have done so far revolves around storing translated lint checker descriptions as well as package synopsis and descriptions in the Guix Data Service PostgreSQL database and making them available through the Guix Data Service web interface.

Initially the Guix Data Service database had translated versions of lint warning messages available, but they were not accessible through the web interface, so I made that possible during the contribution period.

Working on making lint warning messages available on the web interface made it easier for me to understand how translations for lint checker descriptions and package synopsis and descriptions would be stored in the database and later on be made available through the Guix Data Service web interface. At this point, the Guix Data Service supports package synopsis and descriptions as well as lint checker descriptions in various locales.

Guix Data Service page for the audacity package, in the Spanish locale

Hopefully these changes will give Guix Data Service users a more convenient way to interact with Guix data.

I have to note that this is my first internship and I was initially reluctant to believe that I would be able to tackle or successfully accomplish the tasks I was assigned, but with my mentor’s help and guidance I managed to. So far it has been a rewarding experience because it has helped me make progress in so many aspects, whilst contributing to a project that will potentially increase inclusion.

While working on this project, I’ve significantly improved my Guile, SQL, and Git skills and I am now more aware of how software localization is achieved. In addition to getting more technically skilled, this internship has taught me how to manage time and emotions when dealing with more than one activity at a time.

Now that a good share of what was initially planned to be done is accomplished, my mentor suggested working on something using the Guix Data Service data and I will be engaged in that during the remaining half.

These first 7 weeks of my internship have gone by really fast, but I have enjoyed everything and I am so eager to experience what's to come.

23 July, 2020 12:00PM by Danjela Lura

July 22, 2020

parallel @ Savannah

GNU Parallel 20200722 ('Privacy Shield') released [stable]

GNU Parallel 20200722 ('Privacy Shield') [stable] has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

No new functionality was introduced so this is a good candidate for a stable release.

Quote of the month:

  With multicore systems everywhere GNU Parallel is a must have tool.
    -- Neil H. Watson @neil_h_watson@twitter

 

New in this release:

  • No new functionality
  • Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
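
As a rough sketch of that pipe-splitting mode (the file name is just a placeholder), the following chops a large log file into roughly 10 MB blocks and counts matching lines in each block in parallel:

  cat big.log | parallel --pipe --block 10M grep -c ERROR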

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 3374ec53bacb199b245af2dda86df6c9
    12345678 3374ec53 bacb199b 245af2dd a86df6c9
    $ md5sum install.sh | grep 029a9ac06e8b5bc6052eac57b2c3c9ca
    029a9ac0 6e8b5bc6 052eac57 b2c3c9ca
    $ sha512sum install.sh | grep f517006d9897747bed8a4694b1acba1b
    40f53af6 9e20dae5 713ba06c f517006d 9897747b ed8a4694 b1acba1b 1464beb4
    60055629 3f2356f3 3e9c4e3c 76e3f3af a9db4b32 bd33322b 975696fc e6b23cfb
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
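
As a rough sketch of what that looks like (the credentials, host, and database names below are made up):

  sql mysql://user:password@dbhost/mydb "SELECT * FROM users;"
  sql postgresql://user@localhost/mydb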

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
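
As a rough sketch of typical usage (the wrapped command is just a placeholder; -l sets the load average limit):

  niceload -l 2 rsync -a /home/ /backup/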

22 July, 2020 09:17PM by Ole Tange

July 20, 2020

Applied Pokology

Integral structs in GNU poke

This weekend I finally implemented support for the so-called "integral structs" in poke. This expands the expressive power of Poke structs to cover cases where data is stored in composite integral containers, i.e. when data is structured within stored integers.
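
As a very rough sketch of the idea (the type and field names are invented, and the syntax is quoted from memory rather than from the post), an integral struct lets a single 16-bit integer be treated as a structure of packed fields:

  $ poke
  (poke) type Word = struct uint<16> { uint<8> hi; uint<8> lo; };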

20 July, 2020 12:00AM

July 17, 2020

GNU Guix

Running a Ganeti cluster on Guix

The latest addition to Guix's ever-growing list of services is a little-known virtualization toolkit called Ganeti. Ganeti is designed to keep virtual machines running on a cluster of servers even in the event of hardware failures, and to make maintenance and recovery tasks easy.

It is comparable to tools such as Proxmox or oVirt, but has some distinctive features. One is that there is no GUI: third party ones exist, but are not currently packaged in Guix, so you are left with a rich command-line client and a fully featured remote API.

Another interesting feature is that installing Ganeti on its own leaves you no way to actually deploy any virtual machines. That probably sounds crazy, but it stems from the fact that Ganeti is designed to be API-driven and automated: it comes with an OS API, and users need to install one or more OS providers in addition to Ganeti. OS providers offer a declarative way to deploy virtual machine variants and should feel natural to Guix users. At the time of writing, the providers available in Guix are debootstrap for provisioning Debian- and Ubuntu-based VMs, and of course a Guix provider.

Finally Ganeti comes with a sophisticated instance allocation framework that efficiently packs virtual machines across a cluster while maintaining N+1 redundancy in case of a failover scenario. It can also make informed scheduling decisions based on various cluster tags, such as ensuring primary and secondary nodes are on different power distribution lines.

(Note: if you are looking for a way to run just a few virtual machines on your local computer, you are probably better off using libvirt or even a Childhurd as Ganeti is fairly heavyweight and requires a complicated networking setup.)

Preparing the configuration

With introductions out of the way, let's see how we can deploy a Ganeti cluster using Guix. For this tutorial we will create a two-node cluster and connect instances to the local network using an Open vSwitch bridge with no VLANs. We assume that each node has a single network interface named eth0 connected to the same network, and that a dedicated partition /dev/sdz3 is available for virtual machine storage. It is possible to store VMs on a number of other storage backends, but a dedicated drive (or rather LVM volume group) is necessary to use the DRBD integration to replicate VM disks.
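
As a rough sketch of preparing that partition on each node (the volume group name "ganeti-vg" is arbitrary; use whatever name you later configure Ganeti to use):

pvcreate /dev/sdz3
vgcreate ganeti-vg /dev/sdz3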

We'll start off by defining a few helper services to create the Open vSwitch bridge and ensure the physical network interface is in the "up" state. Since Open vSwitch stores the configuration in a database, you might as well run the equivalent ovs-vsctl commands on the host once and be done with it, but we do it through the configuration system to ensure we don't forget it in the future when adding or reinstalling nodes.
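
For reference, the one-off equivalent of what the service below automates would look roughly like this (same bridge and port names as in the Scheme code):

ovs-vsctl --may-exist add-br br0
ovs-vsctl --may-exist add-port br0 eth0 vlan_mode=native-untagged
ovs-vsctl --may-exist add-port br0 gnt0 vlan_mode=native-untagged -- set Interface gnt0 type=internal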

(use-modules (gnu)
             (gnu packages linux)
             (gnu packages networking)
             (gnu services shepherd))

(define (start-interface if)
  #~(let ((ip #$(file-append iproute "/sbin/ip")))
      (invoke/quiet ip "link" "set" #$if "up")))

(define (stop-interface if)
  #~(let ((ip #$(file-append iproute "/sbin/ip")))
      (invoke/quiet ip "link" "set" #$if "down")))

;; This service is necessary to ensure eth0 is in the "up" state on boot
;; since it is otherwise unmanaged from Guix PoV.
(define (ifup-service if)
  (let ((name (string-append "ifup-" if)))
    (simple-service name shepherd-root-service-type
                    (list (shepherd-service
                           (provision (list (string->symbol name)))
                           (start #~(lambda ()
                                      #$(start-interface if)))
                           (stop #~(lambda (_)
                                     #$(stop-interface if)))
                           (respawn? #f))))))

;; Note: Remove vlan_mode to use tagged VLANs.
(define (create-openvswitch-bridge bridge uplink)
  #~(let ((ovs-vsctl (lambda (cmd)
                       (apply invoke/quiet
                              #$(file-append openvswitch "/bin/ovs-vsctl")
                              (string-tokenize cmd)))))
      (and (ovs-vsctl (string-append "--may-exist add-br " #$bridge))
           (ovs-vsctl (string-append "--may-exist add-port " #$bridge " "
                                     #$uplink
                                     " vlan_mode=native-untagged")))))

(define (create-openvswitch-internal-port bridge port)
  #~(invoke/quiet #$(file-append openvswitch "/bin/ovs-vsctl")
                  "--may-exist" "add-port" #$bridge #$port
                  "vlan_mode=native-untagged"
                  "--" "set" "Interface" #$port "type=internal"))

(define %openvswitch-configuration-service
  (simple-service 'openvswitch-configuration shepherd-root-service-type
                  (list (shepherd-service
                         (provision '(openvswitch-configuration))
                         (requirement '(vswitchd))
                         (start #~(lambda ()
                                    #$(create-openvswitch-bridge
                                       "br0" "eth0")
                                    #$(create-openvswitch-internal-port
                                       "br0" "gnt0")))
                         (respawn? #f)))))

This defines an openvswitch-configuration service object that creates a logical switch br0, connects eth0 as the "uplink", and creates a logical port gnt0 that we will use later as the main network interface for this system. We also create an ifup service that can bring network interfaces up and down. By themselves these variables do nothing; we also have to add them to our operating-system configuration below.

Such a configuration might be suitable for a small home network. In a datacenter deployment you would likely use tagged VLANs, and maybe a traditional Linux bridge instead of Open vSwitch. You can also forego bridging altogether with a routed networking setup, or do any combination of the three.

With this in place, we can start creating the operating-system configuration that we will use for the Ganeti servers:

;; [continued from the above configuration snippet]

(use-service-modules base ganeti linux networking ssh)

(operating-system
  (host-name "node1")
  [...]
  ;; Ganeti requires that each node and the cluster name resolves to an
  ;; IP address.  The easiest way to achieve this is by adding everything
  ;; to the hosts file.
  (hosts-file (plain-file "hosts" "
127.0.0.1       localhost
::1             localhost

192.168.1.200   ganeti.lan
192.168.1.201   node1
192.168.1.202   node2
"))
  (kernel-arguments
   (append %default-kernel-arguments
           '(;; Disable DRBDs usermode helper, as Ganeti is the only entity
             ;; that should manage DRBD.
             "drbd.usermode_helper=/run/current-system/profile/bin/true")))

  (packages (append (map specification->package
                         '("qemu" "drbd-utils" "lvm2"
                           "ganeti-instance-guix"
                           "ganeti-instance-debootstrap"))
                    %base-packages))

  (services (cons* (service ganeti-service-type
                            (ganeti-configuration
                             (file-storage-paths '("/srv/ganeti/file-storage"))
                             (os
                              (list (guix-os %default-guix-variants)
                                    (debootstrap-os
                                     (list (debootstrap-variant
                                            "buster"
                                            (debootstrap-configuration
                                             (suite "buster")))
                                           (debootstrap-variant
                                            "testing+contrib+paravirtualized"
                                            (debootstrap-configuration
                                             (suite "testing")
                                             (hooks
                                              (local-file
                                               "paravirt-hooks"
                                               #:recursive? #t))
                                             (extra-pkgs
                                              (delete "linux-image-amd64"
                                                      %default-debootstrap-extra-pkgs))
                                             (components '("main" "contrib"))))))))))

                   ;; Ensure the DRBD kernel module is loaded.
                   (service kernel-module-loader-service-type
                            '("drbd"))

                   ;; Create a static IP on the "gnt0" Open vSwitch interface.
                   (service openvswitch-service-type)
                   %openvswitch-configuration-service
                   (ifup-service "eth0")
                   (static-networking-service "gnt0" "192.168.1.201"
                                              #:netmask "255.255.255.0"
                                              #:gateway "192.168.1.1"
                                              #:requirement '(openvswitch-configuration)
                                              #:name-servers '("192.168.1.1"))

                   (service openssh-service-type
                            (openssh-configuration
                             (permit-root-login 'without-password)))
                   %base-services)))

Here we declare two OS "variants" for the debootstrap OS provider. Debootstrap variants rely on a set of scripts (known as "hooks") in the installation process to do things like configure networking, install a bootloader, create users, and so on. In the example above, the "buster" variant uses the default hooks provided by Guix, which configure networking and GRUB, whereas the "testing+contrib+paravirtualized" variant uses a local directory next to the configuration file named "paravirt-hooks" (it is copied into the final system closure).

We also declare a default guix-os variant provided by Guix's Ganeti service.

Ganeti veterans may be surprised that each OS variant has its own hooks. The Ganeti deployments I've seen use a single set of hooks for all variants, sometimes with additional logic inside the script based on the variant. Guix offers a powerful abstraction that makes it trivial to create per-variant hooks, removing the need for a big /etc/ganeti/instance-debootstrap/hooks directory. Of course you can still create that directory if you wish and set the hooks property of the variants to #f.
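
If you do keep a shared hooks directory, a variant using it could look roughly like this (a sketch only: the variant name is invented, and the fields mirror the debootstrap-configuration example above):

(debootstrap-variant
 "buster-shared-hooks"
 (debootstrap-configuration
  (suite "buster")
  ;; Disable the per-variant hooks; the manually managed
  ;; /etc/ganeti/instance-debootstrap/hooks directory described
  ;; above is used instead.
  (hooks #f)))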

Not all Ganeti options are exposed in the configuration system yet. If you find it limiting, you can add custom files using extra-special-file, or ideally extend the <ganeti-configuration> data type to suit your needs. You can also use gnt-cluster copyfile and gnt-cluster command to distribute files or run executables, but undeclared changes in /etc may be lost on the next reboot or reconfigure.
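
For example, an arbitrary extra file can be provided declaratively by adding something along these lines to the services list (a sketch only: the file name and contents are invented):

;; Expose an invented site-local file at a fixed location by
;; symlinking it to a file in the store.
(extra-special-file "/etc/ganeti/extra.conf"
                    (plain-file "ganeti-extra.conf"
                                "# site-local Ganeti settings\n"))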

Initializing a cluster

At this stage, you should run guix system reconfigure with the new configuration on all nodes that will participate in the cluster. If you do this over SSH or with guix deploy, beware that eth0 will lose network connectivity once it is "plugged into" the virtual switch, so any IP configuration must be added to gnt0 instead.

The Guix configuration system does not currently support declaring LVM volume groups, so we will create these manually on each node. We could write our own declarative configuration like the %openvswitch-configuration-service, but for brevity and safety reasons we'll do it "by hand":

pvcreate /dev/sdz3
vgcreate ganetivg /dev/sdz3

On the node that will act as the "master node", run the init command:

Warning: this will create new SSH keypairs, both host keys and a key for the root user! You can prevent that by adding --no-ssh-init, but then you will need to distribute /var/lib/ganeti/known_hosts to all hosts, and authorize the Ganeti key for the root user in openssh-configuration. Here we let Ganeti manage the keys for simplicity. As a bonus, we can automatically rotate the cluster keys in the future using gnt-cluster renew-crypto --new-ssh-keys.

gnt-cluster init \
    --master-netdev=gnt0 \
    --vg-name=ganetivg \
    --enabled-disk-templates=file,plain,drbd \
    --drbd-usermode-helper=/run/current-system/profile/bin/true \
    --enabled-hypervisors=kvm \
    --hypervisor-parameters=kvm:kvm_flag=enabled \
    --nic-parameters=mode=openvswitch,link=br0 \
    --no-etc-hosts \
    ganeti.lan

--no-etc-hosts prevents Ganeti from automatically updating the /etc/hosts file when nodes are added or removed; that makes little sense on Guix because the file is recreated on every reboot and reconfigure.

See the gnt-cluster manual for information on the available options. Most can be changed at runtime with gnt-cluster modify.

If all goes well, the command returns no output and you should have the ganeti.lan IP address visible on gnt0. You can run gnt-cluster verify to check that the cluster is in good shape. Most likely it complains about something:

root@node1 ~# gnt-cluster verify
Submitted jobs 3, 4
Waiting for job 3 ...
Thu Jul 16 18:26:34 2020 * Verifying cluster config
Thu Jul 16 18:26:34 2020 * Verifying cluster certificate files
Thu Jul 16 18:26:34 2020 * Verifying hypervisor parameters
Thu Jul 16 18:26:34 2020 * Verifying all nodes belong to an existing group
Waiting for job 4 ...
Thu Jul 16 18:26:34 2020 * Verifying group 'default'
Thu Jul 16 18:26:34 2020 * Gathering data (1 nodes)
Thu Jul 16 18:26:34 2020 * Gathering information about nodes (1 nodes)
Thu Jul 16 18:26:35 2020 * Gathering disk information (1 nodes)
Thu Jul 16 18:26:35 2020 * Verifying configuration file consistency
Thu Jul 16 18:26:35 2020 * Verifying node status
Thu Jul 16 18:26:35 2020   - ERROR: node node1: hypervisor kvm parameter verify failure (source cluster): Parameter 'kernel_path' fails validation: not found or not a file (current value: '/boot/vmlinuz-3-kvmU')
Thu Jul 16 18:26:35 2020 * Verifying instance status
Thu Jul 16 18:26:35 2020 * Verifying orphan volumes
Thu Jul 16 18:26:35 2020 * Verifying N+1 Memory redundancy
Thu Jul 16 18:26:35 2020 * Other Notes
Thu Jul 16 18:26:35 2020 * Hooks Results

When using the KVM hypervisor, Ganeti expects to find a dedicated kernel image for virtual machines in /boot. For this tutorial we only use fully virtualized instances (meaning each VM runs its own kernel), so we can set kernel_path to an empty string to make the error go away:

gnt-cluster modify -H kvm:kernel_path=

Now let's add our other machine to the cluster:

gnt-node add node2

Ganeti will log into the node, copy the cluster configuration and start the relevant Shepherd services. You may need to authorize node1's SSH key first. Run gnt-cluster verify again to check that everything is in order:

gnt-cluster verify

If you used --no-ssh-init earlier you will likely get SSH host key warnings here. In that case you should update /var/lib/ganeti/known_hosts with the new node information, and distribute it with gnt-cluster copyfile or by adding it to the OS configuration.

The above configuration will make three operating systems available:

# gnt-os list
Name
debootstrap+buster
debootstrap+testing+contrib+paravirtualized
guix+default

Let's try them out. But first we'll make Ganeti aware of our network so it can choose a static IP for the virtual machines.

# gnt-network add --network=192.168.1.0/24 --gateway=192.168.1.1 lan
# gnt-network connect -N mode=openvswitch,link=br0 lan

Now we can add an instance:

root@node1 ~# gnt-instance add --no-name-check --no-ip-check \
    -o debootstrap+buster -t drbd --disk 0:size=5G \
    --net 0:network=lan,ip=pool bustervm1
Thu Jul 16 18:28:58 2020  - INFO: Selected nodes for instance bustervm1 via iallocator hail: node1, node2
Thu Jul 16 18:28:58 2020  - INFO: NIC/0 inherits netparams ['br0', 'openvswitch', '']
Thu Jul 16 18:28:58 2020  - INFO: Chose IP 192.168.1.2 from network lan
Thu Jul 16 18:28:58 2020 * creating instance disks...
Thu Jul 16 18:29:03 2020 adding instance bustervm1 to cluster config
Thu Jul 16 18:29:03 2020 adding disks to cluster config
Thu Jul 16 18:29:03 2020  - INFO: Waiting for instance bustervm1 to sync disks
Thu Jul 16 18:29:03 2020  - INFO: - device disk/0:  0.60% done, 5m 26s remaining (estimated)
[...]
Thu Jul 16 18:31:08 2020  - INFO: - device disk/0: 100.00% done, 0s remaining (estimated)
Thu Jul 16 18:31:08 2020  - INFO: Instance bustervm1's disks are in sync
Thu Jul 16 18:31:08 2020  - INFO: Waiting for instance bustervm1 to sync disks
Thu Jul 16 18:31:08 2020  - INFO: Instance bustervm1's disks are in sync
Thu Jul 16 18:31:08 2020 * running the instance OS create scripts...
Thu Jul 16 18:32:09 2020 * starting instance...

Ganeti will automatically select the optimal primary and secondary node for this VM based on available cluster resources. You can manually specify primary and secondary nodes with the -n and -s options.

By default Ganeti assumes that the new instance is already configured in DNS, so we need --no-name-check and --no-ip-check to bypass some sanity tests.

Try adding another instance, now using the Guix OS provider with the 'plain' (LVM) disk backend:

gnt-instance add --no-name-check --no-ip-check -o guix+default \
    -t plain --disk 0:size=5G -B memory=1G,vcpus=2 \
    --net 0:network=lan,ip=pool \
    guix1

The guix+default variant has a configuration that starts an SSH server, authorizes the host's SSH key, and configures static networking based on information from Ganeti. To use other configuration files, you should declare variants with the config file as the configuration property. The Guix provider also supports "OS parameters" that let you specify a specific Guix commit or branch:

gnt-instance add --no-name-check --no-ip-check \
    -o guix+gnome -O "commit=<commit>" \
    -H kvm:spice_bind=0.0.0.0,cpu_type=host \
    -t file --file-storage-dir=/srv/ganeti/file-storage \
    --disk 0:size=20G -B minmem=1G,maxmem=6G,vcpus=3 \
    --net 0:network=lan,ip=pool -n node1 \
    guix2

You can connect to a VM serial console using gnt-instance console <instance>. For this last VM we used a hypothetical 'guix+gnome' variant, and added a graphical SPICE console that you can connect to remotely using the spicy command.
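
Such a variant could be declared by extending the guix-os entry from the earlier configuration along these lines (a sketch only: guix-variant is assumed to take a name and a configuration file, mirroring debootstrap-variant above, and gnome-vm.scm is an invented file containing an operating-system declaration with GNOME):

(guix-os
 (append %default-guix-variants
         (list (guix-variant
                "gnome"
                ;; Invented operating-system configuration file for
                ;; the VM; provided next to this system configuration.
                (local-file "gnome-vm.scm")))))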

If you are new to Ganeti, the next step is to familiarize yourself with the gnt-* family of commands. Fun things to try include gnt-instance migrate to move VMs between hosts, gnt-node evacuate to migrate all VMs off a node, and gnt-cluster master-failover to move the master role to a different node.

If you wish to start over for any reason, you can use gnt-cluster destroy.

Final remarks

The declarative nature of Guix maps well to Ganeti's OS API. OS variants can be composed and inherit from each other, something that is not easily achieved with traditional configuration management tools. I had a lot of fun creating native data types in the Guix configuration system for Ganeti's OS configuration, and it made me wonder whether other parts of Ganeti could be made declarative, such as aspects of instance and cluster configuration. In any case I'm happy and excited to finally be able to use Guix as a Ganeti host OS.

Like most services in Guix, Ganeti comes with a system test that runs in a VM and ensures that things like initializing a cluster work. The continuous integration system runs this automatically whenever a dependency is updated, providing confidence that both the package and the service are in good shape. Currently it has rudimentary service tests, but it can conceivably be extended to provision a real cluster inside Ganeti and try things like master-failover and live migration.

So far only the KVM hypervisor has been tested. If you use LXC or Xen with Ganeti, please reach out to guix-devel@gnu.org and share your experience.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

17 July, 2020 03:00PM by Marius Bakke

July 16, 2020

Applied Pokology

Writing binary utilities with GNU poke

GNU poke is, first and foremost, intended to be used as an interactive editor, either directly on the command line or using a graphical user interface built on it. However, since its conception poke was intended to also provide a suitable and useful foundation on which other programs, the so-called binary utilities, could be written. At last, the development of poke has progressed to a point where we can start writing such utilities, and the purpose of this article is to show a small, albeit working and useful example of what can be achieved by writing a few lines of Poke: an extractor for ELF sections.

16 July, 2020 12:00AM

July 14, 2020

autoconf @ Savannah

Autoconf 2.69b [beta]

Autoconf 2.69b has been released, see the release announcement:
<https://lists.gnu.org/archive/html/autoconf/2020-07/msg00006.html>

14 July, 2020 06:02PM by Zack Weinberg

Christopher Allan Webber

Announcing FOSS and Crafts

I wrote recently about departing Libre Lounge but as I said there, "This is probably not the end of me doing podcasting, but if I start something up again it'll be a bit different in its structure."

Well! Morgan and I have co-launched a new podcast called FOSS and Crafts! As the title implies, it's going to be a fairly interdisciplinary podcast... the title says it all fairly nicely I think: "A podcast about free software, free culture, and making things together."

We already have the intro episode out! It's fairly intro-episode'y... meet the hosts, hear about what to expect from the show, etc etc... but we do talk a bit about some background of the name!

But more substantial episodes will be out soon. We have a lot of plans and ideas for the show, and I've got a pretty good setup for editing/publishing now. So if that sounds fun, subscribe, and more stuff should be hitting your ears soon!

(PS: we have a nice little community growing in #fossandcrafts on irc.freenode.net if you're into that kind of thing!)

14 July, 2020 04:20PM by Christopher Lemmer Webber

July 11, 2020

GNUnet News

GNUnet 0.13.1

GNUnet 0.13.1 released

This is a bugfix release for gnunet and gnunet-gtk specifically.
For gnunet, no changes to the source have been made. However, the default configuration had to be modified to support the changes made in 0.13.0.
For gnunet-gtk, this fixes a more serious issue where the 0.13.0 tarball failed to build.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

11 July, 2020 10:00PM

July 09, 2020

www @ Savannah

Malware in Proprietary Software - Latest Additions

The initial injustice of proprietary software often leads to further injustices: malicious functionalities.

The introduction of unjust techniques in nonfree software, such as backdoors, DRM, tethering and others, has become ever more frequent. Nowadays, it is standard practice.

We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.

Here are our latest additions:

Malware In Cars

Surveillance

This shows why requiring the user's “consent” is not an adequate basis for protecting digital privacy. The boss can coerce most workers into consenting to almost anything, even probable exposure to contagious disease that can be fatal. Software like this should be illegal and bosses that demand it should be prosecuted for it.

09 July, 2020 04:49PM by Dora Scilipoti

July 07, 2020

www-cs @ Savannah

July 06, 2020

GNUnet News

GNUnet 0.13.0

GNUnet 0.13.0 released

We are pleased to announce the release of GNUnet 0.13.0.
This is a new major release. It breaks protocol compatibility with the 0.12.x versions. Please be aware that Git master is thus henceforth INCOMPATIBLE with the 0.12.x GNUnet network, and interactions between old and new peers will result in signature verification failures. 0.12.x peers will NOT be able to communicate with Git master or 0.13.x peers.
In terms of usability, users should be aware that there are still a large number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.13.0 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.13.0 (since 0.12.2)

  • GNS:
    • Aligned with specification LSD001.
    • NSS plugin "block" fixed. #5782
    • Broken set NICK API removed. #6092
    • New record flags: SUPPLEMENTAL. Records which are not explicitly configured/published under a specific label but which are still informational are returned by the resolver and flagged accordingly. #6103
    • gnunet-namestore now complains when adding TLSA or SRV records outside of a BOX
  • CADET: Fixed tunnel establishment as well as an outstanding bug regarding tunnel destruction. #5822
  • GNS/REVOCATION: Revocation proof-of-work hash function changed to argon2 and modified to reduce variance.
  • RECLAIM: Increased ticket length to 256 bit. #6047
  • TRANSPORT: UDP plugin moved to experimental as it is known to be unstable.
  • UTIL:
    • Serialization / file format of ECDSA private keys harmonized with other libraries. Old private keys will no longer work! #6070
    • Now using libsodium for EC cryptography.
    • Builds against cURL which is not linked against gnutls are now possible but still not recommended. Configure will warn that this will impede the GNS functionality. This change will make hostlist discovery work more reliable for some distributions.
    • GNUNET_free_non_null removed. GNUNET_free changed to not assert that the pointer is not NULL. For reference see the Taler security audit.
    • AGPL request handlers added to GNUnet and the extension templates.
  • (NEW) GANA Registry: We have established a registry to be used for names and numbers in GNUnet. This includes constants for protocols including GNS record types and GNUnet peer-to-peer messages. See GANA.
  • (NEW) Living Standards: LSD subdomain and LSD0001 website: LSD0001
  • (NEW) Continuous integration: Buildbot is back.
  • Buildsystem: A significant number of build system changes:
    • libmicrohttpd and libjansson are now required dependencies.
    • New dependency: libsodium.
    • Fixed an issue with libidn(2) detection.
A detailed list of changes can be found in the ChangeLog and the 0.13.0 bugtracker.

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, Florian Dold, Jonathan Buchanan, t3sserakt, nikita and Martin Schanzenbach.

06 July, 2020 10:00PM

July 02, 2020

health @ Savannah

GNU Health Control Center 3.6.5 supports Weblate

Dear community

As you may know, GNU Health HMIS has migrated its translation server to Weblate.
(see news https://savannah.gnu.org/forum/forum.php?forum_id=9762)

Today, we have released GH Control Center 3.6.5, which supports Weblate for language installation. The syntax is the same, and you won't notice any difference.

To update to the latest GH Control Center, run:

$ cdutil
$ ./gnuhealth-control update

That will fetch and install the latest version, and you're ready to go :)

Happy and Healthy hacking!
Luis

02 July, 2020 09:50PM by Luis Falcon

gnucobol @ Savannah

GnuCOBOL 3.1-rc-1 on alpha.gnu.org

While this version is a release candidate (with an expected full release within 3 months), it is the most stable and complete free COBOL compiler ever available.

Source kits can be found at https://alpha.gnu.org/gnu/gnucobol, the first pre-built binaries are already available and the OS package managers are invited to update their packages.

Compared to the last stable release, 2.2, the list of changes is far too long to include here for the release candidate; it will be provided with the final release, and even there the NEWS entry, which you can check now at https://sourceforge.net/p/gnucobol/code/HEAD/tree/tags/gnucobol-3.1-rc1/NEWS, is quite condensed.

The RC1 received extensive testing over the last months and is backward compatible with GnuCOBOL 2.2 - you are invited to upgrade GnuCOBOL now, or to start your own tests with the release candidate so you are able to update when the final release arrives.

02 July, 2020 07:13AM by Simon Sobisch

Parabola GNU/Linux-libre

ath9k wifi devices may not work with linux-libre 5.7.6

if you have a USB wifi device which uses the ath9k or ath9k_htc kernel module, you should postpone upgrading to any of the 5.7.6 kernels; or the device may not work when you next reboot - PCI devices do not seem to be affected by this bug

watch this bug report for further details

linux-libre and linux-libre-headers 5.7.6 have been pulled from the repos and replaced with 5.7.2; but other kernels remain at 5.7.6 - if you have already upgraded to one of the 5.7.6 kernels, and your wifi does not work, you will need to revert to the previous kernel:

# pacman -Syuu linux-libre

or boot a parabola LiveISO, mount your / partition, and install it with pacstrap:

# mount /dev/sdXN /mnt
# pacstrap /mnt linux-libre

02 July, 2020 01:49AM by bill auger

July 01, 2020

GNU Guix

Securing updates

Software deployment tools like Guix are in a key position when it comes to securing the “software supply chain”—taking source code fresh from repositories and providing users with ready-to-use binaries. We have been paying attention to several aspects of this problem in Guix: authentication of pre-built binaries, reproducible builds, bootstrapping, and security updates.

A couple of weeks ago, we addressed the elephant in the room: authentication of Guix code itself by guix pull, the tool that updates Guix and its package collection. This article looks at what we set out to address, how we achieved it, and how it compares to existing work in this area.

What updates should be protected against

The problem of securing distro updates is often viewed through the lens of binary distributions such as Debian, where the main asset to be protected are binaries themselves. The functional deployment model that Guix and Nix implement is very different: conceptually, Guix is a source distribution, like Gentoo if you will.

Pre-built binaries are of course available and very useful, but they’re optional; we call them substitutes because they’re just that: substitutes for local builds. When you do choose to accept substitutes, they must be signed by one of the keys you authorized (this has been the case since version 0.6 in 2014).
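
For context, on Guix System the set of accepted substitute keys is itself part of the declarative configuration; a minimal sketch of authorizing an extra key might look like this (the key file name is invented):

(service guix-service-type
         (guix-configuration
          (authorized-keys
           ;; Hypothetical public key of an additional substitute
           ;; server, on top of the default keys.
           (append (list (local-file "my-substitute-server.pub"))
                   %default-authorized-guix-keys))))

In a complete operating-system declaration this would more likely be done by adjusting the guix-service-type instance that %base-services already provides (for instance with modify-services) rather than adding a second service.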

Guix consists of source code for the tools as well as all the package definitions—the distro. When users run guix pull, what happens behind the scene is equivalent to git clone or git pull. There are many ways this can go wrong. An attacker can trick the user into pulling code from an alternate repository that contains malicious code or definitions for backdoored packages. This is made more difficult by the fact that code is fetched over HTTPS from Savannah by default. If Savannah is compromised (as happened in 2010), an attacker can push code to the Guix repository, which everyone would pull. The change might even go unnoticed and remain in the repository forever. An attacker with access to Savannah can also reset the main branch to an earlier revision, leading users to install outdated software with known vulnerabilities—a downgrade attack. These are the kind of attacks we want to protect against.

Authenticating Git checkouts

If we take a step back, the problem we’re trying to solve is not specific to Guix and to software deployment tools: it’s about authenticating Git checkouts. By that, we mean that when guix pull obtains code from Git, it should be able to tell that all the commits it fetched were pushed by authorized developers of the project. We’re really looking at individual commits, not tags, because users can choose to pull arbitrary points in the commit history of Guix and third-party channels.

Checkout authentication requires cryptographically signed commits. By signing a commit, a Guix developer asserts that they are the one who made the commit; they may be its author, or they may be the person who applied somebody else’s changes after review. It also requires a notion of authorization: we don’t simply want commits to have a valid signature, we want them to be signed by an authorized key. The set of authorized keys changes over time as people join and leave the project.

To implement that, we came up with the following mechanism and rule:

  1. The repository contains a .guix-authorizations file that lists the OpenPGP key fingerprints of authorized committers.
  2. A commit is considered authentic if and only if it is signed by one of the keys listed in the .guix-authorizations file of each of its parents. This is the authorization invariant.

(Remember that Git commits form a directed acyclic graph (DAG) where each commit can have zero or more parents; merge commits have two parent commits, for instance. Do not miss Git for Computer Scientists for a pedagogical overview!)

Let’s take an example to illustrate. In the figure below, each box is a commit, and each arrow is a parent relationship:

Example commit graph.

This figure shows two lines of development: the orange line may be the main development branch, while the purple line may correspond to a feature branch that was eventually merged in commit F. F is a merge commit, so it has two parents: D and E.

Labels next to boxes show who’s in .guix-authorizations: for commit A, only Alice is an authorized committer, and for all the other commits, both Bob and Alice are authorized committers. For each commit, we see that the authorization invariant holds; for example:

  • commit B was made by Alice, who was the only authorized committer in its parent, commit A;
  • commit C was made by Bob, who was among the authorized committers as of commit B;
  • commit F was made by Alice, who was among the authorized committers of both parents, commits D and E.

The authorization invariant has the nice property that it’s simple to state, and it’s simple to check and enforce. This is what guix pull implements. If your current Guix, as returned by guix describe, is at commit A and you want to pull to commit F, guix pull traverses all these commits and checks the authorization invariant.
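
To make the rule concrete, here is a toy model of the check in Guile; the record type and procedure names are invented for illustration and bear no relation to Guix's actual implementation:

(use-modules (srfi srfi-1)   ;for 'every'
             (srfi srfi-9))  ;for 'define-record-type'

;; Toy commit: its signer, the keys its .guix-authorizations file
;; lists, and its parent commits.  All names here are invented.
(define-record-type <commit>
  (make-commit signing-key authorized-keys parents)
  commit?
  (signing-key     commit-signing-key)
  (authorized-keys commit-authorized-keys)
  (parents         commit-parents))

;; The authorization invariant: a commit is authentic if and only if
;; its signing key is listed by the .guix-authorizations of each of
;; its parents.  (A root commit with no parents is trivially accepted
;; here; the bootstrapping section below addresses that case.)
(define (commit-authentic? commit)
  (every (lambda (parent)
           (member (commit-signing-key commit)
                   (commit-authorized-keys parent)))
         (commit-parents commit)))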

Once a commit has been authenticated, all the commits in its transitive closure are known to be already authenticated. guix pull keeps a local cache of the commits it has previously authenticated, which allows it to traverse only new commits. For instance, if you’re at commit F and later update to a descendant of F, authentication starts at F.

Since .guix-authorizations is a regular file under version control, granting or revoking commit authorization does not require special support. In the example above, commit B is an authorized commit by Alice that adds Bob’s key to .guix-authorizations. Revocation is similar: any authorized committer can remove entries from .guix-authorizations. Key rotation can be handled similarly: a committer can remove their former key and add their new key in a single commit, signed by the former key.
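
Concretely, .guix-authorizations is a small s-expression file; it looks roughly like this (the fingerprints and names below are illustrative):

(authorizations
 (version 0)
 ;; Each entry is an OpenPGP fingerprint followed by an optional,
 ;; purely informational name.
 (("AD17 A21E F8AE D8F1 CC02  DBD9 F8AE D8F1 765C 61E3"
   (name "alice"))
  ("2A39 3FFF 68F4 EF7A 3D29  12AF 6F51 20A0 22FB B2D5"
   (name "bob"))))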

The authorization invariant satisfies our needs for Guix. It has one downside: it prevents pull-request-style workflows. Indeed, merging the branch of a contributor not listed in .guix-authorizations would break the authorization invariant. It’s a good tradeoff for Guix because our workflow relies on patches carved into stone tablets (patch tracker), but it’s not suitable for every project out there.

Bootstrapping

The attentive reader may have noticed that something’s missing from the explanation above: what do we do about commit A in the example above? In other words, which commit do we pick as the first one where we can start verifying the authorization invariant?

We solve this bootstrapping issue by defining channel introductions. Previously, one would identify a channel simply by its URL. Now, when introducing a channel to users, one needs to provide an additional piece of information: the first commit where the authorization invariant holds, and the fingerprint of the OpenPGP key used to sign that commit (it’s not strictly necessary but provides an additional check). Consider this commit graph:

Example commit graph with introduction.

On this figure, B is the introduction commit. Its ancestors, such as A, are considered authentic. To authenticate C, D, E, and F, we check the authorization invariant.

As always when it comes to establishing trust, distributing channel introductions is very sensitive. The introduction of the official guix channel is built into Guix. Users obtain it when they install Guix the first time; hopefully they verify the signature on the Guix tarball or ISO image, as noted in the installation instructions, which reduces chances of getting the “wrong” Guix, but it is still very much trust-on-first-use (TOFU).

For signed third-party channels, users have to provide the channel’s introduction in their channels.scm file, like so:

(channel
  (name 'my-channel)
  (url "https://example.org/my-channel.git")
  (introduction
   (make-channel-introduction
    "6f0d8cc0d88abb59c324b2990bfee2876016bb86"
    (openpgp-fingerprint
     "CABB A931 C0FF EEC6 900D  0CFB 090B 1199 3D9A EBB5"))))

The guix describe command now prints the introduction if there’s one. That way, one can share their channel configuration, including introductions, without having to be an expert.

Channel introductions also solve another problem: forks. Respecting the authorization invariant “forever” would effectively prevent “unauthorized” forks—forks made by someone who’s not in .guix-authorizations. Someone publishing a fork simply needs to emit a new introduction for their fork, pointing to a different starting commit.

Last, channel introductions give a point of reference: if an attacker manipulates branch heads on Savannah to have them point to unrelated commits (such as commits on an orphan branch that do not share any history with the “official” branches), authentication will necessarily fail as it stumbles upon the first unauthorized commit made by the attacker. In the figure above, the red branch with commits G and H cannot be authenticated because it starts from A, which lacks .guix-authorizations and thus fails the authorization invariant.

That’s all for authentication! I’m glad you read this far. At this point you can take a break or continue with the next section on how guix pull prevents downgrade attacks.

Downgrade attacks

An important threat for software deployment tools is downgrade or roll-back attacks. The attack consists in tricking users into installing older, known-vulnerable software packages, which in turn may offer new ways to break into their system. This is not strictly related to the authentication issue we’ve been discussing, except that it’s another important issue in this area that we took the opportunity to address.

Guix saves provenance info for itself: guix describe prints that information, essentially the Git commits of the channels used during guix pull:

$ guix describe
Generation 149  Jun 17 2020 20:00:14    (current)
  guix 8b1f7c0
    repository URL: https://git.savannah.gnu.org/git/guix.git
    branch: master
    commit: 8b1f7c03d239ca703b56f2a6e5f228c79bc1857e

Thus, guix pull, once it has retrieved the latest commit of the selected branch, can verify that it is doing a fast-forward update in Git parlance—just like git pull does, but compared to the previously-deployed Guix. A fast-forward update is when the new commit is a descendant of the current commit. Going back to the figure above, going from commit A to commit F is a fast-forward update, but going from F to A or from D to E is not.

Not doing a fast-forward update would mean that the user is deploying an older version of Guix than the one currently used, or an unrelated version from another branch. In both cases, the user is at risk of ending up installing older, vulnerable software.

By default guix pull now errors out on non-fast-forward updates, thereby protecting from roll-backs. Users who understand the risks can override that by passing --allow-downgrades.

Authentication and roll-back prevention allow users to safely refer to mirrors of the Git repository. If git.savannah.gnu.org is down, one can still update by fetching from a mirror, for instance with:

guix pull --url=https://github.com/guix-mirror/guix

If the repository at this URL is behind what the user already deployed, or if it’s not a genuine mirror, guix pull will abort. In other cases, it will proceed.

Unfortunately, there is no way to answer the general question “is X the latest commit of branch B ?”. Rollback detection prevents just that, rollbacks, but there’s no mechanism in place to tell whether a given mirror is stale. To mitigate that, channel authors can specify, in the repository, the channel’s primary URL. This piece of information lives in the .guix-channel file, in the repository, so it’s authenticated. guix pull uses it to print a warning when the user pulls from a mirror:

$ guix pull --url=https://github.com/guix-mirror/guix
Updating channel 'guix' from Git repository at 'https://github.com/guix-mirror/guix'...
Authenticating channel 'guix', commits 9edb3f6 to 3e51f9e (44 new commits)...
guix pull: warning: pulled channel 'guix' from a mirror of https://git.savannah.gnu.org/git/guix.git, which might be stale
Building from this channel:
  guix      https://github.com/guix-mirror/guix 3e51f9e
…
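
The .guix-channel file itself is another small s-expression at the root of the repository; a minimal sketch declaring nothing but the primary URL might look like this:

(channel
 (version 0)
 ;; Canonical URL of the channel; pulling from any other URL makes
 ;; guix pull print the "might be stale" warning shown above.
 (url "https://example.org/my-channel.git"))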

So far we talked about mechanics in a rather abstract way. That might satisfy the graph theorist or the Git geek in you, but if you’re up for a quick tour of the implementation, the next section is for you!

A long process

We’re kinda celebrating these days, but the initial bug report was opened… in 2016. One of the reasons was that we were hoping the general problem was solved already and we’d “just” have to adapt what others had done. As for the actual design: you would think it can be implemented in ten lines of shell script invoking gpgv and git. Perhaps that’s a possibility, but the resulting performance would be problematic—keep in mind that users may routinely have to authenticate hundreds of commits. So we took a long road, but the end result is worth it. Let’s recap.

Back in April 2016, committers started signing commits, with a server-side hook prohibiting unsigned commits. In July 2016, we had proof-of-concept libgit2 bindings with the primitives needed to verify signatures on commits, passing them to gpgv; later Guile-Git was born, providing good coverage of the libgit2 interface. Then there was a two-year hiatus during which no code was produced in that area.

Everything went faster starting from December 2019. Progress was incremental and may have been hard to follow, even for die-hard Guix hackers, so here are the major milestones:

Whether you’re a channel author or a user, the feature is now fully documented in the manual, and we’d love to get your feedback!

SHA-1

We can’t really discuss Git commit signing without mentioning SHA-1. The venerable crytographic hash function is approaching end of life, as evidenced by recent breakthroughs. Signing a Git commit boils down to signing a SHA-1 hash, because all objects in the Git store are identified by their SHA-1 hash.

Git now relies on a collision attack detection library to mitigate practical attacks. Furthermore, the Git project is planning a hash function transition to address the problem.

Some projects such as Bitcoin Core choose to not rely on SHA-1 at all. Instead, for the commits they sign, they include in the commit log the SHA512 hash of the tree, which the verification scripts check.

Computing a tree hash for each commit in Guix would probably be prohibitively costly. For now, for lack of a better solution, we rely on Git’s collision attack detection and look forward to a hash function transition.

As for SHA-1 in an OpenPGP context: our authentication code rejects SHA-1 OpenPGP signatures, as recommended.

Related work

A lot of work has gone into securing the software supply chain, often in the context of binary distros, sometimes in a more general context; more recent work also looks into Git authentication and related issues. This section attempts to summarize how Guix relates to similar work that we’re aware of in these two areas. More detailed discussions can be found in the issue tracker.

The Update Framework (TUF) is a reference for secure update systems, with a well-structured spec and a number of implementations. TUF is a great source of inspiration to think about this problem space. Many of its goals are shared by Guix. Not all the attacks it aims to protect against (Section 1.5.2 of the spec) are addressed by what’s presented in this post: indefinite freeze attacks, where updates never become available, are not addressed per se (though easily observable), and slow retrieval attacks aren’t addressed either. The notion of role is also something currently missing from the Guix authentication model, where any authorized committer can touch any files, though the model and .guix-authorizations format leave room for such an extension.

However, both in its goals and system descriptions, TUF is biased towards systems that distribute binaries as plain files with associated meta-data. That creates a fundamental impedance mismatch. As an example, attacks such as fast-forward attacks or mix-and-match attacks don’t apply in the context of Guix; likewise, the repository depicted in Section 3 of the spec has little in common with a Git repository.

Developers of OPAM, the OCaml package manager, adapted TUF for use with their Git-based package repository, an effort later updated in the form of Conex, a separate tool to authenticate OPAM repositories. OPAM is interesting because, like Guix, it's a source distro and its package repository is a Git repository containing "build recipes". To date, it appears that opam update itself does not authenticate repositories, though; it's up to users or developers to run Conex.

Another very insightful piece of work is the 2016 paper On omitting commits and committing omissions. The paper focuses on the impact of malicious modifications to Git repository meta-data. An attacker with access to the repository can modify, for instance, branch references, to cause a rollback attack or a “teleport” attack, causing users to pull an older commit or an unrelated commit. As written above, guix pull would detect such attacks. However, guix pull would fail to detect cases where metadata modification does not yield a rollback or teleport, yet gives users a different view than the intended one—for instance, a user is directed to an authentic but different branch rather than the intended one. The “secure push” operation and the associated reference state log (RSL) the authors propose would be an improvement.

Wrap-up and outlook

Guix now has a mechanism that allows it to authenticate updates. If you’ve run guix pull recently, perhaps you’ve noticed additional output and a progress bar as new commits are being authenticated. Apart from that, the switch has been completely transparent. The authentication mechanism is built around the commit graph of Git; in fact, it’s a mechanism to authenticate Git checkouts and in that sense it is not tied to Guix and its application domain. It is available not only for the main guix channel, but also for third-party channels.

To bootstrap trust, we added the notion of channel introductions. These are now visible in the user interface, in particular in the output of guix describe and in the configuration file of guix pull and guix time-machine. While channel configuration remains a few lines of code that users typically paste, this extra bit of configuration might be intimidating. It certainly gives an incentive to provide a command-line interface to manage the user’s list of channels: guix channel add, etc.

The solution here is built around the assumption that Guix is fundamentally a source-based distribution, and is thus completely orthogonal to the public key infrastructure (PKI) Guix uses for the signature of substitutes. Yet, the substitute PKI could probably benefit from the fact that we now have a secure update mechanism for the Guix source code: since guix pull can securely retrieve a new substitute signing key, perhaps it could somehow handle substitute signing key revocation and delegation automatically? Related to that, channels could perhaps advertise a substitute URL and its signing key, possibly allowing users to register those when they first pull from the channel. All this requires more thought, but it looks like there are new opportunities here.

Until then, if you’re a user or a channel author, we’d love to hear from you! We’ve already gotten feedback that these new mechanisms broke someone’s workflow; hopefully it didn’t break yours, but either way your input is important in improving the system. If you’re into security and think this design is terrible or awesome, please do provide feedback.

It’s a long and article describing a long ride on a path we discovered as we went, and it felt like an important milestone to share!

Acknowledgments

Thanks to everyone who provided feedback, ideas, or carried out code review during this long process, notably (in no particular order): Christopher Lemmer Webber, Leo Famulari, David Thompson, Mike Gerwitz, Ricardo Wurmus, Werner Koch, Justus Winter, Vagrant Cascadian, Maxim Cournoyer, Simon Tournier, John Soo, and Jakub Kądziołka. Thanks also to janneke, Ricardo, Marius, and Simon for reviewing an earlier draft of this post.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

01 July, 2020 05:40PM by Ludovic Courtès

Christopher Allan Webber

Some updates: CapTP in progress, Datashards, chiptune experiments, etc

(Originally written as a post for Patreon donors.)

Hello... just figured I'd give a fairly brief update. Since I wrote my last post I've been working hard towards the distributed programming stuff in Goblins.

In general, this involves implementing a protocol called CapTP, which is fairly obscure... the idea is generally to apply the same "object capability security" concept that Goblins already follows but on a networked protocol level. Probably the most prominent other implementation of CapTP right now is being done by the Agoric folks, captp.js. I've been in communication with them... could we achieve interoperability between our implementations? It could be cool, but it's too early to tell. Anyway it's one of those technical areas that's so obscure that I decided to document my progress on the cap-talk mailing list, but that's becoming the length of a small novel... so I guess, beware before you try to read that whole thing. I'm far enough along where the main things work, but not quite everything (CapTP supports such wild things as distributed garbage collection...!!!!)

Anyway, in general I don't think that people get too excited by hearing "backend progress is happening"; I believe that implementing CapTP is even more important than standardizing ActivityPub was in the long run of my life work, but I also am well aware that in general people (including myself!) understand best by seeing an interesting demonstration. So, I do plan another networked demo, akin to the time-travel Terminal Phase demo, but I'm not sure just how fancy it will be (yet). I think I'll have more things to show on that front in 1-2 months.

(Speaking of Goblins and games, I'm putting together a little library called Game Goblin to make making games on top of Goblins a bit easier; it isn't quite ready yet but thought I'd mention it. It's currently going through some "user testing".)

More work is happening on the Datashards front; Serge Wroclawski (project leader for Datashards; I guess you could say I'm "technical engineer") and I have started assembling more documentation and have put together some proto-standards documents. (Warning: WIP WIP WIP!!!) We are exploring with a standards group whether or not Datashards would be a good fit there, but it's too early to talk about that since the standards group is still figuring it out themselves. Anyway, it's taken up a good chunk of time so I figured it was worth mentioning.

So, more to come, and hopefully demos not too far ahead.

But let's end on a fun note. In-between all that (and various things at home, of course), I have taken a bit of what might resemble "downtime" and I'm learning how to make ~chiptunes / "tracker music" with Milkytracker, which is just a lovely piece of software. (I've also been learning more about sound theory and have been figuring out how to compose some of my own samples/"instruments" from code.) Let me be clear, I'm not very good at it, but it's fun to learn a new thing. Here's a dollhouse piano thing (XM file), the start of a haunted video game level (XM file), a sound experiment representing someone interacting with a computer (XM file), and the mandatory demonstration that I've figured out how to do C64-like phase modulation and arpeggios (XM file). Is any of that stuff... "good"? Not really, all pretty amateurish, but maybe in a few months of off-hour experiments it won't be... so maybe some of my future demos / games won't be quite as quiet! ;)

Hope everyone's doing ok out there...

01 July, 2020 12:14AM by Christopher Lemmer Webber

June 30, 2020

GNU Taler news

Exchange independent security audit report published

2020-07: Exchange external security audit completed

We received a grant from the NLnet foundation to pay for an external security audit of the GNU Taler exchange cryptography, code and documentation. CodeBlau has now concluded its audit. You can find the final report here. We have compiled a preliminary response detailing what changes we have already made and which changes we are still planning to make in the future. We thank CodeBlau for their work, and NLnet and the European Commission's Horizon 2020 NGI initiative for funding this work.

30 June, 2020 10:00PM

June 29, 2020

GNUnet News

GNS Specification Milestone 3/4

GNS Technical Specification Milestone 3/4

We are happy to announce the completion of the third milestone for the GNS Specification. The third milestone consists of documenting the GNS zone revocation process. As part of this, we have reworked the proof-of-work algorithms in GNUnet also used for GNS revocations.
The (protocol breaking) changes will be released as part of GNUnet 0.13.0. The specification document LSD001 can be found at:

In preparation for the fourth and last milestone, we have started the IETF process to find a working group and expect to present our work initially at IETF 108.

This work is generously funded by NLnet as part of their Search and discovery fund.

29 June, 2020 10:00PM

June 24, 2020

GNU Guile

GNU Guile 3.0.4 released

We are pleased but also embarrassed to announce GNU Guile 3.0.4. This release fixes the SONAME of libguile-3.0.so, which was wrongfully bumped in 3.0.3. Distributions should use 3.0.4.

Apologies for the inconvenience!

24 June, 2020 03:00PM by Ludovic Courtès (guile-devel@gnu.org)

June 22, 2020

parallel @ Savannah

GNU Parallel 20200622 ('Floyd') released

GNU Parallel 20200622 ('Floyd') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

  Who needs spark when GNU Parallel exists
    -- MatthijsB @MatthijsBrs@twitter

New in this release:

  • No new functionality
  • Bug fixes and man page updates.

News about GNU Parallel:

  • Bug fixes and man page updates.

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 3374ec53bacb199b245af2dda86df6c9
    12345678 3374ec53 bacb199b 245af2dd a86df6c9
    $ md5sum install.sh | grep 029a9ac06e8b5bc6052eac57b2c3c9ca
    029a9ac0 6e8b5bc6 052eac57 b2c3c9ca
    $ sha512sum install.sh | grep f517006d9897747bed8a4694b1acba1b
    40f53af6 9e20dae5 713ba06c f517006d 9897747b ed8a4694 b1acba1b 1464beb4
    60055629 3f2356f3 3e9c4e3c 76e3f3af a9db4b32 bd33322b 975696fc e6b23cfb
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 June, 2020 10:05PM by Ole Tange

health @ Savannah

GNU Health starts the migration to Weblate

Dear all

Our friends from Weblate (www.weblate.org) have provided hosting for GNU Health!

The import of the strings for the official languages is already there (https://hosted.weblate.org/projects/gnu-health/).

During the coming days, we'll organize the teams, adapt the scripts, the links to Mercurial, and so on.

The server translate.gnusolidario.org is now deprecated and will be taken offline by July 1st.

Let me take this opportunity to thank the great Pootle community; Pootle has been the translation system of GNU Health for many years, and your contribution has been immense.

We are looking forward to this new translation experience with Weblate.

All the best
Luis

22 June, 2020 11:32AM by Luis Falcon

mit-scheme @ Savannah

Test release 11.0.90 is available

Please try it out and report any bugs.

This will become the 11.1 release.

22 June, 2020 05:28AM by Chris Hanson

June 21, 2020

GNU Guile

GNU Guile 3.0.3 released

We are pleased to announce GNU Guile 3.0.3, the third bug-fix release of the new 3.0 stable series! This release represents 170 commits by 17 people since version 3.0.2.

The highlight of this release is the addition of a new baseline compiler, used at optimization levels -O1 and -O0. The baseline compiler is designed to generate code fast, for applications where compilation speed matters more than execution time of the generated code. It is around ten times faster than the optimizing continuation-passing style (CPS) compiler.
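
For example, a quick way to use it is to compile a file at a low optimization level with guild (a sketch; my-module.scm is a placeholder file name):

  guild compile -O1 my-module.scm -o my-module.go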

This version also includes a new pipeline procedure to create shell-like process pipelines, improvements to the bitvector interface, and bug fixes for JIT compilation on ARMv7 machines.

See the release announcement for details and the download page to give it a go!

21 June, 2020 09:20PM by Ludovic Courtès (guile-devel@gnu.org)

www-zh-cn @ Savannah

GNU CTT posts an open position.

Dear volunteers:
Dear visitors:
Dear GNU lovers:

We are the GNU Chinese Translators Team (GNU CTT). Our goal is to translate the http://www.gnu.org webpages into Chinese. According to
https://www.gnu.org/help/help.html

Each translation team needs several members who are native speakers of the target language (and proficient in English), as well as at least one member who is a native English speaker (and proficient in the target language).

GNU CTT currently lacks members whose native language is English. If you would like to help, please send an email to <web-translators@gnu.org>, or subscribe to our mailing list <www-zh-cn-translators@gnu.org>.

Thank you.
GNU CTT

21 June, 2020 01:39AM by Wensheng XIE

June 20, 2020

www @ Savannah

GNU is Looking for Native English Speakers for Proofreading and Translations

We at the GNU Project love languages. Here are two main things we do in this area and how you can join to help.

  • To bring our message of software freedom to the widest possible audience, we translate http://www.gnu.org into as many languages as we can. Our volunteer translation teams do a wonderful job and more help is always welcome. Apart from native speakers of the target language, these teams also need at least one member who is a native speaker of English and fairly fluent in the target language. Find out more about the GNU translation teams and contact them directly or write to <web-translators@gnu.org> if you would like to help.
  • English is the default language of the GNU Project. The vast majority of our web pages, manuals and documentation are originally written in English. Occasionally, we like to have the material proofread by native English speakers with a good command of the written language to ensure quality and readability. To help with this, please subscribe to our low-traffic GNU documentation proofreaders list.

Other ways to help GNU

20 June, 2020 06:47PM by Dora Scilipoti

health @ Savannah

Security: GNU Health HMIS Control Center 3.6.4 is out!

Dear all
The GH control center (gnuhealth-control) version 3.6.4 is out, fixing some minor security issues found by the openSUSE security team[1]

You just need to do the following:

1) Log in as the GNU Health user
2) cdutil
3) ./gnuhealth-control update

That is all. The new gnuhealth-control center is now installed. To verify, please check it with

./gnuhealth-control version

If the update reports that, in addition to the gnuhealth-control, there are other components that need to be upgraded, please re-run gnuhealth-control and update the instance, as explained in the wikibook[2]

Please remember to submit any possible vulnerabilities to security@gnuhealth.org

Have a great weekend!
1.- https://bugzilla.opensuse.org/show_bug.cgi?id=1167128#c13
2.- https://en.wikibooks.org/wiki/GNU_Health/Control_Center

20 June, 2020 04:25PM by Luis Falcon

June 15, 2020

GNU Guix

Guix Further Reduces Bootstrap Seed to 25%

We are delighted to announce that the second reduction by 50% of the Guix bootstrap binaries has now been officially released!

The initial set of binaries from which packages are built now weighs in at approximately 60 MiB, a quarter of what it used to be.

In a previous blog post we elaborated on why this reduction, and bootstrappability in general, is so important. One reason is to eliminate, or greatly reduce the attack surface of, a “trusting trust” attack. Last summer at the Breaking Bitcoin conference, Carl Dong gave a fun and remarkably gentle introduction, and at FOSDEM 2020 I also gave a short talk about this. If you choose to believe that building from source is the proper way to do computing, then it follows that the “trusting trust” attack is only a symptom of an incomplete or missing bootstrap story.

Further Reduced Binary Seed bootstrap

Last year, the first reduction removed the GCC, glibc and Binutils binary seeds. The new Further Reduced Binary Seed bootstrap, merged in Guix master last month, removes the “static-binaries tarball” containing GNU Awk, Bash, Bzip2, the GNU Core Utilities, Grep, Gzip, GNU Make, Patch, sed, Tar, and Xz. It replaces them with Gash and Gash Core Utils. Gash is a minimalist POSIX shell written in Guile Scheme, while Gash Core Utils is a Scheme implementation of most of the tools found in GNU Coreutils, as well as the most essential bits of Awk, grep and sed.

After three new GNU Mes releases with numerous Mes C Library updates and fixes, a major update of Gash and the first official Gash Utils release, and the delicate balancing of 17 new bootstrap source packages and versions, the bottom of the package graph now looks like this (woohoo!):

                              gcc-mesboot (4.9.4)
                                      ^
                                      |
                                    (...)
                                      ^
                                      |
               binutils-mesboot (2.14), glibc-mesboot (2.2.5),
                          gcc-core-mesboot (2.95.3)
                                      ^
                                      |
            bash-mesboot (2.05), bzip2-mesboot, gawk-mesboot (3.0.0)
       diffutils-mesboot (2.7), patch-mesboot (2.5.9), sed-mesboot (1.18)
                                      ^
                                      |
                             gnu-make-mesboot (3.82)
                                      ^
                                      |
                                gzip-mesboot (1.2.4)
                                      ^
                                      |
                                  tcc-boot
                                      ^
                                      |
                                  mes-boot
                                      ^
                                      |
                          gash-boot, gash-utils-boot
                                      ^
                                      |
                                      *
                 bootstrap-mescc-tools, bootstrap-mes (~12 MiB)
                            bootstrap-guile (~48 MiB)

full graph

We are excited that the NLnet Foundation has sponsored this work!

However, we aren't done yet; far from it.

Lost Paths

The idea of reproducible builds and bootstrappable software is not very new. Much of it was implemented for the GNU tools in the early 1990s. Working to recreate it today shows us that much of that practice has been forgotten.

Readers who are familiar with the GNU toolchain may have noticed the version numbers of the *-mesboot source packages in this great new bootstrap: They are ancient! That's a problem.

Typically, newer versions of the toolchain fix all kinds of bugs, make the software easier to build, and add support for new CPU architectures, which is great. However, more often than not, new features are introduced at the same time, or dependencies are added that are not necessary for bootstrapping and may raise the bootstrap hurdle. Sometimes newer tools are stricter, or old configure scripts do not recognise newer tool versions.

A trivial example is GNU sed. In the current bootstrap we are using version 1.18, which was released in 1993. Until recently, the latest version of sed we could hope to bootstrap was sed-4.2.2 (2012). Newer releases ship as xz-compressed tarballs only, and xz is notoriously difficult to bootstrap (it needs a fairly recent GCC; try building that without sed).

Luckily, the sed maintainer (Jim Meyering) was happy to correct this, and starting with release sed-4.8 (2020) gzip-compressed tarballs are shipped as well. The situation is similar for the GNU Core Utilities: releases made between 2011 and 2019 will probably be useless for bootstrapping. Confronted with this information, the coreutils maintainer (Pádraig Brady) was likewise happy to ship coreutils-8.32 in gzip compression from now on.

Even these simple cases show that solving bootstrap problems can only be done together: For GNU it really is a project-wide responsibility that needs to be addressed.

Most bootstrap problems or loops are not so easy to solve, and sometimes there are no obvious answers. While these cases make for a delightful puzzle from a bootstrappability perspective, we would love to see the maintainers of GNU software consider bootstrappability and start taking more responsibility for the bootstrap story of their packages.

Towards a Universal, Full Source Bootstrap

Our next target will be a third reduction by ~50%; the Full-Source bootstrap will replace the MesCC-Tools and GNU Mes binaries by Stage0 and M2-Planet.

The Stage0 project by Jeremiah Orians starts everything from ~512 bytes; virtually nothing. Have a look at this incredible project if you haven’t already done so.

We are most grateful and excited that the NLnet Foundation has again decided to sponsor this work!

While the reduced bootstrap currently only applies to the i686-linux and x86_64-linux architectures, we are thrilled that ARM will be joining soon. The Trusted ARM bootstrapping work is progressing nicely: GNU Mes now passes its entire mescc test suite on native ARMv7, and nearly its entire gcc test suite on native ARMv7. Work is underway to compile tcc using that GNU Mes. Adding this second architecture is a very important step towards the creation of a universal bootstrap!

Upcoming releases of Gash and Gash-Utils will allow us to clean up the bottom of the package graph and remove many of the “vintage” packages. In particular, the next version of Gash-Utils will be sophisticated enough to build everything up to gcc-mesboot using only old versions of GNU Make and Gzip. This is largely thanks to improvements to the implementation of Awk, which now includes nearly all of the standard features.

Looking even further into the future, we will likely have to remove the “vintage” GCC-2.95.3 that was such a helpful stepping stone and reach straight for GCC-4.6.4. Interesting times ahead!

About Bootstrappable Builds and GNU Mes

Software is bootstrappable when it does not depend on a binary seed that cannot be built from source. Software that is not bootstrappable---even if it is free software---is a serious security risk for a variety of reasons. The Bootstrappable Builds project aims to reduce the number and size of binary seeds to a bare minimum.

GNU Mes is closely related to the Bootstrappable Builds project. Mes aims to create an entirely source-based bootstrapping path for the Guix System and other interested GNU/Linux distributions. The goal is to start from a minimal, easily inspectable binary (which should be readable as source) and bootstrap into something close to R6RS Scheme.

Currently, Mes consists of a mutually self-hosting Scheme interpreter and C compiler. It also implements a C library. Mes, the Scheme interpreter, is written in about 5,000 lines of simple C. MesCC, the C compiler, is written in Scheme. Together, Mes and MesCC can compile a lightly patched TinyCC that is self-hosting. Using this TinyCC and the Mes C library, it is possible to bootstrap the entire Guix System for i686-linux and x86_64-linux.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

15 June, 2020 12:00PM by Jan (janneke) Nieuwenhuizen

June 07, 2020

www @ Savannah

Successful Resistance against Nonfree Software in Schools

The GNU Education Team presents examples of teachers, students, parents, free software advocates and the community at large who are taking action to stop the use of nonfree programs in schools and are succeeding. Stay tuned for new cases.

https://www.gnu.org/education/successful-resistance-against-nonfree-software.html

07 June, 2020 03:31AM by Dora Scilipoti

Resisting Proprietary Software

Teachers, students, parents, free software advocates and the community at large are taking action to stop the use of nonfree programs in schools.

The recent health emergency situation caused by COVID-19 presented a new challenge. Traditional in-person classes were suddenly disallowed, and overnight thousands of schools around the world were confronted with a decision to make: either suspend their teaching activities entirely or comply by switching to online classes.

Here is what we can do.

https://www.gnu.org/education/resisting-proprietary-software.html

07 June, 2020 03:30AM by Dora Scilipoti

June 03, 2020

Andy Wingo

a baseline compiler for guile

Greets, my peeps! Today's article is on a new compiler for Guile. I made things better by making things worse!

The new compiler is a "baseline compiler", in the spirit of what modern web browsers use to get things running quickly. It is a very simple compiler whose goal is speed of compilation, not speed of generated code.

Honestly I didn't think Guile needed such a thing. Guile's distribution model isn't like the web, where every page you visit requires the browser to compile fresh hot mess; in Guile I thought it would be reasonable for someone to compile once and run many times. I was never happy with compile latency but I thought it was inevitable and anyway amortized over time. Turns out I was wrong on both points!

The straw that broke the camel's back was Guix, which defines the graph of all installable packages in an operating system using Scheme code. Lately it has been apparent that when you update the set of available packages via a "guix pull", Guix would spend too much time compiling the Scheme modules that contain the package graph.

The funny thing is that it's not important that the package definitions be optimized; they just need to be compiled in a basic way so that they are quick to load. This is the essential use-case for a baseline compiler: instead of trying to make an optimizing compiler go fast by turning off all the optimizations, just write a different compiler that goes from a high-level intermediate representation straight to code.

So that's what I did!

it don't do much

The baseline compiler skips any kind of flow analysis: there's no closure optimization, no contification, no unboxing of tagged numbers, no type inference, no control-flow optimizations, and so on. The only whole-program analysis that is done is a basic free-variables analysis so that closures can capture variables, as well as assignment conversion. Otherwise the baseline compiler just does a traversal over programs as terms of a simple tree intermediate language, emitting bytecode as it goes.

Interestingly the quality of the code produced at optimization level -O0 is pretty much the same.

This graph shows the performance of code generated by the CPS compiler relative to the new baseline compiler, at optimization level 0. Bars below the line mean the CPS compiler produces slower code; bars above mean CPS makes faster code. Note that the Y axis is logarithmic.

The tests in which -O0 CPS wins are mostly because the CPS-based compiler does a robust closure optimization pass that reduces allocation rate.

At optimization level -O1, which adds partial evaluation over the high-level tree intermediate language and support for inlining "primitive calls" like + and so on, I am not sure why CPS peels out in the lead. No additional important optimizations are enabled in CPS at that level. That's probably something to look into.

Note that the baseline of this graph is optimization level -O1, with the new baseline compiler.

But as I mentioned, I didn't write the baseline compiler to produce fast code; I wrote it to produce code fast. So does it actually go fast?

Well against the -O0 and -O1 configurations of the CPS compiler, it does excellently:

Here you can see comparisons between what will be Guile 3.0.3's -O0 and -O1, compared against their equivalents in 3.0.2. (In 3.0.2 the -O1 equivalent is actually -O1 -Oresolve-primitives, if you are following along at home.) What you can see is that at these optimization levels, for these 8 files, the baseline compiler is around 4 times as fast.

If we compare to Guile 3.0.3's default -O2 optimization level, or -O3, we see bigger disparities:

Which is to say that Guile's baseline compiler runs at about 10x the speed of its optimizing compiler, which incidentally is similar to what I found for WebAssembly compilers a while back.
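
A rough way to compare compile times on your own code is simply to time the two levels (a sketch; the file name is a placeholder):

  time guild compile -O1 big-module.scm -o /tmp/baseline.go
  time guild compile -O2 big-module.scm -o /tmp/cps.go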

Also of note is that -O0 and -O1 take essentially the same time, with -O1 often taking less time than -O0. This is because partial evaluation can make the program smaller, at a cost of being less straightforward to debug.

Similarly, -O3 usually takes less time than -O2. This is because -O3 is allowed to assume top-level bindings that aren't exported from a module can be transformed to lexical bindings, which are more available for contification and inlining, which usually leads to smaller programs; it is a similar debugging/performance tradeoff to the -O0/-O1 case.

But what does one gain when choosing to spend 10 times more on compilation? Here I have a gnarly graph that plots performance on some microbenchmarks for all the different optimization levels.

Like I said, it's gnarly, but the summary is that -O1 typically gets you a factor of 2 or 4 over -O0, and -O2 often gets you another factor of 2 above that. -O3 is mostly the same as -O2 except in magical circumstances like the mbrot case, where it adds an extra 16x or so over -O2.

worse is better

I haven't seen the numbers yet of this new compiler in Guix, but I hope it can have a good impact. Already in Guile itself though I've seen a couple interesting advantages.

One is that because it produces code faster, Guile's bootstrap from source can take less time. There is also a felicitous feedback effect in that because the baseline compiler is much smaller than the CPS compiler, it takes less time to macro-expand, which reduces bootstrap time (as bootstrap has to pay the cost of expanding the compiler, until the compiler is compiled).

The second fortunate result is that now I can use the baseline compiler as an oracle for the CPS compiler, when I'm working on new optimizations. There's nothing worse than suspecting that your compiler miscompiled itself, after all, and having a second compiler helps keep me sane.

stay safe, friends

The code, you ask? Voici.

Although this work has been ongoing throughout the past month, I need to add some words on the now before leaving you: there is a kind of cognitive dissonance between nerding out on compilers in the comfort of my home, rain pounding on the patio, and at the same time the world on righteous fire. I hope it is clear to everyone by now that the US police are an essentially racist institution: they harass, maim, and murder Black people at much higher rates than whites. My heart is with the protestors. Godspeed to you all, from afar. At the same time, all my non-Black readers should reflect on the ways they participate in systems that support white supremacy, and on strategies to tear them down. I know I will be. Stay safe, wear eye protection, and until next time: peace.

03 June, 2020 08:39PM by Andy Wingo

Aleksander Morgado

QMI and MBIM in Python, Javascript…

This is what introspection looks like

The libqmi and libmbim libraries are getting more popular every day for controlling QMI or MBIM based devices. One of the things I’ve noticed, though, is that lots of users are writing applications in e.g. Python but then running qmicli or mbimcli commands, and parsing the outputs. This approach may work, but there is absolutely no guarantee that the format of the output printed by the command line programs will be kept stable across new releases. In addition, the way these operations are performed may be suboptimal (e.g. allocating QMI clients for each operation, instead of reusing them).
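
For reference, the kind of one-shot command line call such applications shell out to and then parse looks roughly like this (a sketch; the device path is a placeholder, and the textual output is exactly what is not guaranteed to stay stable):

  qmicli -d /dev/cdc-wdm0 --dms-get-capabilities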

Since the new stable libqmi 1.26 and libmbim 1.24 releases, these libraries integrate GObject Introspection support for all their types, and that provides a much better integration within Python applications (or really, any other language supported by GObject Introspection).

The only drawback of using the libraries in this way, if you’re already using and parsing command line interface commands, is that you would need to go deep into how the protocol works in order to use them.

For example, in order to run a DMS Get Capabilities operation, you would need to create a Qmi.Device first, open it, then allocate a Qmi.Client for the DMS service, then run the Qmi.Client.get_capabilities() operation, receive and process the response with Qmi.Client.get_capabilities_finish(), and parse the result with the per-TLV reader method, e.g. output.get_info() to process the Info TLV. Once the client is no longer needed, it would need to be explicitly released before exiting. A full example doing just this is provided in the libqmi sources.

In the case of MBIM operations, there is no need for the extra step of allocating a per-service client, but instead, the user should be more aware of the actual types of messages being transferred, in order to use the correct parsing operations. For example, in order to query device capabilities you would need to create a Mbim.Device first, open it, create a message to query the device capabilities, send the message, receive and process the response, check whether an error is reported, and if it isn’t, fully parse it. A full example doing just this is provided in the libmbim sources.

Of course, all these low level operations can also be done through the qmi-proxy or mbim-proxy, so that ModemManager or other programs can be running at the same time, all sharing access to the same QMI or MBIM ports.

P.S.: not a true Python or GObject Introspection expert here, so please report any issue found or improvements that could be done 😀

And special thanks to Vladimir Podshivalov who is the one that started the hard work of setting everything up in libqmi. Thank you!

Enjoy!

03 June, 2020 09:56AM by aleksander

May 28, 2020

www-zh-cn @ Savannah

FSF gives freedom-respecting videoconferencing to all associate members

Dear Chinese Translators:

Are you interested in having a video conference using Jitsi?

Because you are a valued associate member of the Free Software Foundation (FSF), we are now offering you free "as in freedom" videoconferencing, to help you push back against increased societal pressure to use nonfree software for communicating with collaborators, friends, and loved ones during the COVID-19 pandemic, and after.

Try out our FSF associate member videoconferencing

Only current FSF members can create a channel on the server, but nonmembers are then able to join you. It's a good opportunity to showcase why you are an FSF member!

We have been raising the alarm about encroachments upon user freedom by popular remote communication tools since social distancing guidelines were issued. You might have seen our recent publications warning users about widely used nonfree applications for remote communication and education, like Zoom.

As promised at LibrePlanet 2020, we have formed a working group to document and address major issues facing free software communication platforms, and this project is part of that effort. Another initiative in our free communication toolbox is a collaborative resource page created to steer users to applications that respect them. This will help you and the people you care about to stay away from conferencing tools like Zoom, which requires users to give up their software-related freedoms, and which has been a recent focal point of criticism due to problems ranging from security issues to privacy violations.

The platform we use to offer ethical videoconferencing access is Jitsi Meet. We used it previously to stream and record our annual LibrePlanet conference for an online audience after the COVID-19 pandemic forced us to cancel the in-person event. Choosing Jitsi Meet is only the first step to addressing the problems posed to freedom by services like Zoom and Facebook. Even users that start a call via a server running Jitsi could still be vulnerable if that server depends on or shares information with third parties. The FSF made changes to the code we are running, in order to enhance privacy and software freedom, and published the source code, to motivate others to host their own instances. The FSF instance does not use any third party servers for network initialization, and does not recommend or link to any potentially problematic services.

How to communicate freely with everyone you know

In order to provide a sustainable and reliable service, we are offering the ability to create conversations on the server exclusively to associate members, and it is only intended for personal, noncommercial use. You can create a channel by logging into the server using your member credentials (your account username is wxie). Any person or group can then participate in the conversation. Nonmembers can be invited, but cannot start a channel.

To use the system, follow these steps:

    Go to https://jitsi.member.fsf.org/;

    Create a room (for privacy reasons it is better to use something random as a name);

    Click on "I am the host" in the modal window to be asked for your membership credentials.

You are now the moderator of the room. Other guests can join using the same URL, without needing to login. For extra privacy, we recommend giving the room a password by clicking on the "i" icon in the bottom right.

best regards,
wxie

28 May, 2020 11:10PM by Wensheng XIE

FSF News

Free Software Foundation announces freedom-respecting videoconferencing for its associate members

The FSF has been raising the alarm about encroachments upon freedom by remote communication tools since social distancing guidelines were issued. The FSF's new videoconferencing service powered by free software comes after several of its recent publications warned users about widely used nonfree applications for remote communication and education, like Zoom.

"The freedoms to associate and communicate are some of our most important. To have the means to exercise these freedoms online controlled by gatekeepers of despotic software is always dangerous and unacceptable, only more so when we can't safely gather in person," executive director John Sullivan explains. "We are a small nonprofit and can't provide hosting for the entire world, but we want to do our part. By offering feature-rich videoconferencing in freedom to our community of supporters, and sharing how others can do it, too, we demonstrate that it is possible to do this kind of communication in an ethical way."

This project came out of the working group the FSF established to document and address major issues facing free software communication platforms. Another initiative in its free communication toolbox is a collaborative resource page created to steer users to applications that respect them. The goal is to help users avoid conferencing tools like Zoom, which requires users to give up their software-related freedoms, and which has been a recent focal point for criticism due to problems ranging from security issues to privacy violations.

Zoom is not the only nonfree communication software that has received scrutiny recently while surging in popularity. Facebook's recently launched Messenger Rooms service may offer tools to keep users out, but it is not encrypted, nor does it offer protection from the ongoing data sharing issues that are inherent to the company. Google Meet, Microsoft Teams, and Webex were also reported to be collecting more data than users realized. These kinds of problems, the FSF argues, are examples of what happens when the terms of the code users are running prohibit them from inspecting or improving it for themselves and their communities.

The platform the FSF will use to offer ethical videoconferencing access is Jitsi Meet. Jitsi Meet was also used when the COVID-19 pandemic forced the FSF to bring its annual LibrePlanet conference online. Choosing Jitsi Meet is the first step to addressing the problems posed to freedom by services like Zoom and Facebook. However, even users that start a call via a server running Jitsi could still be vulnerable, if that server depends on or shares information with third parties. The FSF made changes to the code it is running to enhance privacy and software freedom, and published the source code. The FSF instance does not use any third party servers for network initialization, and does not recommend or link to any potentially problematic services.

Jitsi Meet initiates an encrypted peer-to-peer conference when there are only two participants, but achieving end-to-end encryption for more than two people is not yet possible. FSF chief technical officer Ruben Rodriguez elaborates: "For any multiparticipant conversation, there will always be encryption at the network level, but you still have to place some level of trust in the server operator that processes your video stream. We are offering what is currently possible when it comes to multiparticipant privacy, and we are doing it on machines that we physically own." The FSF servers do not store any voice, video, or messages from calls, and logging is minimal and for the purpose of troubleshooting and abuse prevention only. According to its Web site, Jitsi is working to implement end-to-end encryption for multiple callers, and the FSF has confirmed plans to implement the improvements as soon as they become available.

Sullivan provided further comment: "The FSF is offering people a chance to keep their freedom and remain in touch at the same time. With these services, you usually have to sacrifice your freedom for the ability to stay in touch with the people you care about, and place your data in the hands of an organization you don't know. Our members trust the FSF not to compromise their data, and this way, we can offer both."

Associate members of the FSF pay a $10 USD monthly fee, which is discounted to $5 USD for students. An FSF associate membership will provide users with the ability to create their own meeting rooms for personal, noncommercial use, which they can use to invite others to join regardless of their location or membership status.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux.

Associate members are critical to the FSF, since they contribute to the existence of the foundation and help propel the movement forward. Besides gratis access to the FSF Jitsi Meet instance, they receive a range of additional benefits. Donations to support the FSF's work can be made at https://my.fsf.org/donate. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contact

Zoë Kooyman
Program Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

28 May, 2020 05:30PM

May 27, 2020

GNUnet News

GNUnet Hacker Meeting 2020

Online GNUnet Hacker Meeting in June 2020

We are happy to announce that we will have a GNUnet Hacker Meeting from 17-21 of June 2020 taking place online. For more information see here.

27 May, 2020 10:00PM

May 23, 2020

parallel @ Savannah

GNU Parallel 20200522 ('Kraftwerk') released

GNU Parallel 20200522 ('Kraftwerk') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

  GNU Parallel: dead simple process-level parallelization of ad hoc tasks.
  Write for a chunk, let gnu manage the splitting, permutations and pool concurrency.
    -- Nick Ursa @nickursa@twitter

New in this release:

  • While running a job $PARALLEL_JOBSLOT is the jobslot of the job. It is equal to {%} unless the job is being retried. See {%} for details.
  • While running a job $PARALLEL_SSHLOGIN is the sshlogin line with number of cores removed. E.g. '4//usr/bin/specialssh user@host' becomes: '/usr/bin/specialssh user@host'
  • While running a job $PARALLEL_SSHHOST is the host part of an sshlogin line. E.g. '4//usr/bin/specialssh user@host' becomes: 'host'
  • --plus activates the replacement strings {slot} = $PARALLEL_JOBSLOT, {sshlogin} = $PARALLEL_SSHLOGIN, {host} = $PARALLEL_SSHHOST (see the sketch after this list)
  • Bug fixes and man page updates.
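
As a quick sketch of the new replacement strings listed above (run locally here; with -S the {host} and {sshlogin} strings would show the remote side instead):

  parallel --plus -j2 echo "job {#} of {}: slot {slot}" ::: A B C D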

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 3374ec53bacb199b245af2dda86df6c9
    12345678 3374ec53 bacb199b 245af2dd a86df6c9
    $ md5sum install.sh | grep 029a9ac06e8b5bc6052eac57b2c3c9ca
    029a9ac0 6e8b5bc6 052eac57 b2c3c9ca
    $ sha512sum install.sh | grep f517006d9897747bed8a4694b1acba1b
    40f53af6 9e20dae5 713ba06c f517006d 9897747b ed8a4694 b1acba1b 1464beb4
    60055629 3f2356f3 3e9c4e3c 76e3f3af a9db4b32 bd33322b 975696fc e6b23cfb
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

23 May, 2020 07:43PM by Ole Tange

May 21, 2020

freeipmi @ Savannah

FreeIPMI 1.6.5

https://ftp.gnu.org/gnu/freeipmi/freeipmi-1.6.5.tar.gz

- Add FRU parsing workaround for Fujitsu Primergy RX1330, in which a CEh is used to indicate that no FRU data is available.
- Misc minor fixes.

21 May, 2020 11:04PM by Albert Chu

May 20, 2020

gnuastro @ Savannah

Gnuastro 0.12 released

The 12th release of GNU Astronomy Utilities (Gnuastro) is now available. Please see the announcement for more.

20 May, 2020 05:26PM by Mohammad Akhlaghi

May 18, 2020

denemo @ Savannah

Release 2.4 now available

New Features
        Omission Criteria
            A lightweight alternative to Score Layouts
            A single flag turns on/off features of the score
        Swing Playback
            Playback with altered note durations
            Use for Jazz swing and notes inégales
        Page Turner/Annotator
            Annotate while playing from digital score
            Page turn digital score from pedals
        New from Current
            Create a new score using the current one as template
            Use for books of songs, sonatas etc to keep style uniform
Bug Fixes
        Easier object edit interface
        After Grace command now fully automatic
        Crash on Windows during delete measure all staffs
        Template save bugs fixed
        Assign Instrument command in score with voices fixed.

18 May, 2020 01:41PM by Richard Shann

May 17, 2020

GNU Guix

GNU Shepherd user services

One of the things which sets Guix apart from other GNU/Linux distributions is that it uses GNU Shepherd instead of the now ubiquitous systemd. A side effect of this is that user systemd units do not work on Guix System. Love, hate or extremely ambivalent toward systemd, this means that users cannot rely on already written systemd unit files for their regular user-level services.

There are a couple of benefits to using GNU Shepherd, and not all of them are due to it already being installed on Guix. Becoming comfortable with using Shepherd and understanding how to write and edit Shepherd service configurations makes the transition from other GNU/Linux distributions to Guix System easier. More complex services with their own logic tree, using the full power of GNU Guile, are also possible. This means you can have one service that behaves differently if it's running on a different system or architecture without needing to call out to shell scripts or using minimally different service definitions.

The GNU Shepherd manual suggests putting all the services inside a monolithic init.scm file, located by default at $XDG_CONFIG_DIR/shepherd/init.scm. While this does make it easy to keep everything in one place, it does create one glaring issue: any changes to the file mean that all the services need to be stopped and restarted in order for any changes to take place.

Luckily there's a nice function called scandir hiding in ice-9 ftw which returns a list of all files in a specified directory (with options for narrowing down the list or sorting it). This means that our init.scm can contain a minimum of code and all actual services can be loaded from individual files.

First the minimal init.scm:

(use-modules (shepherd service)
             ((ice-9 ftw) #:select (scandir)))

;; Load all the files in the directory 'init.d' with a suffix '.scm'.
(for-each
  (lambda (file)
    (load (string-append "init.d/" file)))
  (scandir (string-append (dirname (current-filename)) "/init.d")
           (lambda (file)
             (string-suffix? ".scm" file))))

;; Send shepherd into the background
(action 'shepherd 'daemonize)

Let's take a sample service for running syncthing, as defined in $XDG_CONFIG_DIR/shepherd/init.d/syncthing.scm:

(define syncthing
  (make <service>
    #:provides '(syncthing)
    #:docstring "Run `syncthing' without calling the browser"
    #:start (make-forkexec-constructor
              '("syncthing" "-no-browser")
              #:log-file (string-append (getenv "HOME")
                                        "/log/syncthing.log"))
    #:stop (make-kill-destructor)
    #:respawn? #t))
(register-services syncthing)

(start syncthing)

As with any other shepherd service it is defined and registered, and in this case it will start automatically. When the file is loaded by shepherd after being discovered by scandir everything works exactly as though the service definition were located directly inside the init.scm.

Now let's make a change. Since syncthing already has a -logfile flag and built-in log rotation, that sounds better than using shepherd's #:log-file option. First we'll make our changes to the service:

(define syncthing
  (make <service>
    #:provides '(syncthing)
    #:docstring "Run `syncthing' without calling the browser"
    #:start (make-forkexec-constructor
              '("syncthing" "-no-browser"
                "-logflags=3" ; prefix with date & time
                "-logfile=/home/user/log/syncthing.log"))
    #:stop (make-kill-destructor)
    #:respawn? #t))
(register-services syncthing)

(start syncthing)

Now we stop syncthing:

$ herd stop syncthing

And we load the new service:

$ herd load root ~/.config/shepherd/init.d/syncthing.scm

This allows for quickly iterating on services without needing to stop all the services! Let's take a look at another service:

(define fccache
  (make <service>
    #:provides '(fccache)
    #:docstring "Run 'fc-cache -frv'"
    #:start (make-forkexec-constructor
              '("guix" "environment" "--ad-hoc" "fontconfig" "--"
                "fc-cache" "-frv")
              #:log-file (string-append (getenv "HOME")
                                        "/log/fccache.log"))
    #:one-shot? #t))

(register-services fccache)

In this example I want to refresh my font cache but I don't want to actually install fontconfig either system-wide or in my profile.

$ which fc-cache
which: no fc-cache in (/home/user/.config/guix/current/bin:/home/user/.guix-profile/bin:/home/user/.guix-profile/sbin:/run/setuid-programs:/run/current-system/profile/bin:/run/current-system/profile/sbin)
$ herd start fccache
Service fccache has been started.

Of course we can import other modules and leverage the code already written there. In this case, instead of using the command string "guix environment --ad-hoc fontconfig -- fc-cache -frv", let's use the guix-environment function already available in (guix scripts environment):

(use-modules (guix scripts environment))

(define fccache
  (make <service>
    #:provides '(fccache)
    #:docstring "Run 'fc-cache -frv'"
    #:start (lambda () ; Don't run immediately when registered!
              (guix-environment "--ad-hoc" "fontconfig" "--" "fc-cache" "-frv"))
    #:one-shot? #t))

(register-services fccache)
$ herd load root ~/.config/shepherd/init.d/fccache.scm
Loading /home/user/.config/shepherd/init.d/fccache.scm.
$ herd start fccache
/gnu/store/hbqlzgd8hcf6ndcmx7q7miqrsxb4dmkk-gs-fonts-8.11/share/fonts: caching, new cache contents: 0 fonts, 1 dirs
/gnu/store/hbqlzgd8hcf6ndcmx7q7miqrsxb4dmkk-gs-fonts-8.11/share/fonts/type1: caching, new cache contents: 0 fonts, 1 dirs
/gnu/store/hbqlzgd8hcf6ndcmx7q7miqrsxb4dmkk-gs-fonts-8.11/share/fonts/type1/ghostscript: caching, new cache contents: 35 fonts, 0 dirs
/home/user/.guix-profile/share/fonts: caching, new cache contents: 0 fonts, 7 dirs
/home/user/.guix-profile/share/fonts/opentype: caching, new cache contents: 8 fonts, 0 dirs
/home/user/.guix-profile/share/fonts/otf: caching, new cache contents: 12 fonts, 0 dirs
/home/user/.guix-profile/share/fonts/terminus: caching, new cache contents: 18 fonts, 0 dirs
/home/user/.guix-profile/share/fonts/truetype: caching, new cache contents: 58 fonts, 0 dirs
/home/user/.guix-profile/share/fonts/ttf: caching, new cache contents: 12 fonts, 0 dirs
/home/user/.guix-profile/share/fonts/type1: caching, new cache contents: 0 fonts, 1 dirs
/home/user/.guix-profile/share/fonts/type1/ghostscript: caching, new cache contents: 35 fonts, 0 dirs
/home/user/.guix-profile/share/fonts/woff: caching, new cache contents: 1 fonts, 0 dirs
/run/current-system/profile/share/fonts: skipping, no such directory
/home/user/.local/share/fonts: skipping, no such directory
/home/user/.fonts: skipping, no such directory
/gnu/store/hbqlzgd8hcf6ndcmx7q7miqrsxb4dmkk-gs-fonts-8.11/share/fonts/type1: skipping, looped directory detected
/home/user/.guix-profile/share/fonts/opentype: skipping, looped directory detected
/home/user/.guix-profile/share/fonts/otf: skipping, looped directory detected
/home/user/.guix-profile/share/fonts/terminus: skipping, looped directory detected
/home/user/.guix-profile/share/fonts/truetype: skipping, looped directory detected
/home/user/.guix-profile/share/fonts/ttf: skipping, looped directory detected
/home/user/.guix-profile/share/fonts/type1: skipping, looped directory detected
/home/user/.guix-profile/share/fonts/woff: skipping, looped directory detected
/gnu/store/hbqlzgd8hcf6ndcmx7q7miqrsxb4dmkk-gs-fonts-8.11/share/fonts/type1/ghostscript: skipping, looped directory detected
/home/user/.guix-profile/share/fonts/type1/ghostscript: skipping, looped directory detected
/var/cache/fontconfig: not cleaning unwritable cache directory
/home/user/.cache/fontconfig: cleaning cache directory
/home/user/.fontconfig: not cleaning non-existent cache directory
fc-cache: succeeded
herd: exception caught while executing 'start' on service 'fccache':
Throw to key `quit' with args `(0)'.

The problem with this approach is that guix-environment returns the exit code of the programs it calls, while #:start expects a constructor to return #t or #f, so there's some work to be done here.

This was just a quick peek into what's possible with GNU Shepherd when run as a user. Next time we'll take a look at integrating mcron to replicate some of systemd's timer functionality.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

17 May, 2020 08:00PM by Efraim Flashner

May 16, 2020

health @ Savannah

GNU Health HMIS patchset 3.6.4 released

Dear community

GNU Health 3.6.4 patchset has been released!

Priority: High

Table of Contents

  • About GNU Health Patchsets
  • Updating your system with the GNU Health control Center
  • Summary of this patchset
  • Installation notes
  • List of other issues related to this patchset

About GNU Health Patchsets

We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your production system up-to-date with the latest patches.

Patches and Patchsets maximize uptime for production systems, and keep your system updated, without the need to do a whole installation.

NOTE: Patchsets are applied on previously installed systems only. For new, fresh installations, download and install the whole tarball (ie, gnuhealth-3.6.4.tar.gz)

Updating your system with the GNU Health control Center

Starting with the GNU Health 3.x series, you can do automatic updates on the GNU Health HMIS kernel and modules using the GNU Health control center program.

Please refer to the administration manual section (https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.

Summary of this patchset

GNU Health 3.6.4 includes:
The most relevant features on this version are:

  • health_contact_tracing package: Allows tracing people who have been in contact with a person suspected of being positive for an infectious disease. Name, demographics, place and date of contact, sanitary region (operational sector), type of contact, exposure risk and follow-up status are some of the information recorded per contact.
  • Epidemiological Surveillance: A new report that provides epidemiological information on a specific health condition: data on the prevalence of the disease as well as its incidence over a period of time. It produces epi curves for new confirmed cases and for deaths related to the disease (both immediate and underlying causes) taken from death certificates. It also shows very relevant charts on the affected population from a demographic and socioeconomic point of view (age, gender, ethnicity, socioeconomic status).
  • Lab and lab crypto packages: When a disease is confirmed from a positive lab test result, GNU Health LIMS automatically includes the health condition in the patient's medical history upon validation by the lab manager.
  • GH Control center and gnuhealth-setup have been updated.

Installation Notes

You must apply previous patchsets before installing this patchset. If your patchset level is 3.6.3, then just follow the general instructions. You can find the patchsets at GNU Health main download site at GNU.org (https://ftp.gnu.org/gnu/health/)

In most cases, GNU Health Control center (gnuhealth-control) takes care of applying the patches for you. 

Pre-requisites for upgrade to 3.6.4: Matplotlib (You can skip this step if you are doing a fresh installation.)
If you are upgrading from 3.6.3, you need to install the matplotlib package:

$ pip3 install --upgrade --user matplotlib

Now follow the general instructions at

After applying the patches, make a full update of your GNU Health database as explained in the documentation.

When running "gnuhealth-control" for the first time, you will see the following message: "Please restart now the update with the new control center" Please do so. Restart the process and the update will continue.

  • Restart the GNU Health server

List of other issues and tasks related to this patchset

  • bug #58104: view_attributes() method should extend list of attributes
  • bug #58358: Need current SES in main patient info
  • task #15563: Assign health condition from a confirmed lab
  • task #15562: ((updated)) Include coronavirus COVID-19 in ICD10 codes

For detailed information about each issue, you can visit https://savannah.gnu.org/bugs/?group=health

For detailed information about each task, you can visit https://savannah.gnu.org/task/?group=health

For detailed information you can read about Patches and Patchsets

Happy and healthy hacking !

--
Dr. Luis Falcon, MD, MSc
President, GNU Solidario
GNU Health: Freedom and Equity in Healthcare
http://www.gnuhealth.org
Fingerprint: ACBF C80F C891 631C 68AA 8DC8 C015 E1AE 0098 9199

16 May, 2020 05:42PM by Luis Falcon

May 13, 2020

Christopher Allan Webber

Departing Libre Lounge

Over the last year and a half I've had a good time presenting on Libre Lounge with my co-host Serge Wroclawski. I'm very proud of the topics we've decided to cover, of which there are quite a few good ones in the archive, and the audience the show has had is just the best.

However, I've decided to depart the show... Serge and I continue to be friends (and are still working on a number of projects together, such as Datashards and the recently announced grant), but in terms of the podcast I think we'd like to take things in different creative directions.

This is probably not the end of me doing podcasting, but if I start something up again it'll be a bit different in its structure... and you can be sure you'll hear about it here and on my fediverse account and over at the birdsite.

In the meanwhile, I look forward to continuing to tune into Libre Lounge, but as a listener.

Thanks for all the support, Libre Loungers!

13 May, 2020 07:13PM by Christopher Lemmer Webber

Spritely's NLNet grant: Interface Discovery for Distributed Systems

I've been putting off making this blogpost for a while because I kept thinking, "I should wait to do it until I finish making some sort of website for Spritely and make a blogpost there!" Which, in a sense is a completely reasonable thought because right now Spritely's only "website" is a loose collection of repositories, but I'd like something that provides a greater narrative for what Spritely is trying to accomplish. But that also kind of feels like a distraction (or maybe I should just make a very minimal website) when there's something important to announce... so I'm just doing it here (where I've been making all the other Spritely posts so far anyway).

Spritely is an NLnet (in conjunction with the European Commission / Next Generation Internet initiative) grant recipient! Specifically, we have received a grant for "Interface Discovery for Distributed Systems"! I'll be implementing the work alongside Serge Wroclawski.

There are two interesting sub-phrases there: "Interface Discovery" and "Distributed Systems". Regarding "distributed systems", we should really say "mutually suspicious open-world distributed systems". Those extra words change some of the requirements; we have to assume we'll be told about things we don't understand, and we have to assume that many objects we interact with may be opaque to us... they might lie about what kind of thing they are.

Choosing how to name interfaces then directly ties into something I wrote about here more recently, namely content addressed vocabulary.

I wrote up more ideas and details about the interfaces ideas in an email to cap-talk, so you can read more there if you like... but I think more details about the interfaces thoughts than that can wait until we publish a report about it (and publishing a report is baked into the grant).

The other interesting bit though is the "distributed" aspect; in order to handle distributed computation and object interaction, we need to correctly design our protocols. Thankfully there is a lot of good prior art to work from, usually some variant of "CapTP" (Capability Transport Protocol), as implemented in its original form by E, taking on a bit of a different form in the Waterken project, adapted in Cap'N Proto, as well as with the new work happening over at Agoric. Each of these variants of the core CapTP ideas has tried to tackle some different use cases, and Goblins has needs of its own to cover. Is there a possibility of convergence? Possibly... I am trying to understand the work of and communicate with the folks over at Agoric, but I think it's a bit too early to be conclusive about anything. Regardless, it'll be a major milestone once Spritely Goblins is able to actually live up to its promise of distributed computation, and work on this is basically the next step.

When I first announced Spritely about a year and a half ago, I included a section that said "Who's going to pay for all this?" to which I then said, "I don't really have a funding plan, so I guess this is kind of a non-answer. However, I do have a Patreon account you could donate to." To be honest, I was fairly nervous about it... so I want to express my sincere and direct appreciation to NLnet alongside the European Commission / Next Generation Internet Initiative, along with Samsung Stack Zero, and all the folks donating on Patreon and Liberapay. With all the above, and especially the new grant from NLnet, I should have enough funding to continue working on Spritely through a large portion of 2021. I am determined to make good on the support I've received, and am looking forward to putting out more interesting demonstrations of this technology over the next few months.

13 May, 2020 06:54PM by Christopher Lemmer Webber

May 12, 2020

denemo @ Savannah

Release 2.4 imminent - please test!

New Features
        Omission Criteria
            A lightweight alternative to Score Layouts
            A single flag turns on/off features of the score
        Swing Playback
            Playback with altered note durations
            Use for Jazz swing and notes inégales
        Page Turner/Annotator
            Annotate while playing from digital score
            Page turn digital score from pedals
        New from Current
            Create a new score using the current one as a template
            Use for books of songs, sonatas, etc. to keep style uniform
Bug Fixes
        Easier object edit interface
        After Grace command now fully automatic
        Crash on Windows when deleting a measure in all staffs fixed
        Template save bugs fixed
        Assign Instrument command in score with voices fixed.

12 May, 2020 08:22AM by Richard Shann

May 09, 2020

bison @ Savannah

Bison 3.6 released

We are extremely happy to announce the release of Bison 3.6:

- the developer can forge syntax error messages the way she wants.

- token string aliases can be internationalized, and UTF-8 sequences
  are properly preserved.

- push parsers can ask at any moment for the list of "expected tokens",
  which can be used to provide syntax-driven autocompletion.

- yylex may now tell the parser to enter error-recovery without issuing an
  error message (when the error was already reported by the scanner).

- several new examples were added, in particular "bistromathic" demonstrates
  almost all the existing bells and whistles, including interactive
  autocompletion on top of GNU readline.

Please see the much more detailed release notes below.

Many thanks to testers, bug reporters, contributors and feature requesters:
Adrian Vogelsgesang, Ahcheong Lee, Alexandre Duret-Lutz, Andy Fiddaman,
Angelo Borsotti, Arthur Schwarz, Christian Schoenebeck, Dagobert Michelsen,
Denis Excoffier, Dennis Clarke, Don Macpherson, Evan Lavelle, Frank
Heckenbach, Horst von Brand, Jannick, Nikki Valen, Paolo Bonzini, Paul
Eggert, Pramod Kumbhar and Victor Morales Cayuela.  The author also thanks
an anonymous reviewer for his precious comments.

Special thanks to Bruno Haible for his investment into making Bison
portable.

Happy parsing!

       Akim

PS/ The experimental back-end for the D programming language is still
looking for active support from the D community.

==================================================================

GNU Bison is a general-purpose parser generator that converts an annotated
context-free grammar into a deterministic LR or generalized LR (GLR) parser
employing LALR(1) parser tables.  Bison can also generate IELR(1) or
canonical LR(1) parser tables.  Once you are proficient with Bison, you can
use it to develop a wide range of language parsers, from those used in
simple desk calculators to complex programming languages.

Bison is upward compatible with Yacc: all properly-written Yacc grammars
work with Bison with no change.  Anyone familiar with Yacc should be able to
use Bison with little trouble.  You need to be fluent in C, C++ or Java
programming in order to use Bison.

Bison and the parsers it generates are portable: they do not require any
specific compilers.

GNU Bison's home page is https://gnu.org/software/bison/.

==================================================================

Here are the compressed sources:
  https://ftp.gnu.org/gnu/bison/bison-3.6.tar.gz   (5.1MB)
  https://ftp.gnu.org/gnu/bison/bison-3.6.tar.xz   (3.1MB)

Here are the GPG detached signatures[*]:
  https://ftp.gnu.org/gnu/bison/bison-3.6.tar.gz.sig
  https://ftp.gnu.org/gnu/bison/bison-3.6.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify bison-3.6.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 0DDCAA3278D5264E

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.69
  Automake 1.16.2
  Flex 2.6.4
  Gettext 0.19.8.1
  Gnulib v0.1-3382-g2ac33b29f

==================================================================

* Noteworthy changes in release 3.6 (2020-05-08) [stable]

** Backward incompatible changes

  TL;DR: replace "#define YYERROR_VERBOSE 1" by "%define parse.error verbose".

  The YYERROR_VERBOSE macro is no longer supported; the parsers that still
  depend on it will now produce Yacc-like error messages (just "syntax
  error").  It was superseded by the "%error-verbose" directive in Bison
  1.875 (2003-01-01).  Bison 2.6 (2012-07-19) clearly announced that support
  for YYERROR_VERBOSE would be removed.  Note that since Bison 3.0
  (2013-07-25), "%error-verbose" is deprecated in favor of "%define
  parse.error verbose".

** Deprecated features

  The YYPRINT macro, which works only with yacc.c and only for tokens, was
  obsoleted long ago by %printer, introduced in Bison 1.50 (November 2002).
  It is deprecated and its support will be removed eventually.

** New features

*** Improved syntax error messages

  Two new values for the %define parse.error variable offer more control to
  the user.  Available in all the skeletons (C, C++, Java).

**** %define parse.error detailed

  The behavior of "%define parse.error detailed" closely resembles that
  of "%define parse.error verbose" with a few exceptions.  First, it is safe
  to use non-ASCII characters in token aliases (with 'verbose', the result
  depends on the locale with which bison was run).  Second, a yysymbol_name
  function is exposed to the user, instead of the yytnamerr function and the
  yytname table.  Third, token internationalization is supported (see
  below).

**** %define parse.error custom

  With this directive, the user forges and emits the syntax error message
  herself by defining the yyreport_syntax_error function.  A new type,
  yypcontext_t, captures the circumstances of the error, and provides the
  user with functions to get details, such as yypcontext_expected_tokens to
  get the list of expected token kinds.

  A possible implementation of yyreport_syntax_error is:

    int
    yyreport_syntax_error (const yypcontext_t *ctx)
    {
      int res = 0;
      YY_LOCATION_PRINT (stderr, *yypcontext_location (ctx));
      fprintf (stderr, ": syntax error");
      // Report the tokens expected at this point.
      {
        enum { TOKENMAX = 10 };
        yysymbol_kind_t expected[TOKENMAX];
        int n = yypcontext_expected_tokens (ctx, expected, TOKENMAX);
        if (n < 0)
          // Forward errors to yyparse.
          res = n;
        else
          for (int i = 0; i < n; ++i)
            fprintf (stderr, "%s %s",
                     i == 0 ? ": expected" : " or", yysymbol_name (expected[i]));
      }
      // Report the unexpected token.
      {
        yysymbol_kind_t lookahead = yypcontext_token (ctx);
        if (lookahead != YYSYMBOL_YYEMPTY)
          fprintf (stderr, " before %s", yysymbol_name (lookahead));
      }
      fprintf (stderr, "\n");
      return res;
    }

**** Token aliases internationalization

  When the %define variable parse.error is set to `custom` or `detailed`,
  one may specify which token aliases are to be translated using _().  For
  instance

    %token
        PLUS   "+"
        MINUS  "-"
      <double>
        NUM _("number")
      <symrec*>
        FUN _("function")
        VAR _("variable")

  In that case the user must define _() and N_(), and yysymbol_name returns
  the translated symbol (i.e., it returns '_("variable")' rather than
  '"variable"').  In Java, the user must provide an i18n() function.

*** List of expected tokens (yacc.c)

  Push parsers may invoke yypstate_expected_tokens at any point during
  parsing (including even before submitting the first token) to get the list
  of possible tokens.  This feature can be used to propose autocompletion
  (see below the "bistromathic" example).

  It makes little sense to use this feature without enabling LAC (lookahead
  correction).
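
  As a rough, non-authoritative sketch, a completion helper for a push
  parser could look like the following.  It assumes <stdio.h> and the
  generated parser header are included, that "%define parse.error detailed"
  (or "custom") is in effect so that yysymbol_name is available, and that
  yypstate_expected_tokens takes the parser state, an array and its
  capacity, and returns the number of entries filled in:

    // Print the names of the tokens the push parser 'ps' expects now.
    static void
    print_expected_tokens (yypstate *ps)
    {
      enum { TOKENMAX = 10 };
      yysymbol_kind_t expected[TOKENMAX];
      int n = yypstate_expected_tokens (ps, expected, TOKENMAX);
      for (int i = 0; i < n; ++i)
        fprintf (stderr, "%s%s",
                 i == 0 ? "expected " : " or ", yysymbol_name (expected[i]));
      fputc ('\n', stderr);
    }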

*** Returning the error token

  When the scanner returns an invalid token or the undefined token
  (YYUNDEF), the parser generates an error message and enters error
  recovery.  Because of that error message, most scanners that find lexical
  errors generate an error message, and then ignore the invalid input
  without entering error recovery.

  The scanners may now return YYerror, the error token, to enter
  error-recovery mode without triggering an additional error message.  See
  the bistromathic example.
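
  For instance, a scanner action for an unrecognized character could report
  the lexical error itself and then hand the error token to the parser (a
  sketch only; 'c' stands for the offending character in whatever scanner
  you use):

    /* Report the lexical error here, then return the error token so the
       parser enters error recovery without printing a second "syntax
       error" message.  */
    fprintf (stderr, "invalid character: '%c'\n", c);
    return YYerror;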

*** Deep overhaul of the symbol and token kinds

  To avoid the confusion with types in programming languages, we now refer
  to token and symbol "kinds" instead of token and symbol "types".  The
  documentation and error messages have been revised.

  All the skeletons have been updated to use dedicated enum types rather
  than integral types.  Special symbols are now regular citizens, instead of
  being declared in ad hoc ways.

**** Token kinds

  The "token kind" is what is returned by the scanner, e.g., PLUS, NUMBER,
  LPAREN, etc.  While backward compatibility is of course ensured, users are
  nonetheless invited to replace their uses of "enum yytokentype" by
  "yytoken_kind_t".

  This type now also includes tokens that were previously hidden: YYEOF (end
  of input), YYUNDEF (undefined token), and YYerror (error token).  They
  now have string aliases, internationalized when internationalization is
  enabled.  Therefore, by default, error messages now refer to "end of file"
  (internationalized) rather than the cryptic "$end", or to "invalid token"
  rather than "$undefined".

  Therefore, in most cases, it is now useless to define the end-of-file token
  as follows:

    %token T_EOF 0 "end of file"

  Rather, simply use "YYEOF" in your scanner.
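
  In a hand-written scanner this is simply (sketch):

    if (c == EOF)
      return YYEOF;   /* predefined token kind, no %token declaration needed */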

**** Symbol kinds

  The "symbol kinds" is what the parser actually uses.  (Unless the
  api.token.raw %define variable is used, the symbol kind of a terminal
  differs from the corresponding token kind.)

  They are now exposed as an enum, "yysymbol_kind_t".

  This allows users to tailor the error messages the way they want, or to
  process some symbols in a specific way in autocompletion (see the
  bistromathic example below).
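
  For instance, a completion routine could special-case one symbol kind (a
  sketch: YYSYMBOL_VARIABLE is assumed to follow the YYSYMBOL_ naming seen
  above for a grammar with a VARIABLE token, and both helpers are
  hypothetical):

    if (expected[i] == YYSYMBOL_VARIABLE)
      complete_from_variable_table ();              /* hypothetical */
    else
      add_candidate (yysymbol_name (expected[i]));  /* hypothetical */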

*** Modernize display of explanatory statements in diagnostics

  Since Bison 2.7, output was indented four spaces for explanatory
  statements.  For example:

    input.y:2.7-13: error: %type redeclaration for exp
    input.y:1.7-11:     previous declaration

  Since the introduction of caret-diagnostics, it became less clear.  This
  indentation has been removed and submessages are displayed similarly as in
  GCC:

    input.y:2.7-13: error: %type redeclaration for exp
        2 | %type <float> exp
          |       ^~~~~~~
    input.y:1.7-11: note: previous declaration
        1 | %type <int> exp
          |       ^~~~~

  Contributed by Victor Morales Cayuela.

*** C++

  The token and symbol kinds are yy::parser::token_kind_type and
  yy::parser::symbol_kind_type.

  The symbol_type::kind() member function returns the kind of a
  symbol.  This can be used to write unit tests for scanners, e.g.,

    yy::parser::symbol_type t = make_NUMBER ("123");
    assert (t.kind () == yy::parser::symbol_kind::S_NUMBER);
    assert (t.value.as<int> () == 123);

** Documentation

*** User Manual

  In order to avoid ambiguities with "type" as in "typing", we now refer to
  the "token kind" (e.g., `PLUS`, `NUMBER`, etc.) rather than the "token
  type".  We now also refer to the "symbol type" (e.g., `PLUS`, `expr`,
  etc.).

*** Examples

  There are now Java examples under examples/java: a very simple calculator,
  and a more complete one (push-parser, location tracking, and debug traces).

  The lexcalc example (a simple example in C based on Flex and Bison) now
  also demonstrates location tracking.


  A new C example, bistromathic, is a fully featured interactive calculator
  using many Bison features: pure interface, push parser, autocompletion
  based on the current parser state (using yypstate_expected_tokens),
  location tracking, internationalized custom error messages, lookahead
  correction, rich debug traces, etc.

  It shows how to depend on the symbol kinds to tailor autocompletion.  For
  instance it recognizes the symbol kind "VARIABLE" to propose
  autocompletion on the existing variables, rather than on the word
  "variable".

09 May, 2020 08:48AM by Akim Demaille

May 05, 2020

remotecontrol @ Savannah

Two-Factor Authentication Update - Google Nest Community

https://support.google.com/googlenest/thread/44445328?hl=en

"...all Nest account users who have not enrolled in two-factor authentication or migrated to a Google account to take an extra step by verifying their identity via email when logging in to their Nest account."

05 May, 2020 12:28PM by Stephen H. Dawson DSL

May 04, 2020

Applied Pokology

Understanding Poke methods

Poke struct types can be a bit daunting at first sight. You can find all sorts of things inside them: from fields, variables and functions to constraint expressions, initialization expressions, labels, other type definitions, and methods.

Struct methods can be particularly confusing for the novice poker. In particular, it is important to understand the difference between methods and regular functions defined inside struct types. This article will hopefully clear up the confusion, and will also provide the reader with a better understanding of how poke works internally.

04 May, 2020 12:00AM

May 03, 2020

Multi-line output in poke pretty-printers

The ID3V1 tag format describes the format for the tags that are embedded in MP3 files, giving information about the song stored in the file, such as genre, the name of the artist, and so on. While hacking the id3v1 pickle today, I ran into a little dilemma about how best to present a pretty-printed version of a tag to the user.

03 May, 2020 12:00AM

May 01, 2020

remotecontrol @ Savannah

U.S. Moves to Ban Use of Some Foreign Power Gear

https://www.wsj.com/articles/u-s-moves-to-block-imports-of-some-power-equipment-11588346518

U.S. Moves to Ban Use of Some Foreign Power Gear
Timothy Puko

WASHINGTON—President Trump declared a national emergency for the nation’s power grid Friday, and signed an order to ban the import and use of equipment that poses a threat to national security if installed in U.S. power plants and transmission systems.

The move boosts U.S. efforts to protect the grid from being used as a weapon against American citizens and businesses, attacks that could have “potentially catastrophic effects,” Mr. Trump said in the order. While the order doesn’t name any country, national-security officials have said that Russia and China have the ability to temporarily disrupt the operations of electric utilities and gas pipelines.

The executive order gives the Energy Secretary more power to prevent the use of such equipment that is influenced by foreign adversaries or creates an “unacceptable risk to the national security.” It also gives the secretary responsibility over determining what parts of the system are already at risk and possibly need to be replaced.

U.S. officials will later determine what equipment is most at risk. But they will examine anything used at power plants and the nation’s transmission system, potentially including what goes into the grid’s transformers and substations, said a senior Energy Department official.

The move aims to shore up a potential vulnerability in a power supply that depends extensively on foreign-made parts. Officials are expected to use U.S. intelligence agencies’ threat assessments to help determine what equipment is most likely a risk and what may need to be banned, the official said.

Government agencies have warned repeatedly that the nation’s electricity grid is an attractive target for overseas hackers. The U.S. blamed the Russian government for a hacking campaign in 2017.

While some of these threats date back more than a decade, they have intensified in recent years. The fear is that U.S. adversaries could cut power and heat to U.S. consumers and businesses as an unconventional weapon, federal officials have said.

“It is imperative the bulk-power system be secured against exploitation and attacks by foreign threats,” Energy Secretary Dan Brouillette said in a statement. “This Executive Order will greatly diminish the ability of foreign adversaries to target our critical electric infrastructure.”

The administration is taking action specifically because of those prior efforts to infiltrate U.S. electric and natural-gas systems, which intelligence agencies say they have linked directly to Russia and China, the official said. The process will help determine which countries pose the highest risk.

The administration’s risk assessments from the past two years have pointed to power plants and the transmission grid as the most vulnerable parts of the electricity system, leading the administration to focus action there, the official said.

Under the president’s order, the Energy Secretary will work within the administration to set criteria for what power companies can safely purchase from international vendors. The secretary will create a task force to establish procurement policies and possibly a process for prequalifying international vendors to sell products for U.S. systems.

The power industry’s supply chain has been a growing problem for about 15 years because of increased outsourcing, an issue industry officials widely recognize, the administration official said. For example, though power transformers are the backbone of the U.S. system, most aren’t made in the U.S. nor is there any capability to make certain types of them, the official added.

“We need to be thoughtful and rigorous in our analysis to mitigate the risk associated with supply chains that we don’t control,” the official said.

The Trump administration has made addressing those types of risks a priority across several industries. Officials have frequently cited threats from countries, especially China and Russia, that give financial support to suppliers in telecommunications, pharmaceuticals, nuclear power, and rare-earths mining and processing, and may have influence over them.

A Wall Street Journal investigation published last year revealed that Russian hackers looking to gain access to critical American power infrastructure were able to penetrate the electrical grid by targeting subcontractors to the system.

Methods included planting malware on sites of online publications frequently read by utility engineers, helping Russian operatives slip through hidden portals used by utility technicians, in some cases getting into computer systems that monitor and control electricity flows.

Write to Timothy Puko at tim.puko@wsj.com

01 May, 2020 07:30PM by Stephen H. Dawson DSL

GNU MediaGoblin

MediaGoblin 0.10.0 released

We’re pleased to announce the release of MediaGoblin 0.10.0!

It’s been a while between releases for MediaGoblin, but work has continued steadily. Highlights of this release include a new plugin for displaying video subtitles and support for transcoding and displaying video in multiple resolutions. There have also been a large number of smaller improvements and bug fixes which are listed in the release notes.

After enabling the new subtitles plugin, you can upload and edit captions for your videos. Multiple subtitle tracks are supported, such as for different languages. This feature was added by Saksham Agrawal during Google Summer of Code 2016 and mentored by Boris Bobrov. The feature has been available for some time on the master branch, but it definitely deserves a mention for this release.

[Screenshot: a video with subtitles added]

Videos are now automatically transcoded at various video qualities such as 360p, 480p and 720p. You can choose your preferred quality while watching the video. This feature was added by Vijeth Aradhya during Google Summer of Code 2017 and mentored by Boris Bobrov. Again this feature has been available for some time on master, but is also worthy of a mention.

[Screenshot: selecting a video quality]

For details on installing MediaGoblin, see Deploying MediaGoblin and for tips on upgrading, see the release notes. To join us and help improve MediaGoblin, please visit our getting involved page!

01 May, 2020 05:00AM by Ben Sturmfels