Planet GNU

Aggregation of development blogs from the GNU Project

November 28, 2014

FSF Blogs

The 2014 Giving Guide is here!

Electronics are popular gifts for the holidays, but people often overlook the restrictions that manufacturers slip under the wrapping paper. From remote deletion of files to harsh rules about copying and sharing, some gifts take more than they give. The good news is that there are ethical companies making better devices that your loved ones can enjoy with freedom and privacy.

Today, we're launching the 2014 Giving Guide, the Free Software Foundation (FSF) guide to smarter gifts, compared with their restrictive counterparts.

The 2014 Giving Guide has recommendations for laptops, 3D printers, and software that everyone on your list will love, and that they can use with complete freedom. We also include gifts to avoid--hardware and software that is not an ethical gift choice. Oh, and did we mention the savings? The Giving Guide contains special discounts on purchases from ThinkPenguin and Aleph Objects.

This holiday season, you can give with confidence. Make sure your friends and family can too--share this Giving Guide with them with the hashtag #givefreely. Then, help spread the word even further--organize a Giving Guide Giveaway to raise awareness that buying conscientious gifts applies to computers and software too. We will be releasing translations of the Giving Guide soon, so no matter where you are in the world, you'll be able to share it.

What are you waiting for? Unwrap our 2014 Giving Guide to see our picks for gifts this year!

November 28, 2014 03:11 PM

November 27, 2014

Andy Wingo

scheme workshop 2014

I just got back from the US, and after sleeping for 14 hours straight I'm in a position to type about stuff again. So welcome back to the solipsism, France and internet! It is good to see you on a properly-sized monitor again.

I had the enormously pleasurable and flattering experience of being invited to keynote this year's Scheme Workshop last week in DC. Thanks to John Clements, Jason Hemann, and the rest of the committee for making it a lovely experience.

My talk was on what Scheme can learn from JavaScript, informed by my work in JS implementations over the past few years; you can download the slides as a PDF. I managed to record audio, so here goes nothing:

55 minutes, vorbis or mp3

It helps to follow along with the slides. Some day I'll augment my slide-rendering stuff to synchronize a sequence of SVGs with audio, but not today :)

The invitation to speak meant a lot to me, for complicated reasons. See, Scheme was born out of academic research labs, and to a large extent that's been its spiritual core for the last 40 years. My way to the temple was as a money-changer, though. While working as a teacher in northern Namibia in the early 2000s, fleeing my degree in nuclear engineering, trying to figure out some future life for myself, for some reason I was recording all of my expenses in Gnucash. Like, all of them, petty cash and all. 50 cents for a fat-cake, that kind of thing.

I got to thinking "you know, I bet I don't spend any money on Tuesdays." See, there was nothing really to spend money on in the village besides fat cakes and boiled eggs, and I didn't go into town to buy things except on weekends or later in the week. So I thought that it would be neat to represent that as a chart. Gnucash didn't have such a chart but I knew that they were implemented in Guile, as part of this wave of Scheme consciousness that swept the GNU project in the nineties, and that I should in theory be able to write it myself.

Problem was, I also didn't have internet in the village, at least then, and I didn't know Scheme and I didn't really know Gnucash. I think what I ended up doing was just monkey-typing out something that looked like the rest of the code, getting terrible errors but hey, it eventually worked. I submitted the code, many years ago now, some of the worst code you'll read today, but they did end up incorporating it into Gnucash and to my knowledge that report is still there.

I got more into programming, but still through the back door, so to speak. I had done some free software work before going to Namibia, on GStreamer, and wanted to build a programmable modular synthesizer with it. I read about Supercollider, and decided I wanted to do something like that but with the "unit generators" defined in GStreamer and orchestrated with Scheme. If I knew then that Scheme could be fast, I probably would have started on an entirely different course of things, but that did at least result in gainful employment doing unrelated GStreamer things, if not a synthesizer.

Scheme became my dominant language for writing programs. It was fun, and the need to re-implement a bunch of things wasn't a barrier at all -- rather a fun challenge. After a while, though, speed was becoming a problem. It became apparent that the only way to speed up Guile would be to replace its AST interpreter with a compiler. Thing is, I didn't know how to write one! Fortunately there was previous work by Keisuke Nishida, jetsam from the nineties wave of Scheme consciousness. I read and read that code, mechanically monkey-typed it into compilation, and slowly reworked it into Guile itself. In the end, around 2009, Guile was faster and I found myself its co-maintainer to boot.

Scheme has been a back door for me for work, too. I randomly met Kwindla Hultman-Kramer in Namibia, and we found Scheme to be a common interest. Some four or five years later I ended up working for him with the great folks at Oblong. As my interest in compilers grew, and it grew as I learned more about Scheme, I wanted something closer there, and that's what I've been doing in Igalia for the last few years. My first contact there was a former Common Lisp person, and since then many contacts I've had in the JS implementation world have been former Schemers.

So it was a delight when the invitation came to speak (keynote, no less!) at the Scheme Workshop, behind the altar instead of in the foyer.

I think it's clear by now that Scheme as a language and a community isn't moving as fast now as it was in 2000 or even 2005. That's good because it reflects a certain maturity, and makes the lore of the tribe easier to digest, but bad in that people tend to ossify and focus on past achievements rather than future possibility. Ehud Lamm quoted Nietzsche earlier today on Twitter:

By searching out origins, one becomes a crab. The historian looks backward; eventually he also believes backward.

So it is with Scheme and Schemers, to an extent. I hope my talk at the conference inspires some young Schemer to make an adaptively optimized Scheme, or to solve the self-hosted adaptive optimization problem. Anyway, as users I think we should end the era of contorting our code to please compilers. Of course some discretion in this area is always necessary but there's little excuse for actively bad code.

Happy hacking with Scheme, and au revoir!

by Andy Wingo at November 27, 2014 05:48 PM

November 26, 2014

FSF Events

Richard Stallman - "Free Software and Your Freedom" (Corning, NY)

Richard Stallman will speak about the goals and philosophy of the Free Software Movement, and the status and history of the GNU operating system, which in combination with the kernel Linux is now used by tens of millions of users world-wide.

Richard Stallman's speech will be nontechnical, admission is gratis, and the public is encouraged to attend.

Please fill out our contact form, so that we can contact you about future events in and around Corning.

If you are part of a large group that's planning to attend, we'd appreciate it if you could inform us, at the e-mail address or phone number provided here; this will help us ensure we can accommodate everyone who'll be there.

Campus map

November 26, 2014 01:45 PM

November 24, 2014

FSF Blogs

The sharks move in; lobbyists pushing forward on TPP agreements

On October 16th, WikiLeaks released an updated draft of the Trans-Pacific Partnership (TPP) Strategic Partnership Agreement chapter on copyright, patent, and other proprietary interests. A previous draft had been released last year. If you aren't familiar with TPP, it is a multinational trade agreement being developed through a series of secret negotiations; when enacted, it will have a vast effect on civil liberties, including the ability of users all around the world to enjoy software freedom.

We have been following and opposing these negotiations both in person and online for many years now. We wrote earlier this year about the dangers posed by these secret negotiations. This latest leak reveals that the countries involved in the TPP negotiations are coming closer to acceptance of a whole host of problematic agreements:

  • Penalizing the circumvention of digital restrictions management, even for non-infringing uses. Previous leaked versions of the negotiations were poised to repeat the miserable failure of the United States' DMCA anti-circumvention exemption regime. While the current draft has moved away from implementing this broken system elsewhere, it still leaves in place a system where users can face penalties for circumventing digital restrictions management. Worse still, these penalties can apply even when the circumvention is done for non-infringing uses.
  • Perpetuating indefinite terms of copyright. The current draft shows the parties solidifying agreement around extending the term of copyright restriction, potentially up to life of the author plus 100 years. The U.S. has extended the term of copyright several times over the past decades, and accepting the maximum proposed change would once again break the basic bargain of the copyright system. This term would push perpetual copyright into other countries as well.
  • Failing to block the patenting of software. The current draft once again leaves open the possibility of software coming under the heading of patentable subject matter. The disaster of software patents in countries where they already exist is one of the greatest threats to user freedom, and a regime that does not block the patenting of software will only expand this problem elsewhere. Mexico has proposed language specifically stating that signing parties should be able to exclude software from patentable subject matter. This would be great news, were Mexico not the only country signed onto the proposal.

These are just a few of the problems with the subject matter of the negotiations, not to mention the over-arching problem of the process being hidden from the public. We have asked for your help in the past in opposing TPP, but the fight is still not over. Here's what you can do to help:

November 24, 2014 10:30 PM

The FSF is hiring: Seeking a full-time outreach and communication coordinator

This position, reporting to the executive director, works closely with our campaigns, licensing, and technical staff, as well as our board of directors, to edit, publish, and promote high-quality, effective materials both digital and printed.

These materials are a critical part of advancing the FSF's work to support the GNU Project, free software adoption, free media formats, and freedom on the Internet; and to oppose DRM, software patents, and proprietary software.

Some of the position's more important responsibilities include:

  • stewarding the online publication and editing process for all outreach staff, including copyediting, formatting, posting, and maintaining material on our Web sites, and sending out e-mail messages to our lists;

  • producing and improving our monthly e-mail newsletter, the Free Software Supporter;

  • improving the effectiveness of our use of audio and video materials;

  • editing and building our biannual printed Bulletin;

  • promoting our work and the work of others in the area of computing freedom on social networking sites;

  • helping to produce fundraising materials and assisting with our fundraising drives;

  • cultivating the community around the LibrePlanet wiki and network, including the annual conference;

  • working with and encouraging volunteers; and

  • being an approachable, humble, and friendly representative of the FSF to our worldwide community of existing supporters and the broader public, both in person and online.

A successful candidate will have strong editing skills, especially in the area of copyediting, and will take pride in working with a team to create consistently polished and effective materials.

While this is a job for a person who is passionate about technology and its social impact, it is not primarily a technical position. The main technical requirement is a willingness to learn to use many new and possibly unfamiliar pieces of software, with a positive attitude. That being said, experience with CiviCRM and GNU/Linux will be considered a big plus, and experience with any of the following technologies should be mentioned: Plone, Drupal, Ikiwiki, Subversion, Git, CVS, SSH, JavaScript, CSS, HTML, Emacs, LaTeX, Inkscape, GIMP, Markdown, or MediaWiki.

Because the FSF works globally and seeks to have our materials distributed in as many languages as possible, multilingual candidates will be noticed. English, German, French, Spanish, Mandarin, Malagasy, and a smattering of Japanese are represented among current FSF staff.

With our small staff of thirteen, each person makes a clear contribution. We work hard, but offer a humane and fun work environment.

Benefits and salary

The job must be worked on-site at the FSF's office in downtown Boston.

This is a union position. The salary is fixed at $49k and is non-negotiable. Other benefits include:

  • full family health coverage through Blue Cross/Blue Shield's HMO Blue program,
  • subsidized dental plan,
  • four weeks of paid vacation annually,
  • seventeen paid holidays annually,
  • public transit commuting cost reimbursement,
  • 403(b) program through TIAA-CREF,
  • yearly cost-of-living pay increases, and
  • potential for an annual performance bonus.

Application instructions

Applications must be submitted via email to The email must contain the subject line, "Outreach and Communications Coordinator". A complete application should include:

  • resume,
  • cover letter,
  • writing sample (1000 words or less),
  • links to published work online, and
  • three or more edits you would suggest to this job posting.

All materials must be in a free format (such as plain text, PDF, or OpenDocument, and not Microsoft Word). Email submissions that do not follow these instructions will probably be overlooked. No phone calls, please.

Applications will be considered on a rolling basis until the position is filled. To ensure consideration, apply before 10:00am EST on Monday, December 8th.

The FSF is an equal opportunity employer and will not discriminate against any employee or applicant for employment on the basis of race, color, marital status, religion, age, sex, sexual orientation, national origin, handicap, or any other legally protected status recognized by federal, state or local law. We value diversity in our workplace.

November 24, 2014 05:25 PM

November 23, 2014

grep @ Savannah

grep-2.21 released [stable]

by Jim Meyering at November 23, 2014 09:22 PM

November 22, 2014

parallel @ Savannah

GNU Parallel 20141122 ('Rosetta') released

GNU Parallel 20141122 ('Rosetta') has been released. It is available for download at:

Haiku of the month:

Hadoop bit too much?
Want a simpler syntax now?
Use GNU Parallel.
-- Ole Tange

A central piece of command generation was rewritten, making this release beta quality. As always it passes the testsuite, so most functionality clearly works.

New in this release:

  • Remote systems can be divided into hostgroups (e.g. web and db) by prepending '@groupname/' to the sshlogin. Multiple groups can be given by separating groups with '+'. E.g. @web/www1 @web+db/www2 @db/mariadb
  • Remote execution can be restricted to servers that are part of one or more groups by '@groupname' as an sshlogin. Multiple groups can be given by separating groups with '+'. E.g. -S @web or -S @db+web
  • With --hostgroup you can restrict arguments to certain hostgroups by appending '@groupname' to the argument. Multiple groups can be given by separating groups with '+'. E.g. my_web_arg@web db-or-web-arg@db+web db-only-arg@db. Thanks to Michel Courtine for developing a prototype for this.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at:

You can install GNU Parallel in just 10 seconds with: (wget -O - || curl) | bash

Watch the intro video on

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/lists
  • Get the merchandise
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --bibtex)

If GNU Parallel saves you money:


About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

by Ole Tange at November 22, 2014 08:35 PM

November 21, 2014

FSF Blogs

Back in stock: a ThinkPenguin router that respects your freedom


In September, we awarded use of the Respects Your Freedom (RYF) certification mark to the ThinkPenguin Wireless N-Broadband Router (TPE-NWIFIROUTER). But, within days of our press release, ThinkPenguin sold out of their inventory! However, today we are happy to announce that they have replenished their stocks (this time with a new case, but the same chipset and software).

This is the first home wifi router on the planet that you can go out and purchase that ships only with software that respects your freedom: libreCMC, a distribution of GNU/Linux recently endorsed by the FSF. This is awesome and you should replace your proprietary software-based wireless router at home with one of these! I've personally been using one at home for a few weeks now and I love it. I even made an unboxing video for you so you can see how simple it is to set up:

TPE-NWIFIROUTER unboxing gif

(GIF not working? See my unboxing video.)

Or to quote artist, hacker, and GNU MediaGoblin maintainer, Chris Webber:

Prior to the ThinkPenguin router, I had no idea about any options for getting a 100% free software router. Seems exciting that someone has done all that work for me. Extra cost on top of that hardware... looks pretty cheap! Considering all the headaches I've gone through to find a phone that reasonably even comes close to respecting my freedom and then doesn't even really do so, hell, for someone doing the same for a router, thank goodness! I will be buying this as soon as we replace our router (probably soon!)

Excited that ThinkPenguin is doing this work! I hope more companies follow in said footsteps.

However, this isn't just an opportunity for individuals to be able to easily buy a 100% free software based router. It is also a big step forward for the free software community, because it gives us a platform that we can use to begin building a free software based network for communication, file sharing, social networking, and more.

This is the third product by ThinkPenguin to be awarded the use of the RYF certification mark. The first two were the TPE-N150USB Wireless N USB Adapter and the long-range TPE-N150USBL model. This, combined with other products such as the LibreBoot X60 and the LulzBot 3D printer, brings us one step closer to users having control over all of the devices they rely upon in their day-to-day computing.

Learn more about the Respects Your Freedom hardware certification, including details on the certification of the TPE-NWIFIROUTER router as well as other RYF certified products at Hardware sellers interested in applying for certification can consult our certification criteria and should contact with any further questions.

Subscribe to the Free Software Supporter newsletter to receive announcements about future RYF products.

November 21, 2014 09:45 PM

librejs @ Savannah

GNU LibreJS 6.0.6 released

There's a new version of LibreJS - version 6.0.6.

Here are the changes since 6.0.5:
* When there is a contact email found on a site that contains
nonfree JavaScript, the email link now includes a default subject
and body when you click on it. These defaults are configurable in
the LibreJS add-on preferences.

* LibreJS now works in private browsing mode, and with Tor. When
using LibreJS and Tor at the same time, it's possible for the
website you're visiting to see that you're not running any nonfree
JavaScript it may contain. That would enable it to distinguish
you from many other Tor users, but it still won't know who you are.

* JS Web Labels links are now recognized with data-jslicense="1"
as well as rel="jslicense", in case you want the page to be
HTML5-valid. Savannah ticket #13366. Thanks to Marco Bresciani
for bringing this up.

* Fixed a bug on the whitelist page (Tools -> LibreJS) where
the "Reset whitelist to default" button wasn't working.

This project's website is here:

The source files are here:

And here's the executable you can install in your browser:

by Nik Nyby at November 21, 2014 04:06 AM

November 20, 2014

FSF Blogs

Friday Free Software Directory IRC meetup: November 21

Join the FSF and friends on Friday, November 21, from 2pm to 5pm EST (19:00 to 22:00 UTC) to help improve the Free Software Directory by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on freenode.

Tens of thousands of people visit each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed information about version control, IRC channels, documentation, and licensing, all of which has been carefully checked by FSF staff and trained volunteers.

While the Free Software Directory has been a great resource to the world over the past decade, and continues to be, it has the potential to be a resource of even greater value. But it needs your help!

If you are eager to help and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today!

November 20, 2014 09:09 PM

November 19, 2014

FSF Events

Richard Stallman - "A Free Digital Society" (Dhaka, Bangladesh)

There are many threats to freedom in the digital society. They include massive surveillance, censorship, digital handcuffs, nonfree software that controls users, and the War on Sharing. Other threats come from use of web services. Finally, we have no positive right to do anything on the Internet; every activity is precarious, and can continue only as long as companies are willing to cooperate with it.

This speech by Richard Stallman will be nontechnical, admission is gratis, and the public is encouraged to attend.

Registration, which can be done anonymously, while not required, is appreciated; it will help us ensure we can accommodate all the people who wish to attend.

Please fill out our contact form, so that we can contact you about future events in and around Dhaka.

November 19, 2014 02:20 PM

Richard Stallman - "Copyright vs. Community" (Madrid, Spain)

Copyright developed in the age of the printing press, and was designed to fit the system of centralized copying that the printing press imposed in that era. But today, the copyright system fits computer networks poorly, and can only be enforced through harsh measures of force.
The global corporations that profit from copyright are lobbying to impose ever more unjust penalties and to increase their copyright powers, while restricting the public's access to technology. But if we really want to honor the only legitimate purpose of copyright -- to promote progress for the benefit of the public -- then we will have to make changes in the other direction.

This speech by Richard Stallman will be nontechnical and open to the public; everyone is invited to attend.

Please fill out this form, so that we can contact you about future events in and around Madrid.

November 19, 2014 01:40 AM

November 18, 2014

guix @ Savannah

GNU Guix 0.8 released

We are pleased to announce the next alpha release of GNU Guix, version 0.8.

The release comes both with a source tarball, which allows you to install it on top of a running GNU/Linux system, and a USB installation image to install the standalone operating system.

The highlights for this release include:

See the original announcement for details.

About GNU Guix

GNU Guix is the functional package manager for the GNU system, and a distribution thereof.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. It also offers a declarative approach to operating system configuration management. Guix uses low-level mechanisms from the Nix package manager, with Guile Scheme programming interfaces.

At this stage the distribution can be used on an i686 or x86_64 machine. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el.

by Ludovic Courtès at November 18, 2014 08:14 AM

November 17, 2014

Nick Clifton

November 2014 GNU Toolchain Update

Hi Guys,

  There is lots to report this month...

  * GCC now has experimental support for offloading.
    Offloading is the ability for the compiler to separate out portions of the program to be compiled by a second, different compiler.  Normally this second compiler would target a different architecture which can be accessed from the primary architecture, such as a CPU offloading work onto the GPU in a graphics card.

    Currently only the Intel MIC architecture is supported.  See here for more information:

  * The strings program from the binutils package now defaults to using the --all option to scan the entire file.  Previously the default was --data, which would only scan data sections in the file.

    The reason for the change is that the --data option uses the BFD library to locate data sections within the binary, which exposes the strings program to any flaws in that library.  Since security researchers often use strings to examine potential viruses, this meant that these flaws could affect them.

  * GCC now has built-in pointer boundary checking: -fcheck-pointer-bounds
    This adds pointer bounds checking instrumentation to the generated code.  Warning messages about memory access errors may also be produced at compile time unless disabled by -Wno-chkp.  Additional options can be used to disable bounds checking in certain situations, e.g. on reads or writes.  It is also possible to use attributes to disable bounds checking on specific functions and structures.

  * GCC now has some built-in functions to perform integer arithmetic with overflow checking.  For example:

       bool __builtin_sadd_overflow (int a, int b, int *res)
       bool __builtin_ssubl_overflow (long int a, long int b, long int *res)
       bool __builtin_umul_overflow (unsigned int a, unsigned int b, unsigned int *res)

    These built-in functions promote the first two operands into infinite precision signed type and perform addition (or subtraction or multiplication) on those promoted operands.  The result is then cast to the type the third pointer argument points to and stored there.  If the stored result is equal to the infinite precision result, the built-in functions return false, otherwise they return true.

  * GCC now has experimental support for NVIDIA's NVPTX architecture.  Currently only compilation is supported.  Assembly and linking are not yet available.

  * Two new options to disable warnings have been introduced to GCC:

    Stops warnings about shifts by a negative amount.
    Stops warnings when the shift amount is more than the width of the type being shifted.

  * A new optimization has been added to GCC: -flra-remat
    This enables "rematerialization" during the register assignment pass (LRA).  Instead of storing a value in a register, the optimization chooses to recalculate the value when needed, thus freeing up the register for other purposes.  Obviously this is only done when the optimization calculates that it will be worth it.  This new optimization is enabled automatically at -O2, -O3 and -Os.

  * A new profiling option has been added to GCC: -fauto-profile[=<file>]
    This enables sampling based feedback directed optimizations, and optimizations generally profitable only with profile feedback available.  If <file> is specified, GCC looks in <file> to find the profile feedback data files.

    In order to collect the profile data you need to have:

    1. A Linux system with Linux perf support.

    2. (optional) An Intel processor with last branch record (LBR) support. This is to guarantee accurate instruction level profile, which is important for AutoFDO performance.

    To collect the profile, first use Linux perf to collect a raw profile.  For example:
     perf record -e br_inst_retired:near_taken -b -o -- <your_program>

    Then use the create_gcov tool, which takes the raw profile and the unstripped binary to generate an AutoFDO profile that can be used by GCC.

      create_gcov --binary=your_program.unstripped --gcov=profile.afdo

  * A new optimization has been added to GCC: -fschedule-fusion
    This performs a target-dependent pass over the instruction stream to schedule instructions of the same type together, because the target machine can execute them more efficiently if they are adjacent to each other in the instruction flow.

    Enabled by default at levels -O2, -O3, -Os.

  * The ARM backend to GCC now supports a new option: -masm-syntax-unified
    This tells the backend that it should assume that any inline assembler is using unified asm syntax.  This matters for targets which only support Thumb1, as by default they assume that divided syntax is being used.

  * The MIPS backend to GCC now supports two additional variants of the o32 ABI.  These are intended to enable a transition from 32-bit to 64-bit registers.  These are FPXX (-mfpxx) and FP64A  (-mfp64 -mno-odd-spreg).
    The FPXX extension mandates that all code must execute correctly when run using 32-bit or 64-bit registers.  The code can be interlinked with either FP32 or FP64, but not both.

    The FP64A extension is similar to the FP64 extension but forbids the use of odd-numbered single-precision registers.  This can be used in conjunction with the FRE mode of FPUs in MIPS32R5 processors and allows both FP32 and FP64A code to interlink and run in the same process without changing FPU modes.

  * The linker supports a new command line option: --fix-cortex-a53-835769
    This enables a link-time workaround for erratum 835769 present on certain early revisions of Cortex-A53 processors.  The workaround is disabled by default.

  * The linker supports a new command line option: --print-sysroot

    This will display the sysroot that was configured into the linker when it was built.  If the linker was configured without sysroot support nothing will be printed.


November 17, 2014 12:47 PM

FSF Events

Richard Stallman - "Por una Sociedad Digital Libre" (Madrid, Spain)

There are many threats to freedom in the digital society, such as mass surveillance, censorship, digital handcuffs, proprietary software that controls users, and the war on sharing. The use of web services presents yet more threats to users' freedom. Finally, we have no concrete right to do anything on the Internet; all our online activities are precarious, and we may continue with them only as long as companies are willing to cooperate.

This talk by Richard Stallman will be nontechnical and open to the public; everyone is invited to attend.

Please fill out this form so that we can contact you about future events in the Madrid area.

The exact location of the talk is to be determined.

November 17, 2014 10:42 AM

November 16, 2014

hello @ Savannah

hello-2.10 released [stable]

I'm delighted to announce version 2.10 of GNU hello. See below for
changes in this version.

Here are the compressed sources and a GPG detached signature[*]:

Use a mirror for higher download bandwidth:

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify hello-2.10.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver --recv-keys A9553245FDE9B739

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
Autoconf 2.69
Automake 1.14.1
Gnulib v0.1-263-g92b60e6


  • Noteworthy changes in release 2.10 (2014-11-16) [stable]

Most importantly, this release includes the 'Hello, World' message as
part of translations again. The translation bug was introduced in
release 2.9.

Other changes in this release include: use of a non-recursive build;
removal of the user-defined new-style; an example of how to add a
section to a manual page, such as BUGS; use of the libc error()
reporting facility rather than 'fprintf (stderr, ...)'; use of the
'make update-copyright' facility; generation of the ChangeLog from git
commit logs; and avoidance of manual-page generation errors when
cross-compiling.

by Sami Kerola at November 16, 2014 12:24 PM

November 15, 2014

GNUnet News

Open positions for (aspiring) GNUnet hackers!

As part of my recent move to Inria in Rennes (Bretagne, France), a few new positions for research and development around GNUnet are now opening up. The positions are open for Master's students ("internships"), PhD students (Master's required) and Post-Docs (PhD required).

by Christian Grothoff at November 15, 2014 07:36 PM

November 14, 2014

FSF Events

Richard Stallman - "Por una sociedad digital libre" (Barcelona, Spain)

There are many threats to freedom in the digital society, such as mass surveillance, censorship, digital handcuffs, proprietary software that controls users, and the war on sharing. The use of web services presents yet more threats to users' freedom. Finally, we have no concrete right to do anything on the Internet; all our online activities are precarious, and we may continue with them only as long as companies are willing to cooperate.

This talk by Richard Stallman will be nontechnical and open to the public; everyone is invited to attend.

Registration for this talk can be done anonymously. (It is not mandatory, but it is appreciated, as it helps with organizing the event.)

Please fill out this form so that we can contact you about future events in the Barcelona area.

November 14, 2014 06:10 PM

Andy Wingo

on yakshave, on color, on cosines, on glitchen

Hold on to your butts, kids, because this is epic.

on yaks

As in all great epics, our prideful, stubborn hero starts in a perfectly acceptable state of things, decides on a lark to make a small excursion, and comes back much much later to inflict upon you pictures from his journey.

So. I have a web photo gallery but I don't take many pictures these days. Dealing with photos is a bit of a drag, and the ways that are easier like Instagram or what-not give me the (peer, corporate, government: choose 3) surveillance hives. So, I had vague thoughts that I should update my web gallery. Yakpoint 1.

At the same time, my web gallery was written for mod_python on the server, and I don't like hacking in Python any more and kinda wanted to switch away from Apache. Yakpoint 2.

So I rewrote the server-side part in Scheme. (Yakpoint 3.) It worked fine but I found I needed the ability to get the dimensions of files on the server, so I wrote a quick-and-dirty JPEG parser. Yakpoint 4.

I needed EXIF data as well, as the original version displayed EXIF data, and for that I used a binding to libexif that I had written a few years ago when I thought about starting this project (Yakpoint -1). However I found some crashers in the library, because it had never really been tested in production, and instead of fixing them I said "what the hell, I'll just write an EXIF parser". (Yakpoint 5.) So I did and adapted the web gallery to use it (Yakpoint 6, for the adaptation.)

At this point, I looked back, and looked forward, and looked all around, and all was good, but what was with this uneasiness I was feeling? And indeed, I hadn't actually made anything better, and I wasn't taking more photos, and the workflow was the same.

I was also concerned about the client side of things, which was still in Python and using some breakage-prone legacy libraries to do the photo scaling and transformations and what-not, and relied on a desktop application (f-spot) of dubious future. So I started to look at what it would take to port that script to Scheme (Yakpoint 7). Well it used some legacy libraries to copy files over SSH (gnome-vfs; switching away from that would be Yakpoint 8) and I didn't want to make a Scheme GIO binding (Yakpoint 9, narrowly avoided), and I then -- and then, dear reader -- so then I said "well WTF my caching story on the server is crap anyway, I never know when the sqlite database has changed or not so I never know what responses I can cache, what I really want is a functional datastore" (Yakpoint 10), which is what I have with Git and Tekuti (Yakpoint of yore), and so why not just store my photos in Git like I do in Tekuti for blog posts and serve them from there, indexing as needed? Of course I'd need some other server software (Yakpoint of fore, by which I meantersay the future), but then I could just git push to update my photo gallery, and I wouldn't have to endure the horror that is GVFS shelling out to ssh in a FUSE daemon (Yakpoint of ne'er).

So. After mulling over these thoughts for a while I decided, during an autumnal walk on the Salève in which we had the greatest views of Mont Blanc everrrrr and yet where are the photos?, that really what I needed was new photo management software, not just a web gallery. I should be able to share photos from my phone or from my desktop, fix them up either place, tag and such, and OK woo hoo! Such is the future! And the present for many people? Thing is, I also needed good permissions management (Yakpoint what, 10 I guess?), because you know a dude just out of college is not the same as that dude many years later. Which means serving things over HTTPS (Yakpoints 11-47) in such a way that the app has some good control over who gets what.

Well. Anyway. My mind ran ahead, and runs ahead, and yet we haven't actually tasted the awesome sauce yet. So! The photo management software, wherever it lives, needs to rotate photos at least, and scale them down to a few resolutions. I smell a yak! I looked at jpegtran which can do some lossless rotations but it's not available as a library, which is odd; and really I don't like shelling out for core program functionality, because every time I deal with the file system it's the wild west of concurrent mutation. If naming things is one of the two hardest problems in computer science, the file system is the worst because you have to give a global name to every intermediate value.

At the same time to scale images, what was I to do? Make a binding to libjpeg? Well I started (Yakpoint 48) but for reals kids, libjpeg is not fun. It works great and is really clever but

  1. it's approximately impossible to use from a dynamic ffi; you want a compiler to verify that you are using the right structure definitions

  2. there has been an inane ABI and format break imposed by the official IJG libjpeg but which other implementations have not followed, but how could you know which one you are using?

  3. the error handling facility encourages longjmp in C programs; somewhat terrifying

  4. off-heap image manipulation libraries always interact poorly with GC, because the GC only sees the small pointer to the off-heap image, and so doesn't GC often enough

  5. I have zero guarantee that libjpeg won't change ABI in weird ways, and I don't want to touch this software for the next 10 years

  6. I want to do jpegtran-like lossless transformations, but that's not available as a library, and it's totes ridics that binding libjpeg does not help you out here

  7. it's still an unsafe C library, battle-tested yes, but terrifyingly unsafe, and I'd be putting it on my server and who knows?

Friends, I arrived at the pasture, and I, I chose the yak less shaven. I took my lame JPEG parser and turned it into a full decoder (Yakpoint 49), realized it wasn't much more work to do an encoder (Yakpoint 50), and implemented the lossless transformations (Yakpoint 51).

on haters

Before we go on, I know some people would think "what is this kid about". I mean, custom gallery software, a custom JPEG library of all things, all bespoke, why don't you just use off-the-shelf solutions? Why aren't you normal and use a normal language and what about the best practices and where's your business case and I can't go on about this because there's a technical term for people that say this kind of thing and it's "hater".

Thing is, when did a hater ever make anything cool? Come to think of it, when did a hater make anything at all? In my experience the most vocal haters have nothing behind their names except a long series of pseudonymous rants in other people's comment boxes. So friends, in the joyful spirit of earning-anew, let's talk about JPEG!

on color

JPEG is a funny thing. Photos are our lives and our memories, our first steps and our friends, and yet I for one didn't know very much about them. My mental model that "a JPEG is a rectangle of pixels" doesn't turn out to be quite right.

If you actually look in a normal JPEG, you see three planes of information. If I take this image, for example:

If I decode it, actually I get three images. Here's the first one:

This is just the greyscale version of the image. So, storytime! Remember black and white television? We had an old one that got moved around the house sometimes, like if Mom was working at something in the kitchen. We also had a color one in the living room, and you could watch one or the other and they showed the same stuff. Strange when you think about it though -- one being in color and the other not. Well it turns out that color was literally just added on, both historically and technically. The main broadcast was still in black and white, and then in one part of the frequency band there were separate color signals, which color TVs would pick up, mix with the black and white signal, and come out with color. Wikipedia notes that "color TV" was really just "colored TV", which is a phrase whose cleverness I respect. Big ups to the W P.

In the context of JPEG, this black-and-white signal is sometimes called "luma", but is more precisely called Y', where the "prime" (the apostrophe) indicates that the signal has gamma correction applied.

In the image above, I replaced the color planes (sometimes collectively called the "chroma") with zeroes, while losslessly keeping the luma. Below is the first color plane, with the Y' plane replaced with a uniform 50% luma, and the other color plane replaced with zeros.

This color signal is technically known as CB, which may be very imperfectly understood as the bluish component of the color. Well the original image wasn't very blue, so we don't see very much here.

Indeed, our eyes have a harder time seeing differences in color than differences in intensity. Apparently this goes all the way down to biology -- we have more receptors in our eyes for "black and white" and fewer for color.

Early broadcasters took advantage of this difference in perception by actually devoting more bandwidth in their broadcasts to luma than to chroma; if you check the Wikipedia page you will see that the area in the spectrum allocation devoted to color is much smaller than the area devoted to intensity. So it is in JPEG: the above image being half-width indicates that actually we're just encoding one CB sample for every two Y' samples.

Finally, here we have the CR color plane, which can loosely be thought of as the "redness" of the image.

These test images and crops preserve the actual encoding of this photo as it came from my camera, without re-encoding. That's partly why there's not much interesting going on; with the megapixels these days, it's hard to fit much of anything in a few hundred pixels square. This particular camera is sub-sampling in the horizontal direction, but it's also common to subsample vertically as well, producing color planes that are half-width and half-height. In my limited investigations I have found that cameras tend to sub-sample just in the X direction, producing what they call 4:2:2 images, and that standard software encoders subsample in both, producing 4:2:0.

Incidentally, properly scaling up the color planes is quite an irritating endeavor -- the standard indicates that the color is sampled between the locations of the Y' samples ("centered" chroma), but these images originally have EXIF data that indicates that the color samples are taken at the position of the first Y' sample ("co-sited" chroma). I'm pretty sure libjpeg doesn't delve into the EXIF to check this though, so it would seem that all renderings I have seen of these photos are subtly off.

But how do you get proper color out of these strange luma and chroma things? Well, the Y'CBCR colorspace is really just the same color cube as RGB, except rotated: the Y' axis traverses the diagonal from (0, 0, 0) (black) to (255, 255, 255) (white). CB and CR are perpendicular to that diagonal, pointing towards blue or red respectively. So to go back to RGB, you multiply by a matrix to rotate the cube.
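
A minimal sketch of undoing that rotation for one sample, assuming the full-range BT.601 constants that JFIF JPEGs use (the helper names are mine):

```c
#include <assert.h>

/* Undo the Y'CbCr "rotation" for one sample, using the full-range
   BT.601 constants conventional for JFIF JPEG.  Chroma is centered
   on 128; results outside 0..255 are clamped. */
static int clamp255(double x)
{
    return x < 0.0 ? 0 : x > 255.0 ? 255 : (int)(x + 0.5);
}

void ycbcr_to_rgb(int y, int cb, int cr, int *r, int *g, int *b)
{
    *r = clamp255(y + 1.402    * (cr - 128));
    *g = clamp255(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128));
    *b = clamp255(y + 1.772    * (cb - 128));
}
```

The clamping is where the "cut corners" of the cube show up: some Y'CbCr triples land outside the RGB cube and get squashed onto its faces.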

It's not a very intuitive color system, as you can see from the images above. For one thing, at zero or full luma, the chroma axes have no meaning; black and white can have no hue. Indeed if you imagine trying to fit a cube corner-down into a similar-sized box, you end up either having empty space in the box, or you have to cut off corners from the cube, or both. Cut corners means that bits of the Y'CBCR signal are wasted; empty space means there are RGB colors that are not representable in Y'CBCR. I'm not sure, but I think both are true for the particular formulation of Y'CBCR used in JPEG.

There's more to say about color here but frankly I don't know enough to do so, even though I worked in digital video for many years. If this is something you are mildly interested in, I highly, highly recommend watching Wim Taymans' presentation at this year's GStreamer conference. He takes a look at color in video that is constructive, building up from biology through math to engineering. His is a principled approach rather than a list of rules. It really clarified a number of things for me (and opened doors to unknown unknowns beyond).

on cosines

Where were we? Right, JPEG. So the proper way to understand what JPEG is is to understand the encoding process. We've covered colorspace conversion from RGB to Y'CBCR and sub-sampling. Next, the image canvas is divided into equal-sized "macroblocks". (These are called "minimum coded units" (MCUs) in the JPEG context, but in video they are usually called macroblocks, and it's a better name.) Without sub-sampling, each macroblock will contain one 8-sample-by-8-sample block for each component (Y', CB, CR) of the image. In my images above, the canvas space corresponding to one chroma block is the space of two luma blocks, so the macroblocks will be 16 samples wide and 8 samples tall, and contain two Y' blocks and one each of CB and CR. If the image canvas can't be evenly divided into macroblocks, it is padded to fit, usually by duplicating the last column or row of samples.
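
The padding arithmetic is simple enough to sketch (helper names mine), assuming the 16-by-8 macroblocks of the 4:2:2 layout described above:

```c
/* Number of 16-sample-wide, 8-sample-tall macroblocks needed to
   cover a canvas under 4:2:2 subsampling.  A partial macroblock
   at the right or bottom edge still costs a whole one, filled by
   duplicating the last column or row of samples. */
int mcus_across(int width)
{
    return (width + 15) / 16;
}

int mcus_down(int height)
{
    return (height + 7) / 8;
}
```

So a 100-by-50 image rounds up to a 7-by-7 grid of macroblocks, and the strip of duplicated samples along the edges travels along with the file.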

Then to make a JPEG, each block is encoded separately, then the whole thing is just written out to a file, and you're done!

This description glosses over a couple of important points, but it's a good big-picture view to have in mind. The pipeline goes from RGB pixels, to a padded RGB canvas, to separate Y'CBCR planes, to a possibly subsampled set of those planes, to macroblocks, to encoded macroblocks, to the file. Decoding is the reverse. It's a totally doable, comprehensible thing, and that was one of the big takeaways for me from this project. I took photography classes in high school and it was really cool to see how to shoot, develop, and print film, and this is similar in many ways. The real "film" is raw-format data, which some cameras produce, but understanding JPEG is like understanding enlargers and prints and fixer baths and such things. It's smelly and dark but pretty cool stuff.

So, how do you encode a block? Well peoples, this is a kinda cool thing. Maybe you remember from some math class that, given n uniformly spaced samples, you can always represent that series as a sum of n cosine functions of equally spaced frequencies. In each little 8-by-8 block, that's what we do: a "forward discrete cosine transformation" (FDCT), which is just multiplying together some matrices for every point in the block. The FDCT is completely separable in the X and Y directions, so the space of 8 horizontal coefficients multiplies by the space of 8 vertical coefficients at each column to yield 64 total coefficients, which is not coincidentally the number of samples in a block.

Funny thing about those coefficients: each one corresponds to a particular horizontal and vertical frequency. We can map these out as a space of functions; for example giving a non-zero coefficient to (0, 0) in the upper-left block of an 8-block-by-8-block grid, and so on, yielding a 64-by-64 pixel representation of the meanings of the individual coefficients. That's what I did in the test strip above. Here is the luma example, scaled up without smoothing:

The upper-left corner corresponds to a frequency of 0 in both X and Y. The lower-right is a frequency of 4 "hertz", oscillating from highest to lowest value in both directions four times over the 8-by-8 block. I'm actually not sure why there are some greyish pixels around the right and bottom borders; it's not a compression artifact, as I constructed these DCT arrays programmatically. Anyway. Point is, your lover's smile, your sunny days, your raw urban graffiti, your child's first steps, all of these are reified in your photos as a sum of cosine coefficients.

The odd thing is that what is reified into your pictures isn't actually all of the coefficients there are! Firstly, because the coefficients are rounded to integers. Mathematically, the FDCT is a lossless operation, but in the context of JPEG it is not because the resulting coefficients are rounded. And they're not just rounded to the nearest integer; they are probably quantized further, for example to the nearest multiple of 17 or even 50. (These numbers seem exaggerated, but keep in mind that the range of coefficients is about 8 times the range of the original samples.)

The choice of what quantization factors to use is a key part of JPEG, and it's subjective: low quantization results in near-indistinguishable images, but in middle compression levels you want to choose factors that trade off subjective perception with file size. A higher quantization factor leads to coefficients with fewer bits of information that can be encoded into less space, but results in a worse image in general.
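
In code, the quantization step and its inverse are nearly trivial (hypothetical helpers); all the loss lives in the rounding:

```c
/* Quantize one coefficient: divide by its per-frequency factor
   and round to the nearest integer.  Dequantization multiplies
   back, so only multiples of the factor survive the round trip. */
int quantize(double coef, int q)
{
    return (int)(coef / q + (coef >= 0.0 ? 0.5 : -0.5));
}

double dequantize(int level, int q)
{
    return (double)level * q;
}
```

With a factor of 17, a coefficient of 100 becomes level 6 and decodes as 102; a coefficient of 8 vanishes entirely.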

JPEG proposes a standard quantization matrix, with one number for each frequency (coefficient). Here it is for luma:

(define *standard-luma-q-table*
  #(16 11 10 16 24 40 51 61
    12 12 14 19 26 58 60 55
    14 13 16 24 40 57 69 56
    14 17 22 29 51 87 80 62
    18 22 37 56 68 109 103 77
    24 35 55 64 81 104 113 92
    49 64 78 87 103 121 120 101
    72 92 95 98 112 100 103 99))

This matrix is used for "quality 50" when you encode an 8-bit-per-sample JPEG. You can see that lower frequencies (the upper-left part) are quantized less harshly, and vice versa for higher frequencies (the bottom right).

(define *standard-chroma-q-table*
  #(17 18 24 47 99 99 99 99
    18 21 26 66 99 99 99 99
    24 26 56 99 99 99 99 99
    47 66 99 99 99 99 99 99
    99 99 99 99 99 99 99 99
    99 99 99 99 99 99 99 99
    99 99 99 99 99 99 99 99
    99 99 99 99 99 99 99 99))

For chroma (CB and CR) we see that quantization is much more harsh in general. So not only will we sub-sample color, we will also throw away more high-frequency color variation. It's interesting to think about, but also makes sense in some way; again in photography class we did an exercise where we shaded our prints with colored pencils, and the results were remarkable. My poor, lazy coloring skills somehow rendered leaves lifelike in different hues of green; really though, they were shades of grey, colored in imprecisely. "Colored TV" indeed.

With this knowledge under our chapeaux, we can now say what the "JPEG quality" setting actually is: it's simply that pair of standard quantization matrices scaled up or down. Towards "quality 100", the matrix approaches all-ones, for no quantization, and thus minimal loss (though you still have some rounding, often subsampling as well, and RGB-to-Y'CBCR gamut loss). Towards "quality 0" they scale to a matrix full of large values, for harsh quantization.
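
For what it's worth, the scaling convention the IJG library uses, as far as I can tell (treat this as a sketch, not gospel), looks like this for a single table entry:

```c
/* Scale one entry of the standard quantization table by a 0..100
   "quality" knob, following the IJG convention: quality 50 leaves
   the entry alone, higher qualities shrink it toward 1, lower
   qualities inflate it. */
int scaled_q(int base, int quality)
{
    if (quality < 1)   quality = 1;
    if (quality > 100) quality = 100;
    int scale = quality < 50 ? 5000 / quality : 200 - 2 * quality;
    int q = (base * scale + 50) / 100;
    if (q < 1)   q = 1;    /* never quantize by zero */
    if (q > 255) q = 255;  /* must fit a byte in baseline JPEG */
    return q;
}
```

So the upper-left luma entry of 16 stays 16 at quality 50, drops to 1 at quality 100, and doubles to 32 at quality 25.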

This understanding also explains those wavey JPEG artifacts you get on low-quality images. Those artifacts look like waves because they are waves. They usually occur at sharp intensity transitions, which like a cymbal crash cause lots of high frequencies that then get harshly quantized. Incidentally I suspect (but don't know) that this is the same reason that cymbals often sound bad in poorly-encoded MP3s, because of harsh quantization in the frequency domain.

Finally, the coefficients are written out to a file as a stream of bits. Each file gets a Huffman code allocated to it, which ideally is built from the distribution of quantized coefficient sizes seen in all of the blocks of an image. There are usually different encodings for luma and chroma, to reflect their different quantizations. Reading and writing this bitstream is a bit of a headache but the algorithm is specified in the JPEG standard, and all you have to do is implement it. Notably, though, there is special support for encoding a run of zero-valued coefficients, which happens often after quantization. There are rarely wavey bits in a blue blue sky.

on transforms

It's terribly common for photos to be wrongly oriented. Unfortunately, the way that many editors fix photo rotation is by setting a bit in the EXIF information of the JPEG. This is ineffectual, as web browsers don't look in the EXIF information, and silly, because it turns out you can losslessly rotate most JPEG images anyway.

Consider that the body of a JPEG is an array of macroblocks. To rotate an image, you just have to rearrange those macroblocks, then rearrange the blocks inside the macroblocks (e.g. swap the two Y' blocks in my above example), then transform the blocks themselves.

The lossless transformations that you can do on a block are transposition, vertical flipping, and horizontal flipping.

Transposition flips a block along its downward-sloping diagonal. To do so, you just swap the coefficients at (u, v) with the coefficients at (v, u). Easy peasey.

Flipping is trickier. Consider the enlarged DCT image from above. What would it take to horizontally flip the function at (0, 1)? Instead of going from light to dark, you want it to go from dark to light. Simple: you just negate the coefficients! But you only want to negate those coefficients that are "odd" in the X direction, which are those coefficients whose column is odd. And actually that's all there is to it. Flipping vertically is the same, but for coefficients whose row is odd.

I said "most images" above because those whose size is not evenly divided by the macroblock size can't be losslessly rotated -- you will end up seeing some of the hidden data that falls off the edge of the canvas. Oh well. Most raw images are properly dimensioned, and if you're downscaling, you already have to re-encode anyway.

But that's just flipping and transposition, you say! What about rotation? Well it turns out that you can express rotation in terms of these operations: rotating 90 degrees clockwise is just a transpose and a horizontal flip (in that order). Together, flipping horizontally, flipping vertically, and transposing form a group, in the same way that flipping and flopping form a group for mattresses. Yeah!
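
Sketched on a single 8-by-8 coefficient block (function names are mine), the whole group looks like this:

```c
/* Lossless transforms on an 8x8 array of DCT coefficients:
   transpose swaps (u, v) with (v, u); a horizontal flip negates
   the coefficients in odd columns; a vertical flip negates odd
   rows; and a 90-degree clockwise rotation is a transpose
   followed by a horizontal flip. */
void transpose_block(int b[8][8])
{
    for (int u = 0; u < 8; u++)
        for (int v = u + 1; v < 8; v++) {
            int t = b[u][v];
            b[u][v] = b[v][u];
            b[v][u] = t;
        }
}

void hflip_block(int b[8][8])
{
    for (int row = 0; row < 8; row++)
        for (int col = 1; col < 8; col += 2)
            b[row][col] = -b[row][col];
}

void vflip_block(int b[8][8])
{
    for (int row = 1; row < 8; row += 2)
        for (int col = 0; col < 8; col++)
            b[row][col] = -b[row][col];
}

void rotate90_block(int b[8][8])
{
    transpose_block(b);
    hflip_block(b);
}
```

Four applications of the rotation bring every block back to where it started, which is the group structure showing through.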

on scheme

I wrote this library in Scheme because that's my language of choice these days. I didn't run into any serious impedance mismatches; Guile has a generic multi-dimensional array facility that made it possible to express many of these operations as generic folds, unfolds, or maps over arrays. The Huffman coding part was a bit irritating, but all in all things were pretty good. The speed is pretty bad, but I haven't optimized it at all, and it gives me a nice test case for the compiler. Anyway, it's been fun and it suits my needs. Check out the project page if you're interested. Yes, to shave a yak you have to get a bit bovine and smelly, but yaks live in awesome places!

Finally I will leave you with a glitch, one of many that I have produced over the last couple weeks. Comments and corrections welcome below. Happy hacking!

by Andy Wingo at November 14, 2014 04:49 PM

November 12, 2014

FSF Blogs

Friday Free Software Directory IRC meetup: November 14

Join the FSF and friends on Friday, November 14, from 2pm to 5pm EST (19:00 to 22:00 UTC) to help improve the Free Software Directory by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on freenode.

Tens of thousands of people visit the Directory each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions, to detailed info about version control, IRC channels, documentation, and licensing that has been carefully checked by FSF staff and trained volunteers.

While the Free Software Directory has been a great resource to the world over the past decade, and continues to be, it has the potential to be a resource of even greater value. But it needs your help!

If you are eager to help and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today!

November 12, 2014 07:11 PM

November 10, 2014

Lonely Cactus is good for old tech docs

I saw on Undeadly a note that OpenBSD's Ted was patching the ancient bcd program, which converts text into ASCII-art representations of punch cards. Punch cards were a technology from the 1960s and 1970s (?) that stored code or data on cardstock, with holes punched out of them. Each card held a line of text. If I recall correctly, each character was a column on the card, with as many as seven holes punched out of a set of 12 possible locations. There were 40 to 80 columns on the card, depending on the brand and the decade.

Anyway, Ted modified the bcd program to read in the ASCII-art representation of punch cards that it generated, so that it became, essentially, a very inefficient reversible encoding; but he was unsure where to search for documents that he could use to validate the output.

My go-to place for tech docs from the 1970s is If you've never searched its collection, you should check it out.

Anyway, I did manage to find a couple of references there to punch card encoding.

For a couple of brands, anyway, punch card encoding seemed to have, for each character, a 4-bit "zone" or category and an 8-bit index. But this didn't result in a 12-bit encoding. Only a sparse subset of the available 12-bit patterns indicated a character, for mechanical reasons, I guess. A subset of the characters now included in ASCII were encodable, but missing some punctuation such as square brackets. It is for reasons like this that the C standard has trigraphs.

by Mike at November 10, 2014 06:39 AM


GnuTLS 3.3.10, 3.2.20 and 3.1.28

Released GnuTLS 3.3.10, GnuTLS 3.2.20, and GnuTLS 3.1.28, which are bug-fix releases on the current and two previous stable branches.

Posted a security advisory on a vulnerability of the gnutls library.

by Nikos Mavrogiannopoulos at November 10, 2014 12:00 AM

November 09, 2014

GNU Remotecontrol

Newsletter – November 2014


The stuff going on in the big picture now…..

United States Electricity Price per kWh
Current and Past

  August    September   Trend      % Change
  $0.143    $0.141      Decrease   -1.40%

  Year   September   Trend      % Change   % Since 2004   Difference
  2004   $0.099      Same        0.00%       0.00%          0.00%
  2005   $0.106      Increase    7.07%       7.07%          7.07%
  2006   $0.118      Increase   11.32%      19.19%         12.12%
  2007   $0.121      Increase    2.54%      22.22%          3.03%
  2008   $0.130      Increase    7.44%      31.31%          9.09%
  2009   $0.130      Same        0.00%      31.31%          0.00%
  2010   $0.132      Increase    1.54%      33.33%          2.02%
  2011   $0.135      Increase    2.27%      36.36%          3.03%
  2012   $0.133      Decrease   -1.48%      34.34%         -2.02%
  2013   $0.137      Increase    3.01%      38.38%          4.04%
  2014   $0.141      Increase    2.92%      42.42%          4.04%
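The year-over-year and cumulative figures above follow directly from the September prices; a quick sketch to recompute them (prices transcribed from the table):

```python
# Recompute year-over-year change and cumulative change since 2004
# from the September prices in the table above.
prices = {2004: 0.099, 2005: 0.106, 2006: 0.118, 2007: 0.121, 2008: 0.130,
          2009: 0.130, 2010: 0.132, 2011: 0.135, 2012: 0.133, 2013: 0.137,
          2014: 0.141}

base = prices[2004]
for year in sorted(prices):
    if year == 2004:
        continue
    yoy = (prices[year] - prices[year - 1]) / prices[year - 1] * 100
    since = (prices[year] - base) / base * 100
    print(f"{year}: {yoy:+.2f}% YoY, {since:+.2f}% since 2004")
# First line:  2005: +7.07% YoY, +7.07% since 2004
# Last line:   2014: +2.92% YoY, +42.42% since 2004
```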

United Kingdom Utility Prices
Current and Past

London by night, seen from the International Space Station

The Smart Grid Educational Webinar Series Archives has changed the address of our presentation. We hope you will view it. It was an excellent experience for both the GNU remotecontrol Team and “…the world class experts in the field…”

The stuff that has caught our eye…..

Demand Response

  • An article, reporting that an appeals court granted a stay of the decision to overturn FERC Order 745.
  • An article, considering the future of Demand Response in light of the aforementioned ruling.
  • An article, discussing the first legal case in the FERC Order 745 matter.
  • An article, considering whether Smart Phones are the bridge to help achieve automated demand response.
  • A review, of the Ecobee3 Wi-Fi Smart Thermostat, with unpleasant results.
  • An article, finding the Smart Grid has entered the age of high performance computing.

Smart Grid – Consumer

  • An article, reporting Nest has purchased Revolv and discontinued the Revolv offering.
  • An article, describing the cultural impact in India from having a national electric Smart Grid.
  • A security bulletin, describing point-of-sale security compromises. Compromising a device such as an insecure network-connected HVAC thermostat, whether by design or configuration, could be much simpler to accomplish than previously understood.
  • A review, of Smart Phone enabled door locks. The findings reveal insufficient security in the door lock designs, increasing the risk of home invasion.
  • A survey, finding public utilities will experience strong competition in building a national electrical Smart Grid. What is unclear is how this competition will impact the price of energy as public utilities seek to offset these competitive costs.
  • An article, finding PJM Interconnection is proposing many rule changes to FERC Order 745.
  • An announcement, from the United States Department of Energy, of nearly $8 million to support research and development of the next generation of heating, ventilating, and air conditioning (HVAC) technologies. This seems to be an attempt to align with the Appliance and Equipment Standards Program, in the hope of finding best practices for using HVAC technologies with the various other technologies prevalent today.

Smart Grid – Producer

  • An article, considering how to best modify Smart Grid analytical thinking.
  • An article, finding Southeast Asia to be the next hot spot for Smart Grid investment.
  • A thought provoking article, considering how the self-contained systems of a submarine could improve public utility operations.
  • An article, describing how South Korea is well-positioned to export the Smart Grid technologies it has successfully developed.

Smart Grid – Security

  • A study, finding smart meters can be hacked to reduce charges on billing cycle invoices.
  • A report, finding cellular phone companies are tracking users in possible violation of federal telecommunications and wiretapping laws. This report has been validated.
  • An article, finding the interconnection of the three United States electrical grids is held back only by financing. It is unclear how security of this newly interconnected electrical grid will occur.
  • An article, finding more vulnerabilities in the United States national electrical grid.

Status Update of our 2014 Plan…..

Demand Response

  • Further discussions with members of the electronics industry.
  • No other work since the April newsletter.

Unattended Server Side Automation

  • No other work since the April newsletter.

Power Line Communication

  • Further discussions with the members of the electronics industry.
  • No other work since the January newsletter.

Talk to us with your comments and suggestions on our plan for this year.

The stuff we are talking about now…..

The rise of electric vehicles is causing a faster build out of the Smart Grid. The collective build out of the Smart Grid will enable cost effectiveness to be found in achieving the network connected HVAC thermostat. The electric vehicle market will also bring innovation to the home, in the form of lower costs for Smart Grid technologies which complement the electric vehicle charging process. It is highly likely the electric vehicle market will accelerate adoption of the network connected HVAC thermostat, simply because of cost reduction to establish the necessary infrastructure in the home.

Many people have asked us about adding other types of thermostats to GNU remotecontrol. There are three questions that need to be answered before we can offer GNU remotecontrol support for any IP thermostat. These questions are:

  • How to CONNECT to it (NETWORK).
  • How to READ from it (CODE).
  • How to WRITE to it (CODE).
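As a rough illustration only (the class and method names below are hypothetical, not part of GNU remotecontrol's actual codebase), the three questions map naturally onto an adapter interface, with one implementation per supported thermostat model:

```python
# Hypothetical adapter interface illustrating the three questions:
# CONNECT (network), READ (code), WRITE (code).
from abc import ABC, abstractmethod

class ThermostatAdapter(ABC):
    """One adapter per supported IP thermostat model."""

    @abstractmethod
    def connect(self, host: str, port: int) -> None:
        """NETWORK: open a connection to the device."""

    @abstractmethod
    def read_setpoint(self) -> float:
        """CODE: read the current temperature setpoint."""

    @abstractmethod
    def write_setpoint(self, value: float) -> None:
        """CODE: write a new temperature setpoint."""
```

Answering the three questions for a new device amounts to filling in these three methods; everything above the adapter can stay device-independent.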

It is our hope to have dozens and dozens of thermostat types that work with GNU remotecontrol. Let us know if you designed or manufactured a device and you would like to test it with GNU remotecontrol.

The stuff you may want to consider…..

We have 0 new bugs and 0 fixed bugs since our last Blog posting. Please review these changes and apply to your GNU remotecontrol installation, as appropriate.

We have 0 new tasks and 1 completed task since our last Blog posting. Please review these changes and apply to your GNU remotecontrol installation, as appropriate.

The stuff you REALLY want to consider…..

Analytics upstart Bidgely will integrate its tracking and controlling systems with the TXU Energy MyEnergy dashboard. The only way this analytical effort can be accomplished is to collect the necessary data. This offering is a milestone in history, as TXU becomes the first public utility to offer appliance-level analysis of energy consumption. Texas's annual power consumption ranks between that of the United Kingdom and Italy. Privacy on the electrical grid has irrevocably changed.

GNU remotecontrol relies on OS file access restrictions, Apache authentication, MySQL authentication, and SSL encryption to secure your data. Talk to us if you want to find out how you can further strengthen the security of your system, or if you have suggestions for improving the security of our current system architecture.

Whatever you do…..don’t get beat up over your Energy Management strategy. GNU remotecontrol is here to help simplify your life, not make it more complicated. Talk to us if you are stuck or cannot figure out the best option for your GNU remotecontrol framework. The chances are the answer you need is something we have already worked through. We would be happy to help you by discussing your situation with you.


Why the Affero GPL?


by gnuremotecontrol at November 09, 2014 08:01 PM

acct @ Savannah

November 07, 2014

FSF Blogs

Recap of Friday Free Software Directory IRC meetup: November 7

In today's Friday Free Software Directory (FSD) IRC Meeting, we celebrated the launch of, worked on some new artwork and icons, and added a few new entries, including:

  • Mailpile is an email server with a modern web-based client. It is dual-licensed under GNU AGPLv3 and Apache 2.0.
  • Libertree is a social network implemented over XMPP and a web app. It is licensed under the terms of the GNU AGPLv3 or (at your option) any later version.

Join us every Friday to help improve the Free Software Directory! Find out how to attend the Friday Free Software Directory IRC Meetings by checking our blog or by subscribing to the RSS feed.

November 07, 2014 10:40 PM

FSF News

Software Freedom Conservancy and Free Software Foundation announce

BOSTON, Massachusetts, USA -- Friday, November 7, 2014 -- Software Freedom Conservancy and the Free Software Foundation (FSF) today announce an ongoing public project that began in early 2014: Copyleft and the GNU General Public License: A Comprehensive Tutorial and Guide, and the publication of that project in its new home on the Internet at This new site will not only provide a venue for those who constantly update and improve the Comprehensive Tutorial, but is also now home to a collaborative community to share and improve information about copyleft licenses, especially the GNU General Public License (GPL), and best compliance practices.

Bradley M. Kuhn, President and Distinguished Technologist of Software Freedom Conservancy and member of FSF's Board of Directors, currently serves as editor-in-chief of the project. The text has already grown to 100 pages discussing all aspects of copyleft -- including policy motivations, detailed study of the license texts, and compliance issues. This tutorial was initially constructed from materials that Kuhn developed on a semi-regular basis over the last eleven years. Kuhn merged this material, along with other material regarding the GPL published by the FSF, into a single, coherent volume, and released it publicly for the benefit of all users of free software.

Today, Conservancy announces a specific, new contribution: an additional chapter to the Case Studies in GPL Enforcement section of the tutorial. This new chapter, co-written by Kuhn and Conservancy's compliance engineer, Denver Gingerich, discusses in detail the analysis of a complete, corresponding source (CCS) release for a real-world electronics product, and describes the process that Conservancy and the FSF use to determine whether a CCS candidate complies with the requirements of the GPL. The CCS analyzed is for ThinkPenguin's TPE-NWIFIROUTER wireless router, which the FSF recently awarded Respects Your Freedom (RYF) certification.

The copyleft guide itself is distributed under the terms of a free copyleft license, the Creative Commons Attribution-ShareAlike 4.0 International license. Kuhn, who hopes the initial release and this subsequent announcement will inspire others to contribute to the text, said, "information about copyleft -- such as why it exists, how it works, and how to comply -- should be freely available and modifiable, just as all generally useful technical information should. I am delighted to impart my experience with copyleft freely. I hope, however, that other key thinkers in the field of copyleft will contribute to help produce the best reference documentation on copyleft available."

Particularly useful are the substantial contributions already made to the guide from the FSF itself. As the author, primary interpreter, and ultimate authority on the GPL, the FSF is in a unique position to provide insights into understanding free software licensing. While the guide as a living text will not automatically reflect official FSF positions, the FSF has already approved and published one version for use at its Seminar on GPL Enforcement and Legal Ethics in March 2014. "Participants at our licensing seminar in March commented positively on the high quality of the teaching materials, including the comprehensive guide to GPL compliance. We look forward to collaborating with the community to continually improve this resource, and we will periodically review particular versions for FSF endorsement and publication," said FSF's executive director John Sullivan.

Enthusiastic new contributors can get immediately involved by visiting and editing the main wiki on, or by submitting merge requests on's gitorious site for the guide, or by joining the project mailing list and IRC channel. welcomes all contributors. The editors have already incorporated other freely licensed documents about GPL and compliance with copyleft licenses -- thus providing a central location for all such works. Furthermore, the project continues to recruit contributors who have knowledge about other copyleft licenses besides the FSF's GPL family. In particular, Mike Linksvayer, member of Conservancy's board of directors, has agreed to lead the drafting on a section about Creative Commons Attribution-ShareAlike licenses to mirror the ample text already available on GPL. "I'm glad to bring my knowledge about the Creative Commons copyleft licenses as a contribution to improve further this excellent tutorial text, and I hope that as a whole can more generally become a central location to collect interesting ideas about copyleft policy," said Linksvayer.

About is a collaborative project to create and disseminate useful information, tutorial material, and new policy ideas regarding all forms of copyleft licensing. Its primary project is currently a comprehensive tutorial and guide, which describes the policy motivations of copyleft licensing, presents a detailed analysis of the text of various copyleft licenses, and gives examples and case studies of copyleft compliance situations.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at and, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at Its headquarters are in Boston, MA, USA.

About Software Freedom Conservancy

Software Freedom Conservancy is a not-for-profit organization that promotes, improves, develops and defends Free, Libre and Open Source software projects. Conservancy is home to more than thirty software projects, each supported by a dedicated community of volunteers, developers and users. Conservancy's projects include some of the most widely used software systems in the world across many application areas, including educational software deployed in schools around the globe, embedded software systems deployed in most consumer electronic devices, distributed version control developer tools, integrated library services systems, and widely used graphics and art programs. A full list of Conservancy's member projects is available. Conservancy provides these projects with the necessary infrastructure and not-for-profit support services to enable each project's communities to focus on what they do best: creating innovative software and advancing computing for the public's benefit.

Media Contacts

Joshua Gay
Licensing & Compliance Manager
Free Software Foundation
+1 (617) 542 5942

Karen M. Sandler
Executive Director
Software Freedom Conservancy
+1 (212) 461-3245

November 07, 2014 04:15 PM

GNUnet News

We Fix the Net Assembly @ 31c3

The "We Fix the Net" assembly is to be the perfect place at 31c3 for all hackers to do something about replacing today's broken Internet with secure alternatives. We hope to have some talks and panels like last year. Details will be posted here closer to the congress; for now, please contact us at if you are interested in presenting your work or organizing something practical. Topics include:

by Matthias Wachs at November 07, 2014 09:56 AM

November 06, 2014

FSF Blogs

GNU Press has restocked!

GNU pin

GNU Press has restocked popular gear including shirts, hoodies, beanies, and GNU emblem brass pins. We are also reintroducing the GPLv3 shirt in gray, and lowering the price of the GNU30 commemorative travel mug. We have also added 3XL sizes for some of our most popular designs!

As always, if you can't find something in the store but think we should offer it, please add your suggestion to our Ideas page. And remember, associate members of the Free Software Foundation get a 20% discount on all purchases made through the GNU Press store, so if you are not a member already, join today!

To keep up with announcements about new products available in the GNU Press store, subscribe to the mailing list.

November 06, 2014 05:26 PM

FSF Events

Richard Stallman to speak in Ithaca, NY

This speech by Richard Stallman will be nontechnical, admission is gratis, and the public is encouraged to attend.

Title, detailed location, and start time to be determined.

Please fill out our contact form, so that we can contact you about future events in and around Ithaca.

November 06, 2014 04:20 PM

Richard Stallman - "The Free Software Movement" (Richmond, VA)

Richard Stallman will speak about the goals and philosophy of the Free Software Movement, and the status and history of the GNU operating system, which in combination with the kernel Linux is now used by tens of millions of users world-wide.

Please fill out our contact form, so that we can contact you about future events in and around Richmond.

November 06, 2014 03:15 PM