Planet GNU

Aggregation of development blogs from the GNU Project

November 21, 2014

FSF Blogs

Back in stock: a ThinkPenguin router that respects your freedom

TPE-NWIFIROUTER

In September, we awarded use of the Respects Your Freedom (RYF) certification mark to the ThinkPenguin Wireless N-Broadband Router (TPE-NWIFIROUTER). But, within days of our press release, ThinkPenguin sold out of their inventory! However, today we are happy to announce that they have replenished their stocks (this time with a new case, but the same chipset and software).

This is the first home wifi router on the planet that you can go out and purchase that ships only with software that respects your freedom: libreCMC, a distribution of GNU/Linux recently endorsed by the FSF. This is awesome, and you should replace your proprietary software-based wireless router at home with one of these! I've personally been using one at home for a few weeks now and I love it. I even made an unboxing video for you so you can see how simple it is to set up:

TPE-NWIFIROUTER unboxing gif

(GIF not working? See my unboxing video.)

Or to quote artist, hacker, and GNU MediaGoblin maintainer, Chris Webber:

Prior to the ThinkPenguin router, I had no idea about any options for getting a 100% free software router. Seems exciting that someone has done all that work for me. Extra cost on top of that hardware... looks pretty cheap! Considering all the headaches I've gone through to find a phone that reasonably even comes close to respecting my freedom and then doesn't even really do so, hell, for someone doing the same for a router, thank goodness! I will be buying this as soon as we replace our router (probably soon!)

Excited that ThinkPenguin is doing this work! I hope more companies follow in said footsteps.

However, this isn't just an opportunity for individuals to be able to easily buy a 100% free software based router. It is also a big step forward for the free software community, because it gives us a platform that we can use to begin building a free software based network for communication, file sharing, social networking, and more.

This is the third product by ThinkPenguin to be awarded the use of the RYF certification mark. The first two were the TPE-N150USB Wireless N USB Adapter and the long-range TPE-N150USBL model. Together with other products such as the LibreBoot X60 and the LulzBot 3D printer, this brings us one step closer to being able to recommend devices that give users control over all of the hardware they rely upon in their day-to-day computing.

Learn more about the Respects Your Freedom hardware certification, including details on the certification of the TPE-NWIFIROUTER router as well as other RYF certified products at fsf.org/ryf. Hardware sellers interested in applying for certification can consult our certification criteria and should contact licensing@fsf.org with any further questions.

Subscribe to the Free Software Supporter newsletter to receive announcements about future RYF products.

November 21, 2014 09:45 PM

librejs @ Savannah

GNU LibreJS 6.0.6 released

There's a new version of LibreJS - version 6.0.6.

Here are the changes since 6.0.5:
* When there is a contact email found on a site that contains
nonfree JavaScript, the email link now includes a default subject
and body when you click on it. These defaults are configurable in
the LibreJS add-on preferences.

* LibreJS now works in private browsing mode, and with Tor. When
using LibreJS and Tor at the same time, it's possible for the
website you're visiting to see that you're not running any nonfree
JavaScript it may contain. It's important to keep that in mind when
you're using both LibreJS and Tor.

* JS Web Labels links are now recognized with data-jslicense="1"
as well as rel="jslicense", in case you want the page to be
HTML5-valid. Savannah ticket #13366. Thanks to Marco Bresciani
for bringing this up.

* Fixed a bug on the whitelist page (Tools -> LibreJS) where
the "Reset whitelist to default" button wasn't working.

This project's website is here:
http://www.gnu.org/software/librejs/

The source files are here:
https://ftp.gnu.org/gnu/librejs/librejs-6.0.6.tar.gz

And here's the executable you can install in your browser:
https://ftp.gnu.org/gnu/librejs/librejs-6.0.6.xpi

by Nik Nyby at November 21, 2014 04:06 AM

November 20, 2014

FSF Blogs

Friday Free Software Directory IRC meetup: November 21

Join the FSF and friends on Friday, November 21, from 2pm to 5pm EST (19:00 to 22:00 UTC) to help improve the Free Software Directory by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on freenode.


Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed information about version control, IRC channels, documentation, and licensing, all of which has been carefully checked by FSF staff and trained volunteers.


While the Free Software Directory has been, and continues to be, a great resource to the world over the past decade, it has the potential to be a resource of even greater value. But it needs your help!


If you are eager to help and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today!

November 20, 2014 09:09 PM

November 19, 2014

FSF Events

Richard Stallman to speak in Dhaka, Bangladesh

This speech by Richard Stallman will be nontechnical, admission is gratis, and the public is encouraged to attend.

Please fill out our contact form, so that we can contact you about future events in and around Dhaka.

Speech topic, start time, and detailed location to be determined.

November 19, 2014 02:20 PM

Richard Stallman - "Copyright vs Comunidad" (Madrid, Spain)

Copyright developed in the age of the printing press, and it was designed to fit the system of centralized copying that the printing press imposed at the time. But today the copyright system fits poorly with computer networks, and it can only be enforced through harsh measures of force.
The global corporations that profit from copyright are lobbying for ever more unjust penalties and ever greater copyright powers, while restricting the public's access to technology. But if we really want to honor the only legitimate purpose of copyright -- to promote progress for the benefit of the public -- then we will have to make changes in the opposite direction.

This speech by Richard Stallman will be nontechnical and open to the public; all are encouraged to attend.

Please fill out our contact form, so that we can contact you about future events in and around Madrid.

November 19, 2014 01:40 AM

November 18, 2014

guix @ Savannah

GNU Guix 0.8 released

We are pleased to announce the next alpha release of GNU Guix, version 0.8.

The release comes both with a source tarball, which allows you to install it on top of a running GNU/Linux system, and a USB installation image to install the standalone operating system.

For the highlights of this release, see the original announcement.

About GNU Guix

GNU Guix is the functional package manager for the GNU system, and a distribution thereof.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. It also offers a declarative approach to operating system configuration management. Guix uses low-level mechanisms from the Nix package manager, with Guile Scheme programming interfaces.

At this stage the distribution can be used on an i686 or x86_64 machine. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el.

by Ludovic Courtès at November 18, 2014 08:14 AM

November 17, 2014

Nick Clifton

November 2014 GNU Toolchain Update

Hi Guys,

  There is lots to report this month...

  * GCC now has experimental support for offloading.
    Offloading is the ability of the compiler to separate out portions of the program to be compiled by a second, different compiler.  Normally this second compiler targets a different architecture that can be accessed from the primary architecture, like a CPU offloading work onto the GPU of a graphics card.

    Currently only the Intel MIC architecture is supported.  See here for more information:  https://gcc.gnu.org/wiki/Offloading
  

  * The strings program from the binutils package now defaults to using the --all option to scan the entire file.  Previously the default was --data, which only scans the data sections of the file.

    The reason for the change is that the --data option uses the BFD library to locate data sections within the binary, which exposes the strings program to any flaws in that library.  Since security researchers often use strings to examine potential viruses, these flaws could affect them.


  * GCC now has built-in pointer boundary checking: -fcheck-pointer-bounds
    This adds pointer bounds checking instrumentation to the generated code.  Warning messages about memory access errors may also be produced at compile time unless disabled by -Wno-chkp.  Additional options can be used to disable bounds checking in certain situations, e.g. on reads or writes.  It is also possible to use attributes to disable bounds checking on specific functions and structures.
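
    As a minimal sketch of the kind of bug this instrumentation is meant to catch (the compile line in the comment is an assumption; depending on the target, extra options beyond -fcheck-pointer-bounds may be needed):

       /* oob.c -- writes one byte past the end of buf[].                  */
       /* Hypothetical compile line: gcc -O2 -fcheck-pointer-bounds oob.c  */
       #include <string.h>

       int main (void)
       {
         char buf[16];

         /* One byte too many: the inserted bounds checks should flag      */
         /* this access at run time.                                       */
         memset (buf, 0, sizeof (buf) + 1);
         return 0;
       }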


  * GCC now has some built-in functions to perform integer arithmetic with overflow checking.  For example:

       bool __builtin_sadd_overflow (int a, int b, int *res)
       bool __builtin_ssubl_overflow (long int a, long int b, long int *res)
       bool __builtin_umul_overflow (unsigned int a, unsigned int b, unsigned int *res)

  
    These built-in functions promote the first two operands into infinite precision signed type and perform addition (or subtraction or multiplication) on those promoted operands.  The result is then cast to the type the third pointer argument points to and stored there.  If the stored result is equal to the infinite precision result, the built-in functions return false, otherwise they return true.
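
    As a usage sketch (the builtin and its behaviour are as described above; the saturating wrapper itself is just an illustration):

       /* Add two ints, saturating to INT_MAX/INT_MIN on overflow. */
       #include <limits.h>
       #include <stdio.h>

       static int add_saturating (int a, int b)
       {
         int sum;
         if (__builtin_sadd_overflow (a, b, &sum))
           return (a > 0) ? INT_MAX : INT_MIN;   /* overflow detected */
         return sum;                             /* exact result      */
       }

       int main (void)
       {
         printf ("%d\n", add_saturating (INT_MAX, 1));   /* prints INT_MAX */
         return 0;
       }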


  * GCC now has experimental support for NVidia's NVPTX architecture.  Currently only compilation is supported.  Assembly and linking are not yet available.


  * Two new options to disable warnings have been introduced to GCC:

     -Wno-shift-count-negative
    Stops warnings about shifts by a negative amount.

     -Wno-shift-count-overflow
    Stops warnings when the shift amount is greater than the width of the type being shifted.
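
    A couple of lines of the sort that would normally provoke these warnings (illustrative only; both shifts are undefined behaviour):

       int negative_shift (int x) { return x << -1; }   /* shift by a negative amount      */
       int overflow_shift (int x) { return x << 40; }   /* shift count wider than the type */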


  * A new optimization has been added to GCC: -flra-remat
    This enables "rematerialization" during the register assignment pass (lra).  Instead of storing a value in a register, the optimizer chooses to recalculate the value when needed, thus freeing up the register for other purposes.  Obviously this is only done when the optimizer calculates that it will be worth it.  This new optimization is enabled automatically at -O2, -O3 and -Os.


  * A new profiling option has been added to GCC: -fauto-profile[=<file>]
    This enables sampling based feedback directed optimizations, and optimizations generally profitable only with profile feedback available.  If <file> is specified, GCC looks in <file> to find the profile feedback data files.

    In order to collect the profile data you need to have:

    1. A Linux system with Linux perf support.

    2. (optional) An Intel processor with last branch record (LBR) support. This is to guarantee accurate instruction level profile, which is important for AutoFDO performance.

    To collect the profile, first use Linux perf to collect a raw profile.  (See https://perf.wiki.kernel.org/).  For example:
  
     perf record -e br_inst_retired:near_taken -b -o perf.data -- <your_program>

    Then use the create_gcov tool, which takes the raw profile and the unstripped binary and generates an AutoFDO profile that can be used by GCC.  (See https://github.com/google/autofdo).

      create_gcov --binary=your_program.unstripped --profile=perf.data --gcov=profile.afdo


  * A new optimization has been added to GCC: -fschedule-fusion
    This performs a target-dependent pass over the instruction stream to schedule instructions of the same type together, because the target machine can execute them more efficiently if they are adjacent to each other in the instruction flow.

    Enabled by default at levels -O2, -O3, -Os.


  * The ARM backend to GCC now supports a new option: -masm-syntax-unified
    This tells the backend that it should assume that any inline assembler is using unified asm syntax.  This matters for targets which only support Thumb1, as by default they assume that divided syntax is being used.

  
  * The MIPS backend to GCC now supports two additional variants of the o32 ABI.  These are intended to enable a transition from 32-bit to 64-bit registers.  These are FPXX (-mfpxx) and FP64A  (-mfp64 -mno-odd-spreg).
  
    The FPXX extension mandates that all code must execute correctly when run using 32-bit or 64-bit registers.  The code can be interlinked with either FP32 or FP64, but not both.

    The FP64A extension is similar to the FP64 extension but forbids the use of odd-numbered single-precision registers.  This can be used in conjunction with the FRE mode of FPUs in MIPS32R5 processors and allows both FP32 and FP64A code to interlink and run in the same process without changing FPU modes.

  
  * The linker supports a new command line option: --fix-cortex-a53-835769
    This enables a link-time workaround for erratum 835769 present on certain early revisions of Cortex-A53 processors.  The workaround is disabled by default.

  
  * The linker supports a new command line option: --print-sysroot

    This will display the sysroot that was configured into the linker when it was built.  If the linker was configured without sysroot support nothing will be printed.


Cheers
  Nick

November 17, 2014 12:47 PM

FSF Events

Richard Stallman - "Por una Sociedad Digital Libre" (Madrid, Spain)

There are many threats to freedom in the digital society, such as mass surveillance, censorship, digital handcuffs, proprietary software that controls users, and the war against sharing. The use of web services presents yet more threats to users' freedom. Finally, we have no firm right to do anything on the Internet; all of our online activities are precarious, and we can continue them only as long as companies are willing to cooperate.

This speech by Richard Stallman will be nontechnical and open to the public; all are encouraged to attend.

Please fill out our contact form, so that we can contact you about future events in and around Madrid.

The exact location of the speech is to be determined.

November 17, 2014 10:42 AM

November 16, 2014

hello @ Savannah

hello-2.10 released [stable]

I'm delighted to announce version 2.10 of GNU hello. See below for
changes in this version.

Here are the compressed sources and a GPG detached signature[*]:
http://ftpmirror.gnu.org/hello/hello-2.10.tar.gz
http://ftpmirror.gnu.org/hello/hello-2.10.tar.gz.sig

Use a mirror for higher download bandwidth:
http://www.gnu.org/order/ftp.html

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify hello-2.10.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys A9553245FDE9B739

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
Autoconf 2.69
Automake 1.14.1
Gnulib v0.1-263-g92b60e6

NEWS

  • Noteworthy changes in release 2.10 (2014-11-16) [stable]

Most importantly, this release makes the 'Hello, World' message part of
the translations again. The translation bug was introduced in release 2.9.

Other changes in this release include: use of a non-recursive build;
removal of the user-defined new-style; an example of how to add a
section, such as BUGS, to a manual page; use of the libc 'error()'
reporting facility rather than 'fprintf (stderr, ...)'; use of the
'make update-copyright' facility; generation of the ChangeLog from git
commit logs; and avoiding manual page generation errors when
cross-compiling.

by Sami Kerola at November 16, 2014 12:24 PM

November 15, 2014

GNUnet News

Open positions for (aspiring) GNUnet hackers!

As part of my recent move to Inria in Rennes (Bretagne, France), a few new positions for research and development around GNUnet are now opening up. The positions are open to Master's students ("internships"), PhD students (Master's required), and Post-Docs (PhD required).

by Christian Grothoff at November 15, 2014 07:36 PM

November 14, 2014

FSF Events

Richard Stallman - "Por una sociedad digital libre" (Barcelona, Spain)

There are many threats to freedom in the digital society, such as mass surveillance, censorship, digital handcuffs, proprietary software that controls users, and the war against sharing. The use of web services presents yet more threats to users' freedom. Finally, we have no firm right to do anything on the Internet; all of our online activities are precarious, and we can continue them only as long as companies are willing to cooperate.

This speech by Richard Stallman will be nontechnical and open to the public; all are encouraged to attend.

Please fill out our contact form, so that we can contact you about future events in and around Barcelona.

November 14, 2014 06:10 PM

Andy Wingo

on yakshave, on color, on cosines, on glitchen

Hold on to your butts, kids, because this is epic.

on yaks

As in all great epics, our prideful, stubborn hero starts in a perfectly acceptable state of things, decides on a lark to make a small excursion, and comes back much much later to inflict upon you pictures from his journey.

So. I have a web photo gallery but I don't take many pictures these days. Dealing with photos is a bit of a drag, and the ways that are easier like Instagram or what-not give me the (peer, corporate, government: choose 3) surveillance hives. So, I had vague thoughts that I should update my web gallery. Yakpoint 1.

At the same time, my web gallery was written for mod_python on the server, and I don't like hacking in Python any more and kinda wanted to switch away from Apache. Yakpoint 2.

So I rewrote the server-side part in Scheme. (Yakpoint 3.) It worked fine but I found I needed the ability to get the dimensions of files on the server, so I wrote a quick-and-dirty JPEG parser. Yakpoint 4.

I needed EXIF data as well, as the original version displayed EXIF data, and for that I used a binding to libexif that I had written a few years ago when I thought about starting this project (Yakpoint -1). However I found some crashers in the library, because it had never really been tested in production, and instead of fixing them I said "what the hell, I'll just write an EXIF parser". (Yakpoint 5.) So I did and adapted the web gallery to use it (Yakpoint 6, for the adaptation.)

At this point, I looked back, and looked forward, and looked all around, and all was good, but what was with this uneasiness I was feeling? And indeed, I hadn't actually made anything better, and I wasn't taking more photos, and the workflow was the same.

I was also concerned about the client side of things, which was still in Python and using some breakage-prone legacy libraries to do the photo scaling and transformations and what-not, and relied on a desktop application (f-spot) of dubious future. So I started to look at what it would take to port that script to Scheme (Yakpoint 7). Well it used some legacy libraries to copy files over SSH (gnome-vfs; switching away from that would be Yakpoint 8) and I didn't want to make a Scheme GIO binding (Yakpoint 9, narrowly avoided), and I then -- and then, dear reader -- so then I said "well WTF my caching story on the server is crap anyway, I never know when the sqlite database has changed or not so I never know what responses I can cache, what I really want is a functional datastore" (Yakpoint 10), which is what I have with Git and Tekuti (Yakpoint of yore), and so why not just store my photos in Git like I do in Tekuti for blog posts and serve them from there, indexing as needed? Of course I'd need some other server software (Yakpoint of fore, by which I meantersay the future), but then I could just git push to update my photo gallery, and I wouldn't have to endure the horror that is GVFS shelling out to ssh in a FUSE daemon (Yakpoint of ne'er).

So. After mulling over these thoughts for a while I decided, during an autumnal walk on the Salève in which we had the greatest views of Mont Blanc everrrrr and yet where are the photos?, that really what I needed was new photo management software, not just a web gallery. I should be able to share photos from my phone or from my desktop, fix them up either place, tag and such, and OK woo hoo! Such is the future! And the present for many people? Thing is, I also needed good permissions management (Yakpoint what, 10 I guess?), because you know a dude just out of college is not the same as that dude many years later. Which means serving things over HTTPS (Yakpoints 11-47) in such a way that the app has some good control over who gets what.

Well. Anyway. My mind ran ahead, and runs ahead, and yet we haven't actually tasted the awesome sauce yet. So! The photo management software, wherever it lives, needs to rotate photos at least, and scale them down to a few resolutions. I smell a yak! I looked at jpegtran which can do some lossless rotations but it's not available as a library, which is odd; and really I don't like shelling out for core program functionality, because every time I deal with the file system it's the wild west of concurrent mutation. If naming things is one of the two hardest problems in computer science, the file system is the worst because you have to give a global name to every intermediate value.

At the same time to scale images, what was I to do? Make a binding to libjpeg? Well I started (Yakpoint 48) but for reals kids, libjpeg is not fun. It works great and is really clever but

  1. it's approximately impossible to use from a dynamic ffi; you want a compiler to verify that you are using the right structure definitions

  2. there has been an inane ABI and format break imposed by the official IJG libjpeg but which other implementations have not followed, but how could you know which one you are using?

  3. the error handling facility encourages longjmp in C programs; somewhat terrifying

  4. off-heap image manipulation libraries always interact poorly with GC, because the GC only sees the small pointer to the off-heap image, and so doesn't GC often enough

  5. I have zero guarantee that libjpeg won't change ABI in weird ways, and I don't want to touch this software for the next 10 years

  6. I want to do jpegtran-like lossless transformations, but that's not available as a library, and it's totes ridics that binding libjpeg does not help you out here

  7. it's still an unsafe C library, battle-tested yes, but terrifyingly unsafe, and I'd be putting it on my server and who knows?

Friends, I arrived at the pasture, and I, I chose the yak less shaven. I took my lame JPEG parser and turned it into a full decoder (Yakpoint 49), realized it wasn't much more work to do an encoder (Yakpoint 50), and implemented the lossless transformations (Yakpoint 51).

on haters

Before we go on, I know some people would think "what is this kid about". I mean, custom gallery software, a custom JPEG library of all things, all bespoke, why don't you just use off-the-shelf solutions? Why aren't you normal and use a normal language and what about the best practices and where's your business case and I can't go on about this because there's a technical term for people that say this kind of thing and it's "hater".

Thing is, when did a hater ever make anything cool? Come to think of it, when did a hater make anything at all? In my experience the most vocal haters have nothing behind their names except a long series of pseudonymous rants in other people's comment boxes. So friends, in the joyful spirit of earning-anew, let's talk about JPEG!

on color

JPEG is a funny thing. Photos are our lives and our memories, our first steps and our friends, and yet I for one didn't know very much about them. My mental model that "a JPEG is a rectangle of pixels" doesn't turn out to be quite right.

If you actually look in a normal JPEG, you see three planes of information. If I take this image, for example:

If I decode it, actually I get three images. Here's the first one:

This is just the greyscale version of the image. So, storytime! Remember black and white television? We had an old one that got moved around the house sometimes, like if Mom was working at something in the kitchen. We also had a color one in the living room, and you could watch one or the other and they showed the same stuff. Strange when you think about it though -- one being in color and the other not. Well it turns out that color was literally just added on, both historically and technically. The main broadcast was still in black and white, and then in one part of the frequency band there were separate color signals, which color TVs would pick up, mix with the black and white signal, and come out with color. Wikipedia notes that "color TV" was really just "colored TV", which is a phrase whose cleverness I respect. Big ups to the W P.

In the context of JPEG, this black-and-white signal is sometimes called "luma", but is more precisely called Y', where the "prime" (the apostrophe) indicates that the signal has gamma correction applied.

In the image above, I replaced the color planes (sometimes collectively called the "chroma") with zeroes, while losslessly keeping the luma. Below is the first color plane, with the Y' plane replaced with a uniform 50% luma, and the other color plane replaced with zeros.

This color signal is technically known as CB, which may be very imperfectly understood as the bluish component of the color. Well the original image wasn't very blue, so we don't see very much here.

Indeed, our eyes have a harder time seeing differences in color than differences in intensity. Apparently this goes all the way down to biology -- we have more receptors in our eyes for "black and white" and fewer for color.

Early broadcasters took advantage of this difference in perception by actually devoting more bandwidth in their broadcasts to luma than to chroma; if you check the Wikipedia page you will see that the area in the spectrum allocation devoted to color is much smaller than the area devoted to intensity. So it is in JPEG: the above image being half-width indicates that actually we're just encoding one CB sample for every two Y' samples.

Finally, here we have the CR color plane, which can loosely be thought of as the "redness" of the image.

These test images and crops preserve the actual encoding of this photo as it came from my camera, without re-encoding. That's partly why there's not much interesting going on; with the megapixels these days, it's hard to fit much of anything in a few hundred pixels square. This particular camera is sub-sampling in the horizontal direction, but it's also common to subsample vertically as well, producing color planes that are half-width and half-height. In my limited investigations I have found that cameras tend to sub-sample just in the X direction, producing what they call 4:2:2 images, and that standard software encoders subsample in both, producing 4:2:0.

Incidentally, properly scaling up the color planes is quite an irritating endeavor -- the standard indicates that the color is sampled between the locations of the Y' samples ("centered" chroma), but these images originally have EXIF data that indicates that the color samples are taken at the position of the first Y' sample ("co-sited" chroma). I'm pretty sure libjpeg doesn't delve into the EXIF to check this though, so it would seem that all renderings I have seen of these photos are subtly off.

But how do you get proper color out of these strange luma and chroma things? Well, the Y'CBCR colorspace is really just the same color cube as RGB, except rotated: the Y' axis traverses the diagonal from (0, 0, 0) (black) to (255, 255, 255) (white). CB and CR are perpendicular to that diagonal, pointing towards blue or red respectively. So to go back to RGB, you multiply by a matrix to rotate the cube.
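
As a concrete sketch of that rotation, here is the common full-range conversion back to RGB, in C rather than the Scheme the library uses; the constants are the widely quoted JFIF/BT.601-derived values, an assumption on my part rather than something taken from this post:

/* Convert one full-range Y'CbCr sample (each component 0-255) to RGB. */
static unsigned char clamp_byte (double v)
{
  return (unsigned char) (v < 0.0 ? 0.0 : v > 255.0 ? 255.0 : v + 0.5);
}

static void ycbcr_to_rgb (double y, double cb, double cr,
                          unsigned char *r, unsigned char *g,
                          unsigned char *b)
{
  *r = clamp_byte (y + 1.402    * (cr - 128.0));
  *g = clamp_byte (y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0));
  *b = clamp_byte (y + 1.772    * (cb - 128.0));
}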

It's not a very intuitive color system, as you can see from the images above. For one thing, at zero or full luma, the chroma axes have no meaning; black and white can have no hue. Indeed if you imagine trying to fit a cube corner-down into a similar-sized box, you end up either having empty space in the box, or you have to cut off corners from the cube, or both. Cut corners means that bits of the Y'CBCR signal are wasted; empty space means there are RGB colors that are not representable in Y'CBCR. I'm not sure, but I think both are true for the particular formulation of Y'CBCR used in JPEG.

There's more to say about color here but frankly I don't know enough to do so, even though I worked in digital video for many years. If this is something you are mildly interested in, I highly, highly recommend watching Wim Taymans' presentation at this year's GStreamer conference. He takes a look at color in video that is constructive, building up from biology through math to engineering. His is a principled approach rather than a list of rules. It really clarified a number of things for me (and opened doors to unknown unknowns beyond).

on cosines

Where were we? Right, JPEG. So the proper way to understand what JPEG is is to understand the encoding process. We've covered colorspace conversion from RGB to Y'CBCR and sub-sampling. Next, the image canvas is divided into equal-sized "macroblocks". (These are called "minimum coded units" (MCUs) in the JPEG context, but in video they are usually called macroblocks, and it's a better name.) Without sub-sampling, each macro-block will contain one 8-sample-by-8-sample block for each component (Y', CB, CR) of the image. In my images above, the canvas space corresponding to one chroma block is the space of two luma blocks, so the macroblocks will be 16 samples wide and 8 samples tall, and contain two Y' blocks and one each of CB and CR. If the image canvas can't be evenly divided into macroblocks, it is padded to fit, usually by duplicating the last column or row of samples.

Then to make a JPEG, each block is encoded separately, then the whole thing is just written out to a file, and you're done!

This description glosses over a couple of important points, but it's a good big-picture view to have in mind. The pipeline goes from RGB pixels, to a padded RGB canvas, to separate Y'CBCR planes, to a possibly subsampled set of those planes, to macroblocks, to encoded macroblocks, to the file. Decoding is the reverse. It's a totally doable, comprehensible thing, and that was one of the big takeaways for me from this project. I took photography classes in high school and it was really cool to see how to shoot, develop, and print film, and this is similar in many ways. The real "film" is raw-format data, which some cameras produce, but understanding JPEG is like understanding enlargers and prints and fixer baths and such things. It's smelly and dark but pretty cool stuff.

So, how do you encode a block? Well peoples, this is a kinda cool thing. Maybe you remember from some math class that, given n uniformly spaced samples, you can always represent that series as a sum of n cosine functions of equally spaced frequencies. In each little 8-by-8 block, that's what we do: a "forward discrete cosine transformation" (FDCT), which is just multiplying together some matrices for every point in the block. The FDCT is completely separable in the X and Y directions, so the space of 8 horizontal coefficients multiplied by the space of 8 vertical coefficients yields 64 total coefficients, which is not coincidentally the number of samples in a block.
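
For reference, here is a direct (slow but clear) sketch of that 8-by-8 forward DCT, written in C rather than in the Scheme the library itself uses:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Naive 8x8 forward DCT (DCT-II with the usual JPEG normalization).     */
/* Input samples are assumed to already be level-shifted (sample - 128). */
static void fdct_8x8 (const double in[8][8], double out[8][8])
{
  for (int u = 0; u < 8; u++)
    for (int v = 0; v < 8; v++)
      {
        double sum = 0.0;
        for (int x = 0; x < 8; x++)
          for (int y = 0; y < 8; y++)
            sum += in[x][y]
                   * cos ((2 * x + 1) * u * M_PI / 16.0)
                   * cos ((2 * y + 1) * v * M_PI / 16.0);
        double cu = (u == 0) ? 1.0 / sqrt (2.0) : 1.0;
        double cv = (v == 0) ? 1.0 / sqrt (2.0) : 1.0;
        out[u][v] = 0.25 * cu * cv * sum;
      }
}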

Funny thing about those coefficients: each one corresponds to a particular horizontal and vertical frequency. We can map these out as a space of functions; for example giving a non-zero coefficient to (0, 0) in the upper-left block of an 8-block-by-8-block grid, and so on, yielding a 64-by-64 pixel representation of the meanings of the individual coefficients. That's what I did in the test strip above. Here is the luma example, scaled up without smoothing:

The upper-left corner corresponds to a frequency of 0 in both X and Y. The lower-right is a frequency of 4 "hertz", oscillating from highest to lowest value in both directions four times over the 8-by-8 block. I'm actually not sure why there are some greyish pixels around the right and bottom borders; it's not a compression artifact, as I constructed these DCT arrays programmatically. Anyway. Point is, your lover's smile, your sunny days, your raw urban graffiti, your child's first steps, all of these are reified in your photos as a sum of cosine coefficients.

The odd thing is that what is reified into your pictures isn't actually all of the coefficients there are! Firstly, because the coefficients are rounded to integers. Mathematically, the FDCT is a lossless operation, but in the context of JPEG it is not because the resulting coefficients are rounded. And they're not just rounded to the nearest integer; they are probably quantized further, for example to the nearest multiple of 17 or even 50. (These numbers seem exaggerated, but keep in mind that the range of coefficients is about 8 times the range of the original samples.)

The choice of what quantization factors to use is a key part of JPEG, and it's subjective: low quantization results in near-indistinguishable images, but in middle compression levels you want to choose factors that trade off subjective perception with file size. A higher quantization factor leads to coefficients with fewer bits of information that can be encoded into less space, but results in a worse image in general.

JPEG proposes a standard quantization matrix, with one number for each frequency (coefficient). Here it is for luma:

(define *standard-luma-q-table*
  #(16 11 10 16 24 40 51 61
    12 12 14 19 26 58 60 55
    14 13 16 24 40 57 69 56
    14 17 22 29 51 87 80 62
    18 22 37 56 68 109 103 77
    24 35 55 64 81 104 113 92
    49 64 78 87 103 121 120 101
    72 92 95 98 112 100 103 99))

This matrix is used for "quality 50" when you encode an 8-bit-per-sample JPEG. You can see that lower frequencies (the upper-left part) are quantized less harshly, and vice versa for higher frequencies (the bottom right).

(define *standard-chroma-q-table*
  #(17 18 24 47 99 99 99 99
    18 21 26 66 99 99 99 99
    24 26 56 99 99 99 99 99
    47 66 99 99 99 99 99 99
    99 99 99 99 99 99 99 99
    99 99 99 99 99 99 99 99
    99 99 99 99 99 99 99 99
    99 99 99 99 99 99 99 99))

For chroma (CB and CR) we see that quantization is much more harsh in general. So not only will we sub-sample color, we will also throw away more high-frequency color variation. It's interesting to think about, but also makes sense in some way; again in photography class we did an exercise where we shaded our prints with colored pencils, and the results were remarkable. My poor, lazy coloring skills somehow rendered leaves lifelike in different hues of green; really though, they were shades of grey, colored in imprecisely. "Colored TV" indeed.

With this knowledge under our chapeaux, we can now say what the "JPEG quality" setting actually is: it's simply that pair of standard quantization matrices scaled up or down. Towards "quality 100", the matrix approaches all-ones, for no quantization, and thus minimal loss (though you still have some rounding, often subsampling as well, and RGB-to-Y'CBCR gamut loss). Towards "quality 0" they scale to a matrix full of large values, for harsh quantization.
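
The post doesn't spell out the scaling rule, but the widely used IJG libjpeg convention looks roughly like the following sketch (treat the exact formula as an assumption about that implementation, not something stated here):

/* Scale one base quantization table entry for a "quality" from 1-100, */
/* following the common IJG convention (assumption).                   */
static int scale_q_entry (int base, int quality)
{
  int scale = (quality < 50) ? 5000 / quality : 200 - 2 * quality;
  int q = (base * scale + 50) / 100;
  if (q < 1)   q = 1;     /* quality 100 gives the all-ones table     */
  if (q > 255) q = 255;   /* keep entries within 8-bit baseline range */
  return q;
}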

This understanding also explains those wavey JPEG artifacts you get on low-quality images. Those artifacts look like waves because they are waves. They usually occur at sharp intensity transitions, which like a cymbal crash cause lots of high frequencies that then get harshly quantized. Incidentally I suspect (but don't know) that this is the same reason that cymbals often sound bad in poorly-encoded MP3s, because of harsh quantization in the frequency domain.

Finally, the coefficients are written out to a file as a stream of bits. Each file gets a huffman code allocated to it, which ideally is built from the distribution of quantized coefficient sizes seen in all of the blocks of an image. There are usually different encodings for luma and chroma, to reflect their different quantizations. Reading and writing this bitstream is a bit of a headache but the algorithm is specified in the JPEG standard, and all you have to do is implement it. Notably, though, there is special support for encoding a run of zero-valued coefficients, which happens often after quantization. There are rarely wavey bits in a blue blue sky.

on transforms

It's terribly common for photos to be wrongly oriented. Unfortunately, the way that many editors fix photo rotation is by setting a bit in the EXIF information of the JPEG. This is ineffectual, as web browsers don't look in the EXIF information, and silly, because it turns out you can losslessly rotate most JPEG images anyway.

Consider that the body of a JPEG is an array of macroblocks. To rotate an image, you just have to rearrange those macroblocks, then rearrange the blocks inside the macroblocks (e.g. swap the two Y' blocks in my above example), then transform the blocks themselves.

The lossless transformations that you can do on a block are transposition, vertical flipping, and horizontal flipping.

Transposition flips a block along its downward-sloping diagonal. To do so, you just swap the coefficients at (u, v) with the coefficients at (v, u). Easy peasey.

Flipping is trickier. Consider the enlarged DCT image from above. What would it take to horizontally flip the function at (0, 1)? Instead of going from light to dark, you want it to go from dark to light. Simple: you just negate the coefficients! But you only want to negate those coefficients that are "odd" in the X direction, which are those coefficients whose column is odd. And actually that's all there is to it. Flipping vertically is the same, but for coefficients whose row is odd.
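
A sketch of those three block-level operations on a single 8-by-8 coefficient block, in C here rather than the Scheme the library uses (c[row][col], so the column index is the horizontal frequency):

/* Swap coefficients across the downward-sloping diagonal. */
static void transpose_block (short c[8][8])
{
  for (int u = 0; u < 8; u++)
    for (int v = u + 1; v < 8; v++)
      {
        short tmp = c[u][v];
        c[u][v] = c[v][u];
        c[v][u] = tmp;
      }
}

/* Horizontal flip: negate coefficients whose column is odd. */
static void hflip_block (short c[8][8])
{
  for (int u = 0; u < 8; u++)
    for (int v = 1; v < 8; v += 2)
      c[u][v] = -c[u][v];
}

/* Vertical flip: negate coefficients whose row is odd. */
static void vflip_block (short c[8][8])
{
  for (int u = 1; u < 8; u += 2)
    for (int v = 0; v < 8; v++)
      c[u][v] = -c[u][v];
}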

I said "most images" above because those whose size is not evenly divided by the macroblock size can't be losslessly rotated -- you will end up seeing some of the hidden data that falls off the edge of the canvas. Oh well. Most raw images are properly dimensioned, and if you're downscaling, you already have to re-encode anyway.

But that's just flipping and transposition, you say! What about rotation? Well it turns out that you can express rotation in terms of these operations: rotating 90 degrees clockwise is just a transpose and a horizontal flip (in that order). Together, flipping horizontally, flipping vertically, and transposing form a group, in the same way that flipping and flopping form a group for mattresses. Yeah!

on scheme

I wrote this library in Scheme because that's my language of choice these days. I didn't run into any serious impedance mismatches; Guile has a generic multi-dimensional array facility that made it possible to express many of these operations as generic folds, unfolds, or maps over arrays. The huffman coding part was a bit irritating, but all in all things were pretty good. The speed is pretty bad, but I haven't optimized it at all, and it gives me a nice test case for the compiler. Anyway, it's been fun and it suits my needs. Check out the project page if you're interested. Yes, to shave a yak you have to get a bit bovine and smelly, but yaks live in awesome places!

Finally I will leave you with a glitch, one of many that I have produced over the last couple weeks. Comments and corrections welcome below. Happy hacking!

by Andy Wingo at November 14, 2014 04:49 PM

November 12, 2014

FSF Blogs

Friday Free Software Directory IRC meetup: November 14

Join the FSF and friends on Friday, November 14, from 2pm to 5pm EST (19:00 to 22:00 UTC) to help improve the Free Software Directory by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on freenode.


Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed information about version control, IRC channels, documentation, and licensing, all of which has been carefully checked by FSF staff and trained volunteers.


While the Free Software Directory has been, and continues to be, a great resource to the world over the past decade, it has the potential to be a resource of even greater value. But it needs your help!


If you are eager to help and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today!

November 12, 2014 07:11 PM

November 10, 2014

Lonely Cactus

archive.org is good for old tech docs

I saw on Undeadly a note that OpenBSD's Ted was patching the ancient bcd program, which converts text into ASCII-art representations of punch cards. Punch cards were a technology from the 1960s and 1970s (?) that stored code or data on cardstock, with holes punched out of them. Each card held a line of text. If I recall correctly, each character was a column on the card, with as many as seven holes punched out of a set of 12 possible locations. There were 40 to 80 columns on the card, according to the brand and the decade.

Anyway, Ted modified the bcd program to read in the ASCII-art representation of punch cards that it generated, so that it became, essentially, a very inefficient reversible encoding, but, he was unsure where to search for documents that he could use to validate the output.

My goto place for tech docs from the 1970s is archive.org. If you've never searched its collection, you should check it out.

Anyway, I did manage to find a couple of references there to punch card encoding.

For a couple of brands, anyway, punch card encoding seemed to have, for each character, a 4-bit "zone" or category and an 8-bit index. But this didn't result in a 12-bit encoding. Only a sparse subset of the available 12 bits indicated a character, for mechanical reasons, I guess. A subset of the characters now included in ASCII were encodable, but some punctuation, such as square brackets, was missing. It is for reasons like this that the C standard has trigraphs.

by Mike (noreply@blogger.com) at November 10, 2014 06:39 AM

GNUtls

GnuTLS 3.3.10, 3.2.20 and 3.1.28

Released GnuTLS 3.3.10, GnuTLS 3.2.20, and GnuTLS 3.1.28, which are bug-fix releases on the current and the two previous stable branches.

Posted a security advisory on a vulnerability of the gnutls library.

by Nikos Mavrogiannopoulos (nmav@gnutls.org) at November 10, 2014 12:00 AM

November 09, 2014

GNU Remotecontrol

Newsletter – November 2014

THIS MONTH…..
-TRENDS
-EYE CATCHING
-ANNUAL PLAN
-DISCUSSIONS
-EXISTING CODE
-SECURITY
-LASTLY

-TRENDS
The stuff going on in the big picture now…..

United States Electricity Price per KWH
Current and Past

August   September   Trend      % Change
$0.143   $0.141      Decrease   -1.40%

Year   September   Trend      % Change   % Since   Difference
2004   $0.099      Same         0.00%     0.00%      0.00%
2005   $0.106      Increase     7.07%     7.07%      7.07%
2006   $0.118      Increase    11.32%    19.19%     12.12%
2007   $0.121      Increase     2.54%    22.22%      3.03%
2008   $0.130      Increase     7.44%    31.31%      9.09%
2009   $0.130      Same         0.00%    31.31%      0.00%
2010   $0.132      Increase     1.54%    33.33%      2.02%
2011   $0.135      Increase     2.27%    36.36%      3.03%
2012   $0.133      Decrease    -1.48%    34.34%     -2.02%
2013   $0.137      Increase     3.01%    38.38%      4.04%
2014   $0.141      Increase     2.92%    42.42%      4.04%

United Kingdom Utility Prices
Current and Past

London by night, seen from the International Space Station

UPDATES TO PREVIOUSLY POSTED ITEMS
The Smart Grid Educational Webinar Series Archives has changed the address of our presentation. We hope you view this presentation. It was an excellent experience for both the GNU remotecontrol Team and “…the world class experts in the field…”

-EYE CATCHING
The stuff that has caught our eye…..

Demand Response

  • An article, reporting that an appeals court granted a stay on the decision to overturn FERC Order 745.
  • An article, considering the future of Demand Response in light of this aforementioned ruling.
  • An article, discussing the first legal case in the FERC Order 745 matter.
  • An article, considering if Smart Phones are the bridge to help achieve automated demand response.
  • A review, of the Ecobee3 Wi-Fi Smart Thermostat provides unpleasant results.
  • An article, finding the Smart Grid has entered the age of high performance computing.

Smart Grid – Consumer

  • An article, reporting Nest has purchased Revolv and discontinued the Revolv offering.
  • An article, describing the cultural impact in India from having a national electric Smart Grid.
  • A security bulletin, defining point-of-sale security compromises. The ability to compromise a device such as an insecure network connected HVAC thermostat, by either design or configuration, could be much simpler to accomplish than previously understood.
  • A review, of Smart Phone enabled door locks. The findings represent insufficient security to the door lock design, increasing the risk of home invasion.
  • A survey, finding public utilities will experience strong competition in building a national electrical Smart Grid. What is unclear is how this competition will impact the price of energy, to offset these competitive costs for a public utility.
  • An article, finding PJM Interconnection is proposing many rule changes to FERC Order 745.
  • An announcement, from the United States Department of Energy, of nearly $8 million to support research and development of the next generation of heating, ventilating, and air conditioning (HVAC) technologies. This seems to be an attempt to align with the Appliance and Equipment Standards Program, in hope to find best practices for using HVAC technologies with the various other technologies prevalent today.

Smart Grid – Producer

  • An article, considering how to best modify Smart Grid analytical thinking.
  • An article, finding Southeast Asia to be the next hot spot for Smart Grid investment.
  • A thought provoking article, considering how the self-contained systems of a submarine could improve public utility operations.
  • An article, describing how South Korea is well-positioned to export their Smart Grid technologies they have successfully developed.

Smart Grid – Security

  • A study, finding smart meters can be hacked to reduce charges on billing cycle invoices.
  • A report, finding cellular phone companies are tracking users in possible violation of federal telecommunications and wiretapping laws. This report has been validated.
  • An article, finding the interconnection of the three United States electrical grids is held back only by financing. It is unclear how this newly interconnected electrical grid will be secured.
  • An article, finding more vulnerabilities in the United States national electrical grid.

-ANNUAL PLAN
Status Update of our 2014 Plan…..

Demand Response

  • Further discussions with members of the electronics industry.
  • No other work since the April newsletter.

Unattended Server Side Automation

  • No other work since the April newsletter.

Power Line Communication

  • Further discussions with the members of the electronics industry.
  • No other work since the January newsletter.

Talk to us with your comments and suggestions on our plan for this year.

-DISCUSSIONS
The stuff we are talking about now…..

DRIVING THE NETWORK CONNECTED HVAC THERMOSTAT
The rise of electric vehicles is causing a faster build out of the Smart Grid. The collective build out of the Smart Grid will enable cost effectiveness to be found in achieving the network connected HVAC thermostat. The electric vehicle market will also bring innovation to the home, in the form of lower costs for Smart Grid technologies which complement the electric vehicle charging process. It is highly likely the electric vehicle market will accelerate adoption of the network connected HVAC thermostat, simply because of cost reduction to establish the necessary infrastructure in the home.

OTHER TYPES OF THERMOSTATS?
Many people have asked us about adding other types of thermostats to GNU remotecontrol. There are three questions that need to be answered before we can offer GNU remotecontrol support for any IP thermostat. These questions are:

  • How to CONNECT to it (NETWORK).
  • How to READ from it (CODE).
  • How to WRITE to it (CODE).

It is our hope to have dozens and dozens of thermostat types that work with GNU remotecontrol. Let us know if you designed or manufactured a device and you would like to test it with GNU remotecontrol.

-EXISTING CODE
The stuff you may want to consider…..

BUGS
We have 0 new bugs and 0 fixed bugs since our last Blog posting. Please review these changes and apply to your GNU remotecontrol installation, as appropriate.

TASKS
We have 0 new tasks and 1 completed task since our last Blog posting. Please review these changes and apply to your GNU remotecontrol installation, as appropriate.

-SECURITY
The stuff you REALLY want to consider…..

ENERGY DISAGGREGATION SOFTWARE
Analytics upstart Bidgely will integrate its tracking and controlling systems with the TXU Energy MyEnergy dashboard. The only way this analytical effort can be accomplished is to collect the necessary data. This offering is a milestone in history, as the first public utility now offering appliance-level analysis of energy consumption. Annual Texas power consumption ranks between that of the United Kingdom and Italy. Privacy on the electrical grid has irrevocably changed.

REMEMBER
GNU remotecontrol relies on OS file access restrictions, Apache authentication, MySQL authentication, and SSL encryption to secure your data. Talk to us if you want to find out how you can further strengthen the security of your system, or if you have suggestions for improving the security of our current system architecture.

-LASTLY
Whatever you do…..don’t get beat up over your Energy Management strategy. GNU remotecontrol is here to help simplify your life, not make it more complicated. Talk to us if you are stuck or cannot figure out the best option for your GNU remotecontrol framework. The chances are the answer you need is something we have already worked through. We would be happy to help you by discussing your situation with you.

…..UNTIL NEXT MONTH!

Why the Affero GPL?

GNU Affero General Public License LOGO

GNU remotecontrol LOGO


by gnuremotecontrol at November 09, 2014 08:01 PM


November 07, 2014

FSF Blogs

Recap of Friday Free Software Directory IRC meetup: November 7

In today's Friday Free Software Directory (FSD) IRC meeting, we celebrated the launch of copyleft.org, worked on some new artwork and icons, and added a few new entries, including:

  • Mailpile is an email server with a modern web-based client. It is dual-licensed under GNU AGPLv3 and Apache 2.0.
  • Libertree is a social network implemented over XMPP and a web app. It is licensed under the terms of the GNU AGPLv3 or (at your option) any later version.

Join us every Friday to help improve the Free Software Directory! Find out how to attend the Friday Free Software Directory IRC Meetings by checking our blog or by subscribing to the RSS feed.

November 07, 2014 10:40 PM

FSF News

Software Freedom Conservancy and Free Software Foundation announce copyleft.org

BOSTON, Massachusetts, USA -- Friday, November 7, 2014 -- Software Freedom Conservancy and the Free Software Foundation (FSF) today announce an ongoing public project that began in early 2014: Copyleft and the GNU General Public License: A Comprehensive Tutorial and Guide, and the publication of that project in its new home on the Internet at copyleft.org. This new site will not only provide a venue for those who constantly update and improve the Comprehensive Tutorial, but is also now home to a collaborative community to share and improve information about copyleft licenses, especially the GNU General Public License (GPL), and best compliance practices.

Bradley M. Kuhn, President and Distinguished Technologist of Software Freedom Conservancy and member of FSF's Board of Directors, currently serves as editor-in-chief of the project. The text has already grown to 100 pages discussing all aspects of copyleft -- including policy motivations, detailed study of the license texts, and compliance issues. This tutorial was initially constructed from materials that Kuhn developed on a semi-regular basis over the last eleven years. Kuhn merged this material, along with other material regarding the GPL published by the FSF, into a single, coherent volume, and released it publicly for the benefit of all users of free software.

Today, Conservancy announces a specific, new contribution: an additional chapter to the Case Studies in GPL Enforcement section of the tutorial. This new chapter, co-written by Kuhn and Conservancy's compliance engineer, Denver Gingerich, discusses in detail the analysis of a complete, corresponding source (CCS) release for a real-world electronics product, and describes the process that Conservancy and the FSF use to determine whether a CCS candidate complies with the requirements of the GPL. The CCS analyzed is for ThinkPenguin's TPE-NWIFIROUTER wireless router, which the FSF recently awarded Respects Your Freedom (RYF) certification.

The copyleft guide itself is distributed under the terms of a free copyleft license, the Creative Commons Attribution-ShareAlike 4.0 International license. Kuhn, who hopes the initial release and this subsequent announcement will inspire others to contribute to the text, said, "information about copyleft -- such as why it exists, how it works, and how to comply -- should be freely available and modifiable, just as all generally useful technical information should. I am delighted to impart my experience with copyleft freely. I hope, however, that other key thinkers in the field of copyleft will contribute to help produce the best reference documentation on copyleft available."

Particularly useful are the substantial contributions already made to the guide from the FSF itself. As the author, primary interpreter, and ultimate authority on the GPL, the FSF is in a unique position to provide insights into understanding free software licensing. While the guide as a living text will not automatically reflect official FSF positions, the FSF has already approved and published one version for use at its Seminar on GPL Enforcement and Legal Ethics in March 2014. "Participants at our licensing seminar in March commented positively on the high quality of the teaching materials, including the comprehensive guide to GPL compliance. We look forward to collaborating with the copyleft.org community to continually improve this resource, and we will periodically review particular versions for FSF endorsement and publication," said FSF's executive director John Sullivan.

Enthusiastic new contributors can get immediately involved by visiting and editing the main wiki on copyleft.org, or by submitting merge requests on copyleft.org's gitorious site for the guide, or by joining the project mailing list and IRC channel.

copyleft.org welcomes all contributors. The editors have already incorporated other freely licensed documents about GPL and compliance with copyleft licenses -- thus providing a central location for all such works. Furthermore, the project continues to recruit contributors who have knowledge about other copyleft licenses besides the FSF's GPL family. In particular, Mike Linksvayer, member of Conservancy's board of directors, has agreed to lead the drafting on a section about Creative Commons Attribution-ShareAlike licenses to mirror the ample text already available on GPL. "I'm glad to bring my knowledge about the Creative Commons copyleft licenses as a contribution to improve further this excellent tutorial text, and I hope that copyleft.org as a whole can more generally become a central location to collect interesting ideas about copyleft policy," said Linksvayer.

About copyleft.org

copyleft.org is a collaborative project to create and disseminate useful information, tutorial material, and new policy ideas regarding all forms of copyleft licensing. Its primary project is currently a comprehensive tutorial and guide, which describes the policy motivations of copyleft licensing, presents a detailed analysis of the text of various copyleft licenses, and gives examples and case studies of copyleft compliance situations.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

About Software Freedom Conservancy

Software Freedom Conservancy is a not-for-profit organization that promotes, improves, develops and defends Free, Libre and Open Source software projects. Conservancy is home to more than thirty software projects, each supported by a dedicated community of volunteers, developers and users. Conservancy's projects include some of the most widely used software systems in the world across many application areas, including educational software deployed in schools around the globe, embedded software systems deployed in most consumer electronic devices, distributed version control developer tools, integrated library services systems, and widely used graphics and art programs. A full list of Conservancy's member projects is available. Conservancy provides these projects with the necessary infrastructure and not-for-profit support services to enable each project's communities to focus on what they do best: creating innovative software and advancing computing for the public's benefit.

Media Contacts

Joshua Gay
Licensing & Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Karen M. Sandler
Executive Director
Software Freedom Conservancy
+1 (212) 461-3245
info@sfconservancy.org

November 07, 2014 04:15 PM

GNUnet News

We Fix the Net Assembly @ 31c3

The "We Fix the Net" assembly" is to be the perfect place at 31c3 for all hackers to do something about replacing today's broken Internet with secure alternatives. We hope to have some talks and panels like last year. Details will be posted here closer to the congress, for now, please contact us at wefixthenet@gnunet.org if you are interested to present your work or organize something practical. Topics include:

by Matthias Wachs at November 07, 2014 09:56 AM

November 06, 2014

FSF Blogs

GNU Press has restocked!

GNU pin

GNU Press has restocked popular gear including shirts, hoodies, beanies, and GNU emblem brass pins. We are also reintroducing the GPLv3 shirt in gray, and lowering the price of the GNU30 commemorative travel mug. We have also added 3XL sizes for some of our most popular designs!

As always, if you can't find something in the store but think we should offer it, please add your suggestion to our Ideas page. And remember, associate members of the Free Software Foundation get a 20% discount on all purchases made through the GNU Press store, so if you are not a member already, join today!

To keep up with announcements about new products available in the GNU Press store, subscribe to the mailing list.

November 06, 2014 05:26 PM

FSF Events

Richard Stallman to speak in Ithaca, NY

This speech by Richard Stallman will be nontechnical, admission is gratis, and the public is encouraged to attend.

Title, detailed location, and start time to be determined.

Please fill out our contact form, so that we can contact you about future events in and around Ithaca.

November 06, 2014 04:20 PM

Richard Stallman - "The Free Software Movement" (Richmond, VA)

Richard Stallman will speak about the goals and philosophy of the Free Software Movement, and the status and history of the GNU operating system, which in combination with the kernel Linux is now used by tens of millions of users world-wide.

Please fill out our contact form, so that we can contact you about future events in and around Richmond.

November 06, 2014 03:15 PM

Richard Stallman - "A Free Digital Society" (Modena, Italy)

There are many threats to freedom in the digital society. They include massive surveillance, censorship, digital handcuffs, nonfree software that controls users, and the War on Sharing. Other threats come from use of web services. Finally, we have no positive right to do anything in the Internet; every activity is precarious, and can continue only as long as companies are willing to cooperate with it.

Richard Stallman's speech will be nontechnical, admission is gratis, and the public is encouraged to attend.

Registration, which can be done anonymously, while not required, is appreciated; it will help us ensure we can accommodate all the people who wish to attend. Groups of 20 people or more are strongly encouraged (but not required) either to register or to e-mail andrea DOT bizzeti AT unimore DOT it with their head count.

Please fill out our contact form, so that we can contact you about future events in and around Modena.

November 06, 2014 01:05 PM

November 05, 2014

FSF Blogs

Friday Free Software Directory IRC meetup: November 7

Join the FSF and friends on Friday, November 7, from 2pm to 5pm EST (19:00 to 22:00 UTC) to help improve the Free Software Directory by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on freenode.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed information about version control, IRC channels, documentation, and licensing, all carefully checked by FSF staff and trained volunteers.

While the Free Software Directory has been, and continues to be, a great resource to the world over the past decade, it has the potential to be even more valuable. But it needs your help!

If you are eager to help but can't wait for Friday, or are simply unable to make it onto IRC, our participation guide will provide you with all the information you need to start helping the Directory today!

November 05, 2014 11:09 PM

Nominate your heroes for the Free Software Awards

To nominate an individual for the Award for the Advancement of Free Software or a project for the Award for Projects of Social Benefit, send your nomination along with a description of the project or individual to award-nominations@gnu.org.

What are you waiting for? Take a few minutes to give a little something to people and projects that have inspired you. Your nominations will be reviewed by our awards committee and the winners will be announced at LibrePlanet 2015.

Award for the Advancement of Free Software

The Free Software Foundation Award for the Advancement of Free Software is presented to an individual who has made a great contribution to the progress and development of free software, through activities that accord with the spirit of free software.

Individuals who describe their projects as "open" instead of "free" are eligible nonetheless, provided the software is in fact free/libre.

Award for Projects of Social Benefit

The Award for Projects of Social Benefit is presented to the project or team responsible for applying free software, or the ideas of the free software movement, in a project that intentionally and significantly benefits society in other aspects of life.

We look to recognize projects or teams that encourage people to cooperate in freedom to accomplish social tasks. A long-term commitment to one's project (or the potential for a long-term commitment) is crucial to this end.

This award stresses the use of free software in the service of humanity. We have deliberately chosen this broad criterion so that many different areas of activity can be considered. However, one area that is not included is that of free software itself. Projects with a primary goal of promoting or advancing free software are not eligible for this award (we honor individuals working on those projects with our annual Award for the Advancement of Free Software).

We will consider any project or team that uses free software or its philosophy to address a goal important to society. To qualify, a project must use free software, produce free documentation, or use the idea of free software as defined in the Free Software Definition. Projects that promote or depend on the use of non-free software are not eligible for this award. Commercial projects are not excluded, but commercial success is not our scale for judging projects.

Eligibility

In the case of both awards, previous winners are not eligible for nomination, but renomination of other previous nominees is encouraged. Only individuals are eligible for nomination for the Advancement of Free Software Award (not projects), and only projects can be nominated for the Social Benefit Award (not individuals). For a list of previous winners, please visit https://www.fsf.org/awards.

Current FSF staff and board members, as well as award committee members, are not eligible.

The tentative award committee members are: Marina Zhurakhinskaya, Matthew Garrett, Rob Savoye, Wietse Venema, Richard Stallman, Suresh Ramasubramanian, Vernor Vinge, Hong Feng, Fernanda G. Weiden, Harald Welte, Jonas Oberg, and Yukihiro Matsumoto.

Instructions

After reviewing the eligibility rules above, please send your nominations to award-nominations@gnu.org, on or before Sunday, November 16th, 2014 at 23:59 UTC. Please submit nominations in the following format:

  • In the email message subject line, either put the name of the person you are nominating for the Award for Advancement of Free Software, or put the name of the project for the Award for Projects of Social Benefit.

  • Please include, in the body of your message, an explanation (forty lines or less) of the work done and why you think it is especially important to the advancement of software freedom or how it benefits society, respectively.

  • Please state, in the body of your message, where to find the materials (e.g., software, manuals, or writing) which your nomination is based on.

Attend the Free Software Awards at LibrePlanet 2015

Want to be in the room when the winners are announced? Register today for the LibrePlanet conference, March 21-22, 2015, in Cambridge, MA. In addition to rubbing elbows with the award winners, you'll have a blast at the rest of the conference, with a program chock-full of sessions free software enthusiasts will love. Remember: Free Software Foundation members attend LibrePlanet gratis!

November 05, 2014 05:34 PM

librejs @ Savannah

GNU LibreJS 6.0.5 released

There's a new version of LibreJS.

Here are the changes since 6.0.4:
* Fixed a bug where the complain button on the main LibreJS panel
would never appear.

* The torch icon on the complain panel has been replaced with the
word "Complain" to make it more obvious.

* Fixed bug #43403 - LibreJS was marking all scripts as nonfree
if the page returned a 404 status code. Thanks to Rubén
Rodríguez for reporting this.
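
To give a rough idea of the 404 fix above, here is a minimal, hypothetical sketch in TypeScript. It is not LibreJS's actual code (LibreJS is a JavaScript browser add-on), and the function and type names are invented for illustration; the point is simply that a failed fetch tells you nothing about a script's license, so it should not cause scripts to be blanket-marked as nonfree.

  // Hypothetical sketch only -- not LibreJS's real implementation.
  // Illustrates the kind of behavior fixed in bug #43403: a failed HTTP
  // response (e.g. 404) should not cause scripts to be marked nonfree.

  type Verdict = "free" | "nonfree" | "unknown";

  // Greatly simplified license check: look for a @license magnet-link
  // comment of the kind LibreJS recognizes.
  function hasFreeLicenseHeader(source: string): boolean {
    return /@license\s+magnet:/.test(source);
  }

  async function checkScript(url: string): Promise<Verdict> {
    const response = await fetch(url);
    if (!response.ok) {
      // The fetch failed, so nothing was learned about the script's
      // license; report "unknown" rather than treating every script on
      // the page as nonfree.
      return "unknown";
    }
    const source = await response.text();
    return hasFreeLicenseHeader(source) ? "free" : "nonfree";
  }

With a check along these lines, an error status on one resource no longer drags down the verdict for the rest of the page.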

This project's website is here:
http://www.gnu.org/software/librejs/

The source files are here:
https://ftp.gnu.org/gnu/librejs/librejs-6.0.5.tar.gz

And here's the executable you can install in your browser:
https://ftp.gnu.org/gnu/librejs/librejs-6.0.5.xpi

The main source repository for this project is in Bazaar:
http://bzr.savannah.gnu.org/lh/librejs/dev

by Nik Nyby at November 05, 2014 01:02 AM

November 04, 2014

German Arias

FisicaLab’s documentation now online

The documentation of FisicaLab is now online. It still needs improvements and corrections (someone told me that in English "points" is used instead of "particles"), but it is now much easier to learn how to use FisicaLab. The documentation is also available in Spanish.

Doc


by Germán Arias at November 04, 2014 07:20 PM

FSF Blogs

GNU Tools Cauldron 2014 videos posted online

GNU Tools discussion

Presentation videos from GNU Tools Cauldron 2014 have now been posted online. The conference, held this year from July 18 to 20, 2014 at the University of Cambridge in England, featured nearly thirty presentations on tools in the GNU toolchain, including GCC, the GNU Compiler Collection, and GDB, the GNU Project Debugger. Developers shared tutorials and insights in addition to discussing development plans for various projects within the GNU toolchain.

In addition to the presentation videos, which you can download and view with free software tools instead of using YouTube, you can find presentation abstracts and notes from the conference on the GCC Wiki.

Happy hacking!

November 04, 2014 06:02 PM

November 03, 2014

Trisquel GNU/Linux

Trisquel 7.0 LTS Belenos

Version 7 of the Trisquel GNU/Linux distribution, codenamed Belenos after a Celtic sun god, has been released. Belenos is a Long Term Support release that will be maintained until 2019. Relevant new packages and features include:

  • Kernel Linux-libre 3.13 with lowlatency and bfq scheduling by default.
  • Custom desktop based on GNOME 3.12 fallback.
  • Abrowser 33 (a free Firefox derivative) as default browser.
    • GNU IceCat 31 available as single-click optional install from Abrowser's homepage. Complete with many extra privacy features.
  • Electrum Bitcoin Wallet preinstalled.
  • Moved to DVD format, now with 50+ languages and extra applications.
  • Improved accessibility by default.


Trisquel on a Sugar TOAST

Editions published today include the GNOME-based standard edition, the LXDE mini edition, and the now officially supported Trisquel on a Sugar TOAST edition, designed for kids ages 0 to 12. It features the Sugar Learning environment and a preselection of educational activities.

New Website

The improvements don't stop at the software. We are also unveiling a new website theme, cleaner and more modern, to match the streamlined style of the release. This is just the first change in a series of improvements that will focus on giving users a better community experience.

New development process

We are also making big changes to the development process to make it easier for the community to contribute. The new code management website allows users to clone the code, hack on it, and request that their changes be reviewed and merged. This is connected to a whole new continuous integration build system that will keep the distro up to date with less manual intervention.

Help sustain this effort

Trisquel is a non-profit community project. You can help keep this project going by becoming a member, donating, or buying from our store!

 

by quidam at November 03, 2014 07:59 PM