Planet GNU

Aggregation of development blogs from the GNU Project

October 18, 2018

FSF Blogs

Announcing keynote speakers for LibrePlanet -- and don't miss your chance to give a talk

Today, we are proud to announce all four keynote speakers who will appear at the LibrePlanet 2019 conference, which takes place in the Boston area, March 23-24, 2019. They are: Debian Project contributor Bdale Garbee, free software activist Micky Metts, physician Tarek Loubani, and FSF founder and president Richard Stallman, all of whom are trailblazers of free software in their own right.

REGISTER FOR THE LIBREPLANET 2019 CONFERENCE HERE!

Bdale Garbee

Bdale Garbee has contributed to the free software community since 1979. He was an early participant in the Debian Project, helped port Debian GNU/Linux to five architectures, served as the Debian Project Leader, then chairman of the Debian Technical Committee for nearly a decade, and remains active in the Debian community. For a decade, Bdale served as president of Software in the Public Interest. He also served on the board of directors of the Linux Foundation, representing individual affiliates and the developer community. Bdale currently serves on the boards of the Freedombox Foundation, the Linux Professional Institute, and Aleph Objects. He is also a member of the Evaluations Committee at the Software Freedom Conservancy. In 2008, Bdale became the first individual recipient of a Lutece d'Or award from the Federation Nationale de l'Industrie du Logiciel Libre in France.

Micky Metts

Micky Metts is an owner of Agaric, a worker-owned technology cooperative. She is an activist hacker, industry organizer, public speaker, connector, advisor, and visionary. Micky is a member of the MayFirst People Link Leadership Committee, and is a liaison between the Solidarity Economy Network (SEN) and the United States Federation of Worker Cooperatives (USFWC), with an intention to bring communities together. Micky is also a founding member of a cohort that is building a new Boston public high school based in cooperative learning: BoCoLab. She is a member of the Free Software Foundation and of Drupal.org, a community based in free software. She is a published author contributing to the book Ours to Hack and to Own, one of the top technology books of 2017 in Wired magazine.

Tarek Loubani

Dr. Tarek Loubani is an emergency physician who works at the London Health Sciences Centre in Canada and at Al Shifa Hospital in the Gaza Strip. He is a fellow of the Shuttleworth Foundation, where he focuses on free software medical devices. His organization, the Glia Project, develops free/libre medical device designs for 3D printing, in an effort to help medical systems such as Gaza's gain self-sufficiency and local independence.

Richard Stallman

Continuing an annual tradition, FSF president Richard Stallman will present the Free Software Awards and discuss opportunities for, and threats to, the free software movement.

LibrePlanet is an annual event jam-packed with interesting talks and hands-on workshops -- and we know you have a lot to say! Have an idea for your own free software-related talk or workshop? Submit it for consideration by October 26, 2018 at 10:00 EDT (14:00 UTC).

Your talk can be aimed at an audience of experienced developers, young people, newcomers to free software, activists looking for technology that aligns with their ideals, policymakers, hackers, artists, or tinkerers. Talks and workshops should examine or utilize free software, copyleft, and related issues, but beyond that, we welcome any topics that may educate, entertain, or encourage action.

Some possibilities include updates on free software projects, especially if they fulfill a High Priority free software need; the intersection of free software and other social issues or movements; how to resist the harmful effects of proprietary software by using free software; introductions to aspects of free software for newcomers generally or children specifically; hands-on workshops using free software for particular applications; copyleft and other free software legal issues; free software's intersections with government; and how to use free software in artmaking. Find more inspiration in videos of LibrePlanet 2018 talks, as well as the full program listing.

Gratis admission for FSF members

Current Associate Members and students with valid ID may attend LibrePlanet gratis. FSF Associate Membership starts at just $10/month and comes with many benefits. If you're not already a member, there's no better time to join than the present!

Travel funding available

Do you need help with the cost of travel to LibrePlanet? The FSF is able to offer a limited amount of funding to bring conference participants to Boston from all around the world. You can apply for a scholarship through Friday, November 16 at 10:00 EST (15:00 UTC). Scholarship recipients will be notified by the end of November. If you don't need a scholarship, you can help those with financial need attend LibrePlanet 2019 by making a contribution to the conference's scholarship fund.

Free Software Award nominations

Each year at LibrePlanet, the FSF presents its annual Free Software Awards. Nominations for the awards are open through Sunday, November 4th, 2018 at 19:59 EST (23:59 UTC).

Promotional opportunities

LibrePlanet is a good place to spread the word about your organization to the free software community. You can sponsor LibrePlanet or have a table in our exhibit hall (or both!). Our exhibit hall is highly visible at the LibrePlanet venue, and sponsors are highly visible at the conference and in our promotional materials. LibrePlanet is a distinctive event that centers free software, in its infrastructure, program of talks and events, and audience, and we appreciate the support of organizations that embrace free software. Apply to exhibit at LibrePlanet 2019 or email us at campaigns@fsf.org if you are interested in being a sponsor.

Volunteering

LibrePlanet is propelled by the positive energy of dozens of volunteers. We simply couldn't make this community event happen without them, and we thank them accordingly, with a gratis T-shirt and admission to the conference, and our deep gratitude. Applications for most LibrePlanet volunteer opportunities will be available soon, but if you would like to help with advance outreach, including spreading the word about the conference in online communities and your networks, and by posting flyers in schools and community spaces, please email resources@fsf.org to get started!

LibrePlanet 2019 is about 170 days away -- tell your friends and submit a talk proposal today!

Photo of Richard Stallman by Adte.ca. This image is licensed under a CC BY-SA 4.0 license. Photo of Tarek Loubani by Tarek Loubani. This image is licensed under a CC BY-SA 4.0 license. Photo of Bdale Garbee by Karen Garbee. This image is licensed under a CC BY-SA 4.0 license. Photo of Micky Metts by Micky Metts. This image is licensed under a CC BY 4.0 license.

18 October, 2018 08:40PM

FSF News

Keynotes announced for LibrePlanet 2019 free software conference

BOSTON, Massachusetts, USA -- Thursday, October 18, 2018 -- The Free Software Foundation (FSF) today announced all four keynote speakers who will appear at the 11th annual LibrePlanet free software conference, which will take place in the Boston area, March 23-24, 2019.

Keynote speakers for the conference will include Debian Project contributor Bdale Garbee, free software activist Micky Metts, physician Tarek Loubani, and FSF founder and president Richard Stallman.

LibrePlanet is an annual conference for free software users and anyone who cares about the intersection of technology and social justice. For ten years, LibrePlanet has brought together thousands of diverse voices and knowledge bases, including free software developers, policy experts, activists, hackers, students, and people who have just begun to learn about free software.

Bdale Garbee

Bdale Garbee has contributed to the free software community since 1979. He was an early participant in the Debian Project, helped port Debian GNU/Linux to five architectures, served as the Debian Project Leader, then chairman of the Debian Technical Committee for nearly a decade, and remains active in the Debian community. For a decade, Bdale served as president of Software in the Public Interest. He also served on the board of directors of the Linux Foundation, representing individual affiliates and the developer community. Bdale currently serves on the boards of the Freedombox Foundation, the Linux Professional Institute, and Aleph Objects. He is also a member of the Evaluations Committee at the Software Freedom Conservancy. In 2008, Bdale became the first individual recipient of a Lutece d'Or award from the Federation Nationale de l'Industrie du Logiciel Libre in France.

Micky Metts

Micky Metts is an owner of Agaric, a worker-owned technology cooperative. She is an activist hacker, industry organizer, public speaker, connector, advisor, and visionary. Micky is a member of the MayFirst People Link Leadership Committee, and is a liaison between the Solidarity Economy Network (SEN) and the United States Federation of Worker Cooperatives (USFWC), with an intention to bring communities together. Micky is also a founding member of a cohort that is building a new Boston public high school based in cooperative learning: BoCoLab. She is a member of FSF.org and Drupal.org, a community based in free software. She is a published author contributing to the book Ours to Hack and to Own, one of the top technology books of 2017 in Wired magazine.

Tarek Loubani

Dr. Tarek Loubani is an emergency physician who works at the London Health Sciences Centre in Canada and at Al Shifa Hospital in the Gaza Strip. He is a fellow of the Shuttleworth Foundation, where he focuses on free software medical devices. His organization, the Glia Project, develops free/libre medical device designs for 3D printing, in an effort to help medical systems such as Gaza's gain self-sufficiency and local independence.

"This year's keynote speakers reflect the breadth of the free software community and its impact," said FSF executive director John Sullivan. "If you attend LibrePlanet or watch our free software-based livestream, you will have the opportunity to hear from dedicated contributors, activists, and people who saw an important need in our world and met it using free software."

Richard Stallman

As he does each year, FSF president Richard Stallman will present the Free Software Awards and discuss opportunities for, and threats to, the free software movement. In 1983, Stallman launched the free software movement, and he began developing the GNU operating system (see https://www.gnu.org) the following year. GNU is free software: anyone may copy it and redistribute it, with or without modifications. GNU/Linux (the GNU operating system used in combination with the kernel Linux) is used on tens of millions of computers today. Stallman has received the ACM Grace Hopper Award, a MacArthur Foundation fellowship, the Electronic Frontier Foundation's Pioneer Award, and the Takeda Award for Social/Economic Betterment, as well as several doctorates honoris causa, and has been inducted into the Internet Hall of Fame.

The call for proposals is open until October 26, 2018. General registration and exhibitor and sponsor registration are also open.

About LibrePlanet

LibrePlanet is the annual conference of the Free Software Foundation. Over the last decade, LibrePlanet has blossomed from a small gathering of FSF members into a vibrant multi-day event that attracts a broad audience of people who are interested in the values of software freedom. To sign up for announcements about LibrePlanet 2019, visit https://www.libreplanet.org/2019.

Each year at LibrePlanet, the FSF presents its annual Free Software Awards. Nominations for the awards are open through Sunday, November 4th, 2018 at 23:59 UTC.

For information on how your company can sponsor LibrePlanet or have a table in our exhibit hall, email campaigns@fsf.org.

LibrePlanet 2018 was held at MIT on March 24-25, 2018. Nearly 350 attendees came together from across the world for workshops and talks centered around the theme of "Freedom Embedded." You can watch videos from last year's conference, including the opening keynote by Deb Nicholson, now director of community operations for the Software Freedom Conservancy, which explored how the free software community can last forever by maintaining its ideals while also welcoming newcomers.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

Molly de Blanc
Campaigns Manager
Free Software Foundation
+1 (617) 542-5942
campaigns@fsf.org

Photo of Richard Stallman by Adte.ca. This image is licensed under a CC BY-SA 4.0 license. Photo of Tarek Loubani by Tarek Loubani. This image is licensed under a CC BY-SA 4.0 license. Photo of Bdale Garbee by Karen Garbee. This image is licensed under a CC BY-SA 4.0 license. Photo of Micky Metts by Micky Metts. This image is licensed under a CC BY 4.0 license.

18 October, 2018 07:05PM

FSF Events

Richard Stallman - "Does the software you use deny your freedom?" (Madrid, Spain)

Richard Stallman will speak about the goals and philosophy of the free software movement, and the status and history of the GNU operating system, which, together with the kernel Linux, is now used by tens of millions of people worldwide.

This talk by Richard Stallman will be part of the III Foro de la Cultura (2018-11-09–11). It will be nontechnical and open to the public; everyone is invited to attend. It will be possible to attend Stallman's talk without registering, up to the capacity of the venue.

Location: Espacio Fundación Telefónica, C/ Fuencarral, 3, Madrid, Spain

Please fill out this form so that we can contact you about future events in the Madrid region.

18 October, 2018 03:55PM

FSF Blogs

Single-board computer guide updated: Free software is winning on ARM!

In many geeky circles, single-board computers are popular machines. SBCs come in small form factors and generally run GNU/Linux, but unfortunately, many boards, like the popular Raspberry Pi, depend on proprietary software to be usable. The Free Software Foundation maintains a list of system-on-chip families, sorted by their freedom status.

Unfortunately, this list had not been updated in several years. While it was accurate when it was published, free software is constantly improving. Today, more and more boards are usable with free software. On the graphical side, the Etnaviv project has reached maturity, and the Panfrost project, with which I have been personally involved, has sprung up. The video processing unit on Allwinner chips has been reverse-engineered and liberated by the linux-sunxi community in tandem with Bootlin. Rockchip boards have become viable competitors to their better known counterparts. Even the Raspberry Pi has had a proof-of-concept free firmware replacement developed. Free software is winning on ARM.

Accordingly, I have researched the latest developments in single-board computer freedom, updating the list. The revised list includes much more detail than its predecessors, groups boards by system-on-chip rather than brand name for concision, documents previously-unidentified freedom flaws, and of course describes progress liberating the remaining elements.

The new guide is, I hope, clearer, more comprehensive, and more useful to free software users seeking to purchase a board that respects their freedom.

Check it out!

Alyssa is a former intern at the FSF -- you can read more about her work here.

18 October, 2018 03:41PM

October 15, 2018

Introducing our new associate member forum!

I'm excited to share that we've launched a new forum for our associate members. We hope that you find this forum to be a great place to share your experiences and perspectives surrounding free software and to forge new bonds with the free software community. If you're a member of the FSF, head on over to https://forum.members.fsf.org to get started. You'll be able to log in using the Central Authentication Service (CAS) account that you used to create your membership. (Until we get WebLabels working for the site, you'll have to whitelist its JavaScript in order to log in and use it, but rest assured that all of the JavaScript is free software, and a link to all source code can be found in the footer of the site.) Participation in this forum is just one of many benefits of being an FSF member – if you're not a member yet, we encourage you to join today, for as little as $10 per month, or $5 per month for students.

The purpose of this member forum is to provide a space where members can meet, communicate, and collaborate with each other about free software, using free software. While there are other places on the Internet to talk about free software, this forum is unique in that it is focused on the common interests of FSF members, who care very much about using, promoting, and creating free software.

The forum software we chose to use is Discourse.

One of the technical requirements for the forum was that it needs to work well with single sign-on (SSO) systems, specifically our CAS system. In the process of launching the new member forum, I patched our CAS server so that it would verify FSF associate membership. I also wrote a patch for the Discourse CAS SSO service so that we can require email validation when users log into Discourse for the first time.

We built our own patched instance of Discourse's base Docker image to resolve a freedom issue, and as preparation for any times in the future that we may need to make changes to the upstream source code for our local installation.

I spent some time trying to set up Discourse without using Docker, but getting email delivery to work without a Docker image proved to be very challenging. In the end, we decided that although using Docker adds complexity when patching the software, it makes using Discourse easier overall.

One of the reasons we chose Discourse is because it allows users to respond to conversations via email. Users may enable the "mailing list mode" in their user settings, which allows us to interact with the member forum as if it were a mailing list.

I would like to thank the Discourse team for creating this software, and for their responsiveness to my questions about Discourse patching, new features, configuration, and deployment. They responded very quickly to a security issue that I reported, and donated a hacker bounty to the FSF.

If you want to chat with other members via IRC, I suggest joining the #fsf-members channel on Freenode, where I made an early announcement about the member forum launch.

I hope you are excited to use our new forum. I certainly am! I look forward to the great conversations that we will have among members who care very much about free software. Happy hacking!

15 October, 2018 07:20PM

October 14, 2018

Christopher Allan Webber

Spritely: towards secure social spaces as virtual worlds

If you follow me on the fediverse, maybe you already know. I've sent an announcement to my work that I am switching to doing a project named Spritely on my own full time. (Actually I'm still going to be doing some contracting with my old job, so I'll still have some income, but I'll be putting a full 40 hours a week into Spritely.)

tl;dr: I'm working on building the next generation of the fediverse as a distributed game. You can support this work if you so wish.

What on earth is Spritely?

"Well, vaporware currently", has been my joke since announcing it, but the plans, and even some core components, are starting to congeal, and I have decided it's time to throw myself fully into it.

But I still haven't answered the question, so I'll try to do so in bullet points. Spritely:

  • Aims to bring stronger user security, better anti-abuse tooling, stronger resistance against censorship, and more interesting interactions to users of the fediverse.
  • Is based on the massively popular ActivityPub standard (which I co-authored, so I do know a thing or two about this).
  • Aims to transform distributed social networks into distributed social games / virtual worlds. The dreams of the 90s are alive in Spritely.
  • Recognizes that ActivityPub is based on the actor model, and a pure version of the actor model is itself already a secure object capability system, so we don't have to break the spec to gain those powers... just change the discipline of how we use it.
  • Will be written in Racket.
  • Is an umbrella project for a number of modular tools necessary to get to this goal. The first, an object capability actor model system for Racket named Goblins, should see its first public release in the next week or two.
  • And of course it will be 100% free/libre/open source software.

That's a lot to unpack, and it also may sound overly ambitious. The game part in particular may sound strange, but I'll defend it on three fronts. First, not too many people run federated social web servers, but a lot of people run Minecraft servers... lots of teenagers run Minecraft servers... and it's not because Minecraft has the best graphics or the best fighting (it certainly doesn't), it's because Minecraft allows you to build a world together with your friends. Second, players of old MUDs, MOOs, MUSHes and etc from the 90s may recognize that modern social networks are structurally degenerate forms of the kinds of environments that existed then, but contemporary social networks lack the concept of a sense of place and interaction. Third, many interesting projects (Python's Twisted library, Flickr, much of object capability security patterns) have come out of trying to build such massively multiplayer world systems. Because of this last one in particular, I think that shooting for the stars means that if we don't make it we're likely to at least make the moon, so failure is okay if it means other things come out of it. (Also, four: it's a fun and motivating use case for me which I have explored before.)

To keep Spritely from being total vaporware, the way I will approach the project is by regularly releasing a series of "demos", some of which may be disjoint, but will hopefully increasingly converge on the vision. Consider Spritely a skunkworks-in-the-public-interest for the federated social web.

But why?

Standardizing ActivityPub was a much more difficult effort than anticipated, but equally or even more successful than I expected (partly due to Mastodon's adoption launching it past the sound barrier). In that sense this is great news. We now have dozens of projects adopting it, and the network has (last I looked) over 1.5 million registered users (which isn't the same as active users).

So, mission accomplished, right? Well, there are a few things that bother me.

  • The kind of rich interactions one can do are limited by a lack of authorization policy. Again, I believe object capabilities provide this, but it's not well explained to the public how to use it. (By contrast, Access Control Lists and friends are absolutely the wrong approach.)
  • Users are currently insufficiently protected from spam, abuse, and harassment while at the same time administrators are overwhelmed. This is leading a number of servers to move to a whitelisting of servers, which both re-centralizes the system and prioritizes big instances over smaller instances (it shouldn't matter what instance size you're on; arguably we should be encouraging smaller ones even). There are some paths forward, and I will hint at just one: what would happen if instead of one inbox, we had multiple inboxes? If I don't know you, you can access me via my public inbox, but maybe that's heavily moderated or you have to pay "postage". If I do know you, you might have an address with more direct access to me.
  • Relatedly, contemporary fediverse interfaces borrow from surveillance-capitalism based popular social networks by focusing on breadth of relationships rather than depth. Ever notice how the first thing Twitter shows you when you hover over a person's face is how many followers they have? I don't know about you, but I immediately compare that to my own follower count, and I don't even want to. This encourages high school popularity contest type bullshit, and it's by design. What if instead of focusing on how many people we can connect to we instead focused on the depth of our relationships? Much of the fediverse has imported "what works" directly from Facebook and Twitter, but I'd argue there's a lot we can do if we drop the assumption that this is the ideal starting base.
  • The contemporary view in the fediverse is that social scoping is like Python scoping: locals (instance) and globals (federation). Instance administrators are even encouraged to set up to run communities based on a specific niche, which is a nice reason to motivate administrators but it causes problems: even small differences between servers' expected policies often result in servers banning each other entirely. (Sometimes this is warranted, and I'm not opposed to moderation but rather looking for more effective forms of it.) Yet most of us are one person but part of many different communities with different needs. For instance, Alice may be a computer programmer, a tabletop game enthusiast, a fanfiction author, and a member of her family. In each of those settings she may present herself differently and also have different expectations of what is acceptable behavior. Alice should not need multiple accounts for this on different servers, so it would seem the right answer for community gathering is closer to something like mailing lists. What is acceptable at the gaming table may not be acceptable at work, and what happens on the fanfiction community perhaps does not need to be shared with one's family, and each community should be empowered to moderate appropriately.
  • I'd like to bridge the gap between peer-to-peer and federated systems. One hint as to how to do this: what happens when you run ActivityPub servers over Tor onion services or I2P? What if instead of our messages living at http addresses that could go down, they could be securely addressed by their encrypted contents?
  • Finally, I will admit the most urgent reason for these concerns... I'm very concerned politically about the state of the world and what I see as increasing authoritarianism and flagrant violations of human rights. I have a lot of worry that if we don't normalize use of decentralized and secure private systems, we will lose the ability to host them, though we've never needed them more urgently.

There are a lot of opportunities, and a lot of things I am excited about, but I am also afraid of inaction and how many regrets I will have if I don't try. I have the knowledge, the privilege, and the experience to at least attempt to make a dent in some of these things. I might not succeed. But I should try.

Who's going to pay for all this?

I don't really have a funding plan, so I guess this is kind of a non-answer. However, I do have a Patreon account you could donate to.

But should you donate? Well, I dunno, I feel like that's your call. Certainly many people are in worse positions than I am; I have a buffer and I still am doing some contracting to keep myself going for a while. Maybe you know people who need the money more than I do, or maybe you need it yourself. If this is the case, don't hesitate: take care of yourself and your loved ones first.

That said, FOSS in general has the property of being a public good but tends to have a free rider problem. While we did some fundraising for some of this stuff a few years ago, I gave the majority of the money to other people. Since then I've been mostly funding work on the federated social web myself in one way or another, usually by contracting on unrelated or quasi-related things to keep myself above the burn rate. I have the privilege and ability to do it, and I believe it's critical work. But I'd love to be able to work on this with focus, and maybe get things to the point to pull in and pay other people to help again. Perhaps if we reach that point I'll look at putting this work under a nonprofit. I do know I'm unwilling to break my FOSS principles to make it happen.

Anyway... you may even still be skeptical after reading all this about whether or not I can do it. I don't blame you... even I'm skeptical. But I'll try to convince you the way I'm going to convince myself: by pushing out demos until we reach something real.

Onwards and upwards!

14 October, 2018 07:54PM by Christopher Lemmer Webber

October 12, 2018

FSF Blogs

The completion of Sonali's Outreachy internship work on the Free Software Directory

For context, see the previous blog post, Sonali's Internship work on the Free Software Directory, part 2

After much work, I finally completed the upgrade of the Directory from the previous long term support version of MediaWiki, 1.27, to the current one, 1.31, which was released shortly after my internship started. I also made some general improvements.

  • I downloaded the Semantic MediaWiki extensions using composer;
  • I removed deprecated code in LocalSettings.php;
  • I ported the customizations to Vector skin to the new version;
  • I improved the search bar by placing it in the right navigation panel instead of the sidebar;
  • I added the FSF favicon; and
  • I spent about a week fixing bugs in the CASAuth and HeaderTabs extensions.

Upgrading the mobile site took more work, and after some testing I decided to switch from the MobileFrontend extension to the mobile friendly Timeless skin along with MobileDetect.

I recorded the shell commands required to set up the server and translated them into Ansible commands. Since I was unfamiliar with Ansible and YAML, I took some time to learn about them.

Then we performed the final migration. Andrew (my mentor) gave me the latest MySQL dump from the directory and made the old site read-only. I imported it to the new server and ran the upgrade script. Then he migrated the DNS. There were a few small hiccups, but after a few hours, the upgrade was complete.

It was my first internship and my first experience of working in a free software community, and I grew very attached to it. My mentors were very experienced and responsive. I was able to learn a lot from them. I am grateful that I got the opportunity to associate with such an amazing organization. Thanks to Outreachy organizers for giving me a great way to work for a distinguished organization and to develop my skills. Lastly, a big thanks to my mentors, Andrew and Ian, who helped me all along and made my internship a truly incredible experience!

12 October, 2018 04:50PM

October 11, 2018

Andy Wingo

heap object representation in spidermonkey

I was having a look through SpiderMonkey's source code today and found something interesting about how it represents heap objects and wanted to share.

I was first looking to see how to implement arbitrary-length integers ("bigints") by storing the digits inline in the allocated object. (I'll use the term "object" here, but from JS's perspective, bigints are rather values; they don't have identity. But I digress.) So you have a header indicating how many words it takes to store the digits, and the digits follow. This is how JavaScriptCore and V8 implementations of bigints work.

Incidentally, JSC's implementation was taken from V8. V8's was taken from Dart. Dart's was taken from Go. We might take SpiderMonkey's from Scheme48. Good times, right??

When seeing if SpiderMonkey could use this same strategy, I couldn't find how to make a variable-sized GC-managed allocation. It turns out that in SpiderMonkey you can't do that! SM's memory management system wants to work in terms of fixed-sized "cells". Even for objects that store properties inline in named slots, that's implemented in terms of standard cell sizes. So if an object has 6 slots, it might be implemented as instances of cells that hold 8 slots.

Truly variable-sized allocations seem to be managed off-heap, via malloc or other allocators. I am not quite sure how this works for GC-traced allocations like arrays, but let's assume that somehow it does.

Anyway, the point of this blog post. I was looking to see which part of SpiderMonkey reserves space for type information. For example, almost all objects in V8 start with a "map" word. This is the object's "hidden class". To know what kind of object you've got, you look at the map word. That word points to information corresponding to a class of objects; it's not available to store information that might vary between objects of that same class.

Interestingly, SpiderMonkey doesn't have a map word! Or at least, it doesn't have them on all allocations. Concretely, BigInt values don't need to reserve space for a map word. I can start storing data right from the beginning of the object.

But how can this work, you ask? How does the engine know what the type of some arbitrary object is?

The answer has a few interesting wrinkles. Firstly I should say that for objects that need hidden classes -- e.g. generic JavaScript objects -- there is indeed a map word. SpiderMonkey calls it a "Shape" instead of a "map" or a "hidden class" or a "structure" (as in JSC), but it's there, for that subset of objects.

But not all heap objects need to have these words. Strings, for example, are values rather than objects, and in SpiderMonkey they just have a small type code rather than a map word. But you know it's a string rather than something else in two ways: one, for "newborn" objects (those in the nursery), the GC reserves a bit to indicate whether the object is a string or not. (Really: it's specific to strings.)

For objects promoted out to the heap ("tenured" objects), objects of similar kinds are allocated in the same memory region (in kind-specific "arenas"). There are about a dozen trace kinds, corresponding to arena kinds. To get the kind of object, you find its arena by rounding the object's address down to the arena size, then look at the arena to see what kind of objects it has.
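
As a rough illustration of that address arithmetic (in Guile Scheme, since this post contains no code of its own): with a power-of-two arena size, masking off an address's low bits yields the arena base, where the trace kind is recorded. The 4 KiB arena size and the address below are made-up illustrative values, not SpiderMonkey's actual constants.

;; Hypothetical values, for illustration only.
(define arena-size 4096)                ;assumed to be a power of two

(define (arena-base address)
  ;; Clear the low bits, i.e. round the address down to the arena start.
  (logand address (lognot (- arena-size 1))))

(display (number->string (arena-base #x7f3a12345678) 16))
(newline)                               ;prints 7f3a12345000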

There's another cell bit reserved to indicate that an object has been moved, and that the rest of the bits have been overwritten with a forwarding pointer. These two reserved bits mostly don't conflict with any use a derived class might want to make from the first word of an object; if the derived class uses the first word for integer data, it's easy to just reserve the bits. If the first word is a pointer, then it's probably always aligned to a 4- or 8-byte boundary, so the low bits are zero anyway.

The upshot is that while we won't be able to allocate digits inline to BigInt objects in SpiderMonkey in the general case, we won't have a per-object map word overhead; and we can optimize the common case of digits requiring only a word or two of storage to have the digit pointer point to inline storage. GC is about compromise, and it seems this can be a good one.

Well, that's all I wanted to say. Looking forward to getting BigInt turned on upstream in Firefox!

11 October, 2018 02:33PM by Andy Wingo

FSF News

FSF statement on Microsoft joining the Open Invention Network

Microsoft's announcements on October 4th and 10th, that it has joined both LOT and the Open Invention Network (OIN), are significant steps in the right direction, potentially providing respite from Microsoft's well-known extortion of billions of dollars from free software redistributors.

These steps, though, do not by themselves fully address the problem of computational idea patents, or even Microsoft's specific infringement claims. They do not mean that Microsoft has dismantled or freely licensed its entire patent portfolio. The agreements for both LOT and OIN have substantial limitations and exclusions. LOT only deals with the problem of patent trolling by non-practicing entities. OIN's nonaggression agreement only covers a defined list of free software packages, and any OIN member, including Microsoft, can withdraw completely with thirty days notice.

With these limitations in mind, FSF welcomes the announcements, and calls on Microsoft to take additional steps to continue the momentum toward a complete resolution:

1) Make a clear, unambiguous statement that it has ceased all patent infringement claims on the use of Linux in Android.

2) Work within OIN to expand the definition of what it calls the "Linux System" so that the list of packages protected from patents actually includes everything found in a GNU/Linux system. This means, for example, removing the current arbitrary and very intentional exclusions for packages in the area of multimedia -- one of the primary patent minefields for free software. We suggest that this definition include every package in Debian's default public package repository.

3) Use the past patent royalties extorted from free software to fund the effective abolition of all patents covering ideas in software. This can be done by supporting grassroots efforts like the FSF's End Software Patents campaign, or by Microsoft directly urging the US Congress to pass legislation excluding software from the effects of patents, or both. Without this, the threats can come back with a future leadership change at Microsoft, or with changes in OIN's own corporate structure and licensing arrangements. This is also the best way for Microsoft to show that it does not intend to use patents as a weapon against any free software, beyond just that free software which is part of OIN's specific list.

The FSF appreciates what Microsoft joining OIN seems to signal about its changing attitude toward computational idea patents. Taking these three additional steps would remove all doubt and any potential for backsliding. We look forward to future collaboration on fully addressing the threat of patents to free software development and computer user freedom.

The FSF will also continue to monitor the situation, for any signs that Microsoft intends to still continue patent aggression, in ways permitted by the terms of LOT and OIN. We encourage anyone who is a target of such patent aggression by Microsoft to contact us at campaigns@fsf.org.

Media Contact

John Sullivan
Executive Director
+1 (617) 542-5942
campaigns@fsf.org

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

11 October, 2018 01:13AM

October 10, 2018

FSF job opportunity: program manager

The Free Software Foundation (FSF), a Massachusetts 501(c)(3) charity with a worldwide mission to protect computer user freedom, seeks a motivated and talented Boston-based individual to be our full-time program manager.

Reporting to the executive director, the program manager co-leads our campaigns team. This position develops and promotes longer-term resources and advocacy programs related to increasing the use of free software and expanding and advancing the free software movement. The program manager plays a key role in external communications, fundraising, member engagement, and special events.

Examples of job responsibilities include, but are not limited to:

  • Lead the planning and successful implementation of most events, such as our annual LibrePlanet conference;
  • Develop and maintain longer-term free software resources, such as the High Priority Projects list;
  • Coordinate two annual fundraising appeals, including goal setting, strategy, and working with outside contractors;
  • Implement the FSF's communications and messaging strategy, including serving as a primary point of contact with press and the external public;
  • Write and edit for FSF blogs, external periodical publications, and both digital and print resources;
  • Assist with planning and execution of issue campaigns, working in concert with the campaigns manager;
  • Occasional conference travel and speaking as an FSF representative.

Ideal candidates have at least three to five years of work experience with project management, fundraising, events management, and nonprofit program management. Proficiency, experience, and comfort with professional writing and media relationships preferred. Because the FSF works globally and seeks to have our materials distributed in as many languages as possible, multilingual candidates will have an advantage. With our small staff of fourteen, each person makes a clear contribution. We work hard, but offer a humane and fun work environment at an office located in the heart of downtown Boston. The FSF is a mature but growing organization that provides great potential for advancement; existing staff get the first chance at any new job openings.

Benefits and Salary

This job is a union position that must be worked on-site at the FSF's downtown Boston office. The salary is fixed at $61,672/year and is non-negotiable. Other benefits include:

  • Fully subsidized individual or family health coverage through Blue Cross Blue Shield;
  • Partially subsidized dental plan;
  • Four weeks of paid vacation annually;
  • Seventeen paid holidays annually;
  • Weekly remote work allowance;
  • Public transit commuting cost reimbursement;
  • 403(b) program with employer match;
  • Yearly cost-of-living pay increases based on government guidelines;
  • Health care expense reimbursement;
  • Ergonomic budget;
  • Relocation (to Boston area) expense reimbursement;
  • Conference travel and professional development opportunities; and
  • Potential for an annual performance bonus.

Application Instructions

Applications must be submitted via email to hiring@fsf.org. The email must contain the subject line "Program Manager." A complete application should include:

  • Cover letter
  • Resume
  • Two recent writing samples

All materials must be in a free format. Email submissions that do not follow these instructions will probably be overlooked. No phone calls, please.

Applications will be reviewed on a rolling basis until the position is filled. To guarantee consideration, submit your application by Sunday, October 28, 2018.

The FSF is an equal opportunity employer and will not discriminate against any employee or applicant for employment on the basis of race, color, marital status, religion, age, sex, sexual orientation, national origin, handicap, or any other legally protected status recognized by federal, state or local law. We value diversity in our workplace.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software — particularly the GNU operating system and its GNU/Linux variants — and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. We are based in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

10 October, 2018 08:32PM

FSF Events

Richard Stallman - "Can we defend freedom and privacy from computing?" (Chicago, IL)

This speech by Richard Stallman will be nontechnical, admission is gratis, and the public is encouraged to attend.

Location: Hermann Hall Auditorium, 3241 S Federal St. (CTA Red and Green Line trains, and number 1 and 29 buses), Chicago, IL

Please fill out our contact form, so that we can contact you about future events in and around Chicago.

10 October, 2018 07:25PM

GNU Guix

A packaging tutorial for Guix

Introduction

GNU Guix stands out as the hackable package manager, mostly because it uses GNU Guile, a powerful high-level programming language, one of the Scheme dialects from the Lisp family.

Package definitions are also written in Scheme, which empowers Guix in some unique ways, unlike most other package managers, which use shell scripts or simple languages.

  • Use functions, structures, macros and all of Scheme expressiveness for your package definitions.

  • Inheritance makes it easy to customize a package by inheriting from it and modifying only what is needed.

  • Batch processing: the whole package collection can be parsed, filtered and processed. Building a headless server with all graphical interfaces stripped out? It's possible. Want to rebuild everything from source using specific compiler optimization flags? Pass the #:make-flags "..." argument to the list of packages. It wouldn't be a stretch to think Gentoo USE flags here, but this goes even further: the changes don't have to be thought out beforehand by the packager, they can be programmed by the user!
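
As a hedged sketch of that batch-processing idea (assuming a Guix installation whose (gnu packages), (guix packages) and (guix build-system gnu) modules are on the load path), the snippet below walks the whole package collection and collects the names of packages built with gnu-build-system; you could paste it into a guix repl session.

(use-modules (gnu packages)             ;provides fold-packages
             (guix packages)            ;provides package-name, package-build-system
             (guix build-system gnu))

;; Walk every known package and keep the names of those that use
;; gnu-build-system.
(define gnu-built
  (fold-packages
   (lambda (package result)
     (if (eq? (package-build-system package) gnu-build-system)
         (cons (package-name package) result)
         result))
   '()))

(format #t "~a packages use gnu-build-system~%" (length gnu-built))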

The following tutorial covers all the basics around package creation with Guix. It does not assume much knowledge of the Guix system nor of the Lisp language. The reader is only expected to be familiar with the command line and to have some basic programming knowledge.

A "Hello World" package

The “Defining Packages” section of the manual introduces the basics of Guix packaging. In the following section, we will partly go over those basics again.

GNU hello is a dummy project that serves as an idiomatic example for packaging. It uses the GNU build system (./configure && make && make install). Guix already provides a package definition which is a perfect example to start with. You can look up its declaration with guix edit hello from the command line. Let's see how it looks:

(define-public hello
  (package
    (name "hello")
    (version "2.10")
    (source (origin
              (method url-fetch)
              (uri (string-append "mirror://gnu/hello/hello-" version
                                  ".tar.gz"))
              (sha256
               (base32
                "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"))))
    (build-system gnu-build-system)
    (synopsis "Hello, GNU world: An example GNU package")
    (description
     "GNU Hello prints the message \"Hello, world!\" and then exits.  It
serves as an example of standard GNU coding practices.  As such, it supports
command-line arguments, multiple languages, and so on.")
    (home-page "https://www.gnu.org/software/hello/")
    (license gpl3+)))

As you can see, most of it is rather straightforward. But let's review the fields together:

  • name: The project name. Using Scheme conventions, we prefer to keep it lower case, without underscore and using dash-separated words.
  • source: This field contains a description of the source code origin. The origin record contains these fields:

    1. The method, here url-fetch to download via HTTP/FTP, but other methods exist, such as git-fetch for Git repositories (a sketch of such an origin appears after this list).
    2. The URI, which is typically some https:// location for url-fetch. Here the special mirror://gnu refers to a set of well known locations, all of which can be used by Guix to fetch the source, should some of them fail.
    3. The sha256 checksum of the requested file. This is essential to ensure the source is not corrupted. Note that Guix works with base32 strings, hence the call to the base32 function.
  • build-system: This is where the power of abstraction provided by the Scheme language really shines: in this case, the gnu-build-system abstracts away the famous ./configure && make && make install shell invocations. Other build systems include the trivial-build-system, which does not do anything and requires the packager to program all the build steps; the python-build-system; the emacs-build-system; and many more.
  • synopsis: It should be a concise summary of what the package does. For many packages a tagline from the project's home page can be used as the synopsis.
  • description: Same as for the synopsis, it's fine to re-use the project description from the homepage. Note that Guix uses Texinfo syntax.
  • home-page: Use HTTPS if available.
  • license: See guix/licenses.scm in the project source for a full list.
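
For comparison, here is a hedged sketch of what an origin using git-fetch could look like; git-fetch, git-reference and git-file-name come from the (guix git-download) module, and the URL, commit and hash below are placeholders rather than a real project:

;; Placeholder origin: every value below is illustrative only.
(origin
  (method git-fetch)
  (uri (git-reference
        (url "https://example.org/some-project.git")             ;hypothetical repository
        (commit "0123456789abcdef0123456789abcdef01234567")))    ;placeholder commit
  (file-name (git-file-name "some-project" "1.0"))
  (sha256
   (base32
    ;; Placeholder; a real value is the base32 hash reported by
    ;; `guix hash -rx .' in a checkout of that commit.
    "0000000000000000000000000000000000000000000000000000")))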

Time to build our first package! Nothing fancy here for now: we will stick to a dummy "my-hello", a copy of the above declaration.

As with the ritualistic "Hello World" taught with most programming languages, this will possibly be the most "manual" approach. We will work out an ideal setup later; for now we will go the simplest route.

Save the following to a file my-hello.scm.

(use-modules (guix packages)
             (guix download)
             (guix build-system gnu)
             (guix licenses))

(package
  (name "my-hello")
  (version "2.10")
  (source (origin
            (method url-fetch)
            (uri (string-append "mirror://gnu/hello/hello-" version
                                ".tar.gz"))
            (sha256
             (base32
              "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"))))
  (build-system gnu-build-system)
  (synopsis "Hello, Guix world: An example custom Guix package")
  (description
   "GNU Hello prints the message \"Hello, world!\" and then exits.  It
serves as an example of standard GNU coding practices.  As such, it supports
command-line arguments, multiple languages, and so on.")
  (home-page "https://www.gnu.org/software/hello/")
  (license gpl3+))

We will explain the extra code in a moment.

Feel free to play with the different values of the various fields. If you change the source, you'll need to update the checksum. Indeed, Guix refuses to build anything if the given checksum does not match the computed checksum of the source code. To obtain the correct checksum for the package declaration, we need to download the source, compute its sha256 checksum and convert it to base32.

Thankfully, Guix can automate this task for us; all we need is to provide the URI:

$ guix download mirror://gnu/hello/hello-2.10.tar.gz

Starting download of /tmp/guix-file.JLYgL7
From https://ftpmirror.gnu.org/gnu/hello/hello-2.10.tar.gz...
following redirection to `https://mirror.ibcp.fr/pub/gnu/hello/hello-2.10.tar.gz'...
 …10.tar.gz  709KiB                                 2.5MiB/s 00:00 [##################] 100.0%
/gnu/store/hbdalsf5lpf01x4dcknwx6xbn6n5km6k-hello-2.10.tar.gz
0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i

In this specific case, the output tells us which mirror was chosen. If the result of the above command is not the same as in the above snippet, update your my-hello declaration accordingly.

Note that GNU package tarballs come with an OpenPGP signature, so you should definitely check the signature of this tarball with gpg to authenticate it before going further:

$ guix download mirror://gnu/hello/hello-2.10.tar.gz.sig

Starting download of /tmp/guix-file.03tFfb
From https://ftpmirror.gnu.org/gnu/hello/hello-2.10.tar.gz.sig...
following redirection to `https://ftp.igh.cnrs.fr/pub/gnu/hello/hello-2.10.tar.gz.sig'...
 ….tar.gz.sig  819B                                                                                                                       1.2MiB/s 00:00 [##################] 100.0%
/gnu/store/rzs8wba9ka7grrmgcpfyxvs58mly0sx6-hello-2.10.tar.gz.sig
0q0v86n3y38z17rl146gdakw9xc4mcscpk8dscs412j22glrv9jf
$ gpg --verify /gnu/store/rzs8wba9ka7grrmgcpfyxvs58mly0sx6-hello-2.10.tar.gz.sig /gnu/store/hbdalsf5lpf01x4dcknwx6xbn6n5km6k-hello-2.10.tar.gz
gpg: Signature made Sun 16 Nov 2014 01:08:37 PM CET
gpg:                using RSA key A9553245FDE9B739
gpg: Good signature from "Sami Kerola <kerolasa@iki.fi>" [unknown]
gpg:                 aka "Sami Kerola (http://www.iki.fi/kerolasa/) <kerolasa@iki.fi>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 8ED3 96E3 7E38 D471 A005  30D3 A955 3245 FDE9 B739

Now you can happily run

$ guix package --install-from-file=my-hello.scm

You should now have my-hello in your profile!

$ guix package --list-installed=my-hello
my-hello    2.10    out
/gnu/store/f1db2mfm8syb8qvc357c53slbvf1g9m9-my-hello-2.10

We've gone as far as we could without any knowledge of Scheme. Now is the right time to introduce the minimum we need from the language before we can proceed.

A Scheme crash-course

As we've seen above, basic packages don't require much Scheme knowledge, if any at all. But as you progress and your desire to write more and more complex packages grows, it will become both necessary and empowering to hone your Lisper skills.

Since an extensive Lisp course is very much out of the scope of this tutorial, we will only cover some basics here.

Guix uses the Guile implementation of Scheme. To start playing with the language, install it with guix package --install guile and start a REPL by running guile from the command line.

Alternatively you can also run guix environment --ad-hoc guile -- guile if you'd rather not have Guile installed in your user profile.

In the following examples we use the > symbol to denote the REPL prompt, that is, the line reserved for user input. See the Guile manual for more details on the REPL.

  • Scheme syntax boils down to a tree of expressions (or s-expressions in Lisp lingo). An expression can be a literal, such as a number or a string, or a compound, which is a parenthesized list of compounds and literals. #t and #f stand for the booleans "true" and "false", respectively.

    Examples of valid expressions:

    > "Hello World!"
    "Hello World!"
    > 17
    17
    > (display (string-append "Hello " "Guix" "\n"))
    "Hello Guix!"
  • This last example is a function call embedded in another function call. When a parenthesized expression is evaluated, the first term is the function and the rest are the arguments passed to the function. Every function returns the last evaluated expression as value.

  • Anonymous functions are declared with the lambda term:

    > (lambda (x) (* x x))
    #<procedure 120e348 at <unknown port>:24:0 (x)>

    The above lambda returns the square of its argument. Since everything is an expression, the lambda expression returns an anonymous function, which can in turn be applied to an argument:

    > ((lambda (x) (* x x)) 3)
    9
  • Anything can be assigned a global name with define:

    > (define a 3)
    > (define square (lambda (x) (* x x)))
    > (square a)
    9
  • Procedures can be defined more concisely with the following syntax:

    (define (square x) (* x x))
  • A list structure can be created with the list procedure:

    > (list 2 a 5 7)
    (2 3 5 7)
  • The quote disables evaluation of a parenthesized expression: the first term is not called over the other terms. Thus it effectively returns a list of terms.

    > '(display (string-append "Hello " "Guix" "\n"))
    (display (string-append "Hello " "Guix" "\n"))
    > '(2 a 5 7)
    (2 a 5 7)
  • The quasiquote disables evaluation of a parenthesized expression until a comma re-enables it. Thus it provides us with fine-grained control over what is evaluated and what is not.

    > `(2 a 5 7 (2 ,a 5 ,(+ a 4)))
    (2 a 5 7 (2 3 5 7))

    Note that the above result is a list of mixed elements: numbers, symbols (here a), and a list as the last element.

  • Multiple variables can be named locally with let:

    > (define x 10)
    > (let ((x 2)
            (y 3))
        (list x y))
    (2 3)
    > x
    10
    > y
    ERROR: In procedure module-lookup: Unbound variable: y

    Use let* to allow later variable declarations to refer to earlier definitions.

    > (let* ((x 2)
             (y (* x 3)))
        (list x y))
    (2 6)
  • The keyword syntax is #:; it is used to create unique identifiers (see the short example after this list). See also the Keywords section in the Guile manual.

  • The percent sign % is typically used for read-only global variables in the build stage. Note that it is merely a convention, like _ in C: Scheme treats % exactly the same as any other character.

  • Modules are created with define-module. For instance

    (define-module (guix build-system ruby)
      #:use-module (guix store)
      #:export (ruby-build
                ruby-build-system))

    defines the module (guix build-system ruby), which must be located at guix/build-system/ruby.scm somewhere in the GUILE_LOAD_PATH. It depends on the (guix store) module and it exports two symbols, ruby-build and ruby-build-system.
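
As an aside on the keyword syntax mentioned above, here is a minimal sketch of Guile keyword arguments; the greet procedure and its #:greeting parameter are made up for illustration (define* is part of standard Guile):

> (define* (greet name #:key (greeting "Hello"))
    (string-append greeting ", " name "!"))
> (greet "Guix")
"Hello, Guix!"
> (greet "Guix" #:greeting "Hi")
"Hi, Guix!"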

For a more detailed introduction, check out Scheme at a Glance, by Steve Litt.

One of the reference Scheme books is the seminal Structure and Interpretation of Computer Programs, by Harold Abelson and Gerald Jay Sussman, with Julie Sussman. You'll find a free copy online, together with videos of the lectures by the authors. The book is available in Texinfo format as the sicp Guix package. Go ahead, run guix package --install sicp and start reading with info sicp (or with the Emacs Info reader). An unofficial ebook is also available.

You'll find more books, tutorials and other resources at https://schemers.org/.

Setup

Now that we know some Scheme basics we can detail the different possible setups for working on Guix packages.

There are several ways to set up a Guix packaging environment.

We recommend you work directly on the Guix source checkout since it makes it easier for everyone to contribute to the project.

But first, let's look at other possibilities.

Local file

This is what we previously did with my-hello. Now that we know more Scheme, let's explain the leading chunks. As stated in guix package --help:

-f, --install-from-file=FILE
                       install the package that the code within FILE
                       evaluates to

Thus the last expression must return a package, which is the case in our earlier example.

The use-modules expression tells which modules are needed in the file. Modules are a collection of values and procedures. They are commonly called "libraries" or "packages" in other programming languages.
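
For reference, the top of such a file typically looks like the following sketch; the module list mirrors the one used by my-hello, and the file must end with an expression that evaluates to a package:

;; Top of a hypothetical my-hello.scm: import the modules the package
;; definition relies on.  The last expression of the file must evaluate
;; to a package so that --install-from-file can use it.
(use-modules (guix packages)
             (guix download)
             (guix build-system gnu)
             (guix licenses))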

GUIX_PACKAGE_PATH

Note: Starting from Guix 0.16, the more flexible Guix "channels" are the preferred way and supersede GUIX_PACKAGE_PATH. See below.

It can be tedious to specify the file from the command line instead of simply calling guix package --install my-hello as you would do with the official packages.

Guix makes it possible to streamline the process by adding as many "package declaration paths" as you want.

Create a directory, say ~/guix-packages, and add it to the GUIX_PACKAGE_PATH environment variable:

$ mkdir ~/guix-packages
$ export GUIX_PACKAGE_PATH=~/guix-packages

To add several directories, separate them with a colon (:).

Our previous my-hello needs some adjustments though:

(define-module (my-hello)
  #:use-module (guix licenses)
  #:use-module (guix packages)
  #:use-module (guix build-system gnu)
  #:use-module (guix download))

(define-public my-hello
  (package
    (name "my-hello")
    (version "2.10")
    (source (origin
              (method url-fetch)
              (uri (string-append "mirror://gnu/hello/hello-" version
                                  ".tar.gz"))
              (sha256
               (base32
                "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"))))
    (build-system gnu-build-system)
    (synopsis "Hello, Guix world: An example custom Guix package")
    (description
     "GNU Hello prints the message \"Hello, world!\" and then exits.  It
serves as an example of standard GNU coding practices.  As such, it supports
command-line arguments, multiple languages, and so on.")
    (home-page "https://www.gnu.org/software/hello/")
    (license gpl3+)))

Note that we have assigned the package value to an exported variable name with define-public. This effectively assigns the package to the my-hello variable so that it can be referenced, among other things, as a dependency of other packages.

If you use guix package --install-from-file=my-hello.scm on the above file, it will fail because the last expression, define-public, does not return a package. If you want to use define-public in this use-case nonetheless, make sure the file ends with an evaluation of my-hello:

; ...
(define-public my-hello
  ; ...
  )

my-hello

This last example is not very typical.

Now my-hello should be part of the package collection like all other official packages. You can verify this with:

$ guix package --show=my-hello

Guix channels

Guix 0.16 features channels, which are very similar to GUIX_PACKAGE_PATH but provide better integration and provenance tracking. Channels are not necessarily local; they can be maintained as a public Git repository, for instance. Of course, several channels can be used at the same time.

See the “Channels” section in the manual for setup details.
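
To give a rough idea, a channel declaration (typically in ~/.config/guix/channels.scm) might look like the following sketch; the channel name and URL below are placeholders for your own repository:

;; Add one extra channel on top of the default Guix channel.
(cons (channel
       (name 'my-packages)                          ; placeholder name
       (url "https://example.org/my-packages.git")) ; placeholder URL
      %default-channels)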

Direct checkout hacking

Working directly on the Guix project is recommended: it reduces the friction when the time comes to submit your changes upstream to let the community benefit from your hard work!

Unlike most software distributions, the Guix repository holds in one place both the tooling (including the package manager) and the package definitions. This choice gives developers the flexibility to modify the API without breakage, since all packages can be updated at the same time. It reduces development inertia.

Check out the official Git repository:

$ git clone https://git.savannah.gnu.org/git/guix.git

In the rest of this article, we use $GUIX_CHECKOUT to refer to the location of the checkout.

Follow the instructions in the "Contributing" chapter of the manual to set up the repository environment.

Once ready, you should be able to use the package definitions from the repository environment.

Feel free to edit package definitions found in $GUIX_CHECKOUT/gnu/packages.

The $GUIX_CHECKOUT/pre-inst-env script lets you use guix over the package collection of the repository.

  • Search packages, such as Ruby:

    $ cd $GUIX_CHECKOUT
    $ ./pre-inst-env guix package --list-available=ruby
        ruby    1.8.7-p374      out     gnu/packages/ruby.scm:119:2
        ruby    2.1.6   out     gnu/packages/ruby.scm:91:2
        ruby    2.2.2   out     gnu/packages/ruby.scm:39:2
  • Build a package, here Ruby version 2.1:

    $ ./pre-inst-env guix build --keep-failed ruby@2.1
    /gnu/store/c13v73jxmj2nir2xjqaz5259zywsa9zi-ruby-2.1.6
  • Install it to your user profile:

    $ ./pre-inst-env guix package --install ruby@2.1
  • Check for common mistakes:

    $ ./pre-inst-env guix lint ruby@2.1

Guix strives to maintain a high packaging standard; when contributing to the Guix project, remember to follow the coding style and review the contribution checklist from the manual.

Once you are happy with the result, you are welcome to send your contribution to make it part of Guix. This process is also detailed in the manual.

It's a community effort, so the more who join in, the better Guix becomes!

Extended example

The above "Hello World" example is as simple as it goes. Packages can be more complex than that and Guix can handle more advanced scenarios. Let's look at another, more sophisticated package (slightly modified from the source):

(define-module (gnu packages version-control)
  #:use-module ((guix licenses) #:prefix license:)
  #:use-module (guix utils)
  #:use-module (guix packages)
  #:use-module (guix git-download)
  #:use-module (guix build-system cmake)
  #:use-module (gnu packages ssh)
  #:use-module (gnu packages web)
  #:use-module (gnu packages pkg-config)
  #:use-module (gnu packages python)
  #:use-module (gnu packages compression)
  #:use-module (gnu packages tls))

(define-public my-libgit2
  (let ((commit "e98d0a37c93574d2c6107bf7f31140b548c6a7bf")
        (revision "1"))
    (package
      (name "my-libgit2")
      (version (git-version "0.26.6" revision commit))
      (source (origin
                (method git-fetch)
                (uri (git-reference
                      (url "https://github.com/libgit2/libgit2/")
                      (commit commit)))
                (file-name (git-file-name name version))
                (sha256
                 (base32
                  "17pjvprmdrx4h6bb1hhc98w9qi6ki7yl57f090n9kbhswxqfs7s3"))
                (patches (search-patches "libgit2-mtime-0.patch"))
                (modules '((guix build utils)))
                (snippet '(begin
                            ;; Remove bundled software.
                            (delete-file-recursively "deps")
                            #t))))
      (build-system cmake-build-system)
      (outputs '("out" "debug"))
      (arguments
       `(#:tests? #t                            ; Run the test suite (this is the default)
         #:configure-flags '("-DUSE_SHA1DC=ON") ; SHA-1 collision detection
         #:phases
         (modify-phases %standard-phases
           (add-after 'unpack 'fix-hardcoded-paths
             (lambda _
               (substitute* "tests/repo/init.c"
                 (("#!/bin/sh") (string-append "#!" (which "sh"))))
               (substitute* "tests/clar/fs.h"
                 (("/bin/cp") (which "cp"))
                 (("/bin/rm") (which "rm")))
               #t))
           ;; Run checks more verbosely.
           (replace 'check
             (lambda _ (invoke "./libgit2_clar" "-v" "-Q")))
           (add-after 'unpack 'make-files-writable-for-tests
               (lambda _ (for-each make-file-writable (find-files "." ".*")))))))
      (inputs
       `(("libssh2" ,libssh2)
         ("http-parser" ,http-parser)
         ("python" ,python-wrapper)))
      (native-inputs
       `(("pkg-config" ,pkg-config)))
      (propagated-inputs
       ;; These two libraries are in 'Requires.private' in libgit2.pc.
       `(("openssl" ,openssl)
         ("zlib" ,zlib)))
      (home-page "https://libgit2.github.com/")
      (synopsis "Library providing Git core methods")
      (description
       "Libgit2 is a portable, pure C implementation of the Git core methods
provided as a re-entrant linkable library with a solid API, allowing you to
write native speed custom Git applications in any language with bindings.")
      ;; GPLv2 with linking exception
      (license license:gpl2))))

(In those cases where you only want to tweak a few fields from a package definition, you should rely on inheritance instead of copy-pasting everything. See below.)

Let's discuss those fields in depth.

git-fetch method

Unlike the url-fetch method, git-fetch expects a git-reference, which takes a Git repository and a commit. The commit can be any Git reference, such as a tag, so if the version is tagged it can be used directly. Sometimes the tag is prefixed with a v, in which case you'd use (commit (string-append "v" version)).
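
As an illustration, an origin for a hypothetical tagged release might look like this sketch; the URL is a placeholder and the hash is left as zeros on purpose (replace it with the real guix hash output). The name and version variables would come from the enclosing package form, as in the my-libgit2 example below:

(origin
  (method git-fetch)
  (uri (git-reference
        (url "https://example.org/foo.git")      ; placeholder URL
        (commit (string-append "v" version))))   ; tag "v1.2.3" for version "1.2.3"
  (file-name (git-file-name name version))
  (sha256
   (base32
    "0000000000000000000000000000000000000000000000000000")))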

To ensure that the source code from the Git repository is stored in a unique directory with a readable name we use (file-name (git-file-name name version)).

Note that there is also a git-version procedure that can be used to derive the version when packaging programs for a specific commit.

Snippets

Snippets are quoted (i.e. non-evaluated) Scheme code that are a means of patching the source. They are a Guix-y alternative to the traditional .patch files. Because of the quote, the code is only evaluated when passed to the Guix daemon for building.

There can be as many snippets as needed.

Snippets might need additional Guile modules which can be imported from the modules field.

Inputs

First, a syntactic comment: See the quasi-quote / comma syntax?

(native-inputs
 `(("pkg-config" ,pkg-config)))

is equivalent to

(native-inputs
 (list (list "pkg-config" pkg-config)))

You'll mostly see the former because it's shorter.

There are 3 different input types. In short:

  • native-inputs: Required for building but not at runtime – installing a package through a substitute won't install these inputs.
  • inputs: Installed in the store but not in the profile, as well as being present at build time.
  • propagated-inputs: Installed in the store and in the profile, as well as being present at build time.

See the package reference in the manual for more details.

The distinction between the various inputs is important: if a dependency can be handled as an input instead of a propagated input, it should be, or else it "pollutes" the user profile for no good reason.

For instance, a user installing a graphical program that depends on a command line tool might only be interested in the graphical part, so there is no need to force the command line tool into the user profile. The dependency is a concern to the package, not to the user. Inputs make it possible to handle dependencies without bugging the user by adding undesired executable files (or libraries) to their profile.

Same goes for native-inputs: once the program is installed, build-time dependencies can be safely garbage-collected. It also matters when a substitute is available, in which case only the inputs and propagated inputs will be fetched: the native inputs are not required to install a package from a substitute.

Outputs

Just like how a package can have multiple inputs, it can also produce multiple outputs.

Each output corresponds to a separate directory in the store.

The user can choose which output to install; this is useful to save space or to avoid polluting the user profile with unwanted executables or libraries.

Output separation is optional. When the outputs field is left out, the default and only output (the complete package) is referred to as "out".

Typical separate output names include debug and doc.

It's advised to separate outputs only when you've shown it's worth it: if the output size is significant (compare with guix size) or in case the package is modular.

Build system arguments

The arguments field is a keyword-value list used to configure the build process.

The simplest argument #:tests? can be used to disable the test suite when building the package. This is mostly useful when the package does not feature any test suite. It's strongly recommended to keep the test suite on if there is one.
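
As a minimal sketch, assuming a hypothetical package that genuinely ships no test suite:

(arguments
 '(#:tests? #f))   ; upstream ships no test suite, so skip the 'check phase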

Another common argument is #:make-flags, which specifies a list of flags to append when running make, as you would from the command line. For instance, the following flags

#:make-flags (list (string-append "prefix=" (assoc-ref %outputs "out"))
                   "CC=gcc")

translate into

$ make CC=gcc prefix=/gnu/store/...-<out>

This sets the C compiler to gcc and the prefix variable (the installation directory in Make parlance) to (assoc-ref %outputs "out"), which is a build-stage global variable pointing to the destination directory in the store (something like /gnu/store/...-my-libgit2-20180408).

Similarly, it's possible to set the "configure" flags.

#:configure-flags '("-DUSE_SHA1DC=ON")

The %build-inputs variable is also available in the build scope. It's an association table that maps the input names to their store directories.
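
For example, a hypothetical build-side lookup, assuming the package has an input labelled "pkg-config", would look like this:

;; Returns the store directory of the "pkg-config" input, something
;; like "/gnu/store/...-pkg-config-0.29" (path elided here).
(assoc-ref %build-inputs "pkg-config")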

The phases keyword lists the sequential steps of the build system. Typically phases include unpack, configure, build, install and check. To know more about those phases, consult the appropriate build system definition in $GUIX_CHECKOUT/guix/build/gnu-build-system.scm:

(define %standard-phases
  ;; Standard build phases, as a list of symbol/procedure pairs.
  (let-syntax ((phases (syntax-rules ()
                         ((_ p ...) `((p . ,p) ...)))))
    (phases set-SOURCE-DATE-EPOCH set-paths install-locale unpack
            bootstrap
            patch-usr-bin-file
            patch-source-shebangs configure patch-generated-file-shebangs
            build check install
            patch-shebangs strip
            validate-runpath
            validate-documentation-location
            delete-info-dir-file
            patch-dot-desktop-files
            install-license-files
            reset-gzip-timestamps
            compress-documentation)))

Or from the REPL:

> (add-to-load-path "/path/to/guix/checkout")
> ,module (guix build gnu-build-system)
> (map first %standard-phases)
(set-SOURCE-DATE-EPOCH set-paths install-locale unpack bootstrap
patch-usr-bin-file patch-source-shebangs configure
patch-generated-file-shebangs build check install patch-shebangs strip
validate-runpath validate-documentation-location delete-info-dir-file
patch-dot-desktop-files install-license-files reset-gzip-timestamps
compress-documentation)

If you want to know more about what happens during those phases, consult the associated procedures.

For instance, as of this writing the definition of unpack for the GNU build system is

(define* (unpack #:key source #:allow-other-keys)
  "Unpack SOURCE in the working directory, and change directory within the
source.  When SOURCE is a directory, copy it in a sub-directory of the current
working directory."
  (if (file-is-directory? source)
      (begin
        (mkdir "source")
        (chdir "source")

        ;; Preserve timestamps (set to the Epoch) on the copied tree so that
        ;; things work deterministically.
        (copy-recursively source "."
                          #:keep-mtime? #t))
      (begin
        (if (string-suffix? ".zip" source)
            (invoke "unzip" source)
            (invoke "tar" "xvf" source))
        (chdir (first-subdirectory "."))))
  #t)

Note the chdir call: it changes the working directory to where the source was unpacked. Thus every phase following unpack will use the source as its working directory, which is why we can directly work on the source files; that is, unless a later phase changes the working directory to something else.

We modify the build system's list of %standard-phases with the modify-phases macro, according to a list of specified modifications, which may have the following forms:

  • (add-before PHASE NEW-PHASE PROCEDURE): Run PROCEDURE named NEW-PHASE before PHASE.
  • (add-after PHASE NEW-PHASE PROCEDURE): Same, but afterwards.
  • (replace PHASE PROCEDURE): Replace PHASE with PROCEDURE.
  • (delete PHASE): Remove PHASE from the list.

The PROCEDURE supports the keyword arguments inputs and outputs. Each input (whether native, propagated or not) and output directory is referenced by its name in those variables. Thus (assoc-ref outputs "out") is the store directory of the main output of the package. A phase procedure may look like this:

(lambda* (#:key inputs outputs #:allow-other-keys)
  (let ((bash-directory (assoc-ref inputs "bash"))
        (output-directory (assoc-ref outputs "out"))
        (doc-directory (assoc-ref outputs "doc")))
    ; ...
    #t))

The procedure must return #t on success. It's brittle to rely on the return value of the last expression used to tweak the phase because there is no guarantee it would be #t. Hence the trailing #t to ensure the right value is returned on success.

Code staging

The astute reader may have noticed the quasi-quote and comma syntax in the arguments field. Indeed, the build code in the package declaration should not be evaluated on the client side, but only when passed to the Guix daemon. This mechanism of passing code between two running processes is called code staging.

"Utils" functions

When customizing phases, we often need to write code that mimics the equivalent system invocations (make, mkdir, cp, etc.) commonly used during regular "Unix-style" installations.

Some, like chmod, are native to Guile. See the Guile reference manual for a complete list.

Guix provides additional helper functions which prove especially handy in the context of package management.

Some of those functions can be found in $GUIX_CHECKOUT/guix/build/utils.scm. Most of them mirror the behaviour of the traditional Unix system commands (a short combined example follows the list):

  • which: Like the which system command.
  • find-files: Akin to the find system command.
  • mkdir-p: Like mkdir -p, which creates all parents as needed.
  • install-file: Similar to install when installing a file to a (possibly non-existing) directory. Guile has copy-file which works like cp.
  • copy-recursively: Like cp -r.
  • delete-file-recursively: Like rm -rf.
  • invoke: Run an executable. This should be used instead of system*.
  • with-directory-excursion: Run the body in a different working directory, then restore the previous working directory.
  • substitute*: A "sed-like" function.
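
As the combined illustration promised above, here is a hedged sketch of a custom phase using a few of these helpers; the phase name, file names, and documentation directory are made up for the example:

(add-after 'install 'install-extra-doc
  (lambda* (#:key outputs #:allow-other-keys)
    (let ((doc (string-append (assoc-ref outputs "out")
                              "/share/doc/foo")))
      (mkdir-p doc)                       ; like `mkdir -p`
      (install-file "README" doc)         ; like `install`
      (substitute* "scripts/foo.sh"       ; sed-like in-place edit
        (("/bin/sh") (which "sh")))
      #t)))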

Module prefix

The license in our last example needs a prefix: this is because of how the license module was imported in the package, as #:use-module ((guix licenses) #:prefix license:). The Guile module import mechanism gives the user full control over namespacing: this is needed to avoid clashes between, say, the zlib variable from licenses.scm (a license value) and the zlib variable from compression.scm (a package value).
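
As a small illustrative fragment (not a complete package), the prefixed import lets both names coexist in a single definition:

;; With the prefixed import, license:zlib is the license value from
;; (guix licenses), while the bare zlib still names the compression package.
(package
  ;; ...
  (inputs `(("zlib" ,zlib)))
  (license license:zlib))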

Other build systems

What we've seen so far covers the majority of packages using a build system other than the trivial-build-system. The latter does not automate anything and leaves you to build everything manually. This can be more demanding and we won't cover it here for now, but thankfully it is rarely necessary to fall back on this system.

For the other build systems, such as ASDF, Emacs, Perl, Ruby and many more, the process is very similar to the GNU build system except for a few specialized arguments.

Learn more about build systems in the "Build Systems" chapter of the manual.

Programmable and automated package definition

We can't repeat it enough: having a full-fledged programming language at hand empowers us in ways that reach far beyond traditional package management.

Let's illustrate this with some awesome features of Guix!

Recursive importers

You might find some build systems good enough that there is little to do at all to write a package, to the point that it becomes repetitive and tedious after a while. A raison d'être of computers is to replace human beings at those boring tasks. So let's tell Guix to do this for us and create the package definition of an R package from CRAN (the output is trimmed for conciseness):

$ guix import cran --recursive walrus

(define-public r-mc2d
    ; ...
    (license gpl2+)))

(define-public r-jmvcore
    ; ...
    (license gpl2+)))

(define-public r-wrs2
    ; ...
    (license gpl3)))

(define-public r-walrus
  (package
    (name "r-walrus")
    (version "1.0.3")
    (source
      (origin
        (method url-fetch)
        (uri (cran-uri "walrus" version))
        (sha256
          (base32
            "1nk2glcvy4hyksl5ipq2mz8jy4fss90hx6cq98m3w96kzjni6jjj"))))
    (build-system r-build-system)
    (propagated-inputs
      `(("r-ggplot2" ,r-ggplot2)
        ("r-jmvcore" ,r-jmvcore)
        ("r-r6" ,r-r6)
        ("r-wrs2" ,r-wrs2)))
    (home-page "https://github.com/jamovi/walrus")
    (synopsis "Robust Statistical Methods")
    (description
      "This package provides a toolbox of common robust statistical tests, including robust descriptives, robust t-tests, and robust ANOVA.  It is also available as a module for 'jamovi' (see <https://www.jamovi.org> for more information).  Walrus is based on the WRS2 package by Patrick Mair, which is in turn based on the scripts and work of Rand Wilcox.  These analyses are described in depth in the book 'Introduction to Robust Estimation & Hypothesis Testing'.")
    (license gpl3)))

The recursive importer won't import packages for which Guix already has package definitions, except for the very first one.

Not all applications can be packaged this way, only those relying on a select number of supported systems. Read about the full list of importers in the guix import section of the manual.

Automatic update

Guix can be smart enough to check for updates on systems it knows. It can report outdated package definitions with

$ guix refresh hello

In most cases, updating a package to a newer version requires little more than changing the version number and the checksum. Guix can do that automatically as well:

$ guix refresh hello --update

Inheritance

If you've started browsing the existing package definitions, you might have noticed that a significant number of them have an inherit field:

(define-public adwaita-icon-theme
  (package (inherit gnome-icon-theme)
    (name "adwaita-icon-theme")
    (version "3.26.1")
    (source (origin
              (method url-fetch)
              (uri (string-append "mirror://gnome/sources/" name "/"
                                  (version-major+minor version) "/"
                                  name "-" version ".tar.xz"))
              (sha256
               (base32
                "17fpahgh5dyckgz7rwqvzgnhx53cx9kr2xw0szprc6bnqy977fi8"))))
    (native-inputs
     `(("gtk-encode-symbolic-svg" ,gtk+ "bin")))))

All unspecified fields are inherited from the parent package. This is very convenient to create alternative packages, for instance with a different source, version, or compilation options.
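
For example, building on the my-hello package from earlier, a variant with different compilation options could be sketched as follows; the --disable-nls flag is just an illustration of overriding the arguments field:

;; A hypothetical variant that only overrides a few fields.
(define-public my-hello-without-nls
  (package
    (inherit my-hello)
    (name "my-hello-without-nls")
    (arguments '(#:configure-flags '("--disable-nls")))))

Both my-hello and the variant can then coexist in the same package collection.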

Getting help

Sadly, some applications can be tough to package. Sometimes they need a patch to work with the non-standard filesystem hierarchy enforced by the store. Sometimes the tests won't run properly. (They can be skipped but this is not recommended.) Other times the resulting package won't be reproducible.

Should you be stuck, unable to figure out how to fix any sort of packaging issue, don't hesitate to ask the community for help.

See the Guix homepage for information on the mailing lists, IRC, etc.

Conclusion

This tutorial was a showcase of the sophisticated package management that Guix boasts. At this point we have mostly restricted this introduction to the gnu-build-system, which is a core abstraction layer on which more advanced abstractions are based.

Now where do we go from here? Next we ought to dissect the innards of the build system by removing all abstractions, using the trivial-build-system: this should give us a thorough understanding of the process before investigating some more advanced packaging techniques and edge cases.

Other features worth exploring are the interactive editing and debugging capabilities of Guix provided by the Guile REPL.

Those fancy features are completely optional and can wait; now is a good time to take a well-deserved break. With what we've introduced here you should be well armed to package lots of programs. You can get started right away and hopefully we will see your contributions soon!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, ARMv7, and AArch64 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

10 October, 2018 02:00PM by Pierre Neidhardt

GNU Guile

GNU Guile 2.9.1 (beta) released

We are delighted to announce GNU Guile 2.9.1, the first beta release in preparation for the upcoming 3.0 stable series.

This release adds support for just-in-time (JIT) native code generation, speeding up all Guile programs. Currently support is limited to x86-64 platforms, but will expand to all architectures supported by GNU Lightning.

GNU Guile 2.9.1 is a beta release, and as such offers no API or ABI stability guarantees. Users needing a stable Guile are advised to stay on the stable 2.2 series.

See the release announcement for full details and a download link. Happy hacking, and please report any bugs you might find to bug-guile@gnu.org.

10 October, 2018 09:23AM by Andy Wingo (guile-devel@gnu.org)

October 09, 2018

Parabola GNU/Linux-libre

Important notice for OpenRC users on i686

The newest version of PAM has an unspecified dependency on the 'audit' package. Upgrading PAM today causes su and sudo to fail and makes logins impossible.

To avoid any trouble, you should explicitly install the 'audit' package before attempting to upgrade the system. If you upgrade without first installing the 'audit' package, then you will need to chroot into the system and install it.

09 October, 2018 11:30PM by bill auger

October 04, 2018

GNU Guix

Join GNU Guix through Outreachy

We are happy to announce that for the second time this year, GNU Guix offers a three-month internship through Outreachy, the inclusion program for groups traditionally underrepresented in free software and tech. We currently propose one subject to work on:

  1. Create user video documentation for GNU Guix.

Eligible persons should apply by November 22nd.

If you’d like to contribute to computing freedom, Scheme, functional programming, or operating system development, now is a good time to join us. Let’s get in touch on the mailing lists and on the #guix channel on the Freenode IRC network!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

04 October, 2018 08:00AM by Gábor Boskovits

October 03, 2018

FSF Events

John Sullivan - "How can free communication tools win?" (freenode #live, Bristol, UK)

FSF executive director John Sullivan will be speaking at freenode #live (2018-11-03–4).

The free software movement aims to have all software be free “as in freedom.” But communication tools are especially important, because they are fundamental to the movement’s infrastructure, and its self image. We are supposed to be the experts in distributed, online collaboration. Communication and collaboration tools are also where we have had some of our greatest disappointments and challenges in recent years – consider the popularity and subsequent network effects of tools like Skype, Slack, WhatsApp, and Facebook Messenger. Can free tools – IRC, XMPP, GNU Ring, WebRTC, and others – overcome or even just compete with the network effect of the proprietary platforms? If so, how? What’s the current state of affairs and what should we be focusing on?

Location: We The Curious (formerly At-Bristol Science Centre) , Bristol, UK

We hope you can attend the speech, or meet John at the conference, or visit us at the FSF booth.

Please fill out our contact form, so that we can contact you about future events in and around Bristol.

03 October, 2018 04:59PM

Molly de Blanc - "Insecure Connections: Love and mental health in our digital lives" (SeaGL, Seattle, WA)

FSF campaigns manager Molly de Blanc will be speaking at SeaGL (2018-11-09–10).

The lens through which we view--and know--what it means to love, to be ourselves, and to connect with others is now backed by microchips and millions of lines of code. As our lives continue to become increasingly managed by our devices, we need to ask ourselves what we're gaining--and what we're giving up--by allowing technology into the spaces that make our hearts ache and that keep us up at night.

This talk will weave together two narratives essential to many people: health and love. It will examine the ways in which both of these topics have become entwined with computing, what that means for us as individuals, and what that means for our individual and societal user freedoms.

Location: Theatre, Seattle Central College, Seattle, WA

We hope you can attend the speech, or meet Molly at the conference, or visit us at the FSF booth.

Please fill out our contact form, so that we can contact you about future events in and around Seattle.

03 October, 2018 04:50PM

FSF Blogs

Thank you for participating in International Day Against DRM 2018!

idad poster

Thank you everyone for helping to make September 18th another successful International Day Against DRM (IDAD)! Digital Restrictions Management (DRM) is an issue we have to face every day. In rallying together for a single day against DRM, we sent a powerful message: DRM is just wrong and we can live in a society without it.

Hundreds of you around the world took action on IDAD: going out into your campuses, communities, and around the Web, and sharing your opposition to how DRM restricts your freedom as a user of software and media. The 17 participating organizations took their own actions, creating videos, releasing reports, and writing articles. Here in Boston, we visited the Apple Store and talked with shoppers about their digital rights and how Apple devices abuse those rights using DRM.

Even though IDAD 2019 is a year away, you can continue taking action against DRM. You can join the DRM Elimination crew and sign up for the Defective by Design email list on DefectiveByDesign.org. You can follow us on social media or join the discussion on IRC on the channel #DefectiveByDesign.

We appreciate everything you did this year, and want to give one last big thank you to everyone who was involved.

Thank you!

Photo courtesy of rafial on Flickr. This image is licensed under a CC BY SA 2.0 license.

03 October, 2018 03:20PM

October 02, 2018

FSF Events

Rubén Rodríguez - "Freedom and privacy in the Web: Fighting licensing, tracking, fingerprinting and other issues, from both sides of the cable" (SeaGL, Seattle, WA)

FSF senior systems administrator Rubén Rodríguez will be speaking at SeaGL (2018-11-09–10). The talk will be somewhat technical; the public is encouraged to attend.

Websites and the technologies they are built on continue to evolve from the static text documents of old (with the occasional image, tiled background and blinking marquee) to very elaborate pieces of interactive software, with both local and remote code execution that brings all kinds of overlooked privacy and user freedom concerns. Fueled by practicality and monetization incentives, web developers regularly impose non-free JavaScript on their visitors, who also have to suffer being tracked and fingerprinted by all kinds of third parties.

In this talk we will explore the sometimes obscure issues at play, and we will discuss tools and practices for end users to protect their freedom and privacy when browsing the Web, and for site developers to be able to offer an interactive, feature-rich experience to their visitors in freedom-respecting ways.

Attendees will learn about the JavaScript Trap and its solutions, as well as fingerprinting and other forms of browser tracking, and how to work around those and other issues from both the visitor and the webmaster side.

Location: Room 5104, Seattle Central College, Seattle, WA

We hope you can attend the speech, or meet Rubén at the conference, or visit us at the FSF booth.

Please fill out our contact form, so that we can contact you about future events in and around Seattle.

02 October, 2018 09:44AM

September 29, 2018

GNU Guix

Upcoming Talk: "Everyday Use of GNU Guix"

At SeaGL 2018, Chris Marusich will present a talk introducing GNU Guix to people of all skill levels and backgrounds. SeaGL is an annual GNU/Linux conference in Seattle. Attendance is gratis.

If you're in the Seattle area, please consider coming! Even if you can't make it in person, the talk will be recorded and later made available on the SeaGL website, so you can watch it at your convenience after it's been uploaded.

Abstract

Everyday Use of GNU Guix

In this talk, I will introduce GNU Guix: a liberating, dependable, and hackable package manager that follows the "purely functional software deployment model" pioneered by Nix.

I will demonstrate some common use cases of Guix and show you how I use it in my everyday life. In addition, I will briefly explain the basic idea behind the functional model and how it enables Guix to provide useful features like the following:

  • Transactional upgrades and roll-back of installed software.
  • Unprivileged users can simultaneously install multiple versions of software.
  • Transparently build from source or download pre-built binaries.
  • Installed software is bootstrappable, trustable, and auditable all the way down to your compiler's compiler.
  • Eliminates an entire class of "works on my system" type problems.

No prior knowledge of Guix, Nix, or the functional model is required. When you leave this talk, I hope you will have a basic understanding of what Guix is, how to use it, and why it will help make your life brighter.

Schedule

The exact time and location are listed on the official SeaGL page for the talk.

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64 and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

29 September, 2018 03:00PM by Chris Marusich

Riccardo Mottola

first release of StepSync!

I'm proud to announce the first release of StepSync, a file sync tool for GNUstep and MacOS (even for venerable PowerPC).

StepSync allows synchronization of folders, optionally recursing into sub-folders. It thus supports various ways of performing backups: pure insertion, updates, and full synchronization by importing changes from target back to source.

After months of development and testing, I consider it stable enough; I have tested it with thousands of files and folders.

You can find it at the GNUstep Application Project. I already have plans for new features!

29 September, 2018 11:04AM by Riccardo (noreply@blogger.com)

September 25, 2018

dico @ Savannah

Version 2.7

Version 2.7 of GNU dico is available for download.

Important changes in this version:

1. Support for virtual databases
2. The dictorg module improved
3. Support for building with WordNet on Debian-based systems
4. Default m4 quoting characters changed to [ ]
5. Dicoweb: graceful handling of unsupported content types.

25 September, 2018 04:27PM by Sergey Poznyakoff

September 22, 2018

parallel @ Savannah

GNU Parallel 20180922 ('Danske') released [stable]

GNU Parallel 20180922 ('Danske') [stable] has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

No new functionality was introduced so this is a good candidate for a stable release.

Quote of the month:

I know I'm late to the party but GNU Parallel is truly amazing!
-- Sam Diaz-Munoz @sociovirology

New in this release:

  • Minix is supported again.
  • Bug fixes and man page updates.

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 September, 2018 09:15PM by Ole Tange

September 20, 2018

Parabola GNU/Linux-libre

Server loss

The rather long outage (2018-08-27 through 2018-09-18; 22 days) of the proton.parabola.nu server has been resolved, with all services being migrated to winston.parabola.nu. Please notify us if you encounter any lingering issues.

We'd like to specifically thank Jonathan "n1md4" Gower for graciously hosting us at Positive Internet all these years--since before most of us were even Parabola users, let alone contributors.

However, that sponsorship has come to an end. We are alright for now; the server that 1984 Hosting is sponsoring us with is capable of covering our immediate needs. We are looking for a replacement server and are favoring a proprietor that is a "friend of freedom," if anyone in the community has a suggestion.

20 September, 2018 03:45PM by Luke Shumaker

September 19, 2018

Boot problems with Linux-libre 4.18 on older CPUs

Due to a known bug in upstream Linux 4.18, users with older multi-core x86 CPUs (Core 2 Duo and earlier?) may not correctly boot up with linux-libre 4.18 when using the default clocksource.

If you are affected, and the CPU is new enough to support HPET, you may work around this by adding clocksource=hpet to the kernel command line. GRUB users can accomplish this by inserting GRUB_CMDLINE_LINUX_DEFAULT+=" clocksource=hpet" into /etc/default/grub and re-running grub-mkconfig.

If your CPU is too old to support HPET, you may work around this by using the linux-libre-lts (4.14) kernel instead of linux-libre (4.18).

As this is fixed in 4.18.9, you may also want to wait for a package update.

19 September, 2018 09:33PM by Luke Shumaker

September 17, 2018

FSF News

FSF takes international day of action for a Day Without DRM on September 18th

On Tuesday, September 18th, there will be two rallies in Boston – one from 12:00pm - 2:00pm at the Boston Public Library at 700 Boylston Street, and one from 6:00pm - 7:00pm in front of the Apple Store at 815 Boylston Street.

DRM is the practice of imposing technological restrictions that control what users can do with digital media. DRM creates a damaged good: it prevents you from doing what would be possible without it. This concentrates control over production and distribution of media, giving DRM peddlers the power to carry out massive digital book-burnings and conduct large-scale surveillance over people's media viewing habits.

Organized by the Defective by Design team, IDAD has occurred annually since 2006. Each year, participants take action through protests, rallies, and the sharing of DRM-free media and materials. Participating nonprofits, activist groups, and companies from around the world include the Electronic Frontier Foundation, Open Rights Group, Public Knowledge, The Document Foundation, and others (for a complete list, see: https://dayagainstdrm.org). These groups will share the message by writing about why DRM is harmful, organizing events, and offering discounts on DRM-free media.

"DRM is a major problem for computer user freedom, artistic expression, free speech, and media," said John Sullivan, executive director of the FSF. "International Day Against DRM has allowed us to, year after year, empower people to rise up together and in one voice declare that DRM is harmful to everyone."

This year's theme is A Day Without DRM – the FSF invites people around the world to avoid DRM for the day. DRM is lurking in many electronic devices we use, both online and offline, and you'll find it everywhere from media files to vehicles. Its impact is echoed in the fight for the Right to Repair and the fight for the right to investigate the software in medical devices. Examples of flagrant DRM abuses include:

  • In a classic example from 2009, Amazon remotely deleted thousands of copies of George Orwell's 1984 from Kindle ebook readers. Given this power, corporations like Amazon could fully disappear a book from existence if they chose, committing a massive digital book-burning. Amazon still has the power to do this, and has remotely deleted at least one user's library since then.

  • A US law called the Digital Millennium Copyright Act (DMCA) makes it illegal to remove DRM from media using widely-available online tools. These policies have a chilling effect among security researchers, those who wish to repair their devices, and anyone who wants to understand how their technologies work.

  • Media companies including Netflix pressured the World Wide Web Consortium to add DRM as a Web standard, normalizing DRM and giving it the opportunity to become even more prevalent.

DRM-supporting companies and device manufacturers claim it makes technology and media more secure, enhances user experience, and protects rights holders. In reality, the technologies behind DRM have been used as a vulnerability since 2005 to attack end-users' computer systems and devices. DRM limits what users can do with their media: access is limited by the whims of rights holders. Rather than protecting people who create media, it protects the interests of large companies that aggregate media.

For a thorough overview of DRM abuses, please visit the Defective by Design FAQ.

About Defective by Design

Defective by Design is an initiative of the Free Software Foundation. It is a participatory and grassroots campaign exposing DRM-encumbered devices and media for what they really are: Defective by Design. It works together with activists and others to eliminate DRM as a threat to innovation in media, reader privacy, and freedom for computer users.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

Molly de Blanc
Campaigns Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

17 September, 2018 02:55PM

September 14, 2018

gnuzilla @ Savannah

IceCat 60.2.0 Pre-release

GNUzilla is the GNU version of the Mozilla suite, and GNU IceCat is the GNU version of the Firefox browser. Its main advantage is an ethical one: it is entirely free software. While the Firefox source code from the Mozilla project is free software, they distribute and recommend non-free software as plug-ins and addons. Also their trademark license restricts distribution in ways that hinder freedom 0.

GNU IceCat has multiple practical advantages as well, such as better privacy and security settings, extensive blocking of sites that may track the user's browsing habits, or the inclusion of LibreJS and other extensions that help browse without running non-free javascript.

https://www.gnu.org/software/gnuzilla/

GPG key ID: D7E04784 (GNU IceCat releases)
Fingerprint: A573 69A8 BABC 2542 B5A0 368C 3C76 EED7 D7E0 4784
https://savannah.gnu.org/project/memberlist-gpgkeys.php?group=gnuzilla

======

This is a pre-release for version 60.2.0 of GNU IceCat, available at http://alpha.gnu.org/gnu/gnuzilla/60.2.0/

This release contains substantial design and usability changes from the previous major version (v52.x ESR) so I'm publishing it at alpha.gnu.org to request testing and comments before moving it to ftp.gnu.org. Source Code plus binaries for GNU/Linux x86 and x86_64 are available.

The main differences (other than those provided from upstream changes from v52.x to v60.x) are:

  • LibreJS 7.x, now based on the WebExtensions API. It currently provides a very similar set of features compared with the version shipped with IceCat 52.x, but testing, comments and advice are welcome.
  • A set of companion extensions for LibreJS by Nathan Nichols (https://addons.mozilla.org/en-US/firefox/user/NateN1222/) are pre-installed, and provide workarounds to use some services at USPS, RSF.org, SumOfUs.org, pay.gov, McDonald's, goteo.org and Google Docs without using nonfree JavaScript.
  • A series of configuration changes and tweaks were applied to ensure that IceCat does not initiate network connections that the user has not explicitly requested. This implies not downloading feeds, updates, blacklists or any other similar data needed during startup.
  • A new homepage shows the most important privacy and freedom options available, with explanations for the user to tune IceCat's behavior to their specific needs.
  • We no longer include SpyBlock, which was IceCat's fork of AdBlockPlus that allowed blocking all third-party requests during "Private Browsing" mode. Now, we include an extension that blocks all third-party requests by default, and provides a simple interface for whitelisting specific third-party resources on a per-site basis. This change is the most significant usability change from IceCat 52.x and I'd like to get testers to provide an opinion on it. One of the reasons for its inclusion is that, unlike other blockers, it doesn't need to download any files to do its job, thus avoiding the previously mentioned unrequested network connections.

Thanks to Giorgio Maone, Nathan Nichols, Nyk Nyby and Zach Wick for their contribution to LibreJS and IceCat, and happy testing!

14 September, 2018 01:55AM by Ruben Rodriguez

September 13, 2018

librejs @ Savannah

LibreJS 7.17 released

GNU LibreJS aims to address the JavaScript problem described in Richard Stallman's article The JavaScript Trap*. LibreJS is a free add-on for GNU IceCat and other Mozilla-based browsers. It blocks nonfree nontrivial JavaScript while allowing JavaScript that is free and/or trivial. * https://www.gnu.org/philosophy/javascript-trap.en.html

The source tarball for this release can be found at:
http://ftp.gnu.org/gnu/librejs/librejs-7.17.0.tar.gz
http://ftp.gnu.org/gnu/librejs/librejs-7.17.0.tar.gz.sig

The installable extension file (compatible with Mozilla-based browsers version >= v60) is available here:
http://ftp.gnu.org/gnu/librejs/librejs-7.17.0.xpi
http://ftp.gnu.org/gnu/librejs/librejs-7.17.0.xpi.sig

GPG key: 05EF 1D2F FE61 747D 1FC8 27C3 7FAC 7D26 472F 4409
https://savannah.gnu.org/project/memberlist-gpgkeys.php?group=librejs

This release introduces a new interface for management of the whitelist/blacklist, along with several bug fixes:

  • Temporarily hide the "complain to owner" feature until it is ready for prime time.
  • Adjust directory layout and packaging to allow Storage.js to be shared with the settings page in the xpi release.
  • Refactored panel visual styles to be reused by the settings page.
  • Support for batch async list operations.
  • Fix: navigating to the same URL with a # fragment erased the activity report information.

All contributions thanks to Giorgio Maone.

13 September, 2018 09:45PM by Ruben Rodriguez

September 12, 2018

German Arias

New release of FisicaLab for Windows

Due to some problems reported by Windows users, I decided to release a new Windows installer for FisicaLab with the alternative interface using IUP. This version is number 0.3.5.1 and you can download it here. I will add some new features before releasing version 0.4.0. If you have any problems with this new installer, please write to me.

12 September, 2018 05:26AM by Germán Arias

September 11, 2018

Luca Saiu

Thanks for fighting against the European copyright directive

As I am writing this, the European Parliament is debating the disastrously liberticide copyright Directive. After our previous mailing campaign (The European Parliament has rejected the copyright directive, for now) organized along with a group of GNU friends, we again contacted the Members of the European Parliament before the forthcoming vote. I wish to name all the people who helped by translating the text into several languages and improving it, working tirelessly and with very little time: Christopher Dimech, Yavor Doganov, Rafael Fontenelle, Alexandre Garreau, Bruno Haible, José Marchesi, Tom Uijldert. Thank you all, friends. — Luca Saiu, 2018-09-11 21:50 ... [Read more]

11 September, 2018 07:50PM by Luca Saiu (positron@gnu.org)

September 09, 2018

guile-cv @ Savannah

Guile-CV version 0.2.0

Guile-CV version 0.2.0 is released! (September 2018)

Changes since the previous version

This is a 'milestone' release, which introduces image texture measures. In addition (a) the default installation locations have changed; (b) there is a new configure option; (c) there are some new interfaces; (d) matrix multiplication performance has been greatly improved; (e) a few interface names have changed.

For a list of changes since the previous version, visit the NEWS file. For a complete description, consult the git summary and git log.

09 September, 2018 02:47AM by David Pirotte

September 06, 2018

indent @ Savannah

GNU indent 2.2.12

GNU indent version 2.2.12 (signature) has just been released, the first release of GNU indent in eight years.

Highlights include:

  • New options:
    • -pal / --pointer-align-left and -par / --pointer-align-right (see the sketch after this list)
    • -fnc / --fix-nested-comment
    • -gts / --gettext-strings
    • -slc / --single-line-conditionals
    • -as / --align-with-spaces
    • -ut / --use-tabs
    • -nut / --no-tabs
    • -sar / --spaces-around-initializers
    • -ntac / --dont-tab-align-comments
  • C99 and C11 keywords and typeof are now recognised.
  • -linux preset now includes -nbs.
  • -kr preset now includes -par.
  • Lots of bug fixes
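
To make the new pointer-alignment options concrete, here is a rough sketch of the kind of output one can expect, assuming the options behave as their names suggest (consistent with -kr now including -par, since K&R style attaches the '*' to the variable name). The identifiers are made up for illustration, and the exact result also depends on the other formatting options in effect:

  /* Formatted with -par / --pointer-align-right (K&R style, now part of -kr): */
  char *buffer;
  const char *find_space(const char *s);

  /* Formatted with -pal / --pointer-align-left: */
  char* line;
  const char* find_tab(const char* s);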

I’d like to thank all of the contributors of this release, most importantly:

  • Tim Hentenaar for all of the fixes and refactoring he’s done in his branch
  • Petr Písař, who maintains GNU indent in Red Hat and its derivatives, who’s submitted a lot of fixes and kept supporting users on the mailing list when I couldn’t
  • Santiago Vila, who maintains GNU indent in Debian
  • Daniel P. Valentine, who helped me a lot when I initially took over the maintenance of GNU indent
  • And lots of others who submitted their patches

06 September, 2018 11:33AM by Andrej Shadura

September 05, 2018

librejs @ Savannah

LibreJS 7.16 released

GNU LibreJS aims to address the JavaScript problem described in Richard Stallman's article The JavaScript Trap*. LibreJS is a free add-on for GNU IceCat and other Mozilla-based browsers. It blocks nonfree nontrivial JavaScript while allowing JavaScript that is free and/or trivial. * https://www.gnu.org/philosophy/javascript-trap.en.html

The source tarball for this release can be found at:
http://ftp.gnu.org/gnu/librejs/librejs-7.16.0.tar.gz
http://ftp.gnu.org/gnu/librejs/librejs-7.16.0.tar.gz.sig

The installable extension file (compatible with Mozilla-based browsers version >= v60) is available here:
http://ftp.gnu.org/gnu/librejs/librejs-7.16.0.xpi
http://ftp.gnu.org/gnu/librejs/librejs-7.16.0.xpi.sig

GPG key: 05EF 1D2F FE61 747D 1FC8 27C3 7FAC 7D26 472F 4409
https://savannah.gnu.org/project/memberlist-gpgkeys.php?group=librejs

The main improvement in version 7.16.0 is the implementation of WebLabels (https://www.fsf.org/blogs/licensing/rel-jslicense), which was the remaining missing feature since LibreJS got reimplemented using the WebExtensions format. On top of that, multiple bugfixes and performance improvements were added. All contributions thanks to Giorgio Maone.

Changes since version 7.15 (excerpt from the git changelog):

Fixes missing feedback for actions on the report UI when in a tab.
Fixes missing feedback on tab reload from UI panel.
Refactored HTML loading, parsing and serialization mechanisms.
Moved external licenses check into response pre-processing.
WebLabels-based license checking implementation.
Fix Back/forth navigation not changing tab status information.

05 September, 2018 09:40PM by Ruben Rodriguez

FSF News

Eleventh annual LibrePlanet conference set for March 23-24, 2019

The call for proposals is open now, until October 26, 2018. General registration and exhibitor and sponsor registration are also open.

LibrePlanet is an annual conference for free software users and anyone who cares about the intersection of technology and social justice. For a decade, LibrePlanet has brought together thousands of diverse voices and knowledge bases, including free software developers, policy experts, activists, hackers, students, and people who have just begun to learn about free software.

LibrePlanet 2019 will feature sessions for all ages and experience levels, including newcomers. Sharon Woods, general counsel for the Defense Digital Service (US Department of Defense) said, “Last year was my first LibrePlanet... I walked away a complete believer in free software.” In just the last three years, over a thousand people from around the world have attended LibrePlanet, with many more participating online by watching the free software-powered livestream, joining the conversation on IRC, or viewing nearly 40 hours of archived video on the FSF's GNU MediaGoblin instance.

LibrePlanet 2019's theme is "Trailblazing Free Software." In 1983, the free software movement was born with the announcement of the GNU Project. FSF founder Richard Stallman saw the dangers of proprietary code from the beginning: when code was kept secret from users, they would be controlled by the technology they used, instead of vice versa. In contrast, free software emphasized a community-oriented philosophy of sharing code freely, enabling people to understand how the programs they used worked, to build off of each other's code, to pay it forward by sharing their own code, and to create useful software that treated users fairly.

"Every year, ideas are introduced, discussed, and developed at LibrePlanet that advance the free software movement and help technology and associated law actually serve the people using them," said FSF executive director John Sullivan. "People will leave the next edition doubly motivated to chart a path away from dependency on unfree software companies like Facebook, Apple, Uber, and Microsoft, and with new knowledge about tools to help them do so."

When he identified control over one's own computer as a requirement for ethical, trustworthy computing, Stallman anticipated some of the most toxic aspects of today's proprietary software-filled world, including Digital Restrictions Management (DRM), bulk surveillance, and Service as a Software Substitute (SaaSS). With a new and growing generation of free software enthusiasts, we can take this conference as an opportunity to discuss both the present and the future of the free software movement. Using the Four Freedoms as a litmus test for ethical computing, we ask, "How will free software continue to bring to life trailblazing, principled new technologies and new approaches to the world?"

Call for Proposals

LibrePlanet 2019's talks and hands-on workshops can be for developers, young people, newcomers to free software, activists looking for technology that aligns with their ideals, policymakers, hackers, artists, and tinkerers. Potential talks should examine or utilize free software, copyleft, and related issues.

"Each year, newcomers and longtime free software activists of all ages surprise us with unique ideas they propose to explore at LibrePlanet," said Georgia Young, program manager at the FSF. "We are excited to see what trailblazing talk and workshop possibilities people bring to the conference for 2019."

Submissions to the call for proposals are being accepted through Friday, October 26, 2018 at 10:00 EDT (14:00 UTC).

About LibrePlanet

LibrePlanet is the annual conference of the Free Software Foundation. Over the last decade, LibrePlanet has blossomed from a small gathering of FSF members into a vibrant multi-day event that attracts a broad audience of people who are interested in the values of software freedom. To sign up for announcements about LibrePlanet 2019, visit https://www.libreplanet.org/2019.

Each year at LibrePlanet, the FSF presents its annual Free Software Awards. Nominations for the awards are open through Sunday, November 4th, 2018 at 23:59 UTC.

For information on how your company can sponsor LibrePlanet or have a table in our exhibit hall, email campaigns@fsf.org.

LibrePlanet 2018 was held at MIT from March 24-25, 2018. Nearly 350 attendees came together from across the world for workshops and talks centered around the theme of "Freedom Embedded." You can watch videos from last year's conference, including the opening keynote, an exploration of the potential for the free software community to last forever by maintaining its ideals while also welcoming newcomers, by Deb Nicholson, who is now director of community operations for the Software Freedom Conservancy.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

Georgia Young
Program Manager
Free Software Foundation
+1 (617) 542-5942
campaigns@fsf.org

05 September, 2018 02:55PM

August 28, 2018

bison @ Savannah

bison-3.1 released [stable]

We are very happy to announce the release of GNU Bison 3.1. It introduces
new features such as typed midrule actions, brings improvements in the
diagnostics, fixes several bugs and portability issues, improves the
examples, and more.

See the NEWS below for more details.

Enjoy!

28 August, 2018 04:28AM by Akim Demaille

August 27, 2018

dico @ Savannah

Version 2.6

New version of GNU dico is available for download. This version introduces support for Guile 2.2 and later, and for Python 3.5 and later. Support for Guile 1.8 has been withdrawn.

27 August, 2018 09:51AM by Sergey Poznyakoff

August 25, 2018

Christopher Allan Webber

Privilege isn't a sin, but it's a responsibility and a debt to be repaid

Recently I was on a private mailing list thread where there was debate about whether or not the project should take on steps to improve diversity. One of the mailing list participants was very upset about this idea, and said that they didn't like when people accused them of the "original sin" of having white male privilege.

I suspect this is at the root of a lot of misunderstanding and frustration around the term "privilege". Privilege is not a sin... you are not a sinner for having privilege. However it is a responsibility and a debt to be repaid and corrected for, stemming from injustices in society.

A popular social narrative is that everyone has an equal place at the starting line, so the winners and losers of a race are judged purely on their merit. Unfortunately this isn't true. Privilege is being able to show up at the starting line having had sleep and a good meal and the resources to train... you still worked to be able to get to the finish line, and having privilege does not take that away. But if we look at the other people on the track, we might see that some of them didn't get enough sleep, or couldn't allocate time to train (maybe they had to work multiple jobs on the side), or couldn't afford to eat as healthily. Some of them may even have to start farther back from the starting line; there are rocks and weeds and potholes in their paths. If we really want to treat everyone based on merit, we'd have to give everyone an equal place at the starting line, an equal track, etc. Unfortunately, due to the way the race is set up, that does mean needing to correct for some things, and it requires actual effort to repair the track.

My spouse Morgan Lemmer-Webber is an art historian and recently got into free software development (and software development in general). She has faced sexism, as all women do, her entire life, but it was immediately more visible and severe once she entered the technology space. For example, she wrote a web application for her work at the university. I helped train her, but I refused to write any code because I wanted her to learn, and she did. Eventually the project got larger and she became a manager and hired someone whom she was to train to take over development. He took a look at the code, emailed her and said "Wow, this file looks really good, I assume your husband must have written this code?"

What a thing to say! Can you imagine how that must have felt? If I heard something like that said to me I'd want to curl up in a ball and die. And she had more experiences like this too, experiences she never had until she entered the technology space. And if you talk to women in this field, you'll hear these stories are common, and more: dismissal, harassment, rape threats if you become too visible or dare to speak out... not to mention there's the issue that most of the people in tech don't look like you, so you wonder if you really actually belonged, and you wonder if everyone else believes that too. Likewise with people of color, likewise with people in the LGBTQ space... stones and disrepair on the path, and sometimes you have to start a bit farther back.

To succeed at the race with privilege, of course you have to work hard. You have to train, you have to dedicate yourself, you have to run hard. This isn't meant to take away your accomplishments, or to say you didn't work hard. You did! It's to say that others have all these obstacles that we need to help clear from their path. No wonder so many give up the race. And there comes the responsibility and debt to be repaid: if you have privilege, put it to good work: pitch in. If you want that dream of everyone to have an equal place at the starting line to be true, help fix the track. But there's a lot of damage there... it's going to take a long time.

25 August, 2018 12:50PM by Christopher Lemmer Webber

August 22, 2018

parallel @ Savannah

GNU Parallel 20180822 ('Genova') released

GNU Parallel 20180822 ('Genova') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

GNU parallel is a thing of magic.

-- Josh Meyer @joshmeyerphd@twitter

New in this release:

  • parset sets exit code.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, April 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 August, 2018 10:55PM by Ole Tange

August 21, 2018

Parabola GNU/Linux-libre

unar being replaced by unarchiver

The unar package has been dropped in favor of Arch's unarchiver. This was discussed on the mailing list some months ago.

If you are using unar, just install unarchiver. You'll be asked whether you want to replace it; just accept and continue as with any normal package installation.

21 August, 2018 02:43AM by David P.

August 17, 2018

Riccardo Mottola

Graphos 0.7 released

Graphos 0.7 was released a couple of days ago!

What's new for GNUstep's vector editor?
  • Improved Bezier path editor (add/remove points)
  • Knife (Bezier path splitting) tool fixed and re-enabled (broken since the original GDraw import!)
  • Important crash fixes (Undo/Redo related)
  • Interface improvements to be more usable with a tablet/pen digitizer
Graphos continues to work on GNUstep for Linux/BSD as well as natively on MacOS.

Graphos running on MacOS:

17 August, 2018 08:58PM by Riccardo (noreply@blogger.com)

August 15, 2018

Parabola GNU/Linux-libre

netctl 1.18-1 and systemd 239.0-2.parabola7 may require manual intervention

The new versions of netctl and systemd-resolvconf may not be installed together. Users who have both netctl and systemd-resolvconf installed will need to manually switch from systemd-resolvconf to openresolv before upgrading.

If you get an error

:: unable to satisfy dependency 'resolvconf' required by netctl

use

pacman -S openresolv

prior to upgrading.

15 August, 2018 05:05PM by Luke Shumaker

August 14, 2018

GNUnet News

GSoC 2018 - GNUnet Web-based User Interface

What was done?
In the context of Google Summer of Code 2018, my mentor (Martin Schanzenbach) and I have worked on creating and extending the REST API of GNUnet. So far, we have mirrored the functionality of the following commands:

gnunet-identity
gnunet-namestore
gnunet-gns
gnunet-peerinfo

Additionally, we developed a website with the JavaScript framework Angular 6 and the design framework iotaCSS to use the new REST API. The REST API of GNUnet is now documented with Sphinx.

14 August, 2018 07:55AM by Phil Buschmann

August 13, 2018

GNU Guix

GSoC 2018 report: Cuirass Web interface

For the last three months I have been working with the Guix team as a Google Summer of Code intern. The title of my project is "GNU Guix (Cuirass): Adding a web interface similar to the Hydra web interface".

Cuirass is a continuous integration system which monitors the Guix git repository, schedules builds of Guix packages, and presents the build status of all Guix packages. Before my project, Cuirass did not have a web interface. The goal of the project was to implement an interface for Cuirass which would allow a user to view the overall build progress, details about evaluations, build failures, etc. The web interface of Hydra is a good example of such a tool.

In this post, I present a final report on the project. The Cuirass repository with the changes made during the project is located at http://git.savannah.gnu.org/cgit/guix/guix-cuirass.git. A working instance of the implemented interface is available at https://berlin.guixsd.org/. You can find more examples and demonstrations of the achieved results below.

About Cuirass

Cuirass is designed to monitor a git repository containing Guix package definitions and build binaries from these package definitions. The state of planned builds is stored in a SQLite database. The key concepts of the Cuirass internal state are:

  • Job specification. Specifications state what actually has to be done by Cuirass. A specification is defined by a Scheme data structure (an association list) which includes a job name, a repository URL, the branch to track, and a procedure proc that specifies how the project is to be built.

  • Evaluation. An evaluation is a high-level build action related to a certain revision of a repository of a given specification. For each specification, Cuirass continuously produces new evaluations which build different versions of the project represented by revisions of the corresponding repository. Derivations and builds (see below) each belong to a specific evaluation.

  • Derivation. Derivations represent low-level build actions. They store such information as name of a build script and its arguments, input and output of a build action, target system type, and necessary environment variables.

  • Build. A build is a result of build actions that are prescribed by a derivation. This could be a failed build or a directory containing the files that were generated by compiling a package.

Besides the core which executes build actions and records their results in the database, Cuirass includes a web server which previously only responded to a handful of API requests with JSON containing information about the current status of builds.

Web interface

The Cuirass web interface implemented during the project is served by the Cuirass web server whose functionality has been extended to generating HTML responses and serving static files. General features of the interface are listed below.

  • The backend is written in Guile and implements request processing procedures which parse request parameters and extract specific data to be displayed from the database.

  • The frontend consists of HTML templates represented with Guile SXML and the Bootstrap 4 CSS library.

  • The appearance is minimalistic. Every page includes only specific content information and basic navigation tools.

  • The interface is lightweight and widely accessible. It does not use JavaScript which makes it available to users who do not want to have JavaScript running in the browser.

Structure

Let's review the structure of the interface and take a look at the information you can find in it. Note that the web interface screenshots presented below were obtained with synthetic data loaded into the Cuirass database.

Main page

The main page is accessible on the root request endpoint (/). The main page displays a list of all the specifications stored in the Cuirass database. Each entry of the list is a clickable link which leads to a page about the evaluations of the corresponding specification (see below).

Here is an example view of the main page.

Main page screenshot

Evaluations list

The evaluations list of a given specification with name <name> is located at /jobset/<name>/. On this page, you can see a list of evaluations of the given project starting from the most recent ones. You can navigate to older evaluations using the pagination buttons at the bottom of the page. In the table, you can find the following information:

  • The ID of the evaluation which is clickable and leads to a page with information about all builds of the evaluation (see below).

  • List of commits corresponding to the evaluation.

  • Build summary of the evaluation: number of succeeded (green), failed (red), and scheduled (grey) builds of this evaluation. You can open the list of builds with a certain status by clicking on one of these three links.

Here is a possible view of the evaluations list page:

Screenshot of evaluations list

Builds list

The builds list of an evaluation with ID <id> is located at /eval/<id>/. On this page, you can see a list of builds of the given evaluation ordered by their stop time, starting from the most recent one. Similarly to the evaluations list, there are pagination buttons located at the bottom of the page. For each build in the list, there is information about the build status (succeeded, failed, or scheduled), stop time, nixname (name of the derivation), system, and also a link to the corresponding build log. As said above, it is possible to filter builds with a certain status by clicking on the status link in the evaluations list.

Screenshot of builds list

Summary

Cuirass now has a web interface which makes it possible for users to get an overview of the status of Guix package builds in a user-friendly way. As the result of my GSoC internship, the core of the web interface was developed. There are now several possibilities for future improvements, and I would like to welcome everyone to contribute.

It was a pleasure for me to work with the Guix team. I would like to thank you all for this great experience! Special thanks to my GSoC mentors: Ricardo Wurmus, Ludovic Courtès, and Gábor Boskovits, and also to Clément Lassieur and Danny Milosavljevic for their guidance and help throughout the project.

13 August, 2018 03:00PM by Tatiana Sholokhova

libredwg @ Savannah

libredwg-0.6 released [alpha]

See https://www.gnu.org/software/libredwg/

API breaking changes:
* Removed dwg_obj_proxy_get_reactors(), use dwg_obj_get_reactors() instead.
* Renamed SORTENTSTABLE.owner_handle to SORTENTSTABLE.owner_dict.
* Renamed all -as-rNNNN program options to --as-rNNNN.

Other newsworthy changes:
* Removed all unused type-specific reactors and xdicobjhandle fields,
use the generic object and entity fields instead.
* Added signed BITCODE_RLd and BITCODE_BLd (int32_t) types.
* Added unknown_bits field to all UNSTABLE/DEBUGGING classes.
* Custom CFLAGS are now honored.
* Support for GNU parallel and coreutils timeout logfile and picat processing.

Important bugfixes:
* Fixed previously empty strings for r2007+ for some objects and entities (#34).
* Fixed r2010+ picture size calculation (DXF 160, 310), leading to wrong entity offsets.
* Added more checks for unstable objects: empty handles, controls, overflows, isnan.
* Fixed some common_entity_data, mostly with non-indexed colors and gradient filled HATCH
(#27, #28, #31)
* Fixed some proper relative handles, which were previously treated as NULL handle.
* Fixed writing TV strings, now the length includes the final \0 char.
* Fixed the initial minimal hash size, fixing an endless loop on very small
(truncated) DWG's (<1000 bytes).
* Far fewer memory leaks.
* Improved free, i.e. no more double free with EED data. (#33)
* Better perl bindings build support on Windows, prefer local dwg.h over
installed dwg.h on testing (#29).
* Fixed dejagnu compilation on C11 by using -fgnu89-inline (#2)

New features:
* Added unstable support for the objects ASSOCDEPENDENCY, ASSOCPLANESURFACEACTIONBODY,
DBCOLOR, DIMASSOC, DYNAMICBLOCKPURGEPREVENTER, HELIX, LIGHT, PERSSUBENTMANAGER,
UNDERLAYDEFINITION and the entities MULTILEADER, UNDERLAY.
* Added getopt_long() support to all programs, position independent options.
* Implemented examples/unknown to find field layouts of unknown objects.
With bd and bits helpers to decode unknowns.
Now with a http://picat-lang.org helper. See also HACKING and savannah News.
* Implemented parsing ACIS version 2 to the binary SAB format.
* Added all missing dwg_object_to_OBJECT() functions for objects.
* Added dwg_ent_minsert_set_num_cols(), dwg_ent_minsert_set_num_rows()
* Added --disable-dxf and --enable-debug configure options. With debug there are many
more unstable objects available.
* Added libredwg.pc (#30)
* Added valgrind suppressions for known darwin/glibc leaks.
* Changed and clarified the semver version numbering on development checkouts with
major.minor[.patch[.build.nonmastercommits-gittag]]. See HACKING.

Here are the compressed sources:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.6.tar.gz (9.4MB)
http://ftp.gnu.org/gnu/libredwg/libredwg-0.6.tar.xz (3.5MB)

https://github.com/LibreDWG/libredwg/releases/tag/0.6 (also Windows binaries)

Here are the GPG detached signatures[*]:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.6.tar.gz.sig
http://ftp.gnu.org/gnu/libredwg/libredwg-0.6.tar.xz.sig

Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html

Here are the SHA256 checksums:

995da379a27492646867fb490ee406f18049f145d741273e28bf1f38cabc4d5c libredwg-0.6.tar.gz
6d525ca849496852f62ad6a11b7b801d0aafd1fa1366c45bdb0f11a90bd6f878 libredwg-0.6.tar.xz
21d9619c6858ea25f95a9b6d8d6946e387309023ec17810f4433d8f61e8836af libredwg-0.6-win32.zip
d029d35715b8d86a68f8dacc7fdb5a5ac6405bc0a1b3457e75fc49c6c4cf6e06 libredwg-0.6-win64.zip

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify libredwg-0.6.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

13 August, 2018 09:48AM by Reini Urban

August 11, 2018

GNUnet News

irc bot status

As of 2018-08-09 in the early morning, we are having problems with our current IRC bot.
update 2018-08-13: I have started working on a replacement - the drupal bot is not coming back.
We have plans to migrate to a new bot as soon as possible, but hope to restore functionality to the existing one soon enough.

This post will be updated as soon as the bot is back online. The logs themselves are not affected.

We apologize for any inconvenience.

11 August, 2018 05:30PM by ng0

unifont @ Savannah

Unifont 11.0.02 Released

10 August 2018

Unifont 11.0.02 is now available. This is an interim release, with another release planned for the autumn of 2018. The main addition in this release is David Corbett's contribution of over 600 glyphs in the Sutton SignWriting Unicode block.

Download this release at:

https://ftpmirror.gnu.org/unifont/unifont-11.0.02/

or if that fails,

ftp://ftp.gnu.org/gnu/unifont/unifont-11.0.02/

Enjoy!

Paul Hardy

11 August, 2018 01:53PM by Paul Hardy

August 07, 2018

librejs @ Savannah

LibreJS 7.15 released

GNU LibreJS aims to address the JavaScript problem described in Richard Stallman's article The JavaScript Trap*. LibreJS is a free add-on for GNU IceCat and other Mozilla-based browsers. It blocks nonfree nontrivial JavaScript while allowing JavaScript that is free and/or trivial. * https://www.gnu.org/philosophy/javascript-trap.en.html

The source tarball for this release can be found at:
http://ftp.gnu.org/gnu/librejs/librejs-7.15.0.tar.gz
http://ftp.gnu.org/gnu/librejs/librejs-7.15.0.tar.gz.sig

The installable extension file (compatible with Mozilla-based browsers version >= v60) is available here:
http://ftp.gnu.org/gnu/librejs/librejs-7.15.0.xpi
http://ftp.gnu.org/gnu/librejs/librejs-7.15.0.xpi.sig

GPG key: 05EF 1D2F FE61 747D 1FC8 27C3 7FAC 7D26 472F 4409
https://savannah.gnu.org/project/memberlist-gpgkeys.php?group=librejs

Version 7.15 includes a partial rework of the mechanism for script loading and parsing, improving performance, reliability and code maintainability. The release also adds the implementation of per-script white/blacklisting, and many smaller bugfixes. All contributions thanks to the work of Giorgio Maone.

Changes since version 7.14 (excerpt from the git changelog):

Fixed whitelisting of scripts with query strings in URL.
Fixed report attempts when no tabId is available.
UI rewrite for better responsiveness and simplicity.
Broader detection of UTF-8 encoding in responses.
Fixed badge shouldn't be shown on privileged pages.
Fixed sub-frames resetting badge to green.
Uniform conventions for module importing paths.
Temporarily display back hidden old UI elements.
Refactoring list management in its own class.
Bug fixing and simplifying UI synchronization.
Whitelisted/blacklisted reporting and modification support.
Stateful response processing support.
Implement early whitelisting / blacklisting logic.
Display actual extension version number in UI.
White/Black lists back-end refactoring.
Refactor and fix HTTP response filtering.

07 August, 2018 07:16PM by Ruben Rodriguez

August 01, 2018

libc @ Savannah

The GNU C Library version 2.28 is now available

The GNU C Library
=================

The GNU C Library version 2.28 is now available.

The GNU C Library is used as the C library in the GNU system and
in GNU/Linux systems, as well as many other systems that use Linux
as the kernel.

The GNU C Library is primarily designed to be a portable
and high performance C library. It follows all relevant
standards including ISO C11 and POSIX.1-2008. It is also
internationalized and has one of the most complete
internationalization interfaces known.

The GNU C Library webpage is at http://www.gnu.org/software/libc/

Packages for the 2.28 release may be downloaded from:
http://ftpmirror.gnu.org/libc/
http://ftp.gnu.org/gnu/libc/

The mirror list is at http://www.gnu.org/order/ftp.html

NEWS for version 2.28
=====================

Major new features:

  • The localization data for ISO 14651 is updated to match the 2016 Edition 4 release of the standard; this matches data provided by Unicode 9.0.0. This update introduces significant improvements to the collation of Unicode characters. This release deviates slightly from the standard in that the collation element ordering for lowercase and uppercase LATIN script characters is adjusted to ensure that regular expressions with ranges like [a-z] and [A-Z] don't interleave, e.g. A is not matched by [a-z]. With the update many locales have been updated to take advantage of the new collation information. The new collation information has increased the size of the compiled locale archive or binary locales.

  • The GNU C Library can now be compiled with support for Intel CET, AKA Intel Control-flow Enforcement Technology. When the library is built with --enable-cet, the resulting glibc is protected with indirect branch tracking (IBT) and shadow stack (SHSTK). CET-enabled glibc is compatible with all existing executables and shared libraries. This feature is currently supported on i386, x86_64 and x32 with GCC 8 and binutils 2.29 or later. Note that CET-enabled glibc requires CPUs capable of multi-byte NOPs, like x86-64 processors as well as Intel Pentium Pro or newer. NOTE: --enable-cet has been tested for i686, x86_64 and x32 on non-CET processors. --enable-cet has been tested for x86_64 and x32 on CET SDVs, but Intel CET support hasn't been validated for i686.

  • The GNU C Library now has correct support for ABSOLUTE symbols (SHN_ABS-relative symbols). Previously such ABSOLUTE symbols were relocated incorrectly or in some cases discarded. The GNU linker can make use of the newer semantics, but it must communicate it to the dynamic loader by setting the ELF file's identification (EI_ABIVERSION field) to indicate such support is required.

  • Unicode 11.0.0 Support: Character encoding, character type info, and transliteration tables are all updated to Unicode 11.0.0, using generator scripts contributed by Mike FABIAN (Red Hat).

  • <math.h> functions that round their results to a narrower type are added from TS 18661-1:2014 and TS 18661-3:2015:

- fadd, faddl, daddl and corresponding fMaddfN, fMaddfNx, fMxaddfN and fMxaddfNx functions.

- fsub, fsubl, dsubl and corresponding fMsubfN, fMsubfNx, fMxsubfN and fMxsubfNx functions.

- fmul, fmull, dmull and corresponding fMmulfN, fMmulfNx, fMxmulfN and fMxmulfNx functions.

- fdiv, fdivl, ddivl and corresponding fMdivfN, fMdivfNx, fMxdivfN and fMxdivfNx functions.
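
As a rough illustration (a sketch, not taken from the release notes): fadd takes two double arguments and rounds their exact sum directly to float, avoiding the double rounding that can occur when a double addition is followed by a separate conversion to float. Depending on the compilation mode, the TS 18661 feature-test macros or _GNU_SOURCE may need to be defined before including <math.h>:

  #define _GNU_SOURCE 1
  #include <math.h>
  #include <stdio.h>

  int main (void)
  {
    double x = 1.0;
    double y = 0x1.00000004p-24;        /* 2^-24 + 2^-54 */

    float once  = fadd (x, y);          /* exact sum rounded once, to float */
    float twice = (float) (x + y);      /* rounded to double, then to float */

    /* On a typical x86-64 build the results differ: the cast double-rounds
       to 1.0f, while fadd correctly rounds up to 1 + 2^-23.  */
    printf ("fadd: %a  cast: %a\n", (double) once, (double) twice);
    return 0;
  }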

  • Two grammatical forms of month names are now supported for the following languages: Armenian, Asturian, Catalan, Czech, Kashubian, Occitan, Ossetian, Scottish Gaelic, Upper Sorbian, and Walloon. The following languages now support two grammatical forms in abbreviated month names: Catalan, Greek, and Kashubian.

  • Newly added locales: Lower Sorbian (dsb_DE) and Yakut (sah_RU) also include the support for two grammatical forms of month names.

  • Building and running on GNU/Hurd systems now works without out-of-tree patches.

  • The renameat2 function has been added, a variant of the renameat function which has a flags argument. If the flags are zero, the renameat2 function acts like renameat. If the flag is not zero and there is no kernel support for renameat2, the function will fail with an errno value of EINVAL. This is different from the existing gnulib function renameatu, which performs a plain rename operation in case of a RENAME_NOREPLACE flags and a non-existing destination (and therefore has a race condition that can clobber the destination inadvertently).
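
A minimal sketch of how the new wrapper might be used, with the error handling following the description above (file names are placeholders, and it assumes _GNU_SOURCE exposes renameat2 and RENAME_NOREPLACE through <stdio.h>):

  #define _GNU_SOURCE 1
  #include <stdio.h>
  #include <errno.h>
  #include <fcntl.h>            /* AT_FDCWD */

  int main (void)
  {
    /* Rename without silently clobbering an existing destination.  */
    if (renameat2 (AT_FDCWD, "config.new", AT_FDCWD, "config",
                   RENAME_NOREPLACE) != 0)
      {
        if (errno == EEXIST)
          fputs ("config already exists; not replaced\n", stderr);
        else if (errno == EINVAL)
          fputs ("kernel has no renameat2 support for these flags\n", stderr);
        else
          perror ("renameat2");
        return 1;
      }
    return 0;
  }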

  • The statx function has been added, a variant of the fstatat64 function with an additional flags argument. If there is no direct kernel support for statx, glibc provides basic stat support based on the fstatat64 function.
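
A small sketch of the new call (assuming _GNU_SOURCE, with the STATX_* mask constants coming from <sys/stat.h>; the path is just an example):

  #define _GNU_SOURCE 1
  #include <sys/stat.h>
  #include <fcntl.h>            /* AT_FDCWD */
  #include <stdio.h>

  int main (void)
  {
    struct statx stx;

    /* Request only the fields we need; if the kernel lacks the statx
       system call, glibc falls back to fstatat64-based emulation.  */
    if (statx (AT_FDCWD, "/etc/passwd", 0,
               STATX_SIZE | STATX_MODE, &stx) != 0)
      {
        perror ("statx");
        return 1;
      }
    printf ("size: %llu bytes, mode: %o\n",
            (unsigned long long) stx.stx_size, (unsigned) stx.stx_mode);
    return 0;
  }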

  • IDN domain names in getaddrinfo and getnameinfo now use the system libidn2 library if installed. libidn2 version 2.0.5 or later is recommended. If libidn2 is not available, internationalized domain names are not encoded or decoded even if the AI_IDN or NI_IDN flags are passed to getaddrinfo or getnameinfo. (getaddrinfo calls with non-ASCII names and AI_IDN will fail with an encoding error.) Flags which used to change the IDN encoding and decoding behavior (AI_IDN_ALLOW_UNASSIGNED, AI_IDN_USE_STD3_ASCII_RULES, NI_IDN_ALLOW_UNASSIGNED, NI_IDN_USE_STD3_ASCII_RULES) have been deprecated. They no longer have any effect.
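
For illustration, a minimal lookup using AI_IDN (a sketch; the host name is a placeholder, the conversion to the "xn--" ACE form only happens when libidn2 is actually installed, and a UTF-8 locale is assumed):

  #define _GNU_SOURCE 1
  #include <netdb.h>
  #include <sys/socket.h>
  #include <locale.h>
  #include <stdio.h>
  #include <string.h>

  int main (void)
  {
    struct addrinfo hints, *res;

    setlocale (LC_ALL, "");     /* IDN encoding uses the locale's charset */
    memset (&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;
    hints.ai_flags  = AI_IDN;   /* encode non-ASCII names via libidn2 */

    int err = getaddrinfo ("bücher.example", "https", &hints, &res);
    if (err != 0)
      {
        fprintf (stderr, "getaddrinfo: %s\n", gai_strerror (err));
        return 1;
      }
    freeaddrinfo (res);
    return 0;
  }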

  • Parsing of dynamic string tokens in DT_RPATH, DT_RUNPATH, DT_NEEDED, DT_AUXILIARY, and DT_FILTER has been expanded to support the full range of ELF gABI expressions including such constructs as '$ORIGIN$ORIGIN' (if valid). For SUID/GUID applications the rules have been further restricted, and where in the past a dynamic string token sequence may have been interpreted as a literal string it will now cause a load failure. These load failures were always considered unspecified behaviour from the perspective of the dynamic loader, and for safety are now load errors e.g. /foo/${ORIGIN}.so in DT_NEEDED results in a load failure now.

  • Support for ISO C threads (ISO/IEC 9899:2011) has been added. The implementation includes all the standard functions provided by <threads.h>:

- thrd_current, thrd_equal, thrd_sleep, thrd_yield, thrd_create, thrd_detach, thrd_exit, and thrd_join for thread management.

- mtx_init, mtx_lock, mtx_timedlock, mtx_trylock, mtx_unlock, and mtx_destroy for mutual exclusion.

- call_once for function call synchronization.

- cnd_broadcast, cnd_destroy, cnd_init, cnd_signal, cnd_timedwait, and cnd_wait for conditional variables.

- tss_create, tss_delete, tss_get, and tss_set for thread-local storage.

Application developers must link against libpthread to use ISO C threads.
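
A minimal sketch using the new <threads.h> interfaces (per the note above, link with -lpthread):

  #include <threads.h>
  #include <stdio.h>

  static mtx_t lock;
  static int counter;

  static int
  worker (void *arg)
  {
    (void) arg;
    mtx_lock (&lock);
    ++counter;
    mtx_unlock (&lock);
    return 0;
  }

  int
  main (void)
  {
    thrd_t t;
    if (mtx_init (&lock, mtx_plain) != thrd_success
        || thrd_create (&t, worker, NULL) != thrd_success)
      return 1;
    thrd_join (t, NULL);
    mtx_destroy (&lock);
    printf ("counter = %d\n", counter);   /* prints: counter = 1 */
    return 0;
  }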

Deprecated and removed features, and other changes affecting compatibility:

  • The nonstandard header files <libio.h> and <_G_config.h> are no longer installed. Software that was using either header should be updated to use standard <stdio.h> interfaces instead.

  • The stdio functions 'getc' and 'putc' are no longer defined as macros. This was never required by the C standard, and the macros just expanded to call alternative names for the same functions. If you hoped getc and putc would provide performance improvements over fgetc and fputc, instead investigate using (f)getc_unlocked and (f)putc_unlocked, and, if necessary, flockfile and funlockfile.
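
For tight character loops, the pattern suggested above looks roughly like this (a sketch, not taken from the release notes):

  #include <stdio.h>

  /* Count newlines: lock the stream once, then use getc_unlocked in the
     inner loop rather than relying on getc expanding to a macro.  */
  static long
  count_lines (FILE *fp)
  {
    long lines = 0;
    int c;

    flockfile (fp);
    while ((c = getc_unlocked (fp)) != EOF)
      if (c == '\n')
        ++lines;
    funlockfile (fp);
    return lines;
  }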

  • All stdio functions now treat end-of-file as a sticky condition. If you read from a file until EOF, and then the file is enlarged by another process, you must call clearerr or another function with the same effect (e.g. fseek, rewind) before you can read the additional data. This corrects a longstanding C99 conformance bug. It is most likely to affect programs that use stdio to read interactive input from a terminal. (Bug #1190.)
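
A sketch of the kind of code this affects, e.g. a 'tail -f'-style reader that must now clear the sticky EOF indicator before retrying (the sleep between retries is omitted for brevity):

  #include <stdio.h>

  static void
  follow (FILE *fp)
  {
    char line[256];

    for (;;)
      {
        if (fgets (line, sizeof line, fp) != NULL)
          fputs (line, stdout);
        else if (feof (fp))
          clearerr (fp);        /* needed as of glibc 2.28 before re-reading */
        else
          break;                /* a real read error */
      }
  }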

  • The macros 'major', 'minor', and 'makedev' are now only available from the header <sys/sysmacros.h>; not from <sys/types.h> or various other headers that happen to include <sys/types.h>. These macros are rarely used, not part of POSIX nor XSI, and their names frequently collide with user code; see https://sourceware.org/bugzilla/show_bug.cgi?id=19239 for further explanation.

<sys/sysmacros.h> is a GNU extension. Portable programs that require these macros should first include <sys/types.h>, and then include <sys/sysmacros.h> if __GNU_LIBRARY__ is defined.
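
In practice the change usually just means adding one include; a minimal sketch:

  #include <sys/types.h>
  #include <sys/sysmacros.h>    /* now the only header providing major/minor/makedev */
  #include <sys/stat.h>
  #include <stdio.h>

  int main (void)
  {
    struct stat st;
    if (stat ("/dev/null", &st) != 0)
      return 1;
    printf ("/dev/null is device %u:%u\n",
            major (st.st_rdev), minor (st.st_rdev));
    return 0;
  }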

  • The tilegx*-*-linux-gnu configurations are no longer supported.

  • The obsolete function ustat is no longer available to newly linked binaries; the headers <ustat.h> and <sys/ustat.h> have been removed. This function has been deprecated in favor of fstatfs and statfs.

  • The obsolete function nfsservctl is no longer available to newly linked binaries. This function was specific to systems using the Linux kernel and could not usefully be used with the GNU C Library on systems with version 3.1 or later of the Linux kernel.

  • The obsolete function name llseek is no longer available to newly linked binaries. This function was specific to systems using the Linux kernel and was not declared in a header. Programs should use the lseek64 name for this function instead.

  • The AI_IDN_ALLOW_UNASSIGNED and NI_IDN_ALLOW_UNASSIGNED flags for the getaddrinfo and getnameinfo functions have been deprecated. The behavior previously selected by them is now always enabled.

  • The AI_IDN_USE_STD3_ASCII_RULES and NI_IDN_USE_STD3_ASCII_RULES flags for the getaddrinfo and getnameinfo functions have been deprecated. The STD3 restriction (rejecting '_' in host names, among other things) has been removed, for increased compatibility with non-IDN name resolution.

  • The fcntl function now has a Large File Support (LFS) variant named fcntl64. It was added to fix some Linux Open File Description (OFD) locks usage in non-LFS mode. As with other *64 functions, the fcntl64 semantics are analogous to fcntl, and LFS support is handled transparently. Also for Linux, the OFD locks act as a cancellation entrypoint.
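
For reference, an OFD lock request of the kind this fix concerns (a sketch; F_OFD_SETLK needs _GNU_SOURCE, the file name is a placeholder, and on 32-bit systems the fix matters when building with _FILE_OFFSET_BITS=64):

  #define _GNU_SOURCE 1
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>

  int main (void)
  {
    int fd = open ("spool.lock", O_RDWR | O_CREAT, 0644);
    if (fd < 0)
      {
        perror ("open");
        return 1;
      }

    struct flock fl;
    memset (&fl, 0, sizeof fl);   /* l_pid must be 0 for OFD locks */
    fl.l_type = F_WRLCK;
    fl.l_whence = SEEK_SET;       /* l_start = l_len = 0: lock the whole file */

    if (fcntl (fd, F_OFD_SETLK, &fl) != 0)
      {
        perror ("fcntl (F_OFD_SETLK)");
        return 1;
      }
    return 0;
  }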

  • The obsolete functions encrypt, encrypt_r, setkey, setkey_r, cbc_crypt, ecb_crypt, and des_setparity are no longer available to newly linked binaries, and the headers <rpc/des_crypt.h> and <rpc/rpc_des.h> are no longer installed. These functions encrypted and decrypted data with the DES block cipher, which is no longer considered secure. Software that still uses these functions should switch to a modern cryptography library, such as libgcrypt.

  • Reflecting the removal of the encrypt and setkey functions above, the macro _XOPEN_CRYPT is no longer defined. As a consequence, the crypt function is no longer declared unless _DEFAULT_SOURCE or _GNU_SOURCE is enabled.

  • The obsolete function fcrypt is no longer available to newly linked binaries. It was just another name for the standard function crypt, and it has not appeared in any header file in many years.

  • We have tentative plans to hand off maintenance of the passphrase-hashing library, libcrypt, to a separate development project that will, we hope, keep up better with new passphrase-hashing algorithms. We will continue to declare 'crypt' in <unistd.h>, and programs that use 'crypt' or 'crypt_r' should not need to change at all; however, distributions will need to install <crypt.h> and libcrypt from a separate project.

In this release, if the configure option --disable-crypt is used, glibc will not install <crypt.h> or libcrypt, making room for the separate project's versions of these files. The plan is to make this the default behavior in a future release.

Changes to build and runtime requirements:

GNU make 4.0 or later is now required to build glibc.

Security related changes:

CVE-2016-6261, CVE-2016-6263, CVE-2017-14062: Various vulnerabilities have
been fixed by removing the glibc-internal IDNA implementation and using
the system-provided libidn2 library instead. Originally reported by Hanno
Böck and Christian Weisgerber.

CVE-2017-18269: An SSE2-based memmove implementation for the i386
architecture could corrupt memory. Reported by Max Horn.

CVE-2018-11236: Very long pathname arguments to realpath function could
result in an integer overflow and buffer overflow. Reported by Alexey
Izbyshev.

CVE-2018-11237: The mempcpy implementation for the Intel Xeon Phi
architecture could write beyond the target buffer, resulting in a buffer
overflow. Reported by Andreas Schwab.

The following bugs are resolved with this release:

[1190] stdio: fgetc()/fread() behaviour is not POSIX compliant
[6889] manual: 'PWD' mentioned but not specified
[13575] libc: SSIZE_MAX defined as LONG_MAX is inconsistent with ssize_t,
when __WORDSIZE != 64
[13762] regex: re_search etc. should return -2 on memory exhaustion
[13888] build: /tmp usage during testing
[13932] math: dbl-64 pow unexpectedly slow for some inputs
[14092] nptl: Support C11 threads
[14095] localedata: Review / update collation data from Unicode / ISO
14651
[14508] libc: -Wformat warnings
[14553] libc: Namespace pollution loff_t in sys/types.h
[14890] libc: Make NT_PRFPREG canonical.
[15105] libc: Extra PLT references with -Os
[15512] libc: __bswap_constant_16 not compiled when -Werror -Wsign-
conversion is given
[16335] manual: Feature test macro documentation incomplete and out of
date
[16552] libc: Unify umount implementations in terms of umount2
[17082] libc: htons et al.: statement-expressions prevent use on global
scope with -O1 and higher
[17343] libc: Signed integer overflow in /stdlib/random_r.c
[17438] localedata: pt_BR: wrong d_fmt delimiter
[17662] libc: please implement binding for the new renameat2 syscall
[17721] libc: __restrict defined as /* Ignore */ even in c11
[17979] libc: inconsistency between uchar.h and stdint.h
[18018] dynamic-link: Additional $ORIGIN handling issues (CVE-2011-0536)
[18023] libc: extend_alloca is broken (questionable pointer comparison,
horrible machine code)
[18124] libc: hppa: setcontext erroneously returns -1 as exit code for
last constant.
[18471] libc: llseek should be a compat symbol
[18473] soft-fp: [powerpc-nofpu] __sqrtsf2, __sqrtdf2 should be compat
symbols
[18991] nss: nss_files skips large entry in database
[19239] libc: Including stdlib.h ends up with macros major and minor being
defined
[19463] libc: linknamespace failures when compiled with -Os
[19485] localedata: csb_PL: Update month translations + add yesstr/nostr
[19527] locale: Normalized charset name not recognized by setlocale
[19667] string: Missing Sanity Check for malloc calls in file 'testcopy.c'
[19668] libc: Missing Sanity Check for malloc() in file 'tst-setcontext-
fpscr.c'
[19728] network: out of bounds stack read in libidn function
idna_to_ascii_4i (CVE-2016-6261)
[19729] network: out of bounds heap read on invalid utf-8 inputs in
stringprep_utf8_nfkc_normalize (CVE-2016-6263)
[19818] dynamic-link: Absolute (SHN_ABS) symbols incorrectly relocated by
the base address
[20079] libc: Add SHT_X86_64_UNWIND to elf.h
[20251] libc: 32bit programs pass garbage in struct flock for OFD locks
[20419] dynamic-link: files with large allocated notes crash in
open_verify
[20530] libc: bswap_16 should use __builtin_bswap16() when available
[20890] dynamic-link: ldconfig: fsync the files before atomic rename
[20980] manual: CFLAGS environment variable replaces vital options
[21163] regex: Assertion failure in pop_fail_stack when executing a
malformed regexp (CVE-2015-8985)
[21234] manual: use of CFLAGS makes glibc detect no optimization
[21269] dynamic-link: i386 sigaction sa_restorer handling is wrong
[21313] build: Compile Error GCC 5.4.0 MIPS with -0S
[21314] build: Compile Error GCC 5.2.0 MIPS with -0s
[21508] locale: intl/tst-gettext failure with latest msgfmt
[21547] localedata: Tibetan script collation broken (Dzongkha and Tibetan)
[21812] network: getifaddrs() returns entries with ifa_name == NULL
[21895] libc: ppc64 setjmp/longjmp not fully interoperable with static
dlopen
[21942] dynamic-link: _dl_dst_substitute incorrectly handles $ORIGIN: with
AT_SECURE=1
[22241] localedata: New locale: Yakut (Sakha) locale for Russia (sah_RU)
[22247] network: Integer overflow in the decode_digit function in
puny_decode.c in libidn (CVE-2017-14062)
[22342] nscd: NSCD not properly caching netgroup
[22391] nptl: Signal function clear NPTL internal symbols inconsistently
[22550] localedata: es_ES locale (and other es_* locales): collation
should treat ñ as a primary different character, sync the collation
for Spanish with CLDR
[22638] dynamic-link: sparc: static binaries are broken if glibc is built
by gcc configured with --enable-default-pie
[22639] time: year 2039 bug for localtime etc. on 64-bit platforms
[22644] string: memmove-sse2-unaligned on 32bit x86 produces garbage when
crossing 2GB threshold (CVE-2017-18269)
[22646] localedata: redundant data (LC_TIME) for es_CL, es_CU, es_EC and
es_BO
[22735] time: Misleading typo in time.h source comment regarding
CLOCKS_PER_SECOND
[22753] libc: preadv2/pwritev2 fallback code should handle offset=-1
[22761] libc: No trailing `%n' conversion specifier in FMT passed from
`__assert_perror_fail ()' to `__assert_fail_base ()'
[22766] libc: all glibc internal dlopen should use RTLD_NOW for robust
dlopen failures
[22786] libc: Stack buffer overflow in realpath() if input size is close
to SSIZE_MAX (CVE-2018-11236)
[22787] dynamic-link: _dl_check_caller returns false when libc is linked
through an absolute DT_NEEDED path
[22792] build: tcb-offsets.h dependency dropped
[22797] libc: pkey_get() uses non-reserved name of argument
[22807] libc: PTRACE_* constants missing for powerpc
[22818] glob: posix/tst-glob_lstat_compat failure on alpha
[22827] dynamic-link: RISC-V ELF64 parser mis-reads flag in ldconfig
[22830] malloc: malloc_stats doesn't restore cancellation state on stderr
[22848] localedata: ca_ES: update date definitions from CLDR
[22862] build: _DEFAULT_SOURCE is defined even when _ISOC11_SOURCE is
[22884] math: RISCV fmax/fmin handle signalling NANs incorrectly
[22896] localedata: Update locale data for an_ES
[22902] math: float128 test failures with GCC 8
[22918] libc: multiple common of `__nss_shadow_database'
[22919] libc: sparc32: backtrace yields infinite backtrace with
makecontext
[22926] libc: FTBFS on powerpcspe
[22932] localedata: lt_LT: Update of abbreviated month names from CLDR
required
[22937] localedata: Greek (el_GR, el_CY) locales actually need ab_alt_mon
[22947] libc: FAIL: misc/tst-preadvwritev2
[22963] localedata: cs_CZ: Add alternative month names
[22987] math: [powerpc/sparc] fdim inlines errno, exceptions handling
[22996] localedata: change LC_PAPER to en_US in es_BO locale
[22998] dynamic-link: execstack tests are disabled when SELinux is
disabled
[23005] network: Crash in __res_context_send after memory allocation
failure
[23007] math: strtod cannot handle -nan
[23024] nss: getlogin_r is performing NSS lookups when loginid isn't set
[23036] regex: regex equivalence class regression
[23037] libc: initialize msg_flags to zero for sendmmsg() calls
[23069] libc: sigaction broken on riscv64-linux-gnu
[23094] localedata: hr_HR: wrong thousands_sep and mon_thousands_sep
[23102] dynamic-link: Incorrect parsing of multiple consecutive $variable
patterns in runpath entries (e.g. $ORIGIN$ORIGIN)
[23137] nptl: s390: pthread_join sometimes block indefinitely (on 31bit
and libc build with -Os)
[23140] localedata: More languages need two forms of month names
[23145] libc: _init/_fini aren't marked as hidden
[23152] localedata: gd_GB: Fix typo in "May" (abbreviated)
[23171] math: C++ iseqsig for long double converts arguments to double
[23178] nscd: sudo will fail when it is run in concurrent with commands
that changes /etc/passwd
[23196] string: __mempcpy_avx512_no_vzeroupper mishandles large copies
(CVE-2018-11237)
[23206] dynamic-link: static-pie + dlopen breaks debugger interaction
[23208] localedata: New locale - Lower Sorbian (dsb)
[23233] regex: Memory leak in build_charclass_op function in file
posix/regcomp.c
[23236] stdio: Harden function pointers in _IO_str_fields
[23250] nptl: Offset of __private_ss differs from GCC
[23253] math: tgamma test suite failures on i686 with -march=x86-64
-mtune=generic -mfpmath=sse
[23259] dynamic-link: Unsubstituted ${ORIGIN} remains in DT_NEEDED for
AT_SECURE
[23264] libc: posix_spawnp wrongly executes ENOEXEC in non compat mode
[23266] nis: stringop-truncation warning with new gcc8.1 in nisplus-
parser.c
[23272] math: fma(INFINITY,INFIITY,0.0) should be INFINITY
[23277] math: nan function should not have const attribute
[23279] math: scanf and strtod wrong for some hex floating-point
[23280] math: wscanf rounds wrong; wcstod is ok for negative numbers and
directed rounding
[23290] localedata: IBM273 is not equivalent to ISO-8859-1
[23303] build: undefined reference to symbol
'__parse_hwcap_and_convert_at_platform@@GLIBC_2.23'
[23307] dynamic-link: Absolute symbols whose value is zero ignored in
lookup
[23313] stdio: libio vtables validation and standard file object
interposition
[23329] libc: The __libc_freeres infrastructure is not properly run across
DSO boundaries.
[23349] libc: Various glibc headers no longer compatible with
<linux/time.h>
[23351] malloc: Remove unused code related to heap dumps and malloc
checking
[23363] stdio: stdio-common/tst-printf.c has non-free license
[23396] regex: Regex equivalence regression in single-byte locales
[23422] localedata: oc_FR: More updates of locale data
[23442] build: New warning with GCC 8
[23448] libc: Out of bounds access in IBM-1390 converter
[23456] libc: Wrong index_cpu_LZCNT
[23458] build: tst-get-cpu-features-static isn't added to tests
[23459] libc: COMMON_CPUID_INDEX_80000001 isn't populated for Intel
processors
[23467] dynamic-link: x86/CET: A property note parser bug

Release Notes
=============

https://sourceware.org/glibc/wiki/Release/2.28

Contributors
============

This release was made possible by the contributions of many people.
The maintainers are grateful to everyone who has contributed
changes or bug reports. These include:

Adhemerval Zanella
Agustina Arzille
Alan Modra
Alexandre Oliva
Amit Pawar
Andreas Schwab
Andrew Senkevich
Andrew Waterman
Aurelien Jarno
Carlos O'Donell
Chung-Lin Tang
DJ Delorie
Daniel Alvarez
David Michael
Dmitry V. Levin
Dragan Stanojevic - Nevidljivi
Florian Weimer
Flávio Cruz
Francois Goichon
Gabriel F. T. Gomes
H.J. Lu
Herman ten Brugge
Hongbo Zhang
Igor Gnatenko
Jesse Hathaway
John David Anglin
Joseph Myers
Leonardo Sandoval
Maciej W. Rozycki
Mark Wielaard
Martin Sebor
Michael Wolf
Mike FABIAN
Patrick McGehearty
Patsy Franklin
Paul Pluzhnikov
Quentin PAGÈS
Rafal Luzynski
Rajalakshmi Srinivasaraghavan
Raymond Nicholson
Rical Jasan
Richard Braun
Robert Buj
Rogerio Alves
Samuel Thibault
Sean McKean
Siddhesh Poyarekar
Stefan Liebler
Steve Ellcey
Sylvain Lesage
Szabolcs Nagy
Thomas Schwinge
Tulio Magno Quites Machado Filho
Valery Timiriliyev
Vincent Chen
Wilco Dijkstra
Zack Weinberg
Zong Li

01 August, 2018 07:08AM by Carlos O'Donell

July 30, 2018

gdbm @ Savannah

Lonely Cactus

On the Joys and Perils of YouTube

For want of a social aspect to my technology addiction, of late I have been recording video content and placing it on YouTube.  It has been an interesting endeavor, because it involves skills that I heretofore have never trained.  How does one look good on camera?  What does one do with one's hands?  What is the efficient way to record and edit video?  What is the right way to do lighting and audio?  It has been fun so far, largely because the videos I've recorded still look so amateurish.  That I will be able to learn and progress at something new enchants me.

YouTube is an amazing platform, and the result of untold man-years of effort. The voice recognition involved in the automatic closed captioning is impressive.

But as any graybeard GNU-ster will attest, placing your content solely in the hands of a faceless evil corporation like Google is unwise, since I am not their customer.  Their advertisers are their customers, and their users are just free content creators and a source of training data for their AI.  So, in parallel, I've been revisiting the idea of resurrecting my website.

It is a somewhat overwhelming idea for me because there are infinite possibilities.  I could (and should) just do a WordPress instance and call it a day, for that would be efficient, but I would love to take the opportunity to learn something new.

Over the weekend, I enumerated my many sources of internet content.  So far, I've discovered
  • YouTube
  • Twitter
  • A code blog on Blogger
  • A personal website, hosted by a hosting provider, that is never updated
  • Another personal website that is just a parked domain right now
  • Yet another personal website, on my home PC, that is rarely updated
  • A security camera
Over a weekend's pondering, I have decided that I will keep three projects, and that each of these will just be different skins on the same content.
  • The true content backend -- not publicly visible -- which will be the source.  Video, images, audio will be stored in their native resolution and formats
  • A website.  It will be GNU-friendly.  No weird javascript.  Video will be medium resolution to split the difference between quality and download time.  Probably 720p Ogg+Theora+Vorbis (see the transcoding sketch below the list).  Audio will be Ogg+Vorbis or MP3.  Images will be JPEG.
  • YouTube.
  • A gopher server where the 1990s will live on forever.  Video will be shrunk to 352x288 pixel MP4 or CIF-sized 3gp+h.263+AMR_NB.  Audio will be MP3.  Images will be 640x480 GIF.
I will probably end up with a LAMP instance with Python/Django or whatever, because hosting providers like VMs like that.
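
As an illustration of the website video target in the list above, a single ffmpeg invocation could handle the 720p Ogg/Theora/Vorbis transcode.  The file names and quality settings here are only placeholders, a sketch of the idea rather than my actual setup:

ffmpeg -i master-copy.mkv -vf scale=-2:720 -c:v libtheora -q:v 7 -c:a libvorbis -q:a 4 clip-720p.ogv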

---

In my personal archaeology, I also found these projects that are not externally visible or not working
  • An instance of the never-completed telnet PupperBBS, which has no content
  • A real-time chat service called Jozabad
  • A shoutcast/icecast server that has been serving up the same song for who knows how long

30 July, 2018 03:09PM by Mike (noreply@blogger.com)

July 27, 2018

libredwg @ Savannah

Revealing unknown DWG classes (2)

I've added more solver code and a more detailed explanation to the HACKING file, to find the binary layout of unknown DWG classes, in reference to public docs and generated DXF files. See https://savannah.gnu.org/forum/forum.php?forum_id=9197 for the first part.

So this is now the real AI part of examples/unknown. I've added the spec of the following classes in the meantime, in various states of completeness:
ASSOCDEPENDENCY, DIMASSOC, ASSOCACTION, the 4 SURFACE classes, HELIX, UNDERLAY and UNDERLAYDEFINITION (PDF, DGN, DWF), ASSOCALIGNEDDIMACTIONBODY, DYNAMICBLOCKPURGEPREVENTER, DBCOLOR, PERSSUBENTMANAGER, ASSOC2DCONSTRAINTGROUP, EVALUATION_GRAPH, ASSOCPERSSUBENTMANAGER, ASSOCOSNAPPOINTREFACTIONPARAM, ASSOCNETWORK, SUNSTUDY.
Many more are in progress, as with the picat solver and backtracker I can now create the most promising solutions at scale.

There's a lot of code related to examples/unknown to automatically
find the field layout of yet unknown classes. At first you need
DWG/DXF pairs of unknown entities or objects and put them into
test/test-data/. When creating them, take care to use uniquely identifiable
names and numbers, and not to give all the DXF fields the same value 0.
Otherwise you'll never know which field in the DWG is which.

Then run make -C examples regen-unknown, which does this:

run ./logs.sh to create -v5 logfiles with the binary blobs for all
UNKNOWN_OBJ and UNKNOWN_ENT instances in those DWG's.

Then the perl script log_unknown.pl creates the include file
alldwg.inc adding all those blobs.

The next perl script log_unknown_dxf.pl parses alldwg.inc and looks
for matching DXF files, and creates the 3 include files alldxf_0.inc
with the matching blob data from alldwg.inc, alldxf_1.inc with the
matching field types and values from the DXF and alldxf_2.inc to
work around some static initialization issues in the C file.
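
Put together, this first stage boils down to something like the following shell session; this is only a sketch, and the script locations are assumptions based on the paths mentioned in this post:

./logs.sh                          # write -v5 logfiles with the UNKNOWN_OBJ/UNKNOWN_ENT blobs
perl examples/log_unknown.pl       # collect the blobs into alldwg.inc
perl examples/log_unknown_dxf.pl   # match DXF files, write alldxf_0.inc, alldxf_1.inc, alldxf_2.inc
# or let make drive all of the above steps:
make -C examples regen-unknown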

Next run make unknown, which does this:

Compiles and runs examples/unknown, which creates, for every string
value in the DXF, some bit representations and tries to find them in
the UNKNOWN blobs. If it doesn't find them, either the string-to-bit
conversion lost too much precision to be able to find them, esp. with
doubles, or we have a different problem. make unknown creates a big
log file unknown-`git describe`.log in which you can see the
individual statistics and initial layout guesses.

E.g.
42/230=18.3%
possible: [34433333344443333334444333333311xxxxxxxxxx3443333...
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
11 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 11 1]

The x stands for a fixed field, the numbers and a dot for the number
of variants this bit is used for (the dot for >9) and a space means
this is a hole for a field which is not represented as DXF field, i.e.
a FIELD_*(name, 0) in the dwg.spec with DXF group code 0.

unknown also creates picat data files in examples/ which are then used with
picat from http://picat-lang.org/ to enhance the search for the best layout
guess for each particular class. picat is a nice mix of a functional
programming tool with an optional constraint solver. The first part in
the picat process does almost the same as unknown.c, finding the fixed
layout, possible variants and holes in a straightforward functional
fashion. The language is very similar to Erlang, untyped Haskell or Prolog.
The second optimization part of picat uses a solver with
constraints to improve the layout of the found variants and holes to
find the best guess for the needed dwg.spec layout.
Note that picat list and array indices are one-based, so you need to
subtract 1 from each found offset: 1-32 means bits 0-31.

The field names are filled in by examples/log_unknown_dxf.pl automatically.
We could parse dwg.spec for this, but for now I went with a manual solution,
as the number of unknown classes gets smaller, not larger.

E.g. for ACAD_EVALUATION_GRAPH.pi with a high percentage from the above
possible layout, it currently produces this:

Definite result:
----------------
HOLE([1,32],01000000010100000001010000000110) len = 32
FIELD_BL (edge_flags, 93); // 32 [33,42]
HOLE([43,52],0100000001) len = 10
FIELD_BL (node_edge1, 92); // -1 [53,86]
FIELD_BL (node_edge2, 92); // -1 [87,120]
FIELD_BL (node_edge3, 92); // -1 [121,154]
FIELD_BL (node_edge4, 92); // -1 [155,188]
HOLE([189,191],100) len = 3
FIELD_H (parenthandle, 330); // 6.0.0 [192,199]
FIELD_H (evalexpr, 360); // 3.2.2E2 [200,223]
HOLE([224,230],1100111) len = 7
----------------
Todo: 32 + 178 = 210, Missing: 20
FIELD_BL (has_graph, 96); // 1 0100000001 [[1,10],[11,20],[21,30],[43,52]]
FIELD_BL (unknown1, 97); // 1 0100000001 [[1,10],[11,20],[21,30],[43,52]]
FIELD_BL (nodeid, 91); // 0 10 [[2,3],[10,11],[12,13],[20,21],[22,23],[31,32],[44,45],[52,53],[189,190],[225,226]]
FIELD_BL (num_evalexpr, 95); // 1 0100000001 [[1,10],[11,20],[21,30],[43,52]]

The next picat steps automate the following reasoning:

The first hole 1-32 is filled by the 3 1 values from BL96, BL97 and
BL95, followed by the 0 value from BL91. The second hole is clearly
another unknown BL with value 1. The third hole at 189-191
is padding before the handle stream, and can be ignored. This is from
an r2010 file, which has separate handle and text streams. The last
hole 224-230 could theoretically hold almost another unknown handle, but
practically it's also just padding. The last handles are always optional
reactors and the xdicobject handle for objects, and 7 bits is not enough
for a handle value. A code 4 null-handle would be 01000000.

You start by finding the DXF documentation and the ObjectARX header
file of the class, to get the names and description of the class.

You add the names and types to dwg.h and dwg.spec, change the class
type in classes.inc to DEBUGGING or UNTESTED. With DEBUGGING add the
-DDEBUG_CLASSES flag to CFLAGS in src/Makefile and test the DWGs with
programs/dwgread -v4. Some layouts are version dependent, some need
a REPEAT loop or vector with a num_field field.
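
A hypothetical session for a single class might look like the following; the file paths are abbreviated and the grep pattern is only an example:

$EDITOR dwg.h dwg.spec classes.inc       # add the field names/types, set the class to DEBUGGING
$EDITOR src/Makefile                     # add -DDEBUG_CLASSES to CFLAGS
make
programs/dwgread -v4 test/test-data/example_2000.dwg 2>&1 | grep -i EVALUATION_GRAPH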

The picat constraints module examples/unknown.pi is still being worked
on and is getting better and better at identifying all missing classes
automatically. The problem with AutoCAD DWGs is that everybody can
add their own custom classes via an ObjectARX application, so
reverse-engineering them never stops. It therefore has to be automated somehow.

27 July, 2018 09:55AM by Reini Urban

July 24, 2018

GNU Guix

Multi-dimensional transactions and rollbacks, oh my!

One of the highlights of version 0.15.0 was the overhaul of guix pull, the command that updates Guix and its package collection. In Debian terms, you can think of guix pull as:

apt-get update && apt-get install apt

Let’s be frank, guix pull does not yet run as quickly as this apt-get command—in the “best case”, when pre-built binaries are available, it currently runs in about 1m30s on a recent laptop. More about the performance story in a future post…

One of the key features of the new guix pull is the ability to roll back to previous versions of Guix. That’s a distinguishing feature that opens up new possibilities.

“Profile generations”

Transactional upgrades and rollbacks have been a distinguishing feature of Guix since Day 1. They come for free as a consequence of the functional package management model inherited from the Nix package manager. To many users, this alone is enough to justify using a functional package manager: if an upgrade goes wrong, you can always roll back. Let’s recap how this all works.

As a user, you install packages in your own profile, which defaults to ~/.guix-profile. Then from time to time you update Guix and its package collection:

$ guix pull

This updates ~/.config/guix/current, giving you an updated guix executable along with an updated set of packages. You can now upgrade the packages that are in your profile:

$ guix package -u
The following packages will be upgraded:
   diffoscope   93 → 96     /gnu/store/…-diffoscope-96
   emacs    25.3 → 26.1     /gnu/store/…-emacs-26.1
   gimp     2.8.22 → 2.10.4 /gnu/store/…-gimp-2.10.4
   gnupg    2.2.7 → 2.2.9   /gnu/store/…-gnupg-2.2.9

The upgrade creates a new generation of your profile—the previous generation of your profile, with diffoscope 93, emacs 25.3, and so on is still around. You can list profile generations:

$ guix package --list-generations
Generation 1  Jun 08 2018 20:06:21
   diffoscope   93     out   /gnu/store/…-diffoscope-93
   emacs        25.3   out   /gnu/store/…-emacs-25.3
   gimp         2.8.22 out   /gnu/store/…-gimp-2.8.22
   gnupg        2.2.7  out   /gnu/store/…-gnupg-2.2.7
   python       3.6.5  out   /gnu/store/…-python-3.6.5

Generation 2  Jul 12 2018 12:42:08     (current)
-  diffoscope   93     out   /gnu/store/…-diffoscope-93
-  emacs        25.3   out   /gnu/store/…-emacs-25.3
-  gimp         2.8.22 out   /gnu/store/…-gimp-2.8.22
-  gnupg        2.2.7  out   /gnu/store/…-gnupg-2.2.7
+  diffoscope   96     out   /gnu/store/…-diffoscope-96
+  emacs        26.1   out   /gnu/store/…-emacs-26.1
+  gimp         2.10.4 out   /gnu/store/…-gimp-2.10.4
+  gnupg        2.2.9  out   /gnu/store/…-gnupg-2.2.9

That shows our two generations with the diff between Generation 1 and Generation 2. We can at any time run guix package --roll-back and get our previous versions of gimp, emacs, and so on. Each generation is just a bunch of symlinks to those packages, so what we have looks like this:

Image of the profile generations.

Notice that python was not updated, so it’s shared between both generations. And of course, all the dependencies that didn’t change in between—e.g., the C library—are shared among all packages.
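
For instance, going back and forth between those two generations is a matter of the following commands (just a sketch; --switch-generation is the counterpart of --roll-back):

guix package --roll-back            # back to Generation 1: diffoscope 93, emacs 25.3, …
guix package --list-generations     # check which generation is now current
guix package --switch-generation=2  # jump forward again to Generation 2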

guix pull generations

Like I wrote above, guix pull brings the latest set of package definitions from Git master. The Guix package collection usually contains only the latest version of each package; for example, current master only has version 26.1 of Emacs and version 2.10.4 of the GIMP (there are notable exceptions such as GCC or Python.) Thus, guix package -i gimp, from today’s master, can only install gimp 2.10.4. Often, that’s not a problem: you can keep old profile generations around, so if you really need that older version of Emacs, you can run it from your previous generation.

Still, having guix pull keep track of the changes to Guix and its package collection is useful. Starting from 0.15.0, guix pull creates a new generation, just like guix package does. After you’ve run guix pull, you can now list Guix generations as well:

$ guix pull -l
Generation 10   Jul 14 2018 00:02:03
  guix 27f7cbc
    repository URL: https://git.savannah.gnu.org/git/guix.git
    branch: origin/master
    commit: 27f7cbc91d1963118e44b14d04fcc669c9618176
Generation 11   Jul 20 2018 10:44:46
  guix 82549f2
    repository URL: https://git.savannah.gnu.org/git/guix.git
    branch: origin/master
    commit: 82549f2328c59525584b92565846217c288d8e85
  14 new packages: bsdiff, electron-cash, emacs-adoc-mode,
    emacs-markup-faces, emacs-rust-mode, inchi, luakit, monero-gui,
    nethack, openbabel, qhull, r-txtplot, stb-image, stb-image-write
  52 packages upgraded: angband@4.1.2, aspell-dict-en@2018.04.16-0,
    assimp@4.1.0, bitcoin-core@0.16.1, botan@2.7.0, busybox@1.29.1,
    …
Generation 12   Jul 23 2018 15:22:52    (current)
  guix fef7bab
    repository URL: https://git.savannah.gnu.org/git/guix.git
    branch: origin/master
    commit: fef7baba786a96b7a3100c9c7adf8b45782ced37
  20 new packages: ccrypt, demlo, emacs-dired-du,
    emacs-helm-org-contacts, emacs-ztree, ffmpegthumbnailer, 
    go-github-com-aarzilli-golua, go-github-com-kr-text, 
    go-github-com-mattn-go-colorable, go-github-com-mattn-go-isatty, 
    go-github-com-mgutz-ansi, go-github-com-michiwend-golang-pretty, 
    go-github-com-michiwend-gomusicbrainz, go-github-com-stevedonovan-luar, 
    go-github-com-wtolson-go-taglib, go-github-com-yookoala-realpath, 
    go-gitlab-com-ambrevar-damerau, go-gitlab-com-ambrevar-golua-unicode,
    guile-pfds, u-boot-cubietruck
  27 packages upgraded: c-toxcore@0.2.4, calibre@3.28.0,
    emacs-evil-collection@20180721-2.5d739f5, 
    …

The nice thing here is that guix pull provides high-level information about the differences between two subsequent generations of Guix.

In the end, Generation 1 of our profile was presumably built with Guix Generation 11, while Generation 2 of our profile was built with Guix Generation 12. We have a clear mapping between Guix generations as created by guix pull and profile generations as created with guix package:

Image of the Guix generations.

Each generation created by guix pull corresponds to one commit in the Guix repo. Thus, if I go to another machine and run:

$ guix pull --commit=fef7bab

then I know that I get the exact same Guix instance as my Generation 12 above. From there I can install diffoscope, emacs, etc. and I know I’ll get the exact same binaries as those I have above, thanks to reproducible builds.

These are very strong guarantees in terms of reproducibility and provenance tracking—properties that are typically missing from “applications bundles” à la Docker.

In addition, you can easily run an older Guix. For instance, this is how you would install the version of gimp that was current as of Generation 10:

$ ~/.config/guix/current-10-link/bin/guix package -i gimp

At this point your profile contains gimp coming from an old Guix along with packages installed from the latest Guix. Past and present coexist in the same profile. The historical dimension of the profile no longer matches exactly the history of Guix itself.

Composing Guix revisions

Some people have expressed interest in being able to compose packages coming from different revisions of Guix—say to create a profile containing old versions of Python and NumPy, but also the latest and greatest GCC. It may seem far-fetched but it has very real applications: there are large collections of scientific packages and in particular bioinformatics packages that don’t move as fast as our beloved flagship free software packages, and users may require ancient versions of some of the tools.

We could keep old versions of many packages but maintainability costs would grow exponentially. Instead, Guix users can take advantage of the version control history of Guix itself to mix and match packages coming from different revisions of Guix. As shown above, it’s already possible to achieve this by running the guix program off the generation of interest. It does the job, but can we do better?

In the process of enhancing guix pull we developed a high-level API that allows an instance of Guix to “talk” to a different instance of Guix—an “inferior”. It’s what allows guix pull to display the list of packages that were added or upgraded between two revisions. The next logical step will be to provide seamless integration of packages coming from an inferior. That way, users would be able to refer to “past” package graphs right from a profile manifest or from the command-line. Future work!

On coupling

The time traveler in you might be wondering: Why are package definitions coupled with the package manager, doesn’t it make it harder to compose packages coming from different revisions? Good point!

Tight coupling certainly complicates this kind of composition: we can’t just have any revision of Guix load package definitions from any other revision; this could fail altogether, or it could provide a different build result. Another potential issue is that guix pulling an older revision not only gives you an older set of packages, it also gives you older tools, bug-for-bug.

The reason for this coupling is that a package definition like this one doesn’t exist in a vacuum. Its meaning is defined by the implementation of package objects, by gnu-build-system, by a number of lower-level abstractions that are all defined as extensions of the Scheme language in Guix itself, and ultimately by Guile, which implements the language Guix is written in. Each instance created by guix pull brings all these components. Because Guix is implemented as a set of programming language extensions and libraries, the fact that package definitions depend on all these parts becomes manifest. Instead of being frozen, the APIs and package definitions evolve together, which gives us developers a lot of freedom on the changes we can make.

Nix results from a different design choice. Nix-the-package-manager implements the Nix language, which acts as a “frozen” interface. Package definitions in Nixpkgs are written in that language, and a given version of Nix can possibly interpret both current and past package definitions without further ado. The Nix language does evolve though, so at one point an old Nix inevitably becomes unable to evaluate a new Nixpkgs, and vice versa.

These two approaches make different tradeoffs. Nix’ loose coupling simplifies the implementation and makes it easy to compose old and new package definitions, to some extent; Guix’ tight coupling makes such composition more difficult to implement, but it leaves developers more freedom and, we hope, may support “time travels” over longer periods of time. Time will tell!

It’s like driving a DeLorean

Inside the cabin of the DeLorean time machine in “Back to the Future.”

Picture of a DeLorean cabin by Oto Godfrey and Justin Morton, under CC-BY-SA 4.0.

That profile generations are kept around already gave users a time machine of sorts—you can always roll back to a previous state of your software environment. With the addition of roll-back support for guix pull, this adds another dimension to the time machine: you can roll back to a previous state of Guix itself and from there create alternative futures or even mix bits from the past with bits from the present. We hope you’ll enjoy it!

24 July, 2018 12:30PM by Ludovic Courtès

July 22, 2018

parallel @ Savannah

GNU Parallel 20180722 ('Crimson Hexagon') released [alpha]

GNU Parallel 20180722 ('Crimson Hexagon') [alpha] has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

This release has significant changes and is considered alpha quality.

Quote of the month:

I've been using GNU Parallel very much and effectively lately.
Such an easy way to get huge speed-ups with my simple bash/Perl/Python
programs -- parallelize them!
-- Ken Youens-Clark @kycl4rk@twitter

New in this release:

  • The quoting engine has been changed. Instead of using \-quoting GNU Parallel now uses '-quoting in bash/ash/dash/ksh. This should improve compatibility with different locales. This is a big change causing this release to be alpha quality.
  • The CPU calculation has changed. By default GNU Parallel uses the number of CPU threads as the number of CPUs. This can be changed to the number of CPU cores or number of CPU sockets with --use-cores-instead-of-threads or --use-sockets-instead-of-threads.
  • The detected number of sockets, cores, and threads can be shown with --number-of-sockets, --number-of-cores, and --number-of-threads (see the short illustration after this list).
  • env_parallel now supports mksh using env_parallel.mksh.
  • Bug fixes and man page updates.
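
A quick illustration of the new CPU-related options; this assumes the new --number-of-* options print a count and exit like the long-standing --number-of-cores, and the numbers shown are of course machine dependent:

parallel --number-of-sockets   # e.g. 1
parallel --number-of-cores     # e.g. 4
parallel --number-of-threads   # e.g. 8
parallel --use-cores-instead-of-threads echo {} ::: job1 job2 job3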

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
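
For example, a typical shell loop and its GNU Parallel equivalent (the lame commands are only illustrative and assume lame is installed):

for f in *.wav; do lame "$f" "${f%.wav}.mp3"; done
parallel lame {} {.}.mp3 ::: *.wav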

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, April 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
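
For example (the user, password, host, and database names below are placeholders):

sql mysql://user:password@localhost/mydb "SELECT COUNT(*) FROM customers;"
sql mysql://user:password@localhost/mydb     # no command given: opens the database's interactive shell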

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
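
For example, to keep a backup job from hogging a busy machine (the load limit and the command are only illustrative):

niceload -l 4 tar -czf /tmp/home-backup.tar.gz /home   # suspend tar whenever the load average exceeds 4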

22 July, 2018 07:41AM by Ole Tange

July 17, 2018

Riccardo Mottola

Graphos GNUstep and Tablet interface

I have acquired a Thinkpad X41 Tablet and worked quite a bit on it making it usable and then installing Linux and of course GNUstep on it. The original battery was dead and the compatible replacement I got is bigger, it works very well, but makes the device unbalanced.

Anyway, my interest was in how usable GNUstep applications would be, especially Graphos, its (and my) drawing application.

Using the interface in Tablet mode is different: the stylus is very precise and allows clicking by tapping its tip, and a second button is also available. However, contrary to mouse use, the keyboard is folded away, so no keyboard modifiers are possible. Furthermore, GNUstep has no on-screen keyboard, so typing is not possible.

The classic OpenStep-style menus work exceedingly well with a touch interface: the menus are easy to click and, torn off, they remain like palettes, making toolbars unnecessary.
This is a good start!

However, Graphos was not easy to use: aside from entering free text, several components required typing (e.g. inserting the Line Width), which is impossible without a keyboard.
I worked on the interface so that all these elements also have a clickable alternative (e.g. Stepper Arrows). Duplicating certain items available in context menus in regular menus, which can be detached, also provided an enhancement.
Standard items like the color wheel already work very well.


Drawing on the screen is definitely very precise and convenient. Stay tuned for the upcoming release!

17 July, 2018 10:41AM by Riccardo (noreply@blogger.com)

July 13, 2018

libredwg @ Savannah

Revealing unknown DWG classes

I implemented three major buzzwords today in some trivial ways.

  • massive parallel processing
  • asynchronous processing
  • machine-learning: a self-improving program

The problem is mostly trivial, and the solutions also. I need to
reverse-engineer a binary closed file-format, but got some hints from
a related ASCII file-format, DWG vs DXF.

I have several pairs of files, and a helper library to convert the
ASCII data to the binary representation in the DWG. There are various
variants for most data values, and several fields are unknown, they
are not represented in the DXF, only in the DWG. So I wrote an example
program called unknown, which walks over all unknown binary blobs and
tries to find the matching known values. If a bitmap is found only
once, we have a unique match, if it's found multiple times, we have
several possibilities the fields could be laid out or if it is not
found, we have a problem, the binary representation is wrong.

When preparing the program called unknown, I decided to parse the log
files in perl and store the unknown blobs as C `.inc` files, to be
compiled into unknown as an array of structs.

Several DWG files are too large and either produce log files so large
that they fill my hard disc, or cannot be parsed properly, leading to
overly huge mallocs and invalid loops, so these processes need to be
killed after a timeout of 10s.

So instead of

for d in test/test-data/*.dwg; do
log=`basename "$d" .dwg`.log
echo $d
programs/dwgread -v5 "$d" 2>$log
done

I improved it to

for d in test/test-data/*.dwg; do
log=`basename "$d" .dwg`.log
echo $d
programs/dwgread -v5 "$d" 2>$log &
pid=$!
(sleep 10s; kill $pid 2>/dev/null) &
done

The dwgread program is put into the background, its PID saved in $pid,
and `sleep 10s; kill $pid` implements a simple timeout via bash, not via
perl. Both processes are in the background and the second optionally
kills the first. So with some 100 files in test-data, this is
**massive parallelization**, as the dwgread processes immediately
return, and its output appears some time later, when the process is
finished or killed. So it's also **asynchronous**, as I cannot see the
result of each individual process anymore, which returned SUCCESS,
which returned ERROR and which was killed. You need to look at the
logfiles, similar to debugging hard real-world problems, like
real-time controllers. This processing is also massively faster, but
mostly I did it to implement a simple timeout mechanism in bash.

The next problem with the background processing is that I don't know
when all the background processes stopped, so I had to add one more
line:

while pgrep dwgread; do sleep 1; done

Otherwise I would continue processing the logfiles, creating my C
structs from these, but some logfiles would still grow and I would
miss several unknown classes.

The processed data is ~10 GB large, so massive parallel
processing saves some time. The log files are only temporarily needed
to extract the binary blobs and can be removed later.

Eventually I turned off the massive parallelization using another
timeout solution:

for d in test/test-data/*.dwg; do
log=`basename "$d" .dwg`.log
echo $d
timeout -k 1 10 programs/dwgread -v5 "$d" 2>$log
done

I could also use GNU [parallel](https://www.gnu.org/software/parallel/parallel_tutorial.html) with timeout to re-enable the
parallelization and collect the async results properly.

parallel timeout 10 programs/dwgread -v5 {} \2\>{/.}.log ::: test/test-data/*.dwg
cd test/test-data
parallel timeout 10 ../../programs/dwgread -v5 {} \2\>../../{/.}_{//}.log ::: \*/\*.dwg

So now the other interesting problem, the machine-learning part.
Let me show you first some real data I'm creating.

Parsing the logfiles and DXF data via some trivial perl scripts creates an array of such structs:

{ "ACDBASSOCOSNAPPOINTREFACTIONPARAM", "test/test-data/example_2000.dxf", 0x393, /* 473 */
"\252\100\152\001\000\000\000\000\000\000\074\057\340\014\014\200\345\020\024\126\310\100", 176, NULL },

/* ACDBASSOCOSNAPPOINTREFACTIONPARAM 393 in test/test-data/example_2000.dxf */
static const struct _unknown_field unknown_dxf_473[] = {
{ 5, "393", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 330, "392", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 100, "AcDbAssocActionParam", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 1, "", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 100, "AcDbAssocCompoundActionParam", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "1", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 360, "394", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 330, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 100, "ACDBASSOCOSNAPPOINTREFACTIONPARAM", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 90, "1", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 40, "-1.0", NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1} },
{ 0, NULL, NULL, 0, BITS_UNKNOWN, {-1,-1,-1,-1,-1}}
};

I prefer the data to be compiled in, so it's not on the heap but in
the .DATA segment as const. And from this data, the programs creates
this log:

ACDBASSOCOSNAPPOINTREFACTIONPARAM: 0x393 (176) test/test-data/example_2000.dxf
=bits: 010101010000001001010110100000000000000000000000000000000000000000000
0000000000000111100111101000000011100110000001100000000000110100111000010000
0101000011010100001001100000010
handle 0.2.393 (0)
search 5:"393" (24 bits of type HANDLE [0]) in 176 bits
=search (0): 010000001100000011001001
handle 4.2.392 (0)
search 330:"392" (24 bits of type HANDLE [1]) in 176 bits
=search (0): 010000101100000001001001
handle 8.0.0 (393)
search 330:"392" (8 bits of type HANDLE [1]) in 176 bits
=search (0): 00000001
330: 392 [HANDLE] found 3 at offsets 75-82, 94, 120 /176
100: AcDbAssocActionParam
search 90:"0" (2 bits of type BL [3]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 1:"" (2 bits of type TV [4]) in 176 bits
=search (0): 00
1: [TV] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
100: AcDbAssocCompoundActionParam
search 90:"0" (2 bits of type BL [6]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"0" (2 bits of type BL [7]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"1" (10 bits of type BL [8]) in 176 bits
=search (0): 0000001000
field 90 already found at 176
search 90:"1" (10 bits of type BL [8]) in 7 bits
=search (169): 0000001000
search 90:"1" (10 bits of type BS [8]) in 7 bits
=search (169): 0000001000
handle 3.2.394 (0)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010011001100000000101001
handle 2.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010001001100000000101001
handle 3.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010011001100000000101001
handle 4.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010000101100000000101001
handle 5.2.394 (393)
search 360:"394" (24 bits of type HANDLE [9]) in 176 bits
=search (0): 010010101100000000101001
handle 6.0.0 (393)
search 360:"394" (8 bits of type HANDLE [9]) in 176 bits
=search (0): 00000110
360: 394 [HANDLE] found 2 at offsets 109-116, 122-129 /176
search 90:"0" (2 bits of type BL [10]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"0" (2 bits of type BL [11]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
handle 4.0.0 (0)
search 330:"0" (8 bits of type HANDLE [12]) in 176 bits
=search (0): 00000010
330: 0 [HANDLE] found 2 at offsets 8-15, 168-175 /176
100: ACDBASSOCOSNAPPOINTREFACTIONPARAM
search 90:"0" (2 bits of type BL [14]) in 176 bits
=search (0): 00
90: 0 [BL] found >5 at offsets 8-9, 9, 10, 11, 12, ... /176
search 90:"1" (10 bits of type BL [15]) in 176 bits
=search (0): 0000001000
field 90 already found at 168
search 90:"1" (10 bits of type BL [15]) in 7 bits
=search (169): 0000001000
search 90:"1" (10 bits of type BS [15]) in 7 bits
=search (169): 0000001000
search 40:"-1.0" (66 bits of type BD [16]) in 176 bits
=search (0): 000000000000000000000000000000000000000000000000001111001111010000
40: -1.0 [BD] found 1 at offset 32-97 /176
66/176=37.5%
possible: [ 8....8187 7......xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx..81 77 7....8118.........9211 77 7..7 7...7 7..7
7..7 77 xxxxxxxx]

It converts each blob into bits of 0 or 1, converts each DXF field to
some binary types, also logged as bits, and then a handwritten **membits()**
search, similar to `memmem()` or `strstr()`, searches for the bitmask in the
blob and records all found instances. In the printed **possible [ ]** array
the xxxx represents unique finds, `1-9` and `.` multiple finds, and a space
is a hole for an unknown field not represented in the DXF, a DWG-only
field. These can be guessed from the documentation and some thinking.

Most fields themselves are specialized run-length bit-encoded, so a 0.0
needs only 2 bits 10, an empty string needs the same 2 bits 10, the
number 0 as BL (bitlong) needs 2 bits 10, and 90:"1", i.e. the number
1 as BL (bitlong), needs 10 bits 0010000000. So you really need unique
values in the sample data to get enough unique finds. In this bad
example, which so far really is the best example in my data, I get 37.5%
exact matches, with 6x 90:0, i.e. 6 times the number 0. You won't know
which binary 00 is the real number 0. Conversion from
strings to floats is also messy and imprecise. While representing a
double as 64 binary bits is always properly defined on Intel chips,
the reverse is not true: representing the string as a double can lead
to various representations, and I search various variants, cutting off
the mantissa precision, to find our matching binary double.

What I'll be doing then is to shuffle the values a bit in the DXF to
represent uniquely identifiable values, like 1,2,3,4,5,6, convert this
DXF back to a DWG, and analyse this pair again. This is e.g. what I did to
uniquely identify the position of the header variables in earlier DWG
versions.

Bad percentages are not processed anymore and removed from the
program. So I create a constant feedback loop, with the program
creating these logs, and a list of classes to skip and permute to
create better matches. I could do this within my program for a
complete self-learning process, but I'd rather create logfiles,
re-analyse them, adjust the data structures shown above, re-compile
the program and run it again. Previously I did such a re-compilation
step via shared modules, which I can compile from within the program,
dlload and dlunload it, but this is silly. Simple C structs on disc
are easier to follow than shared libs in memory. I also store the
intermediate steps in git, so I can check how the progress of the
self-improvement evolves, if at all. Several such objects were getting
worse and worse, deviating to 0%, because there were no unique values
and finds anymore. Several representations are also still wrong; some
text values are really values from external references, such as
the layer name.

So this was a short intro into massive parallel processing,
asynchronous processing, and machine-learning: a self-improving program.
The code in question is here:
https://github.com/LibreDWG/libredwg/tree/master/examples
All the `*.inc` and `*.skip` files are automatically created by make -C examples regen-unknown.

The initial plan was to create a more complex backtracking solver to
find the best matches for all possible variants, but step-wise
refinement in controllable loops and usage of several trivial tools is
so far much easier than real AI. AI really is trivial if you do it
properly.

Followed by https://savannah.gnu.org/forum/forum.php?forum_id=9203 for the actual AI part with picat.

13 July, 2018 10:11PM by Reini Urban

July 12, 2018

Riccardo Mottola

DataBasin + DataBasinKit 1.0 released

A new release (1.0) for DataBasin and its framework DataBasinKit is out!

This release brings lots of new features, most of the enhancements coming from the framework and exposed through the GUI:
  • Update login endpoint to login.salesforce.com (back again!)
  • Implement retrieve (get fields from a list of IDs, natively)
  • Support nillable fields on create
  • Save HTML tables and pseudo-XLS in HTML-typed formats
  • Fix cloning of connections in case of threading
  • Implement Typing of fields after describing query elements (DBSFDataTypes)

DataBasin is a tool to access and work with SalesForce.com. It allows you to perform queries remotely, export and import data, inspect single records, and describe objects. DataBasinKit is its underlying framework, which implements the APIs in Objective-C. It works on GNUstep (major Unix variants and MinGW on Windows) and natively on macOS.

12 July, 2018 04:44PM by Riccardo (noreply@blogger.com)

July 05, 2018

Luca Saiu

The European Parliament has rejected the copyright directive, for now

The EU copyright directive in its present form has deep and wide implications reaching far beyond copyright, and erodes into core human rights and values. For more information I recommend Julia Reda’s analysis, which is accessible to the casual reader but also contains pointers to the text of the law. Today, on July 5, following a few weeks of very intense debate, campaigning and lobbying, including deliberate attempts to mislead politicians, the European Parliament voted in plenary session to reject the directive in its current form endorsed by the JURI committee, and instead reopen the debate. It ... [Read more]

05 July, 2018 10:47PM by Luca Saiu (positron@gnu.org)

libredwg @ Savannah

libredwg-0.5 released [alpha]

See https://www.gnu.org/software/libredwg/ and http://git.savannah.gnu.org/cgit/libredwg.git/tree/NEWS?h=0.5

Here are the compressed sources:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.gz (9.2MB)
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.xz (3.4MB)

Here are the GPG detached signatures[*]:
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.gz.sig
http://ftp.gnu.org/gnu/libredwg/libredwg-0.5.tar.xz.sig

Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html

Here are the SHA256 checksums:

920c1f13378c849d41338173764dfac06b2f2df1bea54e5069501af4fab14dd1 libredwg-0.5.tar.gz
fd7b6d029ec1c974afcb72c0849785db0451d4ef148e03ca4a6c4a4221b479c0 libredwg-0.5.tar.xz
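
To check a downloaded tarball against the checksums above, you can run, for example:

echo "920c1f13378c849d41338173764dfac06b2f2df1bea54e5069501af4fab14dd1  libredwg-0.5.tar.gz" | sha256sum -c -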

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:

gpg --verify libredwg-0.5.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

05 July, 2018 05:35AM by Reini Urban

July 02, 2018

GNU Guile

GNU Guile 2.2.4 released

We are delighted to announce GNU Guile 2.2.4, the fourth bug-fix release in the new 2.2 stable release series. It fixes many bugs that had accumulated over the last few months, in particular bugs that could lead to crashes of multi-threaded Scheme programs. This release also brings documentation improvements, the addition of SRFI-71, and better GDB support.

See the release announcement for full details and a download link. Enjoy!

02 July, 2018 09:00AM by Ludovic Courtès (guile-devel@gnu.org)