Planet GNU

Aggregation of development blogs from the GNU Project

February 16, 2020

Christopher Allan Webber

Vats and Propagators: towards a global brain

(This is a writeup for future exploration; I will be exploring a small amount of this soon as a side effect of some UI building I am doing, but not a full system. A full system will come later, maybe even years from now. Consider this a desiderata document. Also a forewarning that this document was originally written for an ocap-oriented audience, and some terms are left unexpanded; for instance, "vat" really just means a one-turn-at-a-time single-threaded event loop that a bunch of actors live in.)

We have been living the last couple of decades with networks that are capable of communicating ideas. However, by and large it is left to the humans to reason about these ideas that are propagated. Most machines that operate on the network merely execute the will of humans that have carefully constructed them. Recently, neural-network-based machine learning has gotten much better, but it merely resembles intuition, not reasoning. (The human brain succeeds by combining both, and a successful system likely will too.) Could we ever achieve a network that itself reasons? And can it be secure enough not to tear itself apart?

Near-term background

In working towards building out a demonstration of petname systems in action in a social network, I ran into the issue of changes to a petname database automatically being reflected through the UI. This led me back down a rabbit hole of exploring reactive UI patterns, and also led me back to exploring that section, and the following propagator section, of SICP again. This also led me to rewatch one of my favorite talks: We Don't Really Know How to Compute! by Gerald Sussman.

At 24:54 Sussman sets up an example problem: specifically, how an expert in electrical systems is able to reason through and solve an electrical wiring diagram. (The steps explained are not dissimilar to the steps programmers go through while reasoning about debugging a coding problem.) Sussman then launches into an exploration of propagators, and how they can solve the problem. Sussman's explanation is better than mine would be, so I'll leave you to watch the video to see how it's used to solve various problems.

Okay, a short explanation of propagators

Well, I guess I'll give a little introduction to propagators and why I think they're interesting.

Propagators have gone through some revisions since the SICP days; relevant reading are the Revised Report on the Propagator Model, The Art of the Propagator, and to really get into depth with the ideas, Propagation networks: a flexible and expressive substrate for computation (Radul's PhD thesis).

In summary, a propagator model has the following properties:

  • There are cells which accumulate information about a value. Note! This is a big change from previous propagator versions! In the modern version of the propagator model, a cell doesn't hold a value, it accrues information about a value, which must be non-contradictory.
  • Such cell information may be complete (the number 42 is all there is to know), whereas some other information may be a range of possibilities (hm, could be anywhere between -5 to 45...). As more information is made available, we can "narrow down" what we know.
  • Cells are connected together with propagators.
  • Information is (usually) bidirectional. For example, with the slope-intercept formula y = (m * x) + b, we don't need to just solve for y... we could solve for m, x, or b given the other information. Similarly, partial information can propagate. (A toy version of this example appears after this list.)
  • Contradictions are not allowed. Attempting to introduce contradictory information into the network will throw an exception.
  • We can "play with" different ideas via a Truth Maintenance System. What do we believe? Changes in our beliefs can result in changes to the generated topology of the network.
  • Debugging is quite possible. One of the goals of propagator networks is that you should be able to investigate and determine blame for a result. Relationships are clear and well defined. As Sussman says (roughly paraphrased), "if an autonomous car drives off the side of the road, I could sue the car manufacturer, but I'd rather sue the car... I want to hold it accountable for its decision making". The ability to hold accountability and determine blame stands in contrast to squishier systems like neural nets, genetic programs, etc (which are still useful, but not as easy to interrogate).

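To make this concrete, here is a toy sketch of cells and propagators in Guile Scheme. It is illustrative only: every name in it is made up, and the real propagator system also handles partial information and truth maintenance, which this toy omits. Cells start out knowing nothing, propagators watch their input cells and inform an output cell, and contradictory information raises an error.

(use-modules (srfi srfi-1))  ; for `any'

(define nothing 'nothing)
(define (nothing? x) (eq? x nothing))

(define (make-cell)
  ;; A cell accrues information; here, just `nothing' or one value.
  (let ((content nothing)
        (watchers '()))
    (lambda (msg . args)
      (case msg
        ((content) content)
        ((watch!) (set! watchers (cons (car args) watchers)))
        ((add-content!)
         (let ((value (car args)))
           (cond ((nothing? value) #t)
                 ((nothing? content)
                  (set! content value)
                  (for-each (lambda (run!) (run!)) watchers))
                 ((equal? content value) #t)  ; we already knew that
                 (else (error "Contradiction!" content value)))))))))

(define (content cell) (cell 'content))
(define (add-content! cell value) (cell 'add-content! value))

(define (propagator inputs output fn)
  ;; When every input has information, compute and inform the output.
  (define (run!)
    (let ((vals (map content inputs)))
      (unless (any nothing? vals)
        (add-content! output (apply fn vals)))))
  (for-each (lambda (cell) (cell 'watch! run!)) inputs)
  (run!))

;; Wire up y = (m * x) + b in all directions: three propagators share
;; four cells, so information can flow toward whichever cell lacks it.
(define m (make-cell)) (define x (make-cell))
(define b (make-cell)) (define y (make-cell))

(propagator (list m x b) y (lambda (m x b) (+ (* m x) b)))
(propagator (list y b x) m (lambda (y b x) (/ (- y b) x)))
(propagator (list y m x) b (lambda (y m x) (- y (* m x))))

(add-content! y 7)
(add-content! m 2)
(add-content! x 3)
(content b)  ; => 1, derived backwards from y, m, and x

Adding (add-content! b 2) after this would raise a contradiction, since the network has already derived that b is 1.
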
There are a lot of things that can be built with propagators, since they generalize constraint solving and reasoning: functional reactive UIs, type checkers, etc etc.

Bridging vats and propagators

The prototype implementations are written in Scheme. The good news is, this means we could implement propagators on top of something like Spritely Goblins.

However (and, granted, I haven't fully worked this out yet) I think there is one thing that is inaccurately described in Radul's thesis and Sussman's explanations, but which is actually no problem at all if we apply the vat model of computation (as in E, Agoric, Goblins): how distributed can these cells and propagators be? Section 2.1 of Radul's thesis explains propagators as asynchronous and completely autonomous, as if cells and their propagators could live anywhere on the computer network with no change in effectiveness. I think this is only partially true. The reference implementation actually does not fully explore this, because it uses a single-threaded event loop that processes events until there are no more to process, during which it may encounter a contradiction and raise it. However, I believe that the ability to "stop the presses", as it were, is one of the nicest features of propagators and should not be lost... if asynchronous events were introduced, multiple events might arrive at the same time and try to make changes to the propagator network in parallel.

Thankfully a nice answer comes in the form of the vat model: it should be possible to have a propagator network within a single vat. Spritely Goblins' implementation of the vat model is transactional, so if we try to introduce a contradiction, we can roll back immediately. This is the right behavior. As it turns out, this is very close to the way the propagator system is implemented in the reference implementation... I think the reference implementation did something more or less right while trying to do the simplest thing. Combined with a proper ocap vat model this should work great.
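
As a tiny illustration of that turn-level rollback (this is a self-contained sketch, not the Goblins API): run a whole batch of proposed additions against the current state, and keep the unchanged pre-turn state if any addition contradicts what is already known.

(use-modules (srfi srfi-1))  ; for `fold'

(define (run-turn state updates)
  ;; STATE and UPDATES are alists of (cell-name . value).  Returns the
  ;; new state, or the pre-turn state if a contradiction arises
  ;; anywhere in the batch.
  (catch 'contradiction
    (lambda ()
      (fold (lambda (update state)
              (let ((old (assq-ref state (car update))))
                (cond ((not old) (cons update state))
                      ((equal? old (cdr update)) state)
                      (else (throw 'contradiction (car update))))))
            state
            updates))
    (lambda _ state)))  ; roll back: the whole turn is discarded

(run-turn '((x . 1)) '((y . 2)))  ; => ((y . 2) (x . 1))
(run-turn '((x . 1)) '((x . 2)))  ; => ((x . 1)), turn aborted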

Thus, I believe that a propagator system (by which I mean a propagator network: a network of propagator-connected cells) should actually be vat-local. But wait, we talked about network (as in internet) based reasoning, and here I am advocating locality! What gives?

The right answer, it seems to me, is that propagator networks should be able to be hooked together: a change to a vat-contained propagator system can trigger message passing to another vat-contained propagator system, which can even happen over a computer network such as the internet. We will have to treat propagator systems and changes to them as vat-local, but they can still communicate with other propagator systems. (This is a good idea anyway; if you communicate an idea to me and it's inconsistent with my worldview, it is important for me to be able to realize that and use it as an opportunity to correct the misunderstandings between us.)

However, cells are still objects with classic object references. This means it is possible to hold onto one and use it as either a local or networked capability. Attenuation also composes nicely; it should be possible to produce a facet of a cell that only allows read access, or one that only allows adding information. It's clear that ocaps can be the right security model for the propagator model simply by observing that the propagator prototype system and Jonathan Rees' W7 security kernel are both written in Scheme.

This is all to say, if we built the propagator model on top of an ocap-powered vat model, we'd already have a good network communication model, a good security model, and a transactional model. Sounds great to me.

Best of all, a propagator system can live alongside normal actors. We don't have to choose one or the other... a multi-paradigm approach can work great.

Speaking the same language

One of the most important things in a system that communicates is that ideas should be able to be expressed and considered in such a way that both parties understand. Of course, humans do this, and we call it "language".

Certain primitives exist in our system already; for optimization reasons, we are unlikely to want to build numbers out of mere tallying (such as in Peano arithmetic); we instead build in primitives for integers and a means of combination for them. So we will of course want to have several primitive data types.

But at some point we will want to talk about concepts that are not encoded in the system. If I would like to tell you about a beautiful red bird I saw, where would I even begin? Well obviously at minimum, we will have to have ways of communicating ideas such as "red" and "bird". We will have to build a vocabulary together.

Natural language vocabulary has a way of becoming ambiguous fast. A "note" passed in class versus a "note" in a musical score versus that I would like to "note" a topic of interest to you are all different things.

Linked data (formerly "semantic web") folks have tried to use full URIs as a way to get around this problem. For instance, two ActivityPub servers which are communicating are very likely speaking about the same thing if they both use "https://www.w3.org/ns/activitystreams#Note", which is to say they are talking about some written note-like message (probably a (micro)blog post). This is not a guarantee; vocabulary drift is still possible, but it is much less likely.

Unfortunately, http(s)-based URIs are a poor choice for hosting vocabulary. Domains expire, websites go down, and choosing whether to extend a vocabulary in some namespace is (in the author's experience) a governance nightmare. A better option is "content-addressed vocabulary"; instead of "https://www.w3.org/ns/activitystreams#Note" we could instead simply take the text from the standard:

"Represents a short written work typically less than a single paragraph in length."

Hash that and you get "urn:sha256:54c14cbd844dc9ae3fa5f5f7b8c1255ee32f55b8afaba88ce983a489155ac398". No governance or liveness issues required. (Hashing mechanism upgrades, however, do pose some challenge; mapping old hashes to new ones for equivalence can be a partial solution.)
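
As a quick sketch (assuming the guile-gcrypt library's (gcrypt hash) and (gcrypt base16) modules, the same modules Guix uses for hashing):

(use-modules (gcrypt hash)          ; sha256
             (gcrypt base16)        ; bytevector->base16-string
             (rnrs bytevectors))    ; string->utf8

(define (term-uri definition)
  ;; Content-address a vocabulary term by hashing its definition text.
  (string-append "urn:sha256:"
                 (bytevector->base16-string
                  (sha256 (string->utf8 definition)))))

(term-uri
 "Represents a short written work typically less than a single paragraph in length.")
;; => "urn:sha256:54c14cbd844dc9ae3fa5f5f7b8c1255ee32f55b8afaba88ce983a489155ac398"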

This seems sufficient to me; groups can collaborate somewhere to hammer out the definition of some term, simply hash the definition of it, and use that as the terminology URI. This also avoids hazards from choosing a different edge of Zooko's Triangle for vocabulary.

Now that we have this, we can express advanced new ideas across the network and experiment with new terms. Better yet, we might even be able to use our propagator networks to associate ideas with them. I think in many systems, content-addressed vocabulary could be a good way to describe beliefs to be considered, accepted, or rejected in truth maintenance systems.

Cerealize me, cap'n!

One observation from Agoric is that it is possible to treat systems that do not resemble traditional live actor'y vats (for instance, blockchains) as vats and "machines" nonetheless, and to develop semantics for message passing between them (including promise resolution).

Similarly, above we observed that propagator systems can be built on top of actors; I believe it is also possible to describe propagator networks in terms of pure data. It should be possible to describe changes to a propagator network as a standard serialized ledger that can be transferred from place to place or replayed; one possible shape is sketched below.
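
One possible shape for such a ledger (purely illustrative; no such format is standardized, and all the operation names here are hypothetical) is plain S-expression data describing each change in order, which any party could replay to reproduce the network:

'((define-cell fahrenheit)
  (define-cell celsius)
  (attach-propagator fahrenheit->celsius (fahrenheit) (celsius))
  (add-content fahrenheit (interval 76 78)))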

In any case, the fact that interoperability with actors is possible is good and desirable, and thankfully provides a nice transitional place for experimentation (porting propagator model semantics to Spritely Goblins should not be hard).

Where to from here?

That's a lot of ideas above, but how likely is any of this stuff to be usable soon? I'm not anticipating dropping any current work to make this happen, but I probably will be experimenting in my upcoming UI work with having the UI powered by a propagator system (possibly a stripped-down version), so that the experimental seeds are in place to see whether such a system can be grown. I am not anticipating that we'll see anything like a fully distributed propagator system doing something interesting from my own network soon... but sometimes I end up surprised.

Closing the loop

I mentioned before that human brains are a combination of faster intuitive methods (resembling current work on neural nets) and slower, more calculating reasoning systems (resembling propagators or some logic programming languages). That's also to say nothing about the giant emotional soup that a mind/body tends to live in.

Realistically the emergence of a fully sapient system won't involve any of these systems independently, but rather a networked interconnection of many of them. I think the vat model of execution is a nice glue system for it; pulling propagators into the system could bring us one step closer, maybe.

Or maybe it's all just fantastical dreaming! Who knows. But it could be interesting to play and find out at some point... perhaps some day we can indeed get a proper brain into a vat.

16 February, 2020 08:55PM by Christopher Lemmer Webber

February 14, 2020

FSF Blogs

Register today for LibrePlanet -- or organize your own satellite instance

LibrePlanet started out as a gathering of Free Software Foundation (FSF) associate members, and has remained a community event ever since. We are proud to bring so many different people together to discuss the latest developments and the future of free software. We envision that some day there will be satellite instances all over the globe livestreaming our annual conference on technology and social justice -- and you can create your own today! All you need is a venue, a screen, and a schedule of LibrePlanet events, which we'll be releasing soon. This year, a free software supporter in Ontario, Canada, has confirmed an event, and we encourage you to host one, too.

Of course, ideally you'll be able to join us in person for LibrePlanet 2020: "Free the Future." If you can come, please register now to let us know -- FSF associate members attend gratis. We are looking forward to receiving the community at the newly confirmed Back Bay Events Center this year. We've put together some information on where to eat, sleep, and park in the vicinity of the new venue.

However, we know that not every free software enthusiast can make it to Boston, which is why we livestream the entire event. You can view it solo, with friends, or even with a large group of like-minded free software enthusiasts! It is a great opportunity to bring other people in your community together to view some of the foremost speakers in free software, including Internet Archive founder and Internet Hall of Famer Brewster Kahle.

We will also host an IRC instance, #libreplanet on Freenode, through which you can be in direct contact with the room monitors, who can relay any questions you may have about the talks going on here in Boston.

If you are working on getting a group of people together for the event, please let us and others know by announcing it on the LibrePlanet wiki and the LibrePlanet email list. If you have any questions, if you need any help organizing, if you'd like some free FSF sticker packs, or if you just want to let us know about a satellite instance, email us at campaigns@fsf.org. We look forward to receiving you here in Boston and all over the world.

LibrePlanet needs volunteers -- maybe you!

LibrePlanet has grown every year in size and scope -- and its continued success is thanks to dozens of volunteers who help prepare for and run the conference. Volunteering is a great way to meet fellow community members and contribute to LibrePlanet, even if you can't attend in person. And yes, remote volunteers are definitely needed to help us moderate IRC chat rooms -- you can help us out from anywhere in the world!

If you are interested in volunteering for LibrePlanet 2020, email resources@fsf.org. We thank all of our in-person volunteers by offering them gratis conference admission, lunch, and a LibrePlanet T-shirt.

Help others attend!

Take your support for LibrePlanet to the next level by helping others attend. We get a lot of requests from people internationally who would like to attend the event. We try to help as many as we can, and with your support, we can really put the "planet" in LibrePlanet.

We also hope that you'll spread the word about LibrePlanet 2020: write a blog, or take it to social media to let people know that you'll be there, using the hashtag #libreplanet.

We hope to see you in March!

14 February, 2020 04:44PM

February 13, 2020

Why freeing Windows 7 opens doors

Since its launch on January 24th, we've had an overwhelming amount of support in our call to "upcycle" Windows 7. Truthfully, the signature count grew far faster than we ever expected it to, despite our conservative (if aptly numbered) goal of 7,777 signatures. We have seen the campaign called quixotic and even "completely delusional," but in every case, people have recognized the "pragmatic idealism" that is at the core of the FSF's message. Even where this campaign has been attacked, it's nevertheless been understood that the FSF really does want all software to be free software. We recommend every fully free operating system that we are aware of, and want to be able to expand that list to include every operating system. So long as any remain proprietary, we will always work to free them.

Over the last few weeks, we have been carefully watching the press coverage, and are glad to see the message of software freedom popping up in so many places at once. We received a lot of support, and have responded to dozens of comments expressing support, concern, and even outrage over why the FSF would think that upcycling Windows 7 was a good idea, and why it was something we would want to demand.

Microsoft can free Windows. They already have all of the legal rights necessary or the leverage to obtain them. Whether they choose to do so or not is up to them. In the past weeks, we've given them the message that thousands of people around the world want Windows to be freed. Next, we'll give them the medium.

This afternoon we will be mailing an upcycled hard drive, along with the signatures, to Microsoft's corporate offices. Freeing Windows 7 is as easy as copying the source code onto that drive, giving it a license notice, and mailing it back to us. As the author of the most popular free software license in the world, we're ready to give them all of the help we can. All they have to do is ask.

We want them to show exactly how much love they have for the "open source" software they mention in their advertising. If they really do love free software -- and we're willing to give them the benefit of the doubt -- they have the opportunity to show it to the world. We hope they're not just capitalizing on the free software development model in the most superficial and exploitative way possible: by using it as a marketing tool to fool us into thinking that they care about our freedom.

Together, we've stood up for our principles. They can reject us, or ignore us, but what they cannot do is stop us. We'll go on campaigning, until all of us are free.

13 February, 2020 05:05PM

February 11, 2020

"I Love Free Software Day": Swipe (copy)left on dating apps

Every year, Free Software Foundation Europe (FSFE) encourages supporters to celebrate Valentine’s Day as “I Love Free Software Day,” a day for supporters to show their gratitude to the people who enable them to enjoy software freedom, including maintainers, contributors, and other activists. It seems appropriate on this holiday to once again address how seeking love on the Internet is, unfortunately, laden with landmines for your freedom and privacy. But today, I’m also going to make the argument that our community should think seriously about developing a freedom-respecting alternative.

Before we get started, though: make sure to show your love and gratitude for free software on February 14 and beyond! Share the graphic below with the hashtag #ilovefs:

fsfe free i love free software day banner

With that said: as you probably heard earlier this year, the hydra-headed Match Group, which divides its customers among Tinder, OKCupid, Match.com, Hinge, and several other dating brands, was revealed to be sharing user information in flagrant violation of privacy laws. OKCupid was caught sharing what was described as “highly personal data about sexuality, drug use, political views, and more,” and Grindr has been caught multiple times sharing users' HIV status. All of these apps also tell Facebook everything, whether a user has a profile or not (remember, even if you're not a user, you probably have a shadow Facebook profile!). This is typical behavior for modern technology companies, but the fact that it’s so ordinary makes it neither less ugly nor less flagrant.

Why do people put up with this? It isn’t that they don’t know that their personal information is being treated like candy tossed from a parade float: in 2014, Pew Research Center found that 91% of poll participants “agree or strongly agree that people have lost control over how personal information is collected and used by all kinds of entities.” A 2017 survey found that only 9% of social media users felt sure that Facebook and their ilk were protecting their data. And a 2017 Pew study led researchers to conclude that “a higher percentage of online participation certainly does not indicate a higher level of trust.” One anonymous commenter quipped, “People will expect data breaches, but will use online services anyway because of their convenience. It’s like when people accepted being mugged as the price of living in New York.”

It turns out that even if they're aware of how these companies are mistreating us, many people are making a cost-benefit analysis, and perceiving the benefits they get from these downright skeevy programs as valuable enough to be worth the ever-increasing exposure to the advertisers’ panopticon. As one anonymous Web and mobile developer from the Pew study said, “Being able to buy groceries when you’re commuting, talking with colleagues when doing a transatlantic flight, or simply ordering food for your goldfish right before skydiving will allow people to take more advantage of the scarcest good of our modern times: time itself.”

Here at the Free Software Foundation (FSF), we disagree strongly that the tradeoff is worth it, and it’s central to our mission to convince software users that letting developers pull their strings is destructive to their lives and dangerous to our society. When you use proprietary software, the program controls you, and the people who develop that program can use it as a tool to manipulate you in many absolutely terrifying ways. The same can also be true of services where the software is not distributed at all and is therefore neither free nor nonfree; but step one is to ditch all of the proprietary apps and JavaScript these companies try to get people to use.

Nevertheless, our battle is going to be an uphill one when a majority of people perceive conveniences to be worth the cost. In the case of dating Web sites, by 2015, 59% of people polled by Pew agreed that “online dating is a good way to meet people.” And it’s perceived, at least to some degree, as being effective: according to Pew, “nearly half of the public knows someone who uses online dating or who has met a spouse or partner via online dating.” eHarmony claimed, according to this 2019 article, that four percent of US marriages begin on their site, while a poll by The Knot found that twenty-two percent of spouses polled met online. (The eHarmony stats may be questionable, but as part of a sales pitch, it definitely works to draw people in.)

Conversely, the alternative to online dating doesn’t feel very rosy to an increasing number of people. The same poll on The Knot found that one in five couples polled were introduced in a more traditional way, through their personal network, which sounds terrific, except for one small problem: our IRL social networks are shrinking. In 2009, Psychology Today reported that 25% of Americans had not a single friend or family member they could count on, and that half of all Americans had nobody outside of their immediate family. So, how do you meet the elusive love of your life? It’s unsurprising that many people reluctantly choose the less obvious potential harms of OKCupid over the more tangible harms of isolation and loneliness. (After all, they’re not exactly trumpeting on their front page, “We’ll help you find a date, but in the meantime, we have information about what you’re into in bed, and we’ll give it to whoever we like!”)

This quandary sets up an extraordinarily unfair proposition: nobody should be forced to sacrifice their freedom in the name of a perceived shot at happiness. At the end of the day, we maintain that it’s not worth it, and you should keep Mark Zuckerberg as far away from your love life as possible, but I don’t think we should stop there, either. I believe that ethical, freedom-respecting online services that facilitate people’s social lives, from finding someone to date to staying in touch with friends far away, are an important social good, and that the free software movement has something unique and important to contribute.

Just as we have encouraged free software enthusiasts to move their social media presence from the walled gardens of Facebook to decentralized, federated services like Mastodon, GNU social, Pixelfed, and Diaspora, we would love to be able to point lovelorn free software supporters to an online dating site that will treat them like a human being rather than a commodity to be dissected into chunks of profitable data. So while we can’t endorse a project that’s barely gotten started at all, much less one that’s being built on Kickstarter, we were pleased to see a Redditor introduce the idea of Alovoa, which “aims to be the first widespread free software dating Web application on the Web.” Alovoa is licensed under AGPLv3, which is an excellent signpost for ethical behavior in the future.

Is Alovoa the solution? It’s far too early to say -- but we do know that the only acceptable solution will be a dating site that is 100% free software. And we also know that the free software community possesses the talent and conviction to make that alternative happen. When you’re freely permitted to use, share, study, modify, and share the modifications of the software you own, there are no shackles on your creativity: you can build the programs that you need, and make them available to everyone else who needs them. Perhaps we can solve the problem of how to find love online without sacrificing your privacy, and that’s only the beginning of the many problems we can solve. If we can build free software that offers ordinary people the conveniences they crave without the ethical tradeoffs, then someday, we will have a future where all software is free.

11 February, 2020 03:55PM

February 10, 2020

Christopher Allan Webber

State of Spritely for February 2020

We are now approximately 50% of the way through the Samsung Stack Zero grant for Spritely, and it has been only a few months more than that since I announced the Spritely project at all. I thought this would be a good opportunity to review what has happened so far and what's on the way.

In my view, quite a lot has happened over the course of the last year:

  • Datashards grew out of two Spritely projects, Magenc and Crystal. This provides the "secure storage layer" for the system, and by moving into Datashards has even become its own project (now mostly under the maintainership of Serge Wroclawski, who as it turns out is also co-host with me of Libre Lounge). There's external interest in this from the rest of the federated social web, and it was a topic of discussion in the last meeting of the SocialCG. While not as publicly visible recently, the project is indeed active; I am currently helping advise and assist Serge with some of the ongoing work on optimizations for smaller files, fixing the manifest format to permit larger files, and a more robust HTTP API for stores/registries. (Thank you, Serge, for taking on a large portion of this work and responsibility!)

  • Spritely Goblins, the actor model layer of Spritely, continues its development. We are now up to release v0.5. I don't consider the API to be stable, but it is stabilizing. In particular, the object/update model, the synchronous communication layer, and the transactional update support are all very close to stable. Asynchronous programming mostly works but has a few bugs I need to work out, and the distributed programming environment design is coming together enough where I expect to be able to demo it soon.

  • In addition, I have finally started to write docs for Spritely Goblins. I think the tutorial is fairly nice; I've had a good amount of review from various parties, and those who have tried it seem to agree. (Please be advised that it requires working with the dev branch of Goblins at the time of writing.) v0.6 should be the first release to have documentation after the major overhaul I did last summer (effectively an entire rewrite of the system, including many changes to the design after doing research into ocap practices). I cannot recommend that anyone else write production-level code using the system yet, but I hope that by the summer things will have congealed enough that this will change.

  • I have made a couple of publicly visible demos of Goblins' design. Weirdly enough all of these have involved ascii art.

    • The proto-version was the Let's Just Be Weird Together demo. Actually it's a bit strange to say this, because the LJBWT demo didn't use Goblins; it used a library called DOS/HURD. However, writing this library (and adapting it from DOS/Win) directly informed the rewrite of Goblins, Goblinoid, which eventually became Goblins itself, replacing all the old code. This is why I advocate demo-driven development: the right design of an architecture flows out of a demo of it. (Oh yeah, and uh, it also allowed me to make a present for my 10th wedding anniversary.)

    • Continuing in a similar vein, I made the "Season's Greetings" postcard, which Software Freedom Conservancy actually used in their funding campaign this year. This snowy scene used the new rewrite of Goblins and allowed me to try to push the new "become" feature of Goblins to its limit (the third principle of actor model semantics, taken very literally). It wasn't really obvious to anyone else that this was using Goblins in any interesting way, but I'll say that writing this really allowed me to congeal many things about the update layer, and it also led to uncovering a performance problem, which led to a 10x speedup. Having written this demo, I was starting to get the hang of things in the Goblins synchronous layer.

    • Finally there was the Terminal Phase demo. (See the prototype announcement blogpost and the 1.0 announcement.) This was originally designed as a reward for donors for hitting $500/mo on my Patreon account (you can still show up in the credits by donating!), though once 1.0 made it out the door it seems like it raised considerable excitement on the r/linux subreddit and on Hacker News, which was nice to see. Terminal Phase helped me finish testing and gaining confidence in the transactional object-update and synchronous call semantics of Spritely Goblins, and I now have no doubt that this layer has a good design. But I think Terminal Phase was the first time that other people could see why Spritely Goblins was exciting, especially once I showed off the time travel debugging demo in Terminal Phase. That last post led people to finally start pinging me asking "when can I use Spritely Goblins"? That's good... I'm glad it's obvious now that Goblins is doing something interesting (though the most interesting things are yet to be demo'ed).

  • I participated in, keynoted, and drummed up enthusiasm for ActivityPub Conference 2019. (I didn't organize though, that was Morgan Lemmer-Webber's doing, alongside Sebastian Lasse and with DeeAnn Little organizing the video recording.) We had a great speaker list and even got Mark S. Miller to keynote. Videos of the event are also available. While that event was obviously much bigger than Spritely, the engagement of the ActivityPub community is obviously important for its success.

  • Relatedly, I continue to co-chair the SocialCG but Nightpool has joined as co-chair which should relieve some pressure there, as I was a bit too overloaded to be able to handle this all on my own. The addition of the SocialHub community forum has also allowed the ActivityPub community to be able to coordinate in a way that does not rely on me being a blocker. Again, not Spritely related directly, but the health of the ActivityPub community is important to Spritely's success.

  • At Rebooting Web of Trust I coordinated with a number of contributors (including Mark Miller) on sketching out plans for secure UI designs. Sadly the paper is incomplete but has given me the framework for understanding the necessary UI components for when we get to the social network layer of Spritely.

  • Further along the lines of sketching out the desiderata of federated social networks, I have written a nearly-complete OcapPub: towards networks of consent. However, there are still some details to be figured out; I have been hammering them out on the cap-talk mailing list (see this post laying out a very ocappub-like design with some known problems, and then this analysis). The ocap community has thankfully been very willing to participate in working with me to hammer out the right security foundations, and I think we're close to the right design details. Of course, the proof of the pudding is in the demo, which has yet to be written.

Okay, so I hope I've convinced you that a lot has happened, and hopefully you feel that I am using my time reasonably well. But there is much, much, much ahead for Spritely to succeed in its goals. So, what's next?

  • I need to finish cleaning up the Goblins documentation and do a v0.6 release with it included. At that point I can start recommending some brave souls to use it for some simple applications.

  • A demo of Spritely Goblins working in a primarily asynchronous environment. This might simply be a port of mudsync as a first step. (Recorded demo of mudsync from a few years ago.) I'm not actually sure. The goal of this isn't to be the "right" social network design (not full OcapPub), just to test the async behaviors of Spritely Goblins. Like the synchronous demos that have already been done, the purpose of this is to congeal and ensure the quality of the async primitives. I expect this and the previous bullet point to be done within the next couple of months, so hopefully by the end of April.

  • Distributed networked programming in Goblins, and associated demo. May expand on the previous demo. Probably will come out about two months later, so end of June.

  • Prototype of the secure UI concepts from the aforementioned secure UIs paper. I expect/hope this to be usable by the end of the third quarter of 2020.

  • Somewhere in-between all this, I'd like to add a demo of being able to securely run untrusted code from third parties, maybe in the MUD demo. Not sure when yet.

  • All along, I continue to expect to push out new updates to Terminal Phase with more fun enemies and powerups to continue to reward donors to the Patreon campaign.

This will probably take most of this year. What you will notice is that this does not explicitly state a tie-in with the ActivityPub network. This is intentional, because the main goal of all the above demos is to prove more foundational concepts before they are all fully integrated. I think we'll see the full integration with the existing fediverse come together beginning in early 2021.

Anyway, that's a lot of stuff ahead. I haven't even mentioned my involvement in Libre Lounge, which I've been on hiatus from due to a health issue that has made recording difficult, and from being busy trying to deliver on these foundations, but I expect to be coming back to LL shortly.

I hope I have instilled you with some confidence that I am moving steadily along the abstract Spritely roadmap. (Gosh, I ought to finally put together a website for Spritely, huh?) Things are happening, and interesting ones I think.

But how do you think things are going? Maybe you would like to leave me feedback. If so, feel free to reach out.

Until next time...

10 February, 2020 09:30PM by Christopher Lemmer Webber

FSF Blogs

Thank you for supporting the FSF

On January 17th, we closed the Free Software Foundation (FSF)'s end of the year fundraiser and associate membership drive, bringing 368 new associate members to the FSF community.

This year's fundraiser began with a series of shareable images aiming to bring user freedom issues to the kitchen table, helping to start conversations about the impact that proprietary software has on the autonomy and privacy of our everyday lives. Your enthusiasm in sharing these has been inspiring. We also debuted the ShoeTool video, an animated short presenting a day in the life of an unfortunate elf who is duped into forking over his liberty for the sake of convenience. And we also sent out our biannual issue of the Free Software Bulletin, which had FSF staff writing on topics as diverse as ethical software licensing and online dating.

It is your support of the FSF that makes all of our work possible. Your generosity impacts us on a direct level. It doesn't just keep the lights on, but is also the source of our motivation to fight full-time for software freedom. Your support is at the heart of our work advocating for the use of copyleft and the GPL. It's also what brought seventeen new devices to the RYF program this year, and is what drives our campaigning against Digital Restrictions Management (DRM). We are deeply grateful for the new memberships and donations we have received this year, not to mention the existing members and recurring donors that have enabled us to reach this point. And not to worry, we're working hard to send you the premium gifts we offered as soon as possible!

2020 has started off strong already, with our petition calling on Microsoft to "upcycle" Windows 7 by releasing it as free software, which has reached more than 12,000 signatures in less than a week. And there is much more to come. The campaigns, tech, and licensing teams are all working on ambitious projects that we hope will drive the fight for freedom forward, especially as the FSF enters its 35th year of free software activism.

This year's LibrePlanet: "Free the Future" conference is almost upon us as well, and we're all putting our best into the planning process. LibrePlanet 2020 will see keynotes from speakers including Internet Archive founder Brewster Kahle, and there are still more surprises to come. We hope to see you there.

10 February, 2020 03:51PM

February 09, 2020

Andy Wingo

state of the gnunion 2020

Greetings, GNU hackers! This blog post rounds up GNU happenings over 2019. My goal is to celebrate the software we produced over the last year and to help us plan a successful 2020.

Over the past few months I have been discussing project health with a group of GNU maintainers and we were wondering how the project was doing. We had impressions, but little in the way of data. To that end I wrote some scripts to collect dates and versions for all releases made by GNU projects, as far back as data is available.

In 2019, I count 243 releases, from 98 projects. Nice! Notably, on ftp.gnu.org we have the first stable releases from three projects:

GNU Guix
GNU Guix is perhaps the most exciting project in GNU these days. It's a package manager! It's a distribution! It's a container construction tool! It's a package-manager-cum-distribution-cum-container-construction-tool! Hearty congratulations to Guix on their first stable release.
GNU Shepherd
The GNU Daemon Shepherd is a modern dependency-based init service, written in Guile Scheme, and used in Guix. When you install Guix as an operating system, it actually stages Scheme programs from the operating system definition into the Shepherd configuration. So cool!
GNU Backgammon
Version 1.06.002 is not GNU Backgammon's first stable release, but it is the earliest version which is available on ftp.gnu.org. Formerly hosted on the now-defunct gnubg.org, GNU Backgammon is a venerable foe, and uses neural networks since before they were cool. Welcome back, GNU Backgammon!

The total release counts above are slightly above what Mike Gerwitz's scripts count in his "GNU Spotlight", posted on the FSF blog. This could be because in addition to files released on ftp.gnu.org, I also manually collected release dates for most packages that upload their software somewhere other than gnu.org. I don't count alpha.gnu.org releases, and there were a handful of packages for which I wasn't successful at retrieving their release dates. But as a first approximation, it's a relatively complete data set.

I put my scripts in a git repository if anyone is interested in playing with the data. Some raw CSV files are there as well.

where we at?

Hair toss, check my nails, baby how you GNUing? Hard to tell!

To get us closer to an answer, I calculated the active package count per year. There can be other definitions, but my reading is that an active package is one that has had a stable release within the preceding 3 calendar years. So for 2019, for example, a GNU package is considered active if it had a stable release in 2017, 2018, or 2019. What I got was a graph that looks like this:

What we see is nothing before 1991 -- surely pointing to lacunae in my data set -- then a more or less linear rise in active package count until 2002, some stuttering growth rising to a peak in 2014 at 208 active packages, and from there a steady decline down to 153 active packages in 2019.
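
In code, the metric is simple. Here is a sketch over a made-up list of package/year release pairs (illustrative only, not my actual scripts):

(use-modules (srfi srfi-1))  ; filter-map, delete-duplicates

(define (active-packages releases year)
  ;; A package is active in YEAR if it made a stable release in
  ;; YEAR, YEAR-1, or YEAR-2.
  (delete-duplicates
   (filter-map (lambda (release)
                 (and (<= (- year 2) (cdr release) year)
                      (car release)))
               releases)))

(define example-releases
  '(("guix" . 2019) ("guile" . 2017) ("backgammon" . 2019)
    ("oleo" . 2001)))

(length (active-packages example-releases 2019))  ; => 3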

Of course, as a metric, active package count isn't precisely the same as project health; GNU ed is indeed the standard editor but it's not GCC. But we need to look for measurements that indirectly indicate project health and this is what I could come up with.

Looking a little deeper, I tabulated the first and last release date for each GNU package, and then grouped them by year. In this graph, the left blue bars indicate the number of packages making their first recorded release, and the right green bars indicate the number of packages making their last release. Obviously a last release in 2019 indicates an active package, so it's to be expected that we have a spike in green bars on the right.

What this graph indicates is that GNU had an uninterrupted growth phase from its beginning until 2006, with more projects being born than dying. Things are mixed until 2012 or so, and since then we see many more projects making their last release and above all, very few packages "being born".

where we going?

I am not sure exactly what steps GNU should take in the future but I hope that this analysis can be a good conversation-starter. I do have some thoughts but will post in a follow-up. Until then, happy hacking in 2020!

09 February, 2020 07:44PM by Andy Wingo

GNU Guix

Outreachy May 2020 to August 2020 Status Report I

We are happy to announce that for the fourth time GNU Guix offers a three-month internship through Outreachy, the inclusion program for groups traditionally underrepresented in free software and tech. We currently propose three subjects to work on:

  1. Create Netlink bindings in Guile.
  2. Improve internationalization support for the Guix Data Service.
  3. Integration of desktop environments into GNU Guix.

The initial application deadline is on Feb. 25, 2020 at 4PM UTC.

The final project list is announced on Feb. 25, 2020.

Should you have any questions regarding the internship, please check out the timeline, information about the application process, and the eligibility rules.

If you’d like to contribute to computing freedom, Scheme, functional programming, or operating system development, now is a good time to join us. Let’s get in touch on the mailing lists and on the #guix channel on the Freenode IRC network!

Last year we had the pleasure to welcome Laura Lazzati as an Outreachy intern working on documentation video creation, which led to the videos you can now see on the home page.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

09 February, 2020 02:30PM by Gábor Boskovits

February 08, 2020

Sylvain Beucler

Escoria - point-and-click system for the Godot engine

Escoria, the point-and-click system for the Godot game engine, is now working again with the latest Godot (3.2).

Godot is a general-purpose game engine. It comes with an extensive graphic editor with skeleton and animation support, and can create all sorts of games and mini-games, making it an interesting choice for point-and-clicks.

The Escoria point-and-click template notably provides a dialog system and the Esc language for writing the story and interactions. It was developed for the Dog Mendonça and Pizzaboy crowdfunded game and later released as free software. A community is developing the next version, but the current version had become incompatible with the current Godot engine. So I upgraded the game template, as well as the Escoria in Daïza tutorial game, to Godot 3.2. Enjoy!

HTML5 support is still lacking, so I might get a compulsive need to fix it in the future ;)

08 February, 2020 04:32PM

February 07, 2020

Andy Wingo

lessons learned from guile, the ancient & spry

Greets, hackfolk!

Like just about every year, last week I took the train up to Brussels for FOSDEM, the messy and wonderful carnival of free software and of those that make it. Mostly I go for the hallway track: to see old friends, catch up, scheme about future plans, and refill my hacker culture reserves.

I usually try to see if I can get a talk or two in, and this year was no exception. First on my mind was the recent release of Guile 3. This was the culmination of a 10-year plan of work and so obviously there are some things to say! But at the same time, I wanted to reflect back a bit and look at the past with a bit of distance.

So in the end, my one talk was two talks. Let's start with the first one. (I'm trying a new thing where I share my talks as blog posts. We'll see how this goes. I know the rendering can be a bit off relative to the slides, but hopefully it's good enough. If you prefer, you can just watch the video instead!)

Celebrating Guile 3

FOSDEM 2020, Brussels

Andy Wingo | wingo@igalia.com

wingolog.org | @andywingo

So yeah let's celebrate! I co-maintain the Guile implementation of Scheme. It's a programming language. Guile 3, in summary, is just Guile, but faster. We added a simple just-in-time compiler as well as a bunch of ahead-of-time optimizations. The result is that it runs faster -- sometimes by a lot!

In the image above you can see Guile 3's performance on a number of microbenchmarks, relative to Guile 2.2, sorted by speedup. The baseline is 1.0x as fast. You can see that besides the first couple microbenchmarks where things are a bit inconclusive, everything gets faster. Most are at least 2x as fast, and one benchmark is even 32x as fast. (Note the logarithmic scale on the Y axis.)

I only took a look at microbenchmarks at the end of the Guile 3 series; before that, I was mostly going by instinct. It's a relief to find out that in this case, my instincts did align with improvement.

mini-benchmark: eval

(primitive-eval
 '(let fib ((n 30))
    (if (< n 2)
        n
        (+ (fib (- n 1)) (fib (- n 2))))))

Guile 1.8: primitive-eval written in C

Guile 2.0+: primitive-eval in Scheme

Taking a look at a more medium-sized benchmark, let's compute the 30th fibonacci number, but using the interpreter instead of compiling the procedure. In Guile 2.0 and up, the interpreter (primitive-eval) is implemented in Scheme, so it's a good test of an important small Scheme program.

Before 2.0, though, primitive-eval was actually implemented in C. This had a number of disadvantages, notably that it prevented tail calls between interpreted and compiled code. When we switched to a Scheme implementation of primitive-eval, we knew we would have a performance hit, but we thought that we would gain it back eventually as the compiler got better.

As you can see, it took a while before the compiler and run-time improved to the point that primitive-eval in Scheme reached the speed of its old hand-tuned C implementation, but for Guile 3, we finally got there. Note again the logarithmic scale on the Y axis.

macro-benchmark: guix

guix build libreoffice ghc-pandoc guix \
  --dry-run --derivation

7% faster

guix system build config.scm \
  --dry-run --derivation

10% faster

Finally, taking a real-world benchmark, the Guix package manager is implemented entirely in Scheme. All ten thousand packages are defined in Scheme, the building scripts are in Scheme, the initial RAM disk is in Scheme -- you get the idea. Guile performance in Guix can have an important effect on user experience. As you can see, Guile 3 lowered elapsed time for some operations by around 10 percent or so. Of course there's a lot of I/O going on in addition to computation, so Guile running twice as fast will rarely make Guix run twice as fast (Amdahl's law and all that).

spry /sprī/

  • adjective: active; lively

So, when I was thinking about words that describe Guile, the word "spry" came to mind.

spry /sprī/

  • adjective: (especially of an old person) active; lively

But actually when I went to look up the meaning of "spry", Collins Dictionary says that it especially applies to the agèd. At first I was a bit offended, but I knew in my heart that the dictionary was right.

Lessons Learned from Guile, the Ancient & Spry

FOSDEM 2020, Brussels

Andy Wingo | wingo@igalia.com

wingolog.org | @andywingo

That leads me into my second talk.

guile is ancient

2010: Rust

2009: Go

2007: Clojure

1995: Ruby

1995: PHP

1995: JavaScript

1993: Guile (27 years before 3.0!)

It's common for a new project to be lively, but Guile is definitely not new. People have been born, raised, and earned doctorates in programming languages in the time that Guile has been around.

built from ancient parts

1991: Python

1990: Haskell

1990: SCM

1989: Bash

1988: Tcl

1988: SIOD

Guile didn't appear out of nothing, though. It was hacked up from the pieces of another Scheme implementation called SCM, which itself was initially based on Scheme in One Defun (SIOD), back before the Berlin Wall fell.

written in an ancient language

1987: Perl

1984: C++

1975: Scheme

1972: C

1958: Lisp

1958: Algol

1954: Fortran

1930s: λ-calculus (90 years ago!)

But it goes back further! The Scheme language, of which Guile is an implementation, dates from 1975, before I was born; and you can, if you choose, trace the lines back to the lambda calculus, created in the mid-30s as a notation for computation. I suppose at this point I should say the mid-1930s, to disambiguate.

The point is, Guile is old! Statistically, most software projects from olden times are now dead. How has Guile managed to survive and (sometimes) thrive? Surely there must be some lesson or other that can be learned here.

ancient & spry

Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past.

The tradition of all dead generations weighs like a nightmare on the brains of the living. [...]

Eighteenth Brumaire of Louis Bonaparte, Marx, 1852

I am no philosopher of history, but I know that there are some ways of looking at the past that do not help me understand things. One is the arrow of enlightened progress, in which events exist in a causal chain, each producing the next. It doesn't help me understand the atmosphere, tensions, and possibilities inherent at any particular point. I find the "progress" theory of history to be an extreme form of selection bias.

Much more helpful to me is the Hegelian notion of dialectics: that at any given point in time there are various tensions at work. In our field, an example could be memory safety versus systems programming. These tensions create an environment that favors actions that lead towards resolution of the tensions. It doesn't mean that there's only one way to resolve the tensions, and it's not an automatic process -- people still have to do things. But the tendency is to ratchet history forward to a new set of tensions.

The history of a project, to me, is then a process of dialectic tensions and resolutions. If the project survives, as Guile has, then it should teach us something about the way this process works in practice.

ancient & spry

Languages evolve; how to remain minimal?

Dialectic opposites

  • world and guile

  • stable and active

  • ...

Lessons learned from inside Hegel’s motor of history

One dialectic is the tension between the world's problems and what tools Guile offers to understand and solve them. In 1993, the web didn't really exist. In 2033, if Guile doesn't run well in a web browser, probably it will be dead. But this process operates very slowly, for an old project; Guile isn't built on CORBA or something ephemeral like that, so we don't have very much data here.

The tension between being a stable base for others to build on, and in being a dynamic project that improves and changes, is a key tension that this talk investigates.

In the specific context of Guile, and for the audience of the FOSDEM minimal languages devroom, we should recognize that for a software project, age and minimalism don't necessarily go together. Software gets features over time and becomes bigger. What does it mean for a minimal language to evolve?

hill-climbing is insufficient

Ex: Guile 1.8; Extend vs Embed

One key lesson that I have learned is that the strategy of making only incremental improvements is a recipe for death, in the long term. The natural result is that you reach what you perceive to be the most optimal state of your project. Any change can only make it worse, so you stop moving.

This is what happened to Guile around version 1.8: we had taken the paradigm of the interpreter as language implementation strategy as far as it could go. There were only around 150 commits to Guile in 2007. We were stuck.

users stay unless pushed away

Inertial factor: interface

  • Source (API)

  • Binary (ABI)

  • Embedding (API)

  • CLI

  • ...

Ex: Python 3; local-eval; R6RS syntax; set!, set-car!

So how do we make change, in such a circumstance? You could start a new project, but then you wouldn't have any users. It would be nice to change and keep your users. Fortunately, it turns out that users don't really go away; yes, they trickle out if you don't do anything, but unless you change in an incompatible way, they stay with you, out of inertia.

Inertia is good and bad. It does conflict with minimalism as a principle; if you were to design Scheme in 2020, you would not include mutable variables or even mutable pairs. But they are still with us because if we removed them, we'd break too many users.

Users can even make you add back things that you had removed. In Guile 2.0, we removed the capability to evaluate an expression at run-time within the lexical environment of an expression, as we didn't know how to implement this outside an interpreter. It turns out this was so important to users that we had to add local-eval back to Guile, later in the 2.0 series. (Fortunately we were able to do it in a way that layered on lower-level facilities; this approach reconciled me to the solution.)

you can’t keep all users

What users say: don’t change or remove existing behavior

But: sometimes losing users is OK. Hard to know when, though

No change at all == death

  • Natural result of hill-climbing

Ex: psyntax; BDW-GC mark & finalize; compile-time; Unicode / locales

Unfortunately, the need to change means that sometimes you will lose users. It's either a dead project, or losing users.

In Guile 1.8, for example, the macro expander ran lazily: it would only expand code the first time it ran it. This was good for start-up time, because not all code is evaluated in the course of a simple script. Lazy expansion allowed us to start doing important work sooner. However, this approach caused immense pain to people that wanted "proper" Scheme macros that preserved lexical scoping; the state of the art was to eagerly expand an entire file. So we switched, and at the same time added a notion of compile-time. This compromise kept good start-up time while allowing fancy macros.

But eager expansion was a change. Users that relied on side effects from macro expansion would see them at compile-time instead of run-time. Users of old "defmacros" that could previously splice in live Scheme closures as literals in expanded source could no longer do that. I think it was the right choice but it did lose some users. In fact I just got another bug report related to this 10-year-old change last week.
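
To make the compile-time/run-time split concrete, here is a small Guile sketch of my own (not from the talk; the helper and macro names are made up). With eager expansion, any helper a macro calls during expansion must be made available at expand time explicitly, via eval-when:

;; The helper must exist at expansion time, not just at run time.
(eval-when (expand load eval)
  (define (compute-at-expansion-time n)
    (* n n)))

(define-syntax square-of-two
  (lambda (stx)
    (syntax-case stx ()
      ((k)
       ;; Runs while the file is being compiled, under eager expansion.
       (datum->syntax #'k (compute-at-expansion-time 2))))))

(display (square-of-two)) ; prints 4
(newline)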

every interface is a cost

Guile binary ABI: libguile.so; compiled Scheme files

Make compatibility easier: minimize interface

Ex: scm_sym_unquote, GOOPS, Go, Guix

So if you don't want to lose users, don't change any interface. The easiest way to do this is to minimize your interface surface. In Go, for example, they mostly haven't had dynamic-linking problems because that's not a thing they do: all code is statically linked into binaries. Similarly, Guix doesn't define a stable API, because all of its code is maintained in one "monorepo" that can develop in lock-step.

You always have some interfaces, though. For example, Guix can't change its command-line interface from one day to the next, because users would complain. But it's been surprising to me the extent to which Guile has interfaces that I didn't consider. Recently, for example, in the 3.0 release we unexported some symbols by mistake. Users complained, so we're putting them back in now.

parallel installs for the win

Highly effective pattern for change

  • libguile-2.0.so

  • libguile-3.0.so

https://ometer.com/parallel.html

Changed ABI is new ABI; it should have a new name

Ex: make-struct/no-tail, GUILE_PKG([2.2]), libtool

So how does one do incompatible change? If "don't" isn't a sufficient answer, then parallel installs is a good strategy. For example in Guile, users don't have to upgrade to 3.0 until they are ready. Guile 2.2 happily installs in parallel with Guile 3.0.

As another small example, there's a function in Guile called make-struct (old doc link), whose first argument is the number of "tail" slots, followed by initializers for all slots (normal and "tail"). This tail feature is weird and I would like to remove it. Unfortunately I can't just remove the argument, so I had to make a new function, make-struct/no-tail, which exists in parallel with the old version that I can't break.
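
As an illustrative sketch of that pattern (hypothetical names, not Guile's actual struct code), the old entry point keeps its awkward signature while the replacement lives alongside it:

;; Old interface: the first argument is the number of extra "tail"
;; slots, which cannot simply be dropped without breaking callers.
(define (make-thing n-tail . inits)
  (cons 'thing (append inits (make-list n-tail #f))))

;; New interface, added in parallel under a new name; the old one
;; stays around for compatibility.
(define (make-thing/no-tail . inits)
  (apply make-thing 0 inits))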

deprecation facilitates migration

__attribute__ ((__deprecated__))
(issue-deprecation-warning
 "(ice-9 mapping) is deprecated."
 "  Use srfi-69 or rnrs hash tables instead.")
scm_c_issue_deprecation_warning
  ("Arbiters are deprecated.  "
   "Use mutexes or atomic variables instead.");

begin-deprecated, SCM_ENABLE_DEPRECATED

Fortunately there is a way to encourage users to migrate from old interfaces to new ones: deprecation. In Guile this applies to all of our interfaces (binary, source, etc). If a feature is marked as deprecated, we cause its use to issue a warning, ideally at compile-time when users responsible for the package can fix it. You can even add __attribute__((__deprecated__)) on C types!
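
As a minimal sketch of how a Guile module might keep a deprecated binding alive during its deprecation period, using the issue-deprecation-warning procedure shown above (the module and procedure names here are made up):

(define-module (my-lib compat)
  #:export (frobnicate old-frobnicate))

;; The replacement interface.
(define (frobnicate x)
  (* 2 x))

;; Deprecated alias: warn, then defer to the replacement.
(define (old-frobnicate x)
  (issue-deprecation-warning
   "old-frobnicate is deprecated.  Use frobnicate instead.")
  (frobnicate x))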

the arch-pattern

Replace, Deprecate, Remove

All change is possible; question is only length of deprecation period

Applies to all interfaces

Guile deprecation period generally one stable series

Ex: scm_t_uint8; make-struct; Foreign objects; uniform vectors

Finally, you end up in a situation where you have replaced the old interface and issued deprecation warnings to help users migrate. The next step is to remove the old interface. If you don't do this, you are failing as a project maintainer -- your project becomes literally unmaintainable as it just grows and grows.

This strategy applies to all changes. The deprecation period may last a while, and it may be that the replacement you built doesn't serve the purpose. There is still a dialog with the users that needs to happen. As an example, I made a replacement for the "SMOB" facility in Guile that allows users to define new types, backed by C interfaces. This new "foreign object" facility might not actually be good enough to replace SMOBs; since I haven't formally deprecated SMOBs, I don't know yet because users are still using the old thing!

change produces a new stable point

Stability within series: only additions

Corollary: dependencies must be at least as stable as you!

  • for your definition of stable

  • social norms help (GNU, semver)

Ex: libtool; unistring; gnulib

In my experience, the old management dictum that "the only constant is change" does not describe software. Guile changes, then it becomes stable for a while. You need an unstable series to escape hill-climbing; once you've found your new hill, you start climbing again in the stable series.

Once you reach your stable point, the projects you rely on need to exhibit the same degree of stability that you envision for your project. You can't build a web site that you expect to maintain for 10 years on technology that fundamentally changes every 6 months. But stable dependencies aren't something you can ensure technically; rather, stability relies on social norms about who makes the software you use.

who can crank the motor of history?

All libraries define languages

Allow user to evolve the language

  • User functionality: modules (Guix)

  • User syntax: macros (yay Scheme)

Guile 1.8 perf created tension

  • incorporate code into Guile

  • large C interface “for speed”

Compiler removed pressure on C ABI

Empowered users need less from you

A dialectic process does not progress on its own: it requires actions. As a project maintainer, some of my actions are because I want to do them. Others are because users want me to do them. The user-driven actions are generally a burden and as a lazy maintainer, I want to minimize them.

Here I think Guile has to a large degree escaped some of the pressures that weigh on other languages, for example Python. Because Scheme allows users to define language features that exist on par with "built-in" features, users don't need my approval or intervention to add (say) new syntax to the language they work in. Furthermore, their work can still compose with the work of others, even if the others don't buy in to their language extensions.
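
To make that concrete, here is a tiny example of my own (Guile already has unless built in; the point is that a user could have added it without any maintainer intervention):

;; User-defined syntax sits on par with built-in syntax.
(define-syntax-rule (my-unless test body ...)
  (if test #f (begin body ...)))

(my-unless (= 1 2)
  (display "one is not two\n"))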

Still, Guile 1.8 did have a dynamic whereby the relatively poor performance of having to run all code through primitive-eval meant that users were pushed towards writing extensions in C. This in turn pushed Guile to expose all of its guts for access from C, which obviously has led to an overbloated C API and ABI. Happily the work on the Scheme compiler has mostly relieved this pressure, and we may therefore be able to trim the size of the C API and ABI over time.

contributions and risk

From maintenance point of view, all interface is legacy

Guile: Sometimes OK to accept user modules when they are more stable than Guile

In-tree users keep you honest

Ex: SSAX, fibers, SRFI

It can be a good strategy to "sediment" solutions to common use cases into Guile itself. This can improve the minimalism of an entire ecosystem of code. The maintenance burden has to be minimal, however; Guile has sometimes adopted experimental code into its repository, and without active maintenance, it soon becomes stale relative to what users and the module maintainers expect.

I would note an interesting effect: pieces of code that were adopted into Guile become a snapshot of the coding style at that time. It's useful to have some in-tree users because it gives you a better idea about how a project is seen from the outside, from a code perspective.

sticky bits

Memory management is an ongoing thorn

Local maximum: Boehm-Demers-Weiser conservative collector

How to get to precise, generational GC?

Not just Guile; e.g. CPython __del__

There are some points that resist change. The stickiest of these is the representation of heap-allocated Scheme objects in C. Guile currently uses a garbage collector that "automatically" finds all live Scheme values on the C stack and in registers. It was the right choice at the time, given our maintenance budget. But to get the next bump in performance, we need to switch to a generational garbage collector. It's hard to do that without a lot of pain to C users, essentially because the C language is too weak to express the patterns that we would need. I don't know how to proceed.

I would note, though, that memory management is a kind of cross-cutting interface, and that it's not just Guile that's having problems changing; I understand PyPy has had a lot of problems regarding changes in when Python destructors get called due to its switch from reference counting to a proper GC.

future

We are here: stability

And then?

  • Parallel-installability for source languages: #lang

  • Sediment idioms from Racket to evolve Guile user base

Remove myself from “holding the crank”

So where are we going? Nowhere, for the moment; or rather, up the hill. We just released Guile 3.0, so let's just appreciate that for the time being.

But as far as next steps in language evolution, I think in the short term they are essentially to further enable change while further sedimenting good practices into Guile. On the change side, we need parallel installability for entire languages. Racket did a great job facilitating this with #lang and we should just adopt that.

As for sedimentation, we should step back and ask whether any common Guile use patterns built by our users should be included in core Guile, and widen our gaze to Racket as well. It will take some effort, both from a technical perspective and in building social/emotional consensus about how much change is good and how bold versus conservative to be: putting the dialog into dialectic.

dialectic, boogie woogie woogie

https://gnu.org/s/guile

https://wingolog.org/

#guile on freenode

@andywingo

wingo@igalia.com

Happy hacking!

Hey that was the talk! Hope you enjoyed the writeup. Again, video and slides available on the FOSDEM web site. Happy hacking!

07 February, 2020 11:38AM by Andy Wingo

February 06, 2020

FSF News

GNU-FSF cooperation update

The Free Software Foundation and the GNU Project leadership are defining how these two separate groups cooperate. Our mutual aim is to work together as peers, while minimizing change in the practical aspects of this cooperation, so we can advance in our common free software mission.

Alex Oliva, Henry Poole and John Sullivan (board members or officers of the FSF), and Richard Stallman (head of the GNU Project), have been meeting to develop a general framework which will serve as the foundation for further discussion about specific areas of cooperation. Together we have been considering the input received from the public on fsf-and-gnu@fsf.org and gnu-and-fsf@gnu.org. We urge people to send any further input by February 13, because we expect to finish this framework soon.

This joint announcement can also be read on https://www.gnu.org/gnu/2020-announcement-1.html.

06 February, 2020 10:00PM

February 05, 2020

screen @ Savannah

GNU Screen v.4.8.0

I'm announcing the availability of GNU Screen v.4.8.0

Screen is a full-screen window manager that multiplexes a physical
terminal between several processes, typically interactive shells.

This release
  * Improves startup time by only polling for already open files to
    close
  * Fixes:
       - Fix for segfault if termcap doesn't have Km entry
       - Make screen exit code be 0 when checking --version
       - Fix potential memory corruption when using OSC 49

The last fix addresses a potential memory overwrite of quite a big
size (~768 bytes); even though I'm not sure about the potential
exploitability of that issue, I highly recommend that everyone upgrade
as soon as possible. This issue has been present at least since
v.4.2.0 (I haven't checked earlier versions).
Thanks to pippin who brought this to my attention.

For a full list of changes see
https://git.savannah.gnu.org/cgit/screen.git/log/?h=v.4.8.0

The release is available for download at:
https://ftp.gnu.org/gnu/screen/
or from your closest mirror (which may have some delay):
https://ftpmirror.gnu.org/screen/

Please report any bugs or regressions.

05 February, 2020 08:48PM by Amadeusz Sławiński

February 02, 2020

Applied Pokology

Hyperlink Support in GNU Poke

FOSDEM 2020 is over, and hyperlink support has just landed for GNU Poke!

Wait, Hyperlinks!?

What do hyperlinks, a web concept, mean for GNU Poke, a terminal application?

For many years now, terminal emulators have been detecting http:// URLs in the output of any program and giving the user a chance to click on them and immediately navigate to the corresponding web page. In 2017, Egmont Koblinger made a proposal for supporting general hyperlinks in terminal emulators. Gnome Terminal, iTerm and a few other terminal emulators have already implemented this proposal in their latest releases. With Egmont's proposal, an application can emit any valid URI and have the terminal emulator take the user to that resource.
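
For the curious, here is a rough sketch of what emitting such a hyperlink looks like, written in Guile Scheme purely for illustration (poke itself is written in C, so this is not poke's code; the byte sequence follows Egmont's OSC 8 proposal):

(define ESC (string (integer->char 27)))  ; escape character
(define BEL (string (integer->char 7)))   ; terminator accepted by most emulators

;; Emit: OSC 8 ; params ; URI  BEL  link-text  OSC 8 ; ; BEL
(define (hyperlink uri text)
  (string-append ESC "]8;;" uri BEL
                 text
                 ESC "]8;;" BEL))

(display (hyperlink "https://www.gnu.org/software/poke/" "GNU poke"))
(newline)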

02 February, 2020 12:00AM

February 01, 2020

libc @ Savannah

The GNU C Library version 2.31 is now available

The GNU C Library
=================

The GNU C Library version 2.31 is now available.

The GNU C Library is used as the C library in the GNU system and
in GNU/Linux systems, as well as many other systems that use Linux
as the kernel.

The GNU C Library is primarily designed to be a portable
and high performance C library.  It follows all relevant
standards including ISO C11 and POSIX.1-2017.  It is also
internationalized and has one of the most complete
internationalization interfaces known.

The GNU C Library webpage is at http://www.gnu.org/software/libc/

Packages for the 2.31 release may be downloaded from:
        http://ftpmirror.gnu.org/libc/
        http://ftp.gnu.org/gnu/libc/

The mirror list is at http://www.gnu.org/order/ftp.html

NEWS for version 2.31
=====================

Major new features:

  • The GNU C Library now supports a feature test macro _ISOC2X_SOURCE
  to enable features from the draft ISO C2X standard.  Only some
  features from this draft standard are supported by the GNU C
  Library, and as the draft is under active development, the set of
  features enabled by this macro is liable to change.  Features from
  C2X are also enabled by _GNU_SOURCE, or by compiling with "gcc
  -std=gnu2x".

  • The <math.h> functions that round their results to a narrower type
  now have corresponding type-generic macros in <tgmath.h>, as defined
  in TS 18661-1:2014 and TS 18661-3:2015 as amended by the resolution
  of Clarification Request 13 to TS 18661-3.

  • The function pthread_clockjoin_np has been added, enabling join with
  a terminated thread with a specific clock.  It allows waiting
  against CLOCK_MONOTONIC and CLOCK_REALTIME.  This function is a GNU
  extension.

  • New locale added: mnw_MM (Mon language spoken in Myanmar).
  • The DNS stub resolver will optionally send the AD (authenticated
  data) bit in queries if the trust-ad option is set via the options
  directive in /etc/resolv.conf (or if RES_TRUSTAD is set in
  _res.options).  In this mode, the AD bit, as provided by the name
  server, is available to applications which call res_search and
  related functions.  In the default mode, the AD bit is not set in
  queries, and it is automatically cleared in responses, indicating a
  lack of DNSSEC validation.  (Therefore, the name servers and the
  network path to them are treated as untrusted.)

Deprecated and removed features, and other changes affecting
compatibility:

  • The totalorder and totalordermag functions, and the corresponding
  functions for other floating-point types, now take pointer arguments
  to avoid signaling NaNs possibly being converted to quiet NaNs in
  argument passing.  This is in accordance with the resolution of
  Clarification Request 25 to TS 18661-1, as applied for C2X.
  Existing binaries that pass floating-point arguments directly will
  continue to work.

  • The obsolete function stime is no longer available to newly linked
  binaries, and its declaration has been removed from <time.h>.
  Programs that set the system time should use clock_settime instead.

  • We plan to remove the obsolete function ftime, and the header
  <sys/timeb.h>, in a future version of glibc.  In this release, the
  header still exists but calling ftime will cause a compiler warning.
  All programs should use gettimeofday or clock_gettime instead.

  • The gettimeofday function no longer reports information about a
  system-wide time zone.  This 4.2-BSD-era feature has been deprecated
  for many years, as it cannot handle the full complexity of the
  world's timezones, but hitherto we have supported it on a
  best-effort basis.  Changes required to support 64-bit time_t on
  32-bit architectures have made this no longer practical.

  As of this release, callers of gettimeofday with a non-null 'tzp'
  argument should expect to receive a 'struct timezone' whose
  tz_minuteswest and tz_dsttime fields are zero.  (For efficiency
  reasons, this does not always happen on a few Linux-based ports.
  This will be corrected in a future release.)

  All callers should supply a null pointer for the 'tzp' argument to
  gettimeofday.  For accurate information about the time zone
  associated with the current time, use the localtime function.

  gettimeofday itself is obsolescent according to POSIX.  We have no
  plans to remove access to this function, but portable programs
  should consider using clock_gettime instead.

  • The settimeofday function can still be used to set a system-wide
  time zone when the operating system supports it.  This is because
  the Linux kernel reused the API, on some architectures, to describe
  a system-wide time-zone-like offset between the software clock
  maintained by the kernel, and the "RTC" clock that keeps time when
  the system is shut down.

  However, to reduce the odds of this offset being set by accident,
  settimeofday can no longer be used to set the time and the offset
  simultaneously.  If both of its two arguments are non-null, the call
  will fail (setting errno to EINVAL).

  Callers attempting to set this offset should also be prepared for
  the call to fail and set errno to ENOSYS; this already happens on
  the Hurd and on some Linux architectures.  The Linux kernel
  maintainers are discussing a more principled replacement for the
  reused API.  After a replacement becomes available, we will change
  settimeofday to fail with ENOSYS on all platforms when its 'tzp'
  argument is not a null pointer.

  settimeofday itself is obsolescent according to POSIX.  Programs
  that set the system time should use clock_settime and/or the adjtime
  family of functions instead.  We may cease to make settimeofday
  available to newly linked binaries after there is a replacement for
  Linux's time-zone-like offset API.

  • SPARC ISA v7 is no longer supported.  v8 is still supported, but
  only if the optional CAS instruction is implemented (for instance,
  LEON processors are still supported, but SuperSPARC processors are
  not).

  As the oldest 64-bit SPARC ISA is v9, this only affects 32-bit
  configurations.

  • If a lazy binding failure happens during dlopen, during the
  execution of an ELF constructor, the process is now terminated.
  Previously, the dynamic loader would return NULL from dlopen, with
  the lazy binding error captured in a dlerror message.  In general,
  this is unsafe because resetting the stack in an arbitrary function
  call is not possible.

  • For MIPS hard-float ABIs, the GNU C Library will be configured to
  need an executable stack unless explicitly configured at build time
  to require minimum kernel version 4.8 or newer.  This is because
  executing floating-point branches on a non-executable stack on Linux
  kernels prior to 4.8 can lead to application crashes for some MIPS
  configurations. While PT_GNU_STACK is currently not widely used on
  MIPS, future releases of GCC are expected to enable a non-executable
  stack by default with PT_GNU_STACK, which is thus likely to trigger
  a crash on older kernels.

  The GNU C Library can be built with --enable-kernel=4.8.0 in order
  to keep a non-executable stack while dropping support for older
  kernels.

  • System call wrappers for time system calls now use the new time64
  system calls when available. On 32-bit targets, these wrappers
  attempt to call the new system calls first and fall back to the
  older 32-bit time system calls if they are not present.  This may
  cause issues in environments that cannot handle unsupported system
  calls gracefully by returning -ENOSYS. Seccomp sandboxes are
  affected by this issue.

Changes to build and runtime requirements:

  • It is no longer necessary to have recent Linux kernel headers to
  build working (non-stub) system call wrappers on all architectures
  except 64-bit RISC-V.  64-bit RISC-V requires a minimum kernel
  headers version of 5.0.

  • The ChangeLog file is no longer present in the toplevel directory of
  the source tree.  ChangeLog files are located in the ChangeLog.old
  directory as ChangeLog.N where the highest N has the latest entries.

Security related changes:

  CVE-2019-19126: ld.so failed to ignore the LD_PREFER_MAP_32BIT_EXEC
  environment variable during program execution after a security
  transition, allowing local attackers to restrict the possible
  mapping addresses for loaded libraries and thus bypass ASLR for a
  setuid program.  Reported by Marcin Kościelnicki.

The following bugs are resolved with this release:

  [12031] localedata: iconv -t ascii//translit with Greek characters
  [15813] libc: Multiple issues in __gen_tempname
  [17726] libc: [arm, sparc] profil_counter should be compat symbol
  [18231] libc: ipc_perm struct's mode member has wrong type in
    sys/ipc.h
  [19767] libc: vdso is not used with static linking
  [19903] hurd: Shared mappings not being inherited by children
    processes
  [20358] network: RES_USE_DNSSEC sets DO; should also have a way to set
    AD
  [20839] dynamic-link: Incomplete rollback of dynamic linker state on
    linking failure
  [23132] localedata: Missing transliterations in Miscellaneous
    Mathematical Symbols-A/B Unicode blocks
  [23518] libc: Eliminate __libc_utmp_jump_table
  [24026] malloc: malloc_info() returns wrong numbers
  [24054] localedata: Many locales are missing date_fmt
  [24214] dynamic-link: user defined ifunc resolvers may run in ldd mode
  [24304] dynamic-link: Lazy binding failure during ELF
    constructors/destructors is not fatal
  [24376] libc: RISC-V symbol size confusion with _start
  [24682] localedata: zh_CN first weekday should be Monday per GB/T
    7408-2005
  [24824] libc: test-in-container does not install charmap files
    compatible with localedef
  [24844] regex: regex bad pointer / leakage if malloc fails
  [24867] malloc: Unintended malloc_info formatting changes
  [24879] libc: login: utmp alarm timer can arrive after lock
    acquisition
  [24880] libc: login: utmp implementation uses struct flock with
    fcntl64
  [24882] libc: login: pututline uses potentially outdated cache
  [24899] libc: Missing nonstring attributes in <utmp.h>, <utmpx.h>
  [24902] libc: login: Repeating pututxline on EINTR/EAGAIN causes stale
    utmp entries
  [24916] dynamic-link: [MIPS] Highest EI_ABIVERSION value not raised to
    ABSOLUTE ABI
  [24930] dynamic-link: dlopen of PIE executable can result in
    _dl_allocate_tls_init assertion failure
  [24950] localedata: Top-of-tree glibc does not build with top-of-tree
    GCC (stringop-overflow error)
  [24959] time: librt IFUNC resolvers for clock_gettime and other
    clock_* functions can lead to crashes
  [24967] libc: jemalloc static linking causes runtime failure
  [24986] libc: alpha: new getegid, geteuid and getppid syscalls used
    unconditionally
  [25035] libc: sbrk() failure handled poorly in tunables_strdup
  [25087] dynamic-link: ldconfig mishandles unusual .dynstr placement
  [25097] libc: new -Warray-bounds with GCC 10
  [25112] dynamic-link: dlopen must not make new objects accessible when
    it still can fail with an error
  [25139] localedata: Please add the new mnw_MM locale
  [25149] regex: Array bounds violation in proceed_next_node
  [25157] dynamic-link: Audit cookie for the dynamic loader is not
    initialized correctly
  [25189] libc: glibc's __glibc_has_include causes issues with clang
    -frewrite-includes
  [25194] malloc: malloc.c: do_set_mxfast incorrectly casts the mallopt
    value to an unsigned
  [25204] dynamic-link: LD_PREFER_MAP_32BIT_EXEC not ignored in setuid
    binaries (CVE-2019-19126)
  [25225] libc: ld.so fails to link on x86 if GCC defaults to -fcf-
    protection
  [25226] string: strstr: Invalid result if needle crosses page on s390-
    z15 ifunc variant.
  [25232] string: <string.h> does not enable const correctness for
    strchr et al. for Clang++
  [25233] localedata: Consider "." as the thousands separator for sl_SI
    (Slovenian)
  [25241] nptl: __SIZEOF_PTHREAD_MUTEX_T defined twice for x86
  [25251] build: Failure to run tests when CFLAGS contains -DNDEBUG.
  [25271] libc: undeclared identifier PTHREAD_MUTEX_DEFAULT when
    compiling with -std=c11
  [25323] localedata: km_KH: d_t_fmt contains "m" instead of "%M"
  [25324] localedata: lv_LV: d_t_fmt contains suspicious words in the
    time part
  [25396] dynamic-link: Failing dlopen can leave behind dangling GL
    (dl_initfirst) link map pointer
  [25401] malloc: pvalloc must not have __attribute_alloc_size__
  [25423] libc: Array overflow in backtrace on powerpc
  [25425] network: Missing call to __resolv_context_put in
    getaddrinfo.c:gethosts

Release Notes
=============

https://sourceware.org/glibc/wiki/Release/2.31

Contributors
============

This release was made possible by the contributions of many people.
The maintainers are grateful to everyone who has contributed changes
or bug reports.  These include:

Adhemerval Zanella
Alexandra Hájková
Alistair Francis
Andreas Schwab
Andrew Eggenberger
Arjun Shankar
Aurelien Jarno
Carlos O'Donell
Chung-Lin Tang
DJ Delorie
Dmitry V. Levin
Dragan Mladjenovic
Egor Kobylkin
Emilio Cobos Álvarez
Emilio Pozuelo Monfort
Feng Xue
Florian Weimer
Gabriel F. T. Gomes
Gustavo Romero
H.J. Lu
Ian Kent
James Clarke
Jeremie Koenig
John David Anglin
Joseph Myers
Kamlesh Kumar
Krzysztof Koch
Leandro Pereira
Lucas A. M. Magalhaes
Lukasz Majewski
Marcin Kościelnicki
Matheus Castanho
Mihailo Stojanovic
Mike Crowe
Mike FABIAN
Niklas Hambüchen
Paul A. Clarke
Paul Eggert
Petr Vorel
Rafal Luzynski
Rafał Lużyński
Rajalakshmi Srinivasaraghavan
Raoni Fassina Firmino
Richard Braun
Samuel Thibault
Sandra Loosemore
Siddhesh Poyarekar
Stefan Liebler
Svante Signell
Szabolcs Nagy
Talachan Mon
Thomas Schwinge
Tim Rühsen
Tulio Magno Quites Machado Filho
Wilco Dijkstra
Xuelei Zhang
Zack Weinberg
liqingqing

01 February, 2020 01:31PM by Siddhesh Poyarekar

January 30, 2020

FSF News

Libiquity Wi-Fri ND2H Wi-Fi card now FSF-certified to Respect Your Freedom

wi-fri nd2h wifi card

BOSTON, Massachusetts, USA -- Thursday, January 30, 2020 -- The Free Software Foundation (FSF) today awarded Respects Your Freedom (RYF) certification to the Libiquity dual-band 802.11a/b/g/n Wi-Fi card, from Libiquity LLC. The RYF certification mark means that Libiquity's distribution of this device meets the FSF's standards in regard to users' freedom, control over the product, and privacy.

Libiquity currently sells this device as part of its previously-certified Taurinus X200 laptop. Technoethical also offers the same hardware with their RYF-certified Technoethical N300DB Dual Band Wireless Card. With today's certification, Libiquity is able to sell the Libiquity Wi-Fri ND2H Wi-Fi card as a stand-alone product for the first time, and now has two RYF-certified devices available.

"In the years since first joining the RYF program, we at Libiquity have worked to improve and expand our catalog. For anyone looking to join distant or congested 2.4-GHz or 5-GHz wireless networks, the Wi-Fri ND2H is a great internal Wi-Fi card for laptops, desktops, servers, single-board computers, and more. Most importantly, in an era when more and more hardware disrespects your freedom, we're proud to offer a Wi-Fi card branded with the RYF logo on the product itself, as a trusted symbol of its compatibility with free software such as GNU Linux-libre," said Patrick McDermott, Founder and CEO, Libiquity LLC.

With this certification, the total number of RYF-certified wireless adapters grows to thirteen. The Libiquity Wi-Fri ND2H Wi-Fi card enables users to have wireless connectivity without having to rely on nonfree drivers or firmware.

"We are especially glad to see the certification mark printed directly on the product. While not a requirement of the program, this helps us get closer to the world we are aiming for, where people shopping can immediately and easily see what products are best for their freedom," said the FSF's executive director, John Sullivan.

Like other previously certified peripheral devices, the Libiquity Wi-Fri ND2H Wi-Fi card was tested using an FSF-endorsed GNU/Linux distro to ensure that it works using only free software. The device does not ship with any software included, as all the free software needed is already provided by fully free distributions.

"Expanding the availability of hardware that works with fully free systems like Trisquel GNU/Linux is always something to celebrate. It's great to see Libiquity offering this device as a stand-alone product so that users can customize and upgrade their own setup," said the FSF's licensing and compliance manager, Donald Robertson, III.

To learn more about the Respects Your Freedom certification program, including details on the certification of this Libiquity device, please visit https://ryf.fsf.org.

Retailers interested in applying for certification can consult https://ryf.fsf.org/about/vendors.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

libiquity logo

About Libiquity

Founded by CEO Patrick McDermott, Libiquity is a privately held New Jersey, USA company that provides world-class technologies which put customers in control of their computing. The company develops and sells electronics products, provides firmware and embedded systems services, and leads the development of the innovative and flexible ProteanOS embedded operating system. More information about Libiquity and its offerings can be found on its Web site at https://www.libiquity.com.

Media Contacts

Donald Robertson, III
Licensing and Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Patrick McDermott
Founder and CEO
Libiquity LLC
info@libiquity.com

30 January, 2020 08:55PM

January 28, 2020

FSF Blogs

LibrePlanet 2020 needs you: Volunteer today!

Volunteer at LibrePlanet 2020

The LibrePlanet 2020 conference is coming very soon, on March 14 and 15 at the Back Bay Events Center in Boston, and WE NEED YOU to make the world's premier gathering of free software enthusiasts a success.

Volunteers are needed for several different tasks at LibrePlanet, from an audio/visual crew to point cameras and adjust microphones, to room monitors to introduce speakers, to a set-up and clean-up crew to make our conference appear and disappear at the Event Center, and more! You can volunteer for as much or as little time as you like, whether you choose to help out for an hour or two, or the entirety of both days. Either way, we'll provide you with a VERY handsome LibrePlanet 2020 shirt in your size, in addition to free admission to the entire conference, lunch, and our eternal gratitude.

Excited? If you're ready to help put on an excellent conference, we are more than ready to show you how. One important step is to come to an in-person training and info session at the Free Software Foundation office, in downtown Boston. We have scheduled six training sessions beginning late February; the last one is the afternoon of the day immediately before LibrePlanet, which is perfect for people arriving from far away for the event. Please come to one if you can! Some volunteer tasks (room monitors, A/V crew) require more training than others, but there are some important things we need all volunteers to know, and attending a training will ensure that you're fully informed. The schedule for trainings is at the bottom of this post.

You're interested? Wonderful. Please write to resources@fsf.org. Let me know your T-shirt size (we'll have unisex S-XXXXL and fitted S-XXXL) and which training you can make it to. You can certainly volunteer without making it to a training -- I'll send you some info via email -- but your role may be a little less glamorous. Please also feel free to contact me with any questions or suggestions you may have; I will respond eagerly to your queries.

THANK YOU for supporting the Free Software Foundation and THANK YOU for volunteering for an excellent LibrePlanet!


LIBREPLANET 2020 VOLUNTEER TRAINING & INFO SESSION DATES:

All except one of these take place from 6 PM to 8 PM at the FSF office, 51 Franklin Street, Fifth floor, Downtown Crossing, Boston:

  • Wednesday, February 19
  • Tuesday, February 25
  • Thursday, February 27 (includes A/V training)
  • Wednesday, March 4 (includes A/V training)
  • Tuesday, March 10 (includes A/V training)
  • Friday, March 13: This is an afternoon session for people coming to town late, starting at 3 PM! It will also be at the FSF office, prior to the Friday night open house.

28 January, 2020 04:27PM

recutils @ Savannah

Pre-release 1.8.90 in alpha.gnu.org

The pre-release recutils-1.8.90.tar.gz is now available at ftp://alpha.gnu.org/gnu/recutils/recutils-1.8.90.tar.gz

The NEWS file in the tarball contains a list of the changes since 1.8.

The planned date for releasing 1.9 is Saturday 1 February 2020.

Please report any problem found with the pre-release to bug-recutils@gnu.org.

Thanks!

28 January, 2020 11:31AM by Jose E. Marchesi

January 24, 2020

Gary Benson

Container debugging minihint

What’s in my container?

  1. bash$ podman ps --ns
    CONTAINER ID  NAMES            PID    CGROUPNS  IPC         MNT         NET         PIDNS       USERNS      UTS
    fe11359293e8  eloquent_austin  11090            4026532623  4026532621  4026532421  4026532624  4026531837  4026532622
  2. bash$ sudo ls -l /proc/11090/root/
    total 22628
    lrwxrwxrwx.   1 root root        7 Jul 25  2019 bin -> usr/bin
    dr-xr-xr-x.   2 root root        6 Jul 25  2019 boot
    drwxr-xr-x.   5 root root      360 Jan 24 12:03 dev
    drwxr-xr-x.   1 root root      183 Jan 23 16:43 etc
     ...

Thank you.
[28 Jan@1135UTC] UPDATE: This doesn't seem to work with newer systems; I'm investigating…

24 January, 2020 03:01PM by gbenson

GNU Guix

Guile 3 & Guix

Version 3.0 of GNU Guile, an implementation of the Scheme programming language, was released just last week. This is a major milestone for Guile, which gets compiler improvements and just-in-time (JIT) native code generation, leading to significant performance improvements over 2.2. It’s also great news for all the users of Guile, and in particular for Guix!

Guile 3 logo.

This post discusses what it means for Guix to migrate to Guile 3 and how that migration is already taking place.

Guile in Guix

Most users interact with Guix through its command-line interface, and we work hard to make it as approachable as possible. As any user quickly notices, Guix uses the Scheme programming language uniformly for its configuration—from channels to manifests and operating systems—and anyone who starts packaging software knows that package definitions are in fact Scheme code as well.
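
As a taste of what such a definition looks like, here is a minimal package sketch in the style of the Guix manual (the sha256 value is a placeholder, not a real hash):

(use-modules (guix packages)
             (guix download)
             (guix build-system gnu)
             ((guix licenses) #:prefix license:))

(define-public hello
  (package
    (name "hello")
    (version "2.10")
    (source (origin
              (method url-fetch)
              (uri (string-append "mirror://gnu/hello/hello-"
                                  version ".tar.gz"))
              (sha256
               (base32
                ;; Placeholder hash, for illustration only.
                "0000000000000000000000000000000000000000000000000000"))))
    (build-system gnu-build-system)
    (synopsis "Hello, GNU world: an example GNU package")
    (description "GNU Hello prints a friendly greeting.")
    (home-page "https://www.gnu.org/software/hello/")
    (license license:gpl3+)))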

This is a significant departure from many other tools, and in particular from Nix. While Nix defines several domain-specific languages (DSLs) for these aspects—the Nix language but also specific configuration languages—Guix chooses Scheme as the single language for all this, together with the definition of high-level embedded domain-specific languages (EDSLs).

It goes beyond that: in Guix System, all the things traditionally implemented in C or as a set of Perl or shell scripts are implemented in Scheme. That includes the init system, package builds, the initial RAM disk (initrd), system tests, and more. Because this leads to several layers of Scheme code, executed at different points in time, Guix includes a code staging mechanism built upon the nice properties of Scheme.

Why do that? The arguments, right from the start, were twofold: using a general-purpose language allows us to benefit from its implementation tooling, and having interfaces for “everything” in Scheme makes it easy for users to navigate their distro or OS code and to reuse code to build new features or applications. Guix developers benefit from the ease of code reuse every day; demonstrative examples include the use of Guix container facilities in the init system, the development of many tools providing facilities around packages, the implementation of additional user interfaces, and work on applications that use Guix as a library such as the Guix Workflow Language and Guix-Jupyter.

As for the benefits of the host general-purpose language, these are rather obvious: Guix developers benefit from an expressive language, an optimizing compiler, a debugger, a powerful read-eval-print loop (REPL), an interactive development environment, and all sorts of libraries. Moving to Guile 3 should add to that better performance, essentially for free. To be comprehensive, Guile 3 may well come with a set of brand new bugs too, but so far we seem to be doing OK!

Migrating to Guile 3

What does it mean for Guix to migrate to Guile 3? We’ve seen above different ways in which Guix relies on Guile. In short, we can say that migration is threefold:

  1. Guix is a distro that ships Guile-related packages. Like any other distro, it will have to upgrade its guile package to 3.0 and to ensure that packages that depend on it are updated as well.
  2. Guix is a program written in Guile. As such, we need to make sure that all its dependencies (half a dozen of Guile libraries) work with Guile 3 and that Guix itself runs fine with Guile 3.
  3. Guix ties together operating system components. In particular, the init system (the Shepherd) and other boot-time facilities will also migrate.

The packages

Updating the distro is the boring part, but it's best to get it right. Guix makes it possible to have unrelated versions or variants of packages in different environments or different profiles, which is very nice. We'll have performed a smooth transition if users and tools see that the packages named guile and guile-ssh (say) transparently move from Guile 2.2 to 3.0, in lockstep.

Put differently, most of the upgrade work upon a programming language version bump deals with conventions, and in particular package names. Currently, guile corresponds to the 2.2 stable series and all the guile-* packages are built against it. In the meantime, the package for Guile 3 is named guile-next and packages built against it are called guile3.0-*. Over the last few weeks we created guile3.0- variants for most Guile packages, something that’s easily achieved with Guix.
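
One way to derive such variants (a sketch, and not necessarily the exact code Guix uses internally) is the input-rewriting facility from (guix packages):

(use-modules (guix packages)
             (gnu packages guile))  ; where guile-next (Guile 3.0) lives

;; Return a procedure that rewrites a package's dependency graph,
;; replacing the package named "guile" with guile-next.
(define package-for-guile-3.0
  (package-input-rewriting/spec
   `(("guile" . ,(const guile-next)))))

;; Hypothetical usage, assuming some-guile-library is a package object:
;; (define guile3.0-some-library
;;   (package-for-guile-3.0 some-guile-library))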

The big switch will consist in renaming all current guile-* packages to guile2.2-* packages, for use with the legacy 2.2 series, and renaming all the guile3.0-* packages to guile-*. We will switch soon, but before getting there, we’re making sure important packages are available for 3.0.

Guix-the-program

A more interesting part is “porting” Guix itself from Guile 2.2 to Guile 3. It seems that developers have become wary of 2-to-3 transitions for programming languages. Fear not! Switching from Guile 2 to Guile 3 turned out to be an easy task. In fact, very little changed in the language itself; what did change—e.g., semantics on fine points of the module system, support for structured exceptions—is either optional or backwards-compatible.

As Guile 2.9 pre-releases trickled in, we started testing all the Guile libraries Guix relies on against 2.9. For the vast majority of them, all we had to do was to update their configure.ac to allow builds with 3.0.

Guix itself was a bit more work, mostly because it’s a rather large code base with a picky test suite. The bit that required most work has to do with the introduction of declarative modules, an optional semantic change in modules to support more compiler optimizations. We had several “white-box tests” where tests would happily peek at private module bindings through the magical-evil @@ operator. Because we chose to enable declarative modules, we also had to adjust our tests to no longer do that. And well, that’s about it!
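
For readers who haven't seen it, @@ looks like this; the example is mine, and shows exactly the kind of private-binding peek that declarative modules can invalidate:

;; demo/secret.scm
(define-module (demo secret))
(define hidden 42)   ; private: not exported

;; test.scm -- a "white-box test" peeking at the private binding,
;; bypassing the module's public interface:
(use-modules (demo secret))
(display (@@ (demo secret) hidden))  ; prints 42
(newline)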

At that point, we were able to create a guile3.0-guix package variant, primarily for testing purposes. Soon after, we told guix pull to build Guix with 3.0 instead of 2.2. Thus, Guix users who upgrade will transparently find themselves running Guix on Guile 3.0.

The main benefit is improved performance. Guile 3 is known to be up to 32 times faster than Guile 2.2 on some micro-benchmarks. Assessing the performance gains on a “real-world” application like Guix is the real test. What would be a relevant benchmark? At its core, Guix is essentially a compiler from high-level descriptions of packages, operating systems, and the like, to low-level build instructions (derivations). Thus, a good benchmark is a command that exercises little more than this compilation step:

guix build libreoffice ghc-pandoc guix --dry-run --derivation

or:

guix system build config.scm --dry-run --derivation

On x86_64, the guix build command above on Guile 3 is 7% faster than on Guile 2.2, and guix system build, which is more computing-intensive, is 10% faster (heap usage is ~5% higher). This is lower than the skyrocketing speedups observed on some microbenchmarks, but it’s probably no surprise: these guix commands are short-lived (a couple of seconds) and they’re rather I/O- and GC-intensive—something JIT compilation cannot help with.

On 32-bit ARM, we temporarily disabled JIT due to a bug; there we observe a slight slowdown compared to 2.2. This can be explained by the fact that virtual machine (VM) instructions in 3.0 are lower-level than in 2.2; the slowdown will hopefully be more than compensated for when JIT is re-enabled.

Gluing it all together

The last part of the Guile 3 migration has to do with how Guix, and in particular Guix System, glues things together. As explained above, Guix manipulates several stages of Scheme code that will run at different points in time.

Firstly, the code that runs package builds, such as the one that runs ./configure && make && make install, is Guile code. Currently that code runs on Guile 2.2, but on the next major rebuild-the-world upgrade, we will switch to Guile 3.

Additionally, Guix produces Scheme code consumed by the Shepherd, by GNU mcron, and for the graphical installer. These will soon switch to Guile 3 as well. This kind of change is made easy by the fact that both the package definitions and the staged code that depends on those packages live in the same repository.

Long live Guile 3!

Migrating Guix to Guile 3 is a bit of work because of the many ways Guix interacts with Guile and because of the sheer size of the code base. For a “2-to-3” transition though, it was easy. And fundamentally, it remains a cheap transition compared to what it brings: better performance and new features. That’s another benefit of using a general-purpose language.

Thumbs up to everyone involved in its development, and long live Guile 3!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

24 January, 2020 03:00PM by Ludovic Courtès

January 23, 2020

Christopher Allan Webber

Time travel debugging in Spritely Goblins, previewed through Terminal Phase

Time travel in Spritely Goblins shown through Terminal Phase

Okay, by now pretty much everyone is probably sick of hearing about Terminal Phase. Terminal Phase this, and Terminal Phase that! Weren't you getting back to other hacking on Spritely Goblins, Chris? And in fact I am, I just decided it was a good idea to demo one of the things that makes Goblins interesting.

What you're seeing above is from the experimental tt-debugger branch of Terminal Phase (not committed yet because it's a proof-of-concept, and not as clean as I'd like it to be, and also you need the "dev" branch of Goblins currently). When the user presses the "t" key, they are presented with a menu by which they can travel backwards and forwards in time. The game state is snapshotted every two seconds; the player can select any of these previous states and switch to it.

Here's the cool part: I didn't change a single line of game code to make this occur. I just added some code around the game loop that snapshotted the state as it currently existed and exposed it to the programmer.

What kind of time sorcery is this?

Dr. Who/Dr. Sussman fez comparison

Well, we're less the time-lord kind, more the functional programmer kind. Except, quasi-functional.

If you watched the part of the recent Terminal Phase video I made that shows off Goblins you'll remember that the way that objects work is that a reference to a Goblins object/actor is actually a reference that indirectly refers to a procedure for handling immediate calls and asynchronous messages. Relative to themselves (and in true actor fashion), objects specify first their initial version of themselves, and later can use a special "become" capability to specify a future version of themselves. From the perspective of the actor, this looks very functional. But from the perspective of one object/actor performing a call against another object/actor, it appears that things change.

Here is the simplest example of such an object, a cell that holds a single value:

;; Constructor for a cell.  Takes an optional initial value, defaults
;; to false.
(define (^cell bcom [val #f])
  (case-lambda
    ;; Called with no arguments; return the current value
    [() val]
    ;; Called with one argument, we become a version of ourselves
    ;; with this new value
    [(new-val)
     (bcom (^cell bcom new-val))]))

If you can't read Racket/Scheme, not a big deal; I'll just tell you that this cell can be called with no arguments to get the current value, and with one argument to set a value. But you'll see that in the former case, the value we would like to return to the caller is returned; in the latter case, we return the handler we would like to be for handling future messages (wrapped up in that bcom capability). In both cases, we aren't performing side effects, just returning something... but in the latter case the kernel observes this and updates the current transaction's delta reflecting that this is the "new us". (Not shown here but supported: both becoming a new handler and returning a value.)
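
For a sense of how such a cell is used from the outside, here is a rough sketch against Goblins' actormap interface; the procedure names (make-actormap, actormap-spawn!, actormap-peek, actormap-poke!) are from my memory of the Goblins documentation of this era, so treat them as assumptions rather than a definitive API:

(define am (make-actormap))
(define cell (actormap-spawn! am ^cell))

(actormap-peek am cell)      ; => #f, the default value
(actormap-poke! am cell 42)  ; the cell "becomes" one holding 42
(actormap-peek am cell)      ; => 42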

Without going into details, this makes it extremely easy to accomplish several things in Goblins:

  • Transactionality: Each "turn" of an event loop in Goblins is transactional. Rather than being applied immediately, a transaction is returned. Whether we choose to commit this or not is up to us; we will probably not, for instance, if an exception occurs, but we can record the exception (a default event loop is provided that does the default right-thing for you).
  • Snapshotting time: We can, as shown above, snapshot history and actually run code against previous state (assuming, again, that state is updated through the usual Goblins actor "become" means).
  • Time-travel debugging: Yeah, not just for Elm! I haven't built a nice interface for it in the demo above, but it's absolutely possible to expose a REPL at each snapshot in time in the game to "play around with" what's happening to debug difficult problems.

This is only a small portion of what makes Spritely Goblins interesting. The really cool stuff will come soon with the distributed programming work. But I realized that this is one of the more obviously cool aspects of Spritely Goblins, and before I start showing off a bunch of other interesting new things, I should show off a cool feature that exists in the code we already have!

Anyway, that's it... I hope I gave you a good sense that I'm up to interesting things. If you're excited by this stuff and you aren't already, consider donating to keep this work advancing.

Whew! I guess it's time I start writing some docs for Goblins, eh?

23 January, 2020 08:55PM by Christopher Lemmer Webber

January 22, 2020

parallel @ Savannah

GNU Parallel 20200122 ('Soleimani') released

GNU Parallel 20200122 ('Soleimani') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

GNU Parallel turns 10 years old on 2020-04-22. You are hereby invited to a reception on Friday 2020-04-17.

See https://www.gnu.org/software/parallel/10-years-anniversary.html

Quote of the month:

  GNU parallel is straight up incredible.
    -- Ben Johnson @biobenkj@twitter

New in this release:

  • --blocktimeout dur - Time out for reading block when using --pipe. If it takes longer than dur to read a full block, use the partial block read so far.
  • Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 3374ec53bacb199b245af2dda86df6c9
    12345678 3374ec53 bacb199b 245af2dd a86df6c9
    $ md5sum install.sh | grep 029a9ac06e8b5bc6052eac57b2c3c9ca
    029a9ac0 6e8b5bc6 052eac57 b2c3c9ca
    $ sha512sum install.sh | grep f517006d9897747bed8a4694b1acba1b
    40f53af6 9e20dae5 713ba06c f517006d 9897747b ed8a4694 b1acba1b 1464beb4
    60055629 3f2356f3 3e9c4e3c 76e3f3af a9db4b32 bd33322b 975696fc e6b23cfb
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 January, 2020 05:57PM by Ole Tange

January 19, 2020

make @ Savannah

GNU Make 4.3 Released!

The next stable version of GNU make, version 4.3, has been released and is available for download from https://ftp.gnu.org/gnu/make/

Please see the NEWS file that comes with the GNU make distribution for details on user-visible changes.

19 January, 2020 10:44PM by Paul D. Smith

Christopher Allan Webber

Terminal Phase 1.0

[Animated image: Terminal Phase being played]

I'm pleased to announce that Terminal Phase, a space shooter game you can play in your terminal, has achieved version 1.0. The game is completely playable and is a fun game (well, at least a number of playtesters told me they thought it was fun). It includes two levels (one of which is more balanced than the other), and more content is on its way (1.0 isn't the end!). You can see it being played above in cool-retro-term, but it works in all sorts of terminals, including gnome-terminal and others.

I also released a video recently (archive.org mirror) of me doing a live playtest of the game and also showing off how to make new levels and program new enemies (which serves as kind of an introduction, but probably not the best one, to Spritely Goblins).

Terminal Phase was actually a reward for hitting the $500/mo milestone on my Patreon account, which we achieved a little over a week ago. I aimed to get 1.0 out the door by midnight on Wednesday but I actually released it a couple of hours later, closer to 2:30am, because I was trying to make the credits look cool:

[Image: Terminal Phase credits screen]

I think I succeeded, right? Maybe you would like your name in there; you can still do so by selecting a tier on my Patreon account. I released the game as FOSS, so whether you donate or not, you can still reap the benefits. But I figure making the credits look cool and putting people's names in there would be a good way of making people feel motivated. And there are more releases on the way; I'll be adding to this now and then and releasing more stuff occasionally. In fact you may notice the cool parallax scrolling starfield in the gif at the top of this post; I added that after 1.0. I guess it's a bit sneaky to put that on top of a post labeled 1.0, but the good news is that this means that 1.1 is not far away, which will include some new enemies (maybe a boss?), new levels, and yes, parallax starfields (and maybe your name in the credits if it isn't already).

Anyway, enough self-shilling; let's talk more about the game itself. Terminal Phase really had a number of goals:

  • Fun. Games are fun, and making them is (well, mostly) fun and interesting. And I feel like the FOSS world could use more fun.
  • Fundraising. I do a lot of work to enrich the commons; funding that stuff can be brutally hard, and obviously this was a fundraising tactic.
  • A litmus test. I wanted to see, "Do people care about funding FOSS games, in particular? Does this matter to people?" My suspicion is that there is an interest, even if niche, and that seems to have been validated. Great.
  • Pushing the medium of interactive terminal-based / ascii art content. Probably because it's a retro area, it's not really one where we see a lot of new content. We see a lot more terminal-based turn-based games, most notably roguelikes; why not more live action stuff? (Note that I did two other projects this year in this same vein.)
  • Thanking donors. I had this Patreon account, and it's great that people were being generous, but I felt like it would be nice to have something that felt quasi-tactile, like you got something visible back from it. I hope people feel like that succeeded.
  • But most importantly, advancing Spritely Goblins. Terminal Phase is a program to demonstrate and test how well about half of what Goblins does actually works, namely transactional object interactions.

I feel like all of those were a success, but I really want to write more about that last one. Except, well, I already have in some detail, and maybe I'd repeat myself. But I'll say that developing Terminal Phase has made me dramatically more confident that the core parts of Spritely Goblins work well and make sense. That's good, and I can say that without a bunch of hand-waving; I built something that feels nice to use and to program.

That lets me move forward with solidifying and documenting what I have and focusing on the next layer: the asynchronous programming and distributed networked objects layers. The former of those two exists, the latter of those needs work, but both will be tested in a similar way soon; I plan on building some highly interactive demos to show off their ideas.

Anyway, I hope you enjoy the game, and thank you to everyone who donated and made it possible! Again, I plan to release more soon, including new levels, new enemies, boss battles, and yes, even some powerups. And if you like the game, consider becoming a supporter if you aren't already!

Now back to working on Spritely Goblins itself...

19 January, 2020 10:10PM by Christopher Lemmer Webber

January 16, 2020

GNU Guile

GNU Guile 3.0.0 released

We are ecstatic and relieved to announce the release of GNU Guile 3.0.0. This is the first release in the new stable 3.0 release series.

See the release announcement for full details and a download link.

The principal new feature in Guile 3.0 is just-in-time (JIT) native code generation. This speeds up the performance of all programs. Compared to 2.2, microbenchmark performance is around twice as good on the whole, though some individual benchmarks are up to 32 times as fast.

[Figure: comparison of microbenchmark performance for Guile 3.0 versus 2.2]

For larger use cases, notably, this finally makes the performance of "eval" as written in Scheme faster than "eval" written in C, as in the days of Guile 1.8.

Other new features in 3.0 include support for interleaved definitions and expressions in lexical contexts, native support for structured exceptions, better support for the R6RS and R7RS Scheme standards, along with a pile of optimizations. See the NEWS file for a complete list of user-visible changes.

Guile 3.0.0 and all future releases in the 3.0.x series are parallel-installable with other stable release series (e.g. 2.2). As the first release in a new stable series, we anticipate that Guile 3.0.0 might have build problems on uncommon platforms; bug reports are very welcome. Send any bug reports you might have by email to bug-guile@gnu.org.

Happy hacking with Guile 3!

16 January, 2020 11:30AM by Andy Wingo (guile-devel@gnu.org)

Parabola GNU/Linux-libre

[From Arch] rsync compatibility

Our rsync package was shipped with bundled zlib to provide compatibility with the old-style --compress option up to version 3.1.0. Version 3.1.1 was released on 2014-06-22 and is shipped by all major distributions now.

So we decided to finally drop the bundled library and ship a package with system zlib. This also fixes security issues, both current and future ones. Go and blame those running old versions if you encounter errors with rsync 3.1.3-3.

16 January, 2020 04:44AM by Isaac David

[From Arch] Now using Zstandard instead of xz for package compression

As announced on the Arch-dev mailing list, on Friday, Dec 27 2019, the package compression scheme has changed from xz (.pkg.tar.xz) to zstd (.pkg.tar.zst).

zstd and xz trade blows in their compression ratio. Recompressing all packages to zstd with our options yields a total ~0.8% increase in package size on all of our packages combined, but the decompression time for all packages saw a ~1300% speedup.

We already have hundreds of zstd-compressed packages in our repositories, and as packages get updated more will keep rolling in. No user-facing issues have been found as of yet, so things appear to be working.

As an end-user no manual intervention is required, assuming that you have read and followed the news post from late last year.

16 January, 2020 04:34AM by Isaac David

January 15, 2020

FSF News

First LibrePlanet 2020 keynote announcement: Internet Archive founder Brewster Kahle

BOSTON, Massachusetts, USA -- Wednesday, January 15, 2020 -- The Free Software Foundation (FSF) today announced Brewster Kahle as its first keynote speaker for LibrePlanet 2020. The annual technology and social justice conference will be held in the Boston area on March 14 and 15, 2020, with the theme "Free the Future." Attendees can register at https://my.fsf.org/civicrm/event/info?id=87&reset=1.

Internet archivist, digital librarian, and Internet Hall of Famer Brewster Kahle has been announced as the first of multiple keynote speakers for the FSF's annual LibrePlanet conference. Kahle is renowned as the founder of the Internet Archive, a nonprofit dedicated to preserving the cultural history of the Web.

With its mission to provide "universal access to all knowledge," the Internet Archive is an inspiration to digital activists from all over the world. Through its "Wayback Machine," the Internet Archive provides historically indexed versions of millions of Web pages. For his work as an Internet activist and digital librarian, Brewster was inducted into the Internet Hall of Fame in 2012.

Commenting on his selection as a LibrePlanet keynote speaker, Kahle said, "Free software is crucial in building a digital ecosystem with many winners. The Internet Archive is completely dependent, as are millions of others, on free software but also free content. I look forward to presenting at LibrePlanet, but mostly from learning from those attending as to where free software is going."

FSF executive director John Sullivan welcomed Kahle's announcement as a keynote speaker by saying, "The Internet Archive plays an important role in our lives, ensuring that Internet users for years to come will be able to view all of the Web exactly as it was at a specific point in history. Our focus at this year's LibrePlanet is to 'free the future,' and Brewster's work reminds all of us that we cannot have a future without a reliable history. The FSF is honored to have Brewster keynoting the conference."

The FSF will announce further keynote speakers before the start of the conference, and the full LibrePlanet 2020 schedule is expected very soon. Thousands of people have attended LibrePlanet over the years: some in person, and some by tuning into the fully free software livestream that the FSF provides of the event. LibrePlanet has welcomed visitors from up to fifteen countries each year, and individuals from many others participate online. The conference's video archive contains talks recorded throughout the conference's history, including keynote talks by Edward Snowden and Cory Doctorow.

About LibrePlanet

LibrePlanet is the annual conference of the Free Software Foundation. Over the last decade, LibrePlanet has blossomed from a small gathering of FSF associate members into a vibrant multi-day event that attracts a broad audience of people who are interested in the values of software freedom. LibrePlanet 2020 will be held on March 14th and 15th, 2020. To sign up for announcements about LibrePlanet 2020, visit https://lists.gnu.org/mailman/listinfo/libreplanet-discuss.

Registration for LibrePlanet: "Free the Future" is open. Attendance is free of charge to FSF associate members and students.

For information on how your company can sponsor LibrePlanet or have a table in our exhibit hall, email campaigns@fsf.org.

Keynote speakers at LibrePlanet 2019 included Bdale Garbee, who has contributed to the free software community since 1979, and Tarek Loubani, who runs the Glia Project, which seeks to provide medical supplies to impoverished locations. The closing keynote was given by Micky Metts, a hacker, activist and organizer, as well as a member of Agaric, a worker-owned cooperative of Web developers.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://www.fsf.org and https://www.gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

MEDIA CONTACT

Greg Farough
Campaigns Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

Photo by Vera de Kok © 2015. Licensed under CC-BY-SA 4.0.

15 January, 2020 10:41PM

sed @ Savannah

sed-4.8 released [stable]

This is to announce sed-4.8, a stable release.

There have been 21 commits by 2 people in the 56 weeks since 4.7.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Assaf Gordon (4)
  Jim Meyering (17)

Jim [on behalf of the sed maintainers]
==================================================================

Here is the GNU sed home page:
    http://gnu.org/s/sed/

For a summary of changes and contributors, see:
  http://git.sv.gnu.org/gitweb/?p=sed.git;a=shortlog;h=v4.8
or run this command from a git-cloned sed directory:
  git shortlog v4.7..v4.8

To summarize the 865 gnulib-related changes, run these commands
from a git-cloned sed directory:
  git checkout v4.8
  git submodule summary v4.7

Here are the compressed sources:
  https://ftp.gnu.org/gnu/sed/sed-4.8.tar.gz   (2.2MB)
  https://ftp.gnu.org/gnu/sed/sed-4.8.tar.xz   (1.3MB)

Here are the GPG detached signatures[*]:
  https://ftp.gnu.org/gnu/sed/sed-4.8.tar.gz.sig
  https://ftp.gnu.org/gnu/sed/sed-4.8.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify sed-4.8.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 7FD9FCCB000BEEEE

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.69.202-d78a
  Automake 1.16a
  Gnulib v0.1-3167-g6b9d15b8b

NEWS

* Noteworthy changes in release 4.8 (2020-01-14) [stable]

** Bug fixes

  "sed -i" now creates temporary files with correct umask (limited to u=rwx).
  Previously sed would incorrectly set umask on temporary files, resulting
  in problems under certain fuse-like file systems.
  [bug introduced in sed 4.2.1]

** Release

  distribute gzip-compressed tarballs once again

** Improvements

  a year's worth of gnulib development, including improved DFA performance

15 January, 2020 04:45AM by Jim Meyering

January 14, 2020

GNU Guix

Reproducible computations with Guix

This post is about reproducible computations, so let's start with a computation. A short, though rather uninteresting, C program is a good starting point. It computes π in three different ways:

#include <math.h>
#include <stdio.h>

int main()
{
    printf( "M_PI                         : %.10lf\n", M_PI);
    printf( "4 * atan(1.)                 : %.10lf\n", 4.*atan(1.));
    printf( "Leibniz' formula (four terms): %.10lf\n", 4.*(1.-1./3.+1./5.-1./7.));
    return 0;
}

This program uses no source of nondeterminism, such as a random number generator or parallelism. It's strictly deterministic. It is reasonable to expect it to produce exactly the same output, on any computer and at any point in time. And yet, many programs whose results should be perfectly reproducible are in fact not. Programs using floating-point arithmetic, such as this short example, are particularly prone to seemingly inexplicable variations.

My goal is to explain why deterministic programs often fail to be reproducible, and what it takes to fix this. The short answer to that question is "use Guix", but even though Guix provides excellent support for reproducibility, you still have to use it correctly, and that requires some understanding of what's going on. The explanation I will give is rather detailed, to the point of discussing parts of the Guile API of Guix. You should be able to follow the reasoning without knowing Guile, though; you will just have to believe me that the scripts I will show do what I claim they do. And in the end, I will provide a ready-to-run Guile script that will let you explore package dependencies right from the shell.

Dependencies: what it takes to run a program

One keyword in discussions of reproducibility is "dependencies". I will revisit the exact meaning of this term later, but to get started, I will define it loosely as "any software package required to run a program". Running the π computation shown above is normally done using something like

gcc pi.c -o pi
./pi

C programmers know that gcc is a C compiler, so that's one obvious dependency for running our little program. But is a C compiler enough? That question is surprisingly difficult to answer in practice. Your computer is loaded with tons of software (otherwise it wouldn't be very useful), and you don't really know what happens behind the scenes when you run gcc or pi.
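One quick way to peek behind the scenes (a sketch; this is just a diagnostic, not part of the workflow below) is to list the shared libraries that the compiled program loads at run time:

ldd ./pi

Even this tiny program dynamically links against the C library, a dependency you never asked for explicitly.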

Containers are good

A major element of reproducibility support in Guix is the possibility to run programs in well-defined environments that contain exactly the software packages you request, and no more. So if your program runs in an environment that contains only a C compiler, you can be sure it has no other dependencies. Let's create such an environment:

guix environment --container --ad-hoc gcc-toolchain

The option --container ensures the best possible isolation from the standard environment that your system installation and user account provide for day-to-day work. This environment contains nothing but a C compiler and a shell (which you need to type in commands), and has access to no other files than those in the current directory.

If the term "container" makes you think of Docker, note that this is something different. Note also that the option --container requires support from the Linux kernel, which may not be present on your system, or may be disabled by default. Finally, note that by default, a containerized environment has no network access, which may be a problem. If for whatever reason you cannot use --container, use --pure instead. This yields a less isolated environment, but it is usually good enough. For a more detailed discussion of these options, see the Guix manual.

The first guix environment command above leaves me in a shell inside my environment, where I can now compile and run my little program:

gcc pi.c -o pi
./pi
M_PI                         : 3.1415926536
4 * atan(1.)                 : 3.1415926536
Leibniz' formula (four terms): 2.8952380952

It works! So now I can be sure that my program has a single dependency: the Guix package gcc-toolchain. I'll leave that special-environment shell by typing Ctrl-D, as otherwise the following examples won't work.

Perfectionists who want to exclude the possibility that my program requires a shell could run each step in a separate container:

guix environment --container --ad-hoc gcc-toolchain -- gcc pi.c -o pi
guix environment --container --ad-hoc gcc-toolchain -- ./pi
M_PI                         : 3.1415926536
4 * atan(1.)                 : 3.1415926536
Leibniz' formula (four terms): 2.8952380952

Welcome to dependency hell!

Now that we know that our only dependency is gcc-toolchain, let's look at it in more detail:

guix show gcc-toolchain
name: gcc-toolchain
version: 9.2.0
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@9.2.0 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 8.3.0
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@8.3.0 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 7.4.0
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@7.4.0 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 6.5.0
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@6.5.0 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 5.5.0
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@5.5.0 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 4.9.4
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@4.9.4 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

name: gcc-toolchain
version: 4.8.5
outputs: out debug static
systems: x86_64-linux i686-linux
dependencies: binutils@2.32 gcc@4.8.5 glibc@2.29 ld-wrapper@0
location: gnu/packages/commencement.scm:2532:4
homepage: https://gcc.gnu.org/
license: GPL 3+
synopsis: Complete GCC tool chain for C/C++ development  
description: This package provides a complete GCC tool chain for C/C++
+ development to be installed in user profiles.  This includes GCC, as well as
+ libc (headers and binaries, plus debugging symbols in the `debug' output),
+ and Binutils.

Guix actually knows about several versions of this toolchain. We didn't ask for a specific one, so what we got is the first one in this list, which is the one with the highest version number. Let's check that this is true:

guix environment --container --ad-hoc gcc-toolchain -- gcc --version
gcc (GCC) 9.2.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

The output of guix show contains a line about dependencies. These are the dependencies of our dependency, and you may already have guessed that they will have dependencies as well. That's why reproducibility is such a difficult job in practice! The dependencies of gcc-toolchain@9.2.0 are:

guix show gcc-toolchain@9.2.0 | recsel -P dependencies
binutils@2.32 gcc@9.2.0 glibc@2.29 ld-wrapper@0

To dig deeper, we can try feeding these dependencies to guix show, one by one, in order to learn more about them:

guix show binutils@2.32
name: binutils
version: 2.32
outputs: out
systems: x86_64-linux i686-linux
dependencies: 
location: gnu/packages/base.scm:415:2
homepage: https://www.gnu.org/software/binutils/
license: GPL 3+
synopsis: Binary utilities: bfd gas gprof ld  
description: GNU Binutils is a collection of tools for working with binary
+ files.  Perhaps the most notable are "ld", a linker, and "as", an assembler.
+ Other tools include programs to display binary profiling information, list the
+ strings in a binary file, and utilities for working with archives.  The "bfd"
+ library for working with executable and object formats is also included.

guix show gcc@9.2.0
guix show: error: gcc@9.2.0: package not found

This looks a bit surprising. What's happening here is that gcc is defined as a hidden package in Guix. The package is there, but it is hidden from package queries. There is a good reason for this: gcc on its own is rather useless; you need gcc-toolchain to actually use the compiler. But if both gcc and gcc-toolchain showed up in a search, that would be more confusing than helpful for most users. Hiding the package is a way of saying "for experts only".

Let's take this as a sign that it's time to move on to the next level of Guix hacking: Guile scripts. Guile, an implementation of the Scheme language, is Guix' native language, so using Guile scripts, you get access to everything there is to know about Guix and its packages.

A note in passing: the emacs-guix package provides an intermediate level of Guix exploration for Emacs users. It lets you look at hidden packages, for example. But much of what I will show in the following really requires Guile scripts. Another nice tool for package exploration is guix graph, which creates a diagram showing dependency relations between packages. Unfortunately that diagram is legible only for a relatively small number of dependencies, and as we will see later, most packages end up having lots of them.
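For example, here is a sketch of the guix graph workflow, assuming Graphviz's dot command is available:

guix graph hello | dot -Tpdf > hello-graph.pdf

guix graph prints the dependency graph in Graphviz format, and dot renders it to a PDF.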

Anatomy of a Guix package

From the user's point of view, a package is a piece of software with a name and a version number that can be installed using guix install. The packager's point of view is quite a bit different. In fact, what users consider a package is more precisely called the package's output in Guix jargon. The package is a recipe for creating this output.

To see how all these concepts fit together, let's look at an example of a package definition: xmag. I have chosen this package not because I care much about it, but because its definition is short while showcasing all the features I want to explain. You can access it most easily by typing guix edit xmag. Here is what you will see:

(package
  (name "xmag")
  (version "1.0.6")
  (source
   (origin
     (method url-fetch)
     (uri (string-append
           "mirror://xorg/individual/app/" name "-" version ".tar.gz"))
     (sha256
      (base32
       "19bsg5ykal458d52v0rvdx49v54vwxwqg8q36fdcsv9p2j8yri87"))))
  (build-system gnu-build-system)
  (arguments
   `(#:configure-flags
     (list (string-append "--with-appdefaultdir="
                          %output ,%app-defaults-dir))))
  (inputs
   `(("libxaw" ,libxaw)))
  (native-inputs
   `(("pkg-config" ,pkg-config)))
  (home-page "https://www.x.org/wiki/")
  (synopsis "Display or capture a magnified part of a X11 screen")
  (description "Xmag displays and captures a magnified snapshot of a portion
of an X11 screen.")
  (license license:x11))

The package definition starts with the name and version information you expected. Next comes source, which says how to obtain the source code and from where. It also provides a hash that makes it possible to check the integrity of the downloaded files. The next four items, build-system, arguments, inputs, and native-inputs, supply the information required for building the package, which is what creates its outputs. The remaining items are documentation for human consumption, important for other reasons but not for reproducibility, so I won't say any more about them. (See this packaging tutorial if you want to define your own package.)

The example package definition has native-inputs in addition to "plain" inputs. There's a third variant, propagated-inputs, but xmag doesn't have any. The differences between these variants don't matter for my topic, so I will just refer to "inputs" from now on. Another omission I will make is the possibility to define several outputs for a package. This is done for particularly big packages, in order to reduce the footprint of installations, but for the purposes of reproducibility, it's OK to treat all outputs of a package as a single unit.

The following figure illustrates how the various pieces of information from a package are used in the build process (done explicitly by guix build, or implicitly when installing or otherwise using a package): [Figure: diagram of a Guix package]

It may help to translate the Guix jargon to the vocabulary of C programming:

| Guix package | C program        |
|--------------+------------------|
| source code  | source code      |
| inputs       | libraries        |
| arguments    | compiler options |
| build system | compiler         |
| output       | executable       |

Building a package can be considered a generalization of compiling a program. We could in fact create a "GCC build system" for Guix that would simply run gcc. However, such a build system would be of little practical use, since most real-life software consists of more than just one C source code file, and requires additional pre- or post-processing steps. The gnu-build-system used in the example is based on tools such as make and autoconf, in addition to gcc.
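To watch a build system at work, you can run the build step explicitly. guix build prints the store paths of the outputs it produces (the exact hash in the path will differ on your machine):

guix build xmag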

Package exploration in Guile

Guile uses a record type called <package> to represent packages, which is defined in module (guix packages). There is also a module (gnu packages), which contains the actual package definitions - be careful not to confuse the two (as I always do). Here is a simple Guile script that shows some package information, much like the guix show command that I used earlier:

(use-modules (guix packages)
             (gnu packages)) 

(define gcc-toolchain
  (specification->package "gcc-toolchain"))

(format #t "Name   : ~a\n" (package-name gcc-toolchain))
(format #t "Version: ~a\n" (package-version gcc-toolchain))
(format #t "Inputs : ~a\n" (package-direct-inputs gcc-toolchain))
Name   : gcc-toolchain
Version: 9.2.0
Inputs : ((gcc #<package gcc@9.2.0 gnu/packages/gcc.scm:524 7fc2d76af160>) (ld-wrapper #<package ld-wrapper@0 gnu/packages/base.scm:505 7fc2d306f580>) (binutils #<package binutils@2.32 gnu/packages/commencement.scm:2187 7fc2d306fdc0>) (libc #<package glibc@2.29 gnu/packages/commencement.scm:2145 7fc2d306fe70>) (libc-debug #<package glibc@2.29 gnu/packages/commencement.scm:2145 7fc2d306fe70> debug) (libc-static #<package glibc@2.29 gnu/packages/commencement.scm:2145 7fc2d306fe70> static))

This script first calls specification->package to look up the package using the same rules as the guix command line interface: pick the latest available version if none is explicitly requested. Then it extracts various information about the package. Note that package-direct-inputs returns the combination of package-inputs, package-native-inputs, and package-propagated-inputs. As I said above, I don't care about the distinction here.

The inputs are not shown in a particularly nice form, so let's write two Guile functions to improve it:

(use-modules (guix packages)
             (gnu packages)
             (ice-9 match))

(define (package->specification package)
  (format #f "~a@~a"
          (package-name package)
          (package-version package)))

(define (input->specification input)
  (match input
    ((label (? package? package) . _)
     (package->specification package))
    (other-item
     (format #f "~a" other-item))))

(define gcc-toolchain
  (specification->package "gcc-toolchain"))

(format #t "Package: ~a\n"
        (package->specification gcc-toolchain))
(format #t "Inputs : ~a\n"
        (map input->specification (package-direct-inputs gcc-toolchain)))
Package: gcc-toolchain@9.2.0
Inputs : (gcc@9.2.0 ld-wrapper@0 binutils@2.32 glibc@2.29 glibc@2.29 glibc@2.29)

That looks much better. As you can see from the code, a list of inputs is a bit more than a list of packages. It is in fact a list of labelled package outputs. That also explains why we see glibc three times in the input list: glibc defines three distinct outputs, all of which are used in gcc-toolchain. For reproducibility, all we care about is the package references. Later on, we will deal with much longer input lists, so as a final cleanup step, let's show only unique package references from the list of inputs:

(use-modules (guix packages)
             (gnu packages)
             (srfi srfi-1)
             (ice-9 match))

(define (package->specification package)
  (format #f "~a@~a"
          (package-name package)
          (package-version package)))

(define (input->specification input)
  (match input
    ((label (? package? package) . _)
     (package->specification package))
    (other-item
     (format #f "~a" other-item))))

(define (unique-inputs inputs)
  (delete-duplicates
   (map input->specification inputs)))

(define gcc-toolchain
  (specification->package "gcc-toolchain"))

(format #t "Package: ~a\n"
        (package->specification gcc-toolchain))
(format #t "Inputs : ~a\n"
        (unique-inputs (package-direct-inputs gcc-toolchain)))
Package: gcc-toolchain@9.2.0
Inputs : (gcc@9.2.0 ld-wrapper@0 binutils@2.32 glibc@2.29)

Dependencies

You may have noticed the absence of the term "dependency" from the last two sections. There is a good reason for that: the term is used in somewhat different meanings, and that can create confusion. Guix jargon therefore avoids it.

The figure above shows three kinds of input to the build system: source, inputs, and arguments. These categories reflect the packagers' point of view: source is what the authors of the software supply, inputs are other packages, and arguments is what the packagers themselves add to the build procedure. It is important to understand that from a purely technical point of view, there is no fundamental difference between the three categories. You could, for example, define a package that contains C source code in the build system arguments, but leaves source empty. This would be inconvenient, and confusing for others, so I don't recommend you actually do this. The three categories are important, but for humans, not for computers. In fact, even the build system is not fundamentally distinct from its inputs. You could define a special-purpose build system for one package, and put all the source code in there. At the level of the CPU and the computer's memory, a build process (as in fact any computation) looks like [figure: a computation]. It is human interpretation that decomposes this into [figure: code and data], and in a next step into [figure: data, program, and environment]. We can go on and divide the environment into operating system, development tools, and application software, for example, but the further we go in decomposing the input to a computation, the more arbitrary it gets.

From this point of view, a software's dependencies consist of everything required to run it in addition to its source code. For a Guix package, the dependencies are thus:

  • its inputs
  • the build system arguments
  • the build system itself
  • Guix (which is a piece of software as well)
  • the GNU/Linux operating system (kernel, file system, etc.).

In the following, I will not mention the last two items any more, because they are a common dependency of all Guix packages, but it's important not to forget about them. A change in Guix or in GNU/Linux can actually make a computation non-reproducible, although in practice that happens very rarely. Moreover, Guix is actually designed to run older versions of itself, as we will see later.

Build systems are (mostly) packages as well

I hope that by now you have a good idea of what a package is: a recipe for building outputs from source and inputs, with inputs being the outputs of other packages. The recipe involves a build system and arguments supplied to it. So... what exactly is a build system? I have introduced it as a generalization of a compiler, which describes its role. But where does a build system come from in Guix?

The ultimate answer is of course the source code. Build systems are pieces of Guile code that are part of Guix. But this Guile code is only a shallow layer orchestrating invocations of other software, such as gcc or make. And that software is defined by packages. So in the end, from a reproducibility point of view, we can replace the "build system" item in our list of dependencies by "a bundle of packages". In other words: more inputs.

Before Guix can build a package, it must gather all the required ingredients, and that includes replacing the build system by the packages it represents. The resulting list of ingredients is called a bag, and we can access it using a Guile script:

(use-modules (guix packages)
             (gnu packages)
             (srfi srfi-1)
             (ice-9 match))

(define (package->specification package)
  (format #f "~a@~a"
          (package-name package)
          (package-version package)))

(define (input->specification input)
  (match input
    ((label (? package? package) . _)
     (package->specification package))
    ((label (? origin? origin))
     (format #f "[source code from ~a]"
             (origin-uri origin)))
    (other-input
     (format #f "~a" other-input))))

(define (unique-inputs inputs)
  (delete-duplicates
   (map input->specification inputs)))

(define hello
  (specification->package "hello"))

(format #t "Package       : ~a\n"
        (package->specification hello))
(format #t "Package inputs: ~a\n"
        (unique-inputs (package-direct-inputs hello)))
(format #t "Build inputs  : ~a\n"
        (unique-inputs
         (bag-direct-inputs
          (package->bag hello))))
Package       : hello@2.10
Package inputs: ()
Build inputs  : ([source code from mirror://gnu/hello/hello-2.10.tar.gz] tar@1.32 gzip@1.10 bzip2@1.0.6 xz@5.2.4 file@5.33 diffutils@3.7 patch@2.7.6 findutils@4.6.0 gawk@5.0.1 sed@4.7 grep@3.3 coreutils@8.31 make@4.2.1 bash-minimal@5.0.7 ld-wrapper@0 binutils@2.32 gcc@7.4.0 glibc@2.29 glibc-utf8-locales@2.29)

I have used a different example, hello, because for gcc-toolchain, there is no difference between package inputs and build inputs (check for yourself if you want!). My new example, hello (a short demo program printing "Hello, world" in the language of the system installation), is interesting because it has no package inputs at all. All the build inputs except for the source code have thus been contributed by the build system.

If you compare this script to the previous one that printed only the package inputs, you will notice two major new features. In input->specification, there is an additional case for the source code reference. And in the last statement, package->bag constructs a bag from the package, before bag-direct-inputs is called to get that bag's input list.

Inputs are outputs

I have mentioned before that one package's inputs are other packages' outputs, but that fact deserves a more in-depth discussion because of its crucial importance for reproducibility. A package is a recipe for building outputs from source and inputs. Since these inputs are outputs, they must have been built as well. Package building is therefore a process consisting of multiple steps. An immediate consequence is that any computation making use of packaged software is a multi-step computation as well.

Remember the short C program computing π from the beginning of this post? Running that program is only the last step in a long series of computations. Before you can run pi, you must compile pi.c. That requires the package gcc-toolchain, which must first be built. And before it can be built, its inputs must be built. And so on. If you want the output of pi to be reproducible, the whole chain of computations must be reproducible, because each step can have an impact on the results produced by pi.

So... where does this chain start? Few people write machine code these days, so almost all software requires some compiler or interpreter. And that means that for every package, there are other packages that must be built first. The question of how to get this chain started is known as the bootstrapping problem. A rough summary of the solution is that the chain starts on somebody else's computer, which creates a bootstrap seed, an ideally small package that is downloaded in precompiled form. See this post by Jan Nieuwenhuizen for details of this procedure. The bootstrap seed is not the real start of the chain, but as long as we can retrieve an identical copy at a later time, that's good enough for reproducibility. In fact, the reason for requiring the bootstrap seed to be small is not reproducibility, but inspectability: it should be possible to audit the seed for bugs and malware, even in the absence of source code.

Reaching closure

Now we are finally ready for the ultimate step in dependency analysis: identifying all packages on which a computation depends, right up to the bootstrap seed. The starting point is the list of direct inputs of the bag derived from a package, which we looked at in the previous script. For each package in that list, we must apply this same procedure, recursively. We don't have to write this code ourselves, because the function package-closure in Guix does that job. These closures have nothing to do with closures in Lisp, and even less with the Clojure programming language. They are a case of what mathematicians call transitive closures: starting with a set of packages, you extend the set repeatedly by adding the inputs of the packages that are already in the set, until there is nothing more to add. If you have a basic knowledge of Scheme, you should now be able to understand the implementation of this function. Let's add it to our dependency analysis code:

(use-modules (guix packages)
             (gnu packages)
             (srfi srfi-1)
             (ice-9 match))

(define (package->specification package)
  (format #f "~a@~a"
          (package-name package)
          (package-version package)))

(define (input->specification input)
  (match input
    ((label (? package? package) . _)
     (package->specification package))
    ((label (? origin? origin))
     (format #f "[source code from ~a]"
             (origin-uri origin)))
    (other-input
     (format #f "~a" other-input))))

(define (unique-inputs inputs)
  (delete-duplicates
   (map input->specification inputs)))

(define (length-and-list lists)
  (list (length lists) lists))

(define hello
  (specification->package "hello"))

(format #t "Package        : ~a\n"
        (package->specification hello))
(format #t "Package inputs : ~a\n"
        (length-and-list (unique-inputs (package-direct-inputs hello))))
(format #t "Build inputs   : ~a\n"
        (length-and-list
         (unique-inputs
          (bag-direct-inputs
           (package->bag hello)))))
(format #t "Package closure: ~a\n"
        (length-and-list
         (delete-duplicates
          (map package->specification
               (package-closure (list hello))))))
Package        : hello@2.10
Package inputs : (0 ())
Build inputs   : (20 ([source code from mirror://gnu/hello/hello-2.10.tar.gz] tar@1.32 gzip@1.10 bzip2@1.0.6 xz@5.2.4 file@5.33 diffutils@3.7 patch@2.7.6 findutils@4.6.0 gawk@5.0.1 sed@4.7 grep@3.3 coreutils@8.31 make@4.2.1 bash-minimal@5.0.7 ld-wrapper@0 binutils@2.32 gcc@7.4.0 glibc@2.29 glibc-utf8-locales@2.29))
Package closure: (84 (m4@1.4.18 libatomic-ops@7.6.10 gmp@6.1.2 libgc@7.6.12 libltdl@2.4.6 libunistring@0.9.10 libffi@3.2.1 pkg-config@0.29.2 guile@2.2.6 libsigsegv@2.12 lzip@1.21 ed@1.15 perl@5.30.0 guile-bootstrap@2.0 zlib@1.2.11 xz@5.2.4 ncurses@6.1-20190609 libxml2@2.9.9 attr@2.4.48 gettext-minimal@0.20.1 gcc-cross-boot0-wrapped@7.4.0 libstdc++@7.4.0 ld-wrapper-boot3@0 bootstrap-binaries@0 ld-wrapper-boot0@0 flex@2.6.4 glibc-intermediate@2.29 libstdc++-boot0@4.9.4 expat@2.2.7 gcc-mesboot1-wrapper@4.7.4 mesboot-headers@0.19 gcc-core-mesboot@2.95.3 bootstrap-mes@0 bootstrap-mescc-tools@0.5.2 tcc-boot0@0.9.26-6.c004e9a mes-boot@0.19 tcc-boot@0.9.27 make-mesboot0@3.80 gcc-mesboot0@2.95.3 binutils-mesboot0@2.20.1a make-mesboot@3.82 diffutils-mesboot@2.7 gcc-mesboot1@4.7.4 glibc-headers-mesboot@2.16.0 glibc-mesboot0@2.2.5 binutils-mesboot@2.20.1a linux-libre-headers@4.19.56 linux-libre-headers-bootstrap@0 gcc-mesboot@4.9.4 glibc-mesboot@2.16.0 gcc-cross-boot0@7.4.0 bash-static@5.0.7 gettext-boot0@0.19.8.1 python-minimal@3.5.7 perl-boot0@5.30.0 texinfo@6.6 bison@3.4.1 gzip@1.10 libcap@2.27 acl@2.2.53 glibc-utf8-locales@2.29 gcc-mesboot-wrapper@4.9.4 file-boot0@5.33 findutils-boot0@4.6.0 diffutils-boot0@3.7 make-boot0@4.2.1 binutils-cross-boot0@2.32 glibc@2.29 gcc@7.4.0 binutils@2.32 ld-wrapper@0 bash-minimal@5.0.7 make@4.2.1 coreutils@8.31 grep@3.3 sed@4.7 gawk@5.0.1 findutils@4.6.0 patch@2.7.6 diffutils@3.7 file@5.33 bzip2@1.0.6 tar@1.32 hello@2.10))

That's 84 packages, just for printing "Hello, world!". As promised, it includes the bootstrap seed, called bootstrap-binaries. It may be more surprising to see Perl and Python in the dependency list of what is a pure C program. The explanation is that the build process of gcc and glibc contains Perl and Python code. Considering that both Perl and Python are written in C and use glibc, this hints at why bootstrapping is a hard problem!

Get ready for your own analyses

As promised, here is a Guile script that you can download and run from the command line to do dependency analyses much like the ones I have shown. Just give it the packages whose combined list of dependencies you want to analyze. For example:

./show-dependencies.scm hello
Packages: 1
  hello@2.10
Package inputs: 0 packages
 
Build inputs: 20 packages
  [source code from mirror://gnu/hello/hello-2.10.tar.gz] bash-minimal@5.0.7 binutils@2.32 bzip2@1.0.6 coreutils@8.31 diffutils@3.7 file@5.33 findutils@4.6.0 gawk@5.0.1 gcc@7.4.0 glibc-utf8-locales@2.29 glibc@2.29 grep@3.3 gzip@1.10 ld-wrapper@0 make@4.2.1 patch@2.7.6 sed@4.7 tar@1.32 xz@5.2.4
Package closure: 84 packages
  acl@2.2.53 attr@2.4.48 bash-minimal@5.0.7 bash-static@5.0.7 binutils-cross-boot0@2.32 binutils-mesboot0@2.20.1a binutils-mesboot@2.20.1a binutils@2.32 bison@3.4.1 bootstrap-binaries@0 bootstrap-mes@0 bootstrap-mescc-tools@0.5.2 bzip2@1.0.6 coreutils@8.31 diffutils-boot0@3.7 diffutils-mesboot@2.7 diffutils@3.7 ed@1.15 expat@2.2.7 file-boot0@5.33 file@5.33 findutils-boot0@4.6.0 findutils@4.6.0 flex@2.6.4 gawk@5.0.1 gcc-core-mesboot@2.95.3 gcc-cross-boot0-wrapped@7.4.0 gcc-cross-boot0@7.4.0 gcc-mesboot-wrapper@4.9.4 gcc-mesboot0@2.95.3 gcc-mesboot1-wrapper@4.7.4 gcc-mesboot1@4.7.4 gcc-mesboot@4.9.4 gcc@7.4.0 gettext-boot0@0.19.8.1 gettext-minimal@0.20.1 glibc-headers-mesboot@2.16.0 glibc-intermediate@2.29 glibc-mesboot0@2.2.5 glibc-mesboot@2.16.0 glibc-utf8-locales@2.29 glibc@2.29 gmp@6.1.2 grep@3.3 guile-bootstrap@2.0 guile@2.2.6 gzip@1.10 hello@2.10 ld-wrapper-boot0@0 ld-wrapper-boot3@0 ld-wrapper@0 libatomic-ops@7.6.10 libcap@2.27 libffi@3.2.1 libgc@7.6.12 libltdl@2.4.6 libsigsegv@2.12 libstdc++-boot0@4.9.4 libstdc++@7.4.0 libunistring@0.9.10 libxml2@2.9.9 linux-libre-headers-bootstrap@0 linux-libre-headers@4.19.56 lzip@1.21 m4@1.4.18 make-boot0@4.2.1 make-mesboot0@3.80 make-mesboot@3.82 make@4.2.1 mes-boot@0.19 mesboot-headers@0.19 ncurses@6.1-20190609 patch@2.7.6 perl-boot0@5.30.0 perl@5.30.0 pkg-config@0.29.2 python-minimal@3.5.7 sed@4.7 tar@1.32 tcc-boot0@0.9.26-6.c004e9a tcc-boot@0.9.27 texinfo@6.6 xz@5.2.4 zlib@1.2.11

You can now easily experiment yourself, even if you are not at ease with Guile. For example, suppose you have a small Python script that plots some data using matplotlib. What are its dependencies? First you should check that it runs in a minimal environment:

guix environment --container --ad-hoc python python-matplotlib -- python my-script.py

Next, find its dependencies:

./show-dependencies.scm python python-matplotlib

I won't show the output here because it is rather long - the package closure contains 499 packages!

OK, but... what are the real dependencies?

I have explained dependencies along these lines in a few seminars. There's one question that someone in the audience is bound to ask: What do the results of a computation really depend on? The output of hello is "Hello, world!", no matter which version of gcc I use to compile it, and no matter which version of python was used in building glibc. The package closure is a worst-case estimate: it contains everything that can potentially influence the results, though most of it doesn't in practice. Unfortunately, there is no way to identify the dependencies that matter automatically, because answering that question in general (i.e. for arbitrary software) is equivalent to solving the halting problem.

Most package managers, such as Debian's apt or the multi-platform conda, take a different point of view. They define the dependencies of a program as all packages that need to be loaded into memory in order to run it. They thus exclude the software that is required only to build the program and its run-time dependencies, and which can be discarded afterwards. Whereas Guix' definition errs on the safe side (its dependency list is often longer than necessary but never too short), the run-time-only definition is both too vast and too restrictive. Many run-time dependencies don't have an impact on most programs' results, but some build-time dependencies do.

One important case where build-time dependencies matter is floating-point computations. For historical reasons, they are surrounded by an aura of vagueness and imprecision, which goes back to their early days, when many details were poorly understood and implementations varied a lot. Today, all computers used for scientific computing respect the IEEE 754 standard that precisely defines how floating-point numbers are represented in memory and what the result of each arithmetic operation must be. Floating-point arithmetic is thus perfectly deterministic and even perfectly portable between machines, if expressed in terms of the operations defined by the standard. However, high-level languages such as C or Fortran do not allow programmers to do that. Their designers assume (probably correctly) that most programmers do not want to deal with the intricate details of rounding. Therefore they provide only a simplified interface to the arithmetic operations of IEEE 754, which incidentally also leaves more liberty for code optimization to compiler writers. The net result is that the complete specification of a program's results is its source code plus the compiler and the compilation options. You can thus get reproducible floating-point results if you include all compilation steps into the perimeter of your computation, at least for code running on a single processor. Parallel computing is a different story: it involves voluntarily giving up reproducibility in exchange for speed. Reproducibility then becomes a best-effort approach of limiting the collateral damage done by optimization through the clever design of algorithms.
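As a small illustration (a sketch; -ffast-math is a real GCC flag, but whether the printed digits actually change depends on the code and the platform), compiler options are part of the specification because they can change which floating-point operations are performed:

gcc -O2 pi.c -o pi-strict
gcc -O2 -ffast-math pi.c -o pi-fast

The second variant allows the compiler to reorder floating-point operations, something the IEEE 754 operations themselves do not license.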

Reproducing a reproducible computation

So far, I have explained the theory behind reproducible computations. The take-home message is that to be sure to get exactly the same results in the future, you have to use the exact same versions of all packages in the package closure of your immediate dependencies. I have also shown you how you can access that package closure. There is one missing piece: how do you actually run your program in the future, using the same environment?

The good news is that doing this is a lot simpler than understanding my lengthy explanations (which is why I leave this for the end!). The complex dependency graphs that I have analyzed up to here are encoded in the Guix source code, so all you need to re-create your environment is the exact same version of Guix! You get that version using

guix describe
Generation 15 Jan 06 2020 13:30:45    (current)
  guix 769b96b
    repository URL: https://git.savannah.gnu.org/git/guix.git
    branch: master
    commit: 769b96b62e8c09b078f73adc09fb860505920f8f

The critical information here is the unpleasant-looking string of hexadecimal digits after "commit". This is all it takes to uniquely identify a version of Guix. And to re-use it in the future, all you need is Guix' time machine:

guix time-machine --commit=769b96b62e8c09b078f73adc09fb860505920f8f -- environment --ad-hoc gcc-toolchain
Updating channel 'guix' from Git repository at 'https://git.savannah.gnu.org/git/guix.git'...
gcc pi.c -o pi
./pi
M_PI                         : 3.1415926536
4 * atan(1.)                 : 3.1415926536
Leibniz' formula (four terms): 2.8952380952

The time machine actually downloads the specified version of Guix and passes it the rest of the command line. You are running the same code again. Even bugs in Guix will be reproduced faithfully! As before, guix environment leaves us in a special-environment shell which needs to be terminated by Ctrl-D.

For many practical use cases, this technique is sufficient. But there are two variants you should know about for more complicated situations:

  • If you need an environment with many packages, you should use a manifest rather than list the packages on the command line; see the manual for details, and the sketch after the second variant below.

  • If you need packages from additional channels, i.e. packages that are not part of the official Guix distribution, you should store a complete channel description in a file using

guix describe -f channels > guix-version-for-reproduction.txt

and feed that file to the time machine:

guix time-machine --channels=guix-version-for-reproduction.txt -- environment --ad-hoc gcc-toolchain
Updating channel 'guix' from Git repository at 'https://git.savannah.gnu.org/git/guix.git'...
gcc pi.c -o pi
./pi
M_PI                         : 3.1415926536
4 * atan(1.)                 : 3.1415926536
Leibniz' formula (four terms): 2.8952380952
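And here is a sketch of the manifest variant. The file name manifest.scm and its contents are illustrative; a one-line manifest such as (specifications->manifest (list "gcc-toolchain")) would do:

guix time-machine --channels=guix-version-for-reproduction.txt -- environment --container --manifest=manifest.scm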

Last, if your colleagues do not use Guix yet, you can pack your reproducible software for use on other systems: as a tarball, or as a Docker or Singularity container image. For example:

guix pack            \
     -f docker       \
     -C none         \
     -S /bin=bin     \
     -S /lib=lib     \
     -S /share=share \
     -S /etc=etc     \
     gcc-toolchain
/gnu/store/iqn9yyvi8im18g7y9f064lw9s9knxp0w-docker-pack.tar

will produce a Docker container image, and with the knowledge of the Guix commit (or channel specification), you will be able in the future to reproduce this container bit-to-bit using guix time-machine.
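For example, a sketch of that future reproduction step, reusing the channel file saved earlier:

guix time-machine --channels=guix-version-for-reproduction.txt -- \
     pack -f docker -C none -S /bin=bin -S /lib=lib \
     -S /share=share -S /etc=etc gcc-toolchain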

And now... congratulations for having survived to the end of this long journey! May all your computations be reproducible, with Guix.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

14 January, 2020 04:30PM by Konrad Hinsen

January 13, 2020

Join GNU Guix through Outreachy

We are happy to announce that for the fourth time GNU Guix offers a three-month internship through Outreachy, the inclusion program for groups traditionally underrepresented in free software and tech. We currently propose four subjects to work on:

  1. Implement netlink bindings for Guile.
  2. Improve internationalization support for the Guix Data Service.
  3. Add accessibility support for the Guix System Installer.
  4. Add monitoring support for the Guix daemon and Cuirass.

The initial applications for this round open on Jan. 20, 2020 at 4PM UTC and the initial application deadline is on Feb. 25, 2020 at 4PM UTC.

The final project list is announced on Feb. 25, 2020.

For further information, check out the timeline, information about the application process, and the eligibility rules.

If you’d like to contribute to computing freedom, Scheme, functional programming, or operating system development, now is a good time to join us. Let’s get in touch on the mailing lists and on the #guix channel on the Freenode IRC network, or come chat with us at FOSDEM!

Last year we had the pleasure to welcome Laura Lazzati as an Outreachy intern working on documentation video creation, which led to the videos you can now see on the home page.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

13 January, 2020 02:30PM by Gábor Boskovits

libredwg @ Savannah

libredwg-0.10.1 released

Major bugfixes:
  * Fixed dwg2SVG htmlescape overflows and off-by-ones (#182)
  * Removed direct usages of fprintf and stderr in the lib. All can be
    redefined now. (#181)

Minor bugfixes:
  * Fuzzing fixes for dwg2SVG, dwgread. (#182)
  * Fixed eed.raw leaks

Here are the compressed sources:

  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.1.tar.gz   (10.9MB)
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.1.tar.xz   (4.5MB)

Here are the GPG detached signatures[*]:

  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.1.tar.gz.sig
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.1.tar.xz.sig

Use a mirror for higher download bandwidth:

  https://www.gnu.org/order/ftp.html

Here are more binaries:

  https://github.com/LibreDWG/libredwg/releases/tag/0.10.1

Here are the SHA256 checksums:

6539a9a762f74e937f08000e2bb3d3d4dddd326b85b5361f7532237b68ff0ae3  libredwg-0.10.1.tar.gz
0fa603d5f836dfceb8ae4aac28d1e836c09dce3936ab98703bb2341126678ec3  libredwg-0.10.1.tar.xz

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify libredwg-0.10.1.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

13 January, 2020 09:40AM by Reini Urban

GNU Guile

GNU Guile 2.9.9 (beta) released

We are delighted to announce the release of GNU Guile 2.9.9. This is the ninth and final pre-release of what will eventually become the 3.0 release series.

See the release announcement for full details and a download link.

This release fixes a number of bugs, omissions, and regressions. Notably, it fixes the build on 32-bit systems.

We plan to release a final Guile 3.0.0 on 17 January: this Friday! Please do test this prerelease; build reports, good or bad, are very welcome; send them to guile-devel@gnu.org. If you know you found a bug, please do send a note to bug-guile@gnu.org. Happy hacking!

13 January, 2020 08:44AM by Andy Wingo (guile-devel@gnu.org)

Applied Pokology

First Poke-Conf at Mont-Soleil - A report

[Photo: pokists poking at Mont-Soleil]

This last weekend we had the first gathering of poke developers, as part of the GNU Hackers Meeting at Mont-Soleil, in Switzerland. I can say we had a lot of fun, and it was quite a productive meeting too: many patches were written, and many technical aspects were designed and clarified.

13 January, 2020 12:00AM

January 12, 2020

GNUnet News

GNUnet 0.12.2

GNUnet 0.12.2 released

We are pleased to announce the release of GNUnet 0.12.2.
This is a new bugfix release. In terms of usability, users should be aware that there are still a large number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.12.2 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.12.2 (since 0.12.1)

  • GNS: Resolver clients are now able to specify a recursion depth limit.
  • TRANSPORT/TNG: The transport rewrite (aka TNG) is underway and various transport components have been worked on, including TCP, UDP and UDS communicators.
  • RECLAIM: Added preliminary support for third party attested credentials.
  • UTIL: The cryptographic changes introduced in 0.12.0 broke ECDSA ECDH and consequently other components. The offending ECDSA key normalization was dropped.

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, Florian Dold, Christian Ulrich, dvn, lynx and Martin Schanzenbach.

12 January, 2020 11:00PM

January 11, 2020

GNS Specification Milestone 2/4

GNS Technical Specification Milestone 2/4

We are happy to announce the completion of the second milestone for the GNS Specification. The second milestone consists of documenting the GNS name resolution process and record handling.
With the release of GNUnet 0.12.x, the currently specified protocol is implemented according to the specification. As before, the draft specification LSD001 can be found at:

As already announced on the mailing list, the Go implementation of GNS is also proceeding as planned and implements the specification.

The next and third milestone will cover namespace revocation.

This work is generously funded by NLnet as part of their Search and discovery fund.

11 January, 2020 11:00PM

January 10, 2020

GNU Guix

Meet Guix at FOSDEM

As usual, GNU Guix will be present at FOSDEM on February 1st and 2nd. This year, we’re happy to say that there will be quite a few talks about Guix and related projects!

The Minimalistic, Experimental and Emerging Languages devroom will also feature talks about Racket, Lua, Crystal, Nim, and Pharo that you should not miss under any circumstances!

[Guix Days logo]

For the third time, we are also organizing the Guix Days as a FOSDEM fringe event, a two-day Guix workshop where contributors and enthusiasts will meet. The workshop takes place on Thursday Jan. 30th and Friday Jan. 31st at the Institute of Cultural Affairs (ICAB) in Brussels.

Again this year there will be few talks; instead, the event will consist primarily of “unconference-style” sessions focused on specific hot topics about Guix, the Shepherd, continuous integration, and related tools and workflows.

Attendance to the workshop is free and open to everyone, though you are invited to register (there are only a few seats left!). Check out the workshop’s wiki page for registration and practical info. Hope to see you in Brussels!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

10 January, 2020 02:30PM by Manolis Ragkousis

January 08, 2020

libredwg @ Savannah

libredwg-0.10 released

Some minor API changes and bugfixes, mostly stabilization.

API breaking changes:
  * added a new int *isnewp argument to all dynapi utf8text getters,
    indicating whether the returned string is freshly malloced or not.
  * removed the UNKNOWN supertype, there are only UNKNOWN_OBJ and UNKNOWN_ENT
    left, with common_entity_data.
  * renamed BLOCK_HEADER.preview_data to preview, preview_data_size to preview_size
  * renamed SHAPE.shape_no to style_id
  * renamed CLASS.wasazombie to is_zombie

Major bugfixes:
  * Improved building the perl5 binding, proper dependencies.
    Set proper -I and -L paths, create LibreDWG.c not swig_perl.c
  * Harmonized INDXFB with INDXF, removed extra src/in_dxfb.c (#134).
    Slimmed the .so size by 260Kb. Still untested though.
  * Fixed encoding of added r2000 AUXHEADER address (broken since 0.9)
  * Fixed EED encoding from dwgrewrite (a dxf2dwg regression from 0.9) (#180)

Minor bugfixes:
  * Many fuzzing and static analyzer fixes for dwg2dxf, dxf2dwg, dwgrewrite,
    including a stack-overflow on outdxf cquote. (#172-174, #178, #179).
    dwgrewrite and indxf are pretty robust now, but still highly experimental,
    as many dxf2dwg import and DWG validity tests are missing.
    indxf still has many asserts on many structural DXF errors.
  * Protect indxf from many NULL ptr, overflows and truncation.
  * Fixed most indxf and encode leaks. (#151)
  * More section decoders protections from invalid (fuzzed) values.
  * Stabilized the ASAN leak tests for make check.
  * Fix MULTILEADER.ctx.lline handles <r2010
  * Fix indxf color.alpha; at DXF 440
  * Fixed most important make scan-build warnings, the rest are mostly bogus.

Other newsworthy changes:
  * Added LIBREDWG_VERSION et al to include/dwg.h
  * Added support for AcDb3dSolid history_id (r2007+)
  * Improved the indxf speed in new_object. Do a proper linear search, and
    break on first found type.
  * Rename the ./dxf helper to ./dwg, and added a ./dxf test helper.
  * dxf2dwg got a new experimental --force-free option to check for leaks and
    use-after-free (UAF) or double-free errors.
  * Allow -o /dev/null sinks for dxf2dwg and dwg2dxf, for faster fuzzing.
  * Harmonized *.spec formatting and adjusted gen-dynapi.pl
  * Harmonized out_dxfb with out_dxf, e.g. the new mspace improvements (#173).

Here are the compressed sources:
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.tar.gz   (10.9MB)
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.tar.xz   (4.5MB)

Here are the GPG detached signatures[*]:
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.tar.gz.sig
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.10.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are more binaries:
  https://github.com/LibreDWG/libredwg/releases/tag/0.10

Here are the SHA256 checksums:

e890b4d3ab8071c78c4eb36e6f7ecd30e7f54630b0e2f051b3fe51395395d5f7  libredwg-0.10.tar.gz
8c37c4ef985e4135e3d2020c502c887b6115cdbbab2148b2e730875d5659cd66  libredwg-0.10.tar.xz

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify libredwg-0.10.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

08 January, 2020 04:52PM by Reini Urban

January 04, 2020

Sylvain Beucler

SCP Foundation needs you!

SCP is a mind-blowing, diverse, high-quality collection of writings and illustrations, all released under the CC-BY-SA free license.
If you have never read horror stories written in a scientific style -- give it a try :)

[obviously this has nothing to do with OpenSSH Secure CoPy ;)]

Faced with a legal threat through the aggressive use of a RU/EU trademark, the SCP project is raising a legal fund.
I suggest you have a look.

04 January, 2020 11:50PM

Reproducible Windows builds

I'm working again on making reproducible .exe-s. I thought I'd share my process:

Pros:

  • End users get a bit-for-bit reproducible .exe, known not to contain trojan and auditable from sources
  • Point releases can reuse the exact same build process and avoid introducing bugs

Steps:

  • Generate a source tarball (non-reproducibly)
  • Debian Docker as a base, with fixed version + snapshot.debian.org sources.list
    • Dockerfile: install packaged dependencies and MXE(.cc) from a fixed Git revision
    • Dockerfile: compile MXE with SOURCE_DATE_EPOCH + fix-ups
  • Build my project in the container with SOURCE_DATE_EPOCH and check SHA256
  • Copy-on-release

Result:

git.savannah.gnu.org/gitweb/?p=freedink/dfarc.git;a=tree;f=autobuild/dfarc-w32-snapshot

Generate a source tarball (non-reproducibly)

This is not reproducible due to using non-reproducible tools (gettext, automake tarballs, etc.) but it doesn't matter: only building from source needs to be reproducible, and the source is the tarball.

It would be better if the source tarball were perfectly reproducible, especially for large generated content (./configure, wxGlade-generated GUI source code...), but that can be a second step.

Debian Docker as a base

AFAIU the Debian Docker images are made by Debian developers but are in no way official images. That's a pity, and to be 100% safe I should start anew from debootstrap, but Docker provides a very efficient framework for building images, notably with caching of every build step, immediately available fresh containers, and a public image repository.

This means with a single:

sudo -g docker make

you get my project reproducibly built from scratch with nothing to set up at all.

I avoid using a :latest tag, since it will change, and also backports, since they can be updated anytime. Here I'm using stretch:9.4 and no backports.

Using snapshot.debian.org in sources.list makes sure the installed packaged dependencies won't change at the next build. For a dot release however (not for a rebuild), they should be updated in case there was a security fix that affects the built software (rare, but it happens).

Last but not least, APT::Install-Recommends "false"; for better dependency control.
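
Put together, the beginning of such a Dockerfile might look like this (a sketch: the snapshot timestamp is illustrative, and Check-Valid-Until is disabled because snapshot.debian.org serves Release files whose validity has expired):

FROM debian:9.4
# Pin package sources to a fixed snapshot, disable Recommends:
RUN echo 'deb http://snapshot.debian.org/archive/debian/20190101T000000Z/ stretch main' \
      > /etc/apt/sources.list \
 && echo 'APT::Install-Recommends "false";' > /etc/apt/apt.conf.d/99norecommends \
 && apt-get -o Acquire::Check-Valid-Until=false update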

MXE

mxe.cc is a compilation environment that rebuilds MinGW (GCC for Windows) and selected dependencies unattended with a single make. Doing this manually would be tedious because every other day upstream breaks MinGW cross-compilation, and debugging an hour-long build process takes ages. Been there, done that.

MXE has a reproducible-boosted binutils with a patch for SOURCE_DATE_EPOCH that avoids getting date-based and/or random build timestamps in the PE (.exe/.dll) files. It's also compiled with --enable-deterministic-archives to avoid timestamp issues in .a files (but no automatic ordering).

I set SOURCE_DATE_EPOCH to the fixed Git commit date and I run MXE's build.

This does not apply to GCC however, so I needed to e.g. patch a __DATE__ in wxWidgets.

In addition, libstdc++.a has a file ordering issue (said ordering surprisingly stays stable between a container and a host build, but varies when using a different computer with the same distro and tool versions). I hence re-archive libstdc++.a manually.

It's worth noting that PE files don't have issues with build paths (and varying BuildID-s - unlike ELF... T_T).

Again, for a dot release, it makes sense to update the MXE Git revision so as to catch security fixes, but at least I have the choice.
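
Concretely this boils down to a couple of shell lines around MXE's build (a sketch; the package list is illustrative):

cd mxe/
# Use the pinned Git commit's date as the reference timestamp:
export SOURCE_DATE_EPOCH=$(git log -1 --format=%ct)
make gcc wxwidgets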

Build project

With this I can start a fresh Docker container and run the compilation process inside, as a non-privileged user just in case.

I set SOURCE_DATE_EPOCH to the release date at 00:00 UTC, or the Git revision date for snapshots.

This rebuild framework is excluded from the source tarball, so the latter stays stable during build tuning. I see it as a post-release tool, hence not part of the release (just like distros packaging).

The generated .exe is statically linked, which helps to get a stable result (only the needed parts of the dependencies are included in the final executable).

Since MXE is not itself reproducible, differences may come from MXE itself, which may need fixes as explained above. This is annoying, and hopefully it will get easier once they ship GCC 6. To debug, I unzip the different .zip-s, upx -d my .exe-s, and run diffoscope.

I use various tricks (stable ordering, stable timestamping, metadata cleaning) to make the final .zip reproducible as well. Post-processing tools would be an alternative if they were fixed.
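
The timestamp and .zip normalization look roughly like this (a sketch; the date and file names are illustrative):

# Release date at 00:00 UTC as the reference timestamp:
export SOURCE_DATE_EPOCH=$(date --utc --date='2020-01-04 00:00' +%s)
# Clamp file dates, then zip with stable ordering and no extra attributes:
find dist/ -exec touch --date="@$SOURCE_DATE_EPOCH" {} +
(cd dist/ && find . -type f | LC_ALL=C sort | TZ=UTC zip -X ../dfarc.zip -@)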

reprotest

Any process is moot if it can't be tested.

reprotest helps by running 2 successive compilations with varying factors (build path, file system ordering, etc.) and checking that we get the exact same binary. As a trade-off, I don't run it on the full build environment, just on the project itself. I plugged reprotest into the Docker container by running a sshd on the fly. I have another Makefile target to run reprotest in my host system where I also installed MXE, so I can compare results and sometimes find differences (e.g. due to using a different filesystem). In addition this is faster for debugging, since changing anything in the early Dockerfile steps means a full 1h rebuild.
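
In its simplest form, reprotest takes the build command and the artifact to compare (a sketch; the actual build command in my Makefile differs):

# Build twice with varied environment, then compare the artifact:
reprotest 'make dfarc.exe' dfarc.exe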

Copy-on-release

At release time I make a copy of the directory that contains all the self-contained build scripts and the Dockerfile, and rename it after the new release version. I'll continue improving upon the reproducible build system in the 'snapshot' directory, but the versioned directory will stay as-is and can be used in the future to get the same bit-for-bit identical .exe anytime.

This is the technique I used in my Android Rebuilds project.

Other platforms

For now I don't control the build process for other platforms: distros have their own autobuilders, so does F-Droid. Their problem :P

I have plans to make reproducible GNU/Linux AppImage-based builds in the future though. I should be able to use a finer-grained, per-dependency process rather than the huge MXE-based chunk I currently do.

I hope this helps other projects provide reproducible binaries directly! Comments/suggestions welcome.

04 January, 2020 11:50PM

RenPyWeb - one year

One year ago I posted a little entry in Ren'Py Jam 2018, which was the first-ever Ren'Py game directly playable in the browser :)

[Screenshot: The Question tutorial]

Big thanks to Ren'Py's author who immediately showed full support for the project, and to all the other patrons who joined the effort!

One year later, RenPyWeb is officially integrated into Ren'Py with a one-click build, performance has improved, countless little fixes to the Emscripten technology stack have provided stability, and more than 60 games of all sizes have been published for the web.

[Screenshot: RenPyWeb]

What's next? I have plans to download resources on-demand (rather than downloading the whole game on start-up), to improve support for mobile browsers, and of course to continue the myriad of little changes that make RenPyWeb more and more robust. I'm also wondering about making our web stack more widely accessible to Pygame, so as to bring more devs into the wonderful world of python-in-the-browser and improve the tech ecosystem - let me know if you're interested.

Hoping to see great new Visual Novels on the web this coming year :)

04 January, 2020 11:50PM

RenPyWeb - Ren'Py in your HTML5 web browser

I like the Ren'Py project, a popular game engine aimed at Visual Novels that can also be used as a portable Python environment.

One limitation was that it required downloading games, while nowadays people are used to Flash- or HTML5- based games that play in-browser without having to (de)install.

Can this be fixed? While maintaining compatibility with Ren'Py's several DSLs? And without rewriting everything in JavaScript?
Can Emscripten help? Even though this is a Python/Cython project?
After lots of experimenting, and full-stack patching/contributing, it turns out the answer is yes!

Live demo:
https://renpy.beuc.net/
[Demos: The Question, the Tutorial, or your own game]

At last I finished organizing and cleaning up, and published everything under a permissive free software / open source license, like Python and Ren'Py themselves.
Python port:
https://www.beuc.net/python-emscripten/python/dir?ci=tip
Build system:
https://github.com/renpy/renpyweb

Development is going on, consider supporting the project!
Patreon: https://www.patreon.com/Beuc

04 January, 2020 11:50PM

January 02, 2020

grep @ Savannah

grep-3.4 released [stable]

This is to announce grep-3.4, a stable release.
Special thanks to Paul Eggert and Norihiro Tanaka for their many fine contributions.

There have been 71 commits by 4 people in the 54 weeks since 3.3.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Jim Meyering (31)
  Norihiro Tanaka (5)
  Paul Eggert (34)
  Zev Weiss (1)

Jim [on behalf of the grep maintainers]
==================================================================

Here is the GNU grep home page:
  http://gnu.org/s/grep/

For a summary of changes and contributors, see:
  http://git.sv.gnu.org/gitweb/?p=grep.git;a=shortlog;h=v3.4
or run this command from a git-cloned grep directory:
  git shortlog v3.3..v3.4

To summarize the 819 gnulib-related changes, run these commands
from a git-cloned grep directory:
  git checkout v3.4
  git submodule summary v3.3

==================================================================
Here are the compressed sources and a GPG detached signature[*]:
  https://ftp.gnu.org/gnu/grep/grep-3.4.tar.xz
  https://ftp.gnu.org/gnu/grep/grep-3.4.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://ftpmirror.gnu.org/grep/grep-3.4.tar.xz
  https://ftpmirror.gnu.org/grep/grep-3.4.tar.xz.sig

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify grep-3.4.tar.xz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 7FD9FCCB000BEEEE

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.69.197-b8fd7-dirty
  Automake 1.16a
  Gnulib v0.1-3121-gc3c36de58

==================================================================
NEWS

* Noteworthy changes in release 3.4 (2020-01-02) [stable]

** New features

  The new --no-ignore-case option causes grep to observe case
  distinctions, overriding any previous -i (--ignore-case) option.
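
  For example, the following prints nothing, since the later
  --no-ignore-case overrides the earlier -i (an illustrative sketch,
  not from the original NEWS):
    echo Foo | grep -i --no-ignore-case foo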

** Bug fixes

  '.' no longer matches some invalid byte sequences in UTF-8 locales.
  [bug introduced in grep 2.7]

  grep -Fw can no longer false match in non-UTF-8 multibyte locales
  For example, this command would erroneously print its input line:
    echo ab | LC_CTYPE=ja_JP.eucjp grep -Fw b
  [Bug#38223 introduced in grep 2.28]

  The exit status of 'grep -L' is no longer incorrect when standard
  output is /dev/null.
  [Bug#37716 introduced in grep 3.2]

  A performance bug has been fixed when grep is given many patterns,
  each with no back-reference.
  [Bug#33249 introduced in grep 2.5]

  A performance bug has been fixed for patterns like '01.2' that
  cause grep to reorder tokens internally.
  [Bug#34951 introduced in grep 3.2]

** Build-related

  The build procedure no longer relies on any already-built src/grep
  that might be absent or broken.  Instead, it uses the system 'grep'
  to bootstrap, and uses src/grep only to test the build.  On Solaris
  /usr/bin/grep is broken, but you can install GNU or XPG4 'grep' from
  the standard Solaris distribution before building GNU Grep yourself.
  [bug introduced in grep 2.8]

02 January, 2020 10:32PM by Jim Meyering

GNU Guile

GNU Guile 2.9.8 (beta) released

We are delighted to announce the release of GNU Guile 2.9.8. This is the eighth and possibly final pre-release of what will eventually become the 3.0 release series.

See the release announcement for full details and a download link.

This release fixes an error in libguile that could cause Guile to crash in some particular conditions, and was notably experienced by users compiling Guile itself on Ubuntu 18.04.

We plan to release a final Guile 3.0.0 on 17 January, though we may require another prerelease in the meantime. Until then, note that GNU Guile 2.9.8 is a beta release, and as such offers no API or ABI stability guarantees. Users needing a stable Guile are advised to stay on the stable 2.2 series.

As always, experience reports with GNU Guile 2.9.8, good or bad, are very welcome; send them to guile-devel@gnu.org. If you know you found a bug, please do send a note to bug-guile@gnu.org. Happy hacking!

02 January, 2020 01:33PM by Andy Wingo (guile-devel@gnu.org)

January 01, 2020

bison @ Savannah

Bison 3.5 released [stable]

We are very happy to announce the release of Bison 3.5, the best release
ever of Bison!  Better than 3.4, although it was a big improvement over 3.3,
which was a huge upgrade compared to 3.2, itself way ahead of Bison 3.1.
Ethics demands that we not mention 3.0.  Rumor has it that Bison 3.5 is not
as good as 3.6 will be, though...

Paul Eggert revised the use of integral types in both the generator and the
generated parsers.  As a consequence small parsers have a smaller footprint,
and very large automata are now possible with the default back-end (yacc.c).
If you are interested in smaller parsers, also have a look at api.token.raw.

Adrian Vogelsgesang contributed lookahead correction for C++.

The purpose of string literals has been clarified.  Indeed, they are used
for two different purposes: freeing from having to implement the keyword
matching in the scanner, and improving error messages.  Most of the time
both can be achieved at the same time, but on occasions, it does not work so
well.  We promote their use for error messages.  We still support the former
case (at least for historical skeletons), but it is not a recommended
practice.  The documentation now warns against this use.  A new warning,
-Wdangling-alias, should help users who want to enforce the use of aliases
only for error messages.

An experimental back-end for the D programming language was added thanks to
Oliver Mangold and H. S. Teoh.  It is looking for active support from the D
community.

Happy parsing!

==================================================================

Bison is a general-purpose parser generator that converts an annotated
context-free grammar into a deterministic LR or generalized LR (GLR) parser
employing LALR(1) parser tables.  Bison can also generate IELR(1) or
canonical LR(1) parser tables.  Once you are proficient with Bison, you can
use it to develop a wide range of language parsers, from those used in
simple desk calculators to complex programming languages.

Bison is upward compatible with Yacc: all properly-written Yacc grammars
work with Bison with no change.  Anyone familiar with Yacc should be able to
use Bison with little trouble.  You need to be fluent in C, C++ or Java
programming in order to use Bison.

Here is the GNU Bison home page:
   https://gnu.org/software/bison/

==================================================================

Here are the compressed sources:
  https://ftp.gnu.org/gnu/bison/bison-3.5.tar.gz   (5.1MB)
  https://ftp.gnu.org/gnu/bison/bison-3.5.tar.xz   (3.1MB)

Here are the GPG detached signatures[*]:
  https://ftp.gnu.org/gnu/bison/bison-3.5.tar.gz.sig
  https://ftp.gnu.org/gnu/bison/bison-3.5.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify bison-3.5.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 0DDCAA3278D5264E

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.69
  Automake 1.16.1
  Flex 2.6.4
  Gettext 0.19.8.1
  Gnulib v0.1-2971-gb943dd664

==================================================================

* Noteworthy changes in release 3.5 (2019-12-11) [stable]

** Backward incompatible changes

  Lone carriage-return characters (aka \r or ^M) in the grammar files are no
  longer treated as end-of-lines.  This changes the diagnostics, and in
  particular their locations.

  In C++, line numbers and columns are now represented as 'int' not
  'unsigned', so that integer overflow on positions is easily checkable via
  'gcc -fsanitize=undefined' and the like.  This affects the API for
  positions.  The default position and location classes now expose
  'counter_type' (int), used to define line and column numbers.

** Deprecated features

  The YYPRINT macro, which works only with yacc.c and only for tokens, was
  obsoleted long ago by %printer, introduced in Bison 1.50 (November 2002).
  It is deprecated and its support will be removed eventually.

** New features

*** Lookahead correction in C++

  Contributed by Adrian Vogelsgesang.

  The C++ deterministic skeleton (lalr1.cc) now supports LAC, via the
  %define variable parse.lac.
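
  Enabling it is a one-line %define in the grammar file (a minimal
  sketch):

    %define parse.lac full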

*** Variable api.token.raw: Optimized token numbers (all skeletons)

  In the generated parsers, tokens have two numbers: the "external" token
  number as returned by yylex (which starts at 257), and the "internal"
  symbol number (which starts at 3).  Each time yylex is called, a table
  lookup maps the external token number to the internal symbol number.

  When the %define variable api.token.raw is set, tokens are assigned their
  internal number, which saves one table lookup per token, and also saves
  the generation of the mapping table.

  The gain is typically moderate, but in extreme cases (very simple user
  actions), a 10% improvement can be observed.
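
  The variable is a simple Boolean %define (a minimal sketch); note
  that yylex must then return the internal symbol numbers directly,
  and character literals such as 'a' can no longer be used as tokens
  in the grammar:

    %define api.token.raw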

*** Generated parsers use better types for states

  Stacks now use the best integral type for state numbers, instead of always
  using 15 bits.  As a result "small" parsers now have a smaller memory
  footprint (they use 8 bits), and there is support for large automata (16
  bits), and extra large (using int, i.e., typically 31 bits).

*** Generated parsers prefer signed integer types

  Bison skeletons now prefer signed to unsigned integer types when either
  will do, as the signed types are less error-prone and allow for better
  checking with 'gcc -fsanitize=undefined'.  Also, the types chosen are now
  portable to unusual machines where char, short and int are all the same
  width.  On non-GNU platforms this may entail including <limits.h> and (if
  available) <stdint.h> to define integer types and constants.

*** A skeleton for the D programming language

  For the last few releases, Bison has shipped a stealth experimental
  skeleton: lalr1.d.  It was first contributed by Oliver Mangold, based on
  Paolo Bonzini's lalr1.java, and was cleaned and improved thanks to
  H. S. Teoh.

  However, because nobody has committed to improving, testing, and
  documenting this skeleton, it is not clear that it will be supported in
  the future.

  The lalr1.d skeleton *is functional*, and works well, as demonstrated in
  examples/d/calc.d.  Please try it, enjoy it, and... commit to support it.

*** Debug traces in Java

  The Java backend no longer emits code and data for parser tracing if the
  %define variable parse.trace is not defined.

** Diagnostics

*** New diagnostic: -Wdangling-alias

  String literals, which allow for better error messages, are (too)
  liberally accepted by Bison, which might result in silent errors.  For
  instance

    %type <exVal> cond "condition"

  does not define "condition" as a string alias to 'cond' (nonterminal
  symbols do not have string aliases).  It is rather equivalent to

    %nterm <exVal> cond
    %token <exVal> "condition"

  i.e., it gives the type 'exVal' to the "condition" token, which was
  clearly not the intention.

  Also, because string aliases need not be defined, typos such as "baz"
  instead of "bar" will be not reported.

  The option -Wdangling-alias catches these situations.  On

    %token BAR "bar"
    %type <ival> foo "foo"
    %%
    foo: "baz" {}

  bison -Wdangling-alias reports

    warning: string literal not attached to a symbol
          | %type <ival> foo "foo"
          |                  ^~~~~
    warning: string literal not attached to a symbol
          | foo: "baz" {}
          |      ^~~~~

  The -Wall option does not (yet?) include -Wdangling-alias.

*** Better POSIX Yacc compatibility diagnostics

  POSIX Yacc restricts %type to nonterminals.  This is now diagnosed by
  -Wyacc.

    %token TOKEN1
    %type  <ival> TOKEN1 TOKEN2 't'
    %token TOKEN2
    %%
    expr:

  gives with -Wyacc

    input.y:2.15-20: warning: POSIX yacc reserves %type to nonterminals [-Wyacc]
        2 | %type  <ival> TOKEN1 TOKEN2 't'
          |               ^~~~~~
    input.y:2.29-31: warning: POSIX yacc reserves %type to nonterminals [-Wyacc]
        2 | %type  <ival> TOKEN1 TOKEN2 't'
          |                             ^~~
    input.y:2.22-27: warning: POSIX yacc reserves %type to nonterminals [-Wyacc]
        2 | %type  <ival> TOKEN1 TOKEN2 't'
          |                      ^~~~~~

*** Diagnostics with insertion

  The diagnostics now display the suggestion below the underlined source.
  Replacement for undeclared symbols are now also suggested.

    $ cat /tmp/foo.y
    %%
    list: lis '.' |

    $ bison -Wall foo.y
    foo.y:2.7-9: error: symbol 'lis' is used, but is not defined as a token and has no rules; did you mean 'list'?
        2 | list: lis '.' |
          |       ^~~
          |       list
    foo.y:2.16: warning: empty rule without %empty [-Wempty-rule]
        2 | list: lis '.' |
          |                ^
          |                %empty
    foo.y: warning: fix-its can be applied.  Rerun with option '--update'. [-Wother]

*** Diagnostics about long lines

  Quoted sources may now be truncated to fit the screen.  For instance, on a
  30-column wide terminal:

    $ cat foo.y
    %token FOO                       FOO                         FOO
    %%
    exp: FOO
    $ bison foo.y
    foo.y:1.34-36: warning: symbol FOO redeclared [-Wother]
        1 | …         FOO                  …
          |           ^~~
    foo.y:1.8-10:      previous declaration
        1 | %token FOO                     …
          |        ^~~
    foo.y:1.62-64: warning: symbol FOO redeclared [-Wother]
        1 | …         FOO
          |           ^~~
    foo.y:1.8-10:      previous declaration
        1 | %token FOO                     …
          |        ^~~

** Changes

*** Debugging glr.c and glr.cc

  The glr.c skeleton always had asserts to check its own behavior (not the
  user's).  These assertions are now under the control of the parse.assert
  %define variable (disabled by default).

*** Clean up

  Several new compiler warnings in the generated output have been avoided.
  Some unused features are no longer emitted.  Cleaner generated code in
  general.

** Bug Fixes

  Portability issues in the test suite.

  In theory, parsers using %nonassoc could crash when reporting verbose
  error messages. This unlikely bug has been fixed.

  In Java, %define api.prefix was ignored.  It now behaves as expected.

01 January, 2020 09:37AM by Akim Demaille

www-zh-cn @ Savannah

2019 summary

By GNU

Dear GNU translators!

This year, the number of new translations was similar to 2018,
but our active teams generally tracked the changes
in the original articles more closely than in 2018, especially
in the second half of the year.  Our French, Spanish,
Brazilian Portuguese and ("Simplified") Chinese teams were
particularly good; I think that the figures for them may be
within the precision of my evaluations.  Unfortunately, many of
our teams are inactive or not as active as desirable, and
few teams have been re-established.

      General Statistics

The number of translations per file in important directories
continued growing.  Currently it is at its maximum (8.79 translations
per original file, and 8.03 translations weighted by the size
of the articles).

The table below shows the number and size of newly translated articles
and the translations that were converted to the PO format in important
directories (as of 2019-12-31).

+--team--+------new-------+----converted---+---to convert---+-&-outdated-+
|  ca    |   0 (  0.0Ki)  | ^ 2 ( 91.1Ki)  |   1 (120.5Ki)  |  21 (30%)  |
+--------+----------------+----------------+----------------+------------+
|  de    |   0 (  0.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  76 (35%)  |
+--------+----------------+----------------+----------------+------------+
|  el    | * 1 (  6.9Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  26 (55%)  |
+--------+----------------+----------------+----------------+------------+
|  es    |  24 (453.8Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 3.2 (1.8%) |
+--------+----------------+----------------+----------------+------------+
|  fi    |   0 (  0.0Ki)  |   3 (118.5Ki)  |   0 (  0.0Ki)  |            |
+--------+----------------+----------------+----------------+------------+
|  fr    |   7 ( 57.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 0.9 (0.3%) |
+--------+----------------+----------------+----------------+------------+
|  hr    | * 1 (  6.9Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  38 (55%)  |
+--------+----------------+----------------+----------------+------------+
|  it    |   0 (  0.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  38 (29%)  |
+--------+----------------+----------------+----------------+------------+
|  ja    | * 1 (  6.9Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  59 (41%)  |
+--------+----------------+----------------+----------------+------------+
|  ko    |   0 (  0.0Ki)  | ^19 (357.1Ki)  |   2 (218.7Ki)  |  23 (53%)  |
+--------+----------------+----------------+----------------+------------+
|  ml    |   3 ( 68.3Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  13 (48%)  |
+--------+----------------+----------------+----------------+------------+
|  ms    |   1 (  3.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |            |
+--------+----------------+----------------+----------------+------------+
|  nl    | * 1 (  6.9Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  40 (31%)  |
+--------+----------------+----------------+----------------+------------+
|  pl    | % 2 ( 15.7Ki)  | ^ 4 (192.0Ki)  |   1 (181.0Ki)  |  56 (39%)  |
+--------+----------------+----------------+----------------+------------+
|  pt-br |  25 (275.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 0.4 (0.3%) |
+--------+----------------+----------------+----------------+------------+
|  ru    |  10 ( 71.3Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 1.9 (0.6%) |
+--------+----------------+----------------+----------------+------------+
|  sq    |   0 (  0.0Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 2.2 (6.7%) |
+--------+----------------+----------------+----------------+------------+
|  tr    |  25 (277.7Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  |  18 (66%)  |
+--------+----------------+----------------+----------------+------------+
|  zh-cn |  14 (305.8Ki)  |   0 (  0.0Ki)  |   0 (  0.0Ki)  | 0.4 (0.4%) |
+--------+----------------+----------------+----------------+------------+
|  zh-tw |   4 ( 32.0Ki)  |   7 ( 65.6Ki)  |  16 (232.6Ki)  | 4.9 (18%)  |
+--------+----------------+----------------+----------------+------------+
+--------+----------------+----------------+----------------+
| total  | 118 (1574.5Ki) |  35 ( 824.3Ki) |  68 (1761.4Ki) |
+--------+----------------+----------------+----------------+

& Typical number of outdated GNUNified translations throughout
  the year.

* The translations of a new page,
  /education/edu-free-learning-resources.html,
  were picked from translations of an older page by Thérèse Godefroy.

^ The translations were GNUNified by Thérèse Godefroy.

% The files were committed back in 2017, but were technically
  incomplete; Thérèse Godefroy filled the missing strings.

For reference: 7 new articles were added, amounting to 57Ki,
and there were about 700 modifications in more than 100 English
files in important directories.

Most of our active teams have no old translations to GNUNify,
and the ("Traditional") Chinese team converted a considerable part
of old translations this year.

      Orphaned Teams, New and Reformed Teams

The Catalan, Czech, and Greek teams were orphaned due to inactivity
(no commits for more than 3 years).

The Malayalam team was re-established, and now we have one more
translation of the Free Software Definition!

The situation with the Turkish team is transitional: in June,
T. E. Kalaycı was appointed an admin of www-tr, but the team
still lacks a co-ordinator, despite being one of our most active
teams this year.

In August, we installed our first Malay translation,
/p/stallmans-law.ms.html; however, the volunteer didn't proceed
with further translations.

Executives of two free software-related businesses independently
offered us help with establishing a team for Swahili translations,
but we failed to overcome the organizational issues.

A Finnish volunteer updated the few existing old translations
in August and September.

A Romanian volunteer updated a few important translations in May.

People also offered help with Hindi (twice), Arabic and Danish
translations, but didn't succeed.

      Changes in the Page Regeneration System

GNUN had no releases this year, though there are a few minor but
incompatible changes, so the GNUN 1.0 release is pending.

Happy GNU year, and thank you for your contributions!

(I see nothing secret in this message, so if you think it may be
interesting to people who are not subscribed to the list, please
feel free to forward it).

01 January, 2020 07:19AM by Wensheng XIE

Christopher Allan Webber

201X in review

Well, this has been a big decade for me. At the close of 200X I was still very young as a programmer, had just gotten married to Morgan, had just started my job at Creative Commons, and was pretty sure everyone would figure out I was a fraud and that it would all come crashing down around me when everyone found out. (Okay, that last part is still true, but now I have more evidence I've gotten things done despite apparently pulling the wool over everyone's eyes.)

At work my boss left and I temporarily became tech lead, and we did some interesting things like kick off CC BY-SA and GPL compatibility work (which made it into the 4.0 license suite) and ran Liberated Pixel Cup (itself an interesting form of success, but I would like to spend more time talking about what the important lessons of it were... another time).

In 2011 I started MediaGoblin as a side project, but felt like I didn't really know what I was doing, yet people kept showing up and we were pushing out releases. Some people were even running the thing, and it felt fairly amazing. I left my job at Creative Commons in late 2012 and decided to try to make working on network freedom issues my main thing. It's been my main thing since, and I'm glad I've stayed focused in that way.

What I didn't expect was that the highlight of my work in the decade wasn't MediaGoblin itself but the standard we started participating in, which became ActivityPub. The work on ActivityPub arguably caused MediaGoblin to languish, but on the other hand ActivityPub was successfully ratified by the W3C as a standard and now has over 3.5 million registered users on the network and is used by dozens (at least 50) of pieces of (mostly) interoperable software. That's a big success for everyone who worked on it (and there were quite a few of us), and in many ways I think it is the actual legacy of MediaGoblin.

After ActivityPub became a W3C Recommendation, I took a look around and realized that other projects were using ActivityPub to accomplish the work of MediaGoblin, maybe even better than MediaGoblin. The speed at which this decade passed made me conscious of how short time is and made me wonder how I should best budget it. After all, the most successful thing I worked on turned out to not be the networked software itself but the infrastructure for building networks. That led me to reconsider whether my role was more importantly to advance the state of the art, which has more recently led me to start work on the federation laboratory called Spritely, which I've written a bit about here.

My friend Serge Wroclawski and I also launched a podcast in the last year, Libre Lounge. I've been very proud of it; we have a lot of great episodes, so check the archive.

Keeping this work funded has turned out to be tough. In MediaGoblin land, we ran two crowdfunding campaigns, the first of which paid for my work, the second of which paid for Jessica Tallon's work on federation. The first campaign got poured entirely into MediaGoblin, the second one surprisingly resulted in making space so that we could do ActivityPub's work. (I hope people feel happy with the work we did, I do think ActivityPub wouldn't have happened without MediaGoblin's donors' support. That seems worth celebrating and a good outcome to me personally, at least.) I also was fortunate enough to get accepted into Stripe's Open Source Retreat and more recently my work on Spritely has been funded by the Samsung Stack Zero grant. Recently, people have been helping by donating on Patreon and both my increase in prominence from ActivityPub and Libre Lounge have helped grow that. That probably sounds like a lot of funding and success, but still most of this work has had to be quite... lean. You stretch that stuff out over nearly a decade and it doesn't account for nearly enough. To be honest, I've also had to pay for a lot of it myself too, especially by frequently contracting with other organizations (such as Open Tech Strategies and Digital Bazaar, great folks). But I've also had to rely on help from family resources at times. I'm much more privileged than other people, and I can do the work, and I think the work is necessary, so I've devoted myself to it. Sometimes I get emails asking how to be completely dedicated to FOSS without lucking out at a dedicated organization and I feel extra imposter-y in responding because I mean, I don't know, everything still feels very hand-to-mouth. A friend pointed to a blogpost from Fred Hicks at Evil Hat about how behind the scenes, things don't feel as sustainable sometimes, and that struck a chord with me (it was especially surprising to me, because Evil Hat is one of the most prominent tabletop gaming companies.) Nonetheless, I'm still privileged enough that I've been able to keep it working and stay dedicated, and I've received a lot of great support from all the resources mentioned above, and I'm happy about all that. I just wish I could give better advice on how to "make it work"... I'm in search of a good answer for that myself.

In more personal reflections of this decade, Morgan and I went through a number of difficult moves and some very difficult family situations, but I think our relationship is very strong, and some of the hardest stuff strengthened our resolve as a team. We've finally started to settle down, having bought a house and all that. Morgan completed one graduate program and is on the verge of completing her next one. A decade into our marriage (and 1.5 decades into our relationship), things are still wonderfully weird.

I'm halfway through my 30s now. This decade made it clearer to me how fast time goes. In the book A Deepness in the Sky, a space-trading civilization is described that measures time in seconds, kiloseconds, megaseconds, gigaseconds, etc. Increasingly I feel like the number of seconds ahead in life are always shorter than we feel like they will be; time is a truly precious resource. How much more time do I have to do interesting and useful things, to live a nice life? How much more time do we have left to get our shit together as a human species? (We seem to be doing an awful lot to shorten our timespan.)

I will say that I am kicking off 202X by doing something to hopefully contribute to lengthening both my own timespan and (more marginally individually, more significantly if done collectively) that of human society: 2020 will be the "Year of No Travel" for me. I hate traveling; it's bad for me, bad for the environment. Most importantly, it seems to be the main thing that continues to throw my health out of whack, over and over again.

But speaking of time and its resource usage, a friend once told me that I had a habit in talks to "say the perfect thing, then ruin it by saying one more thing". I probably did something similar above (well, not claiming anything I write is perfect), but I'll preserve it anyway.

Everything, especially this blog, is imperfect anyway. Hopefully this next decade is imperfect and weird in a way we can, for the most part, enjoy.

Goodbye 201X, hello 202X.

01 January, 2020 04:46AM by Christopher Lemmer Webber

December 29, 2019

pspp @ Savannah

PSPP now supports .spv files

I just pushed support for SPV files to the master branch of PSPP. This means a number of new features:

  • There is a new program "pspp-output" that can convert .spv files into other formats, e.g. you can use it to produce text or PDF files from SPSS viewer files (see the example after this list).
  • The "pspp" program can now output to .spv files.
  • PSPPIRE can now read and write .spv files.  The support for reading them is not refined enough (it simply dumps them to the output window), so it's not really documented yet.
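
For instance, converting a viewer file to PDF might look like this (a sketch; the file names are illustrative):

  pspp-output convert results.spv results.pdf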

I would appreciate experience reports, positive or negative.  The main known limitation is that graphs are not yet supported (this is actually a huge amount of work due to the way that SPSS implements graphs).

29 December, 2019 07:04AM by Ben Pfaff

December 25, 2019

GNUnet News

Upcoming GNUnet Talks

There will be various talks on GNUnet and related projects in the next few months, at both the Chaos Communication Congress (36C3) and FOSDEM. Here is an overview:

Privacy and Decentralization @ 36c3 (YBTI)

We are pleased to have 5 talks to present as part of our "youbroketheinternet/wefixthenet" session, taking place on the OIO (Open Infrastructure Orbit) stage:

  • "re:claimID - Self-sovereign, Decentralised Identity Management and Personal Data Sharing" by Hendrik Meyer zum Felde will take place at 2019-12-27 18:30 in OIO Stage. Info
  • "Buying Snacks via NFC with GNU Taler" by Dominik Hofer will take place at 2019-12-27 21:20 in OIO Stage Info
  • "CloudCalypse 2: Social network with net2o" by Bernd Paysan will take place at 2019-12-28 21:20 in OIO Stage Info
  • "Delta Chat: e-mail based messaging, the Rustocalypse and UX driven approach" by holger krekel will take place at 2019-12-29 17:40 in OIO Stage Info
  • "Cryptography of Killing Proof-of-Work" by Jeff Burdges will take place at 2019-12-30 12:00 in OIO Stage Info

In addition to these talks, we will be hosting a snack machine which accepts Taler for payment. The first of its kind! It will be filled with various goodies, including Swiss chocolates, books, and electronics. The machine will be located somewhere in the OIO assembly, and there will be a station at which you may exchange Euro for digital Euro for immediate use. We welcome all to come try it out. :)

Decentralized Internet and Privacy devroom @FOSDEM 2020

We have 2 GNUnet-related talks at the Decentralized Internet and Privacy devroom at FOSDEM 2020 in February:

  • GNUnet: A network protocol stack for building secure, distributed, and privacy-preserving applications
  • Knocking Down the Nest: secushareBOX - p2p, encrypted IoT and beyond...

25 December, 2019 11:00PM

libredwg @ Savannah

libredwg-0.9.3 released

This is another minor patch update, with some bugfixes from fuzzed DWG's.

Here are the compressed sources:

  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.3.tar.gz   (9.8MB)
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.3.tar.xz   (3.7MB)

Here are the GPG detached signatures[*]:

  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.3.tar.gz.sig
  http://ftp.gnu.org/gnu/libredwg/libredwg-0.9.3.tar.xz.sig

Use a mirror for higher download bandwidth:

  https://www.gnu.org/order/ftp.html

Here are more binaries:

  https://github.com/LibreDWG/libredwg/releases/tag/0.9.3

Here are the SHA256 checksums:

e53d4134208ee35fbf866171ee2052edd73bf339ab5b091acbc2769d8c20c43f  libredwg-0.9.3.tar.gz
62df9eb21e7b8f107e7b2eaf0e61ed54e7939ee10fd10b896a57d59319f09483  libredwg-0.9.3.tar.xz

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify libredwg-0.9.3.tar.gz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys B4F63339E65D6414

and rerun the 'gpg --verify' command.

25 December, 2019 09:55PM by Reini Urban

December 24, 2019

GNUnet News

GNUnet 0.12.1

GNUnet 0.12.1 released

We are pleased to announce the release of GNUnet 0.12.1.
This is a very minor release. It largely fixes one function that is needed by GNU Taler 0.6.0. Please read the release notes for GNUnet 0.12.0, as they still apply. Updating is only recommended for those using GNUnet in combination with GNU Taler.

Download links

The GPG key used to sign is: D8423BCB326C7907033929C7939E6BE1E29FC3CC

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

24 December, 2019 11:00PM

December 23, 2019

Parabola GNU/Linux-libre

manual intervention required (xorgproto dependency errors)

due to some recent changes in arch, manual intervention may be required if you hit any errors of the form:

:: installing xorgproto (2019.2-2) breaks dependency * required by *

to correct dependency errors related to 'xorgproto':

# pacman -Qi libdmx      &>/dev/null && pacman -Rdd libdmx
# pacman -Qi libxxf86dga &>/dev/null && pacman -Rdd libxxf86dga
# pacman -Syu

23 December, 2019 05:41AM by bill auger

December 22, 2019

parallel @ Savannah

GNU Parallel 20191222 ('Impeachment') released [stable]

GNU Parallel 20191222 ('Impeachment') [stable] has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

No new functionality was introduced so this is a good candidate for a stable release.

GNU Parallel is 10 years old next year on 2020-04-22. You are hereby invited to a reception on Friday 2020-04-17.

See https://www.gnu.org/software/parallel/10-years-anniversary.html

Quote of the month:

  GNU parallel all the way!
    -- David Manouchehri @DaveManouchehri@twitter

New in this release:

  • Bug fixes and man page updates.

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
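
As a minimal sketch, a sequential shell loop such as

  for f in *.jpg; do convert "$f" "${f%.jpg}.png"; done

can usually be replaced by a single invocation, which by default runs one job per CPU core:

  parallel convert {} {.}.png ::: *.jpg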

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
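
Output from each job is held back and printed as one block, never interleaved with other jobs; the -k option additionally keeps output in input order. In this small sketch the jobs finish in reverse order, yet the output reads 3, 2, 1:

  parallel -k 'sleep {}; echo {}' ::: 3 2 1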

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
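
Here {1} and {2} pick arguments from the first input source (the jpg files) and the second (png gif), and {1.} is the first argument with its extension stripped.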

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
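
In this sketch, :::: - reads the file names found by find from stdin as the first input source; {1/} is each file's basename, {1//} its directory part, and {2} the size taken from the second input source.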

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 3374ec53bacb199b245af2dda86df6c9
    12345678 3374ec53 bacb199b 245af2dd a86df6c9
    $ md5sum install.sh | grep 029a9ac06e8b5bc6052eac57b2c3c9ca
    029a9ac0 6e8b5bc6 052eac57 b2c3c9ca
    $ sha512sum install.sh | grep f517006d9897747bed8a4694b1acba1b
    40f53af6 9e20dae5 713ba06c f517006d 9897747b ed8a4694 b1acba1b 1464beb4
    60055629 3f2356f3 3e9c4e3c 76e3f3af a9db4b32 bd33322b 975696fc e6b23cfb
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out, you will get that database's interactive shell.
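
A hypothetical sketch (user, password, host, and database name are all placeholders):

  sql mysql://user:pass@host/mydb "SELECT * FROM mytable;"

Leaving out the SELECT would instead open the interactive mysql shell for mydb.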

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
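
A minimal usage sketch, assuming the default load-average limit:

  niceload updatedb

This runs updatedb, suspending it whenever the system load rises above the limit.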

22 December, 2019 02:43PM by Ole Tange

December 19, 2019

GNUnet News

GNUnet 0.12.0

GNUnet 0.12.0 released

We are pleased to announce the release of GNUnet 0.12.0.
This is a new major release. It breaks protocol compatibility with the 0.11.x versions. Please be aware that Git master is thus henceforth INCOMPATIBLE with the 0.11.x GNUnet network, and interactions between old and new peers will result in signature verification failures. 0.11.x peers will NOT be able to communicate with Git master or 0.12.x peers.
In terms of usability, users should be aware that there are still a large number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.12.0 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that, due to mirror synchronization, not all links may be functional immediately after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.12.0 (since 0.11.8)

  • GNS:
    • Changed key derivation protocols to adhere with LSD001. #5921
    • Names are now expected to be UTF-8 (as opposed to IDNA). #5922
    • NSS plugin now properly handles non-standard IDNA names. #5927
    • NSS plugin will refuse to process requests from root (as GNUnet code should never run as root). #5907
    • Fixed BOX service/protocol label parsing (for TLSA et al)
  • GNS/NSE: Zone revocation proof of work algorithm changed to be less susceptible to specialized ASIC hardware. #3795
  • TRANSPORT: UDP plugin moved to experimental as it is known to be unstable.
  • UTIL:
    • Improved and documented RSA binary format. #5968
    • Removed redundant hashing in EdDSA signatures. #5398
    • The gnunet-logread script for log auditing (requires perl) can now be installed.
    • Now using TweetNaCl for ECDH implementation.
  • Buildsystem: A significant number of build system issues have been fixed and improvements implemented, including:
    • GLPK dependency dropped.
    • Fixed guix package definition.
  • Documentation: Improvements to the handbook and documentation.

Known Issues

  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org, which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: ng0, Christian Grothoff, Florian Dold, xrs, Naomi Phillips and Martin Schanzenbach.

19 December, 2019 11:00PM

December 15, 2019

Riccardo Mottola

ArcticFox 27.9.19 release

Arctic Fox 27.9.19 has been released!

Plenty of enhancements, and it still supports your trusty Mac from 10.6 up, as well as Linux on PowerPC!

[Screenshot: Arctic Fox on Devuan amd64]


The code has been fixed to support newer compilers. On Linux, the highest supported compiler currently remains gcc 6.5: more recent versions now compile this release, but then fail to link, with errors on very standard symbols. Help appreciated! On NetBSD, gcc 7 now works fine.


15 December, 2019 06:16PM by Riccardo (noreply@blogger.com)

December 13, 2019

GNU Guile

GNU Guile 2.9.7 (beta) released

We are delighted to announce GNU Guile 2.9.7, the seventh and hopefully penultimate beta release in preparation for the upcoming 3.0 stable series. See the release announcement for full details and a download link.

This release makes Guile go faster. Compared to 2.9.6 there are some significant improvements:

[Chart: Comparison of microbenchmark performance for Guile 2.9.6 and 2.9.7]

The cumulative comparison against 2.2 is finally looking like we have no significant regressions:

[Chart: Comparison of microbenchmark performance for Guile 2.2.6 and 2.9.7]

Now we're on the home stretch! Hopefully we'll get out just one more prerelease and then release a stable Guile 3.0.0 in January. However, until then, note that GNU Guile 2.9.7 is a beta release, and as such offers no API or ABI stability guarantees. Users needing a stable Guile are advised to stay on the stable 2.2 series.

As always, experience reports with GNU Guile 2.9.7, good or bad, are very welcome; send them to guile-devel@gnu.org. If you know you found a bug, please do send a note to bug-guile@gnu.org. Happy hacking!

13 December, 2019 01:31PM by Andy Wingo (guile-devel@gnu.org)

December 08, 2019

www-zh-cn @ Savannah

The FSF tech team: Doing more for free software

Dear CTT translators:

At the Free Software Foundation, we like to set big goals for ourselves, and cover a lot of ground in a short time. The FSF tech team, for example, has just four members -- two senior systems administrators, one Web developer, and a part-time chief technology officer -- yet we manage to run over 120 virtual servers on about a dozen physical machines hosted at four different data centers. These servers provide many public-facing Web sites and community services, as well as every single IT requirement for the staff: workstations, data storage and backup, networking, printing, accounting, telephony, email, you name it.

We don't outsource any of our daily software needs because we need to be sure that they are done using only free software. Remember, there is no "cloud," just other people's computers. For example: we don't outsource our email, so every day we send over half a million messages to thousands of free software hackers through the community mailing lists we host. We also don't outsource our Web storage or networking, so we serve tens of thousands of free software downloads -- over 1.5 terabytes of data -- a day. And our popularity, and the critical nature of the resources we make available, make us a target for denial of service attacks (one is ongoing as we write this), requiring constant monitoring by the tech team, whose members take turns being ready for emergency work so that the resources our supporters depend on stay available.

As hard as we work, we still want to do more, like increasing our already strict standards on hardware compliance, so in 2020, we will finish replacing the few remaining servers that require a nonfree BIOS. To comply with our own high standards, we need to be working with devices that are available through Respects Your Freedom retailers. We plan to add new machines to our farm, so that we can host more community servers like the ones we already host for KDE, SugarLabs, GNU Guix, Replicant, gNewSense, GNU Linux-Libre, and FSFLA. We provide completely virtual machines that these projects use for their daily operations, whether that's Web hosting, mailing lists, software repositories, or compiling and testing software packages.

We know that many software projects and individual hackers are looking for more options on code hosting services that focus on freedom and privacy, so we are working to set up a public site that anybody can use to publish, collaborate, or document their progress on free software projects. We will follow strict criteria to ensure that this code repository hosts only fully free software, and that it follows the very best practices towards freedom and privacy.

Another project that we are very excited about for this year is a long-awaited refresh of https://www.fsf.org. Not only will it be restyled, but it will also be easier to browse on mobile devices. As our campaigns and licensing teams are eager to create and publish more resources in different formats, we will also work to improve the support for publishing audio and video files on the site. And to enable you to do more, too, we are also developing a site to organize petitions and collect signatures, so that together we can run more effective grassroots campaigns and fight for the freedom of all computer users.

All of these efforts require countless hours of hard work, and the use of high quality hardware. These come to us at a significant cost, not just to purchase, but to keep running and to host at specialized data centers (if you have rack space in the Boston area, we are always looking for donors). For all this work, we depend on the continuous commitment of individual contributors to keep providing the technical foundation to fight for software freedom.

In solidarity,

Ruben Rodriguez Perez
Chief Technology Officer

08 December, 2019 02:20AM by Wensheng XIE

December 06, 2019

GNU Guile

GNU Guile 2.9.6 (beta) released

We are delighted to announce GNU Guile 2.9.6, the sixth beta release in preparation for the upcoming 3.0 stable series. See the release announcement for full details and a download link.

This release fixes bugs caught by users of the previous 2.9.5 prerelease, and adds some optimizations as well as a guile-3 feature for cond-expand.

In this release, we also took the opportunity to do some more rigorous benchmarking:

[Chart: Comparison of microbenchmark performance for Guile 2.2.6 and 2.9.6]

GNU Guile 2.9.6 is a beta release, and as such offers no API or ABI stability guarantees. Users needing a stable Guile are advised to stay on the stable 2.2 series.

As always, experience reports with GNU Guile 2.9.6, good or bad, are very welcome; send them to guile-devel@gnu.org. If you know you found a bug, please do send a note to bug-guile@gnu.org. Happy hacking!

06 December, 2019 01:15PM by Andy Wingo (guile-devel@gnu.org)

Gary Benson

GNOME 3 won’t unlock

Every couple of days something on my RHEL 7 box goes into a swapstorm and uses up all the memory. I think it's Firefox, but I've never figured out why; I generally have four different Firefoxes running with four different profiles, so it's hard to tell which one is failing (if it even is that). Anyway, sometimes this makes the screen lock crash or something, and I can't get back in, and I can never remember which process you have to kill to recover, so here it is: gnome-shell. You have to killall -9 gnome-shell, and it lets you back in. Also, killall -STOP firefox and killall -STOP "Web Content" are handy if the swapstorm is still under way.
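
In command form (killall -CONT undoes -STOP when you are done):

  $ killall -9 gnome-shell                 # per the above, this lets you back in
  $ killall -STOP firefox                  # pause the suspected memory hogs
  $ killall -STOP "Web Content"
  $ killall -CONT firefox "Web Content"    # resume them once the swapstorm is over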

06 December, 2019 10:17AM by gbenson