<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dw="https://www.dreamwidth.org">
  <id>tag:dreamwidth.org,2009-05-21:377446</id>
  <title>Ian Jackson</title>
  <subtitle>Ian Jackson</subtitle>
  <author>
    <name>Ian Jackson</name>
  </author>
  <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/"/>
  <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom"/>
  <updated>2025-09-14T15:36:41Z</updated>
  <dw:journal username="diziet" type="personal"/>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:20143</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/20143.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=20143"/>
    <title>tag2upload in the first month of forky</title>
    <published>2025-09-14T15:36:17Z</published>
    <updated>2025-09-14T15:36:41Z</updated>
    <category term="git"/>
    <category term="computers"/>
    <category term="tag2upload"/>
    <category term="debian"/>
    <category term="dgit"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">&lt;p&gt;tl;dr: &lt;a href="https://wiki.debian.org/tag2upload"&gt;tag2upload&lt;/a&gt; (beta) is going well so far, and is already handling around one in 13 uploads to Debian.
&lt;ul&gt;&lt;li&gt;&lt;a href="#introduction-and-some-stats"&gt;Introduction and some stats&lt;/a&gt;
&lt;li&gt;&lt;a href="#recent-uiux-improvements"&gt;Recent UI/UX improvements&lt;/a&gt;
&lt;li&gt;&lt;a href="#why-we-are-still-in-beta"&gt;Why we are still in beta&lt;/a&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="#retrying-on-salsa-side-failures"&gt;Retrying on Salsa-side failures&lt;/a&gt;
&lt;/li&gt;&lt;/ul&gt;

&lt;li&gt;&lt;a href="#other-notable-ongoing-work"&gt;Other notable ongoing work&lt;/a&gt;
&lt;li&gt;&lt;a href="#common-problems"&gt;Common problems&lt;/a&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="#reuse-of-version-numbers-and-attempts-to-re-tag"&gt;Reuse of version numbers, and attempts to re-tag&lt;/a&gt;
&lt;li&gt;&lt;a href="#discrepancies-between-git-and-orig-tarballs"&gt;Discrepancies between git and orig tarballs&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;

&lt;li&gt;&lt;a href="#get-involved"&gt;Get involved&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;a name="cutid1"&gt;&lt;/a&gt;
&lt;h3&gt;&lt;a name="introduction-and-some-stats"&gt;Introduction and some stats&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;We announced tag2upload&amp;rsquo;s open beta in mid-July. That was in the middle of the freeze for trixie, so usage was fairly light until the forky floodgates opened.
&lt;p&gt;Since then the service has successfully performed &lt;strong&gt;637 uploads&lt;/strong&gt;, of which 420 were in the last 32 days. That&amp;rsquo;s an average of about 13 per day. For comparison, during the first half of September up to today there have been 2475 uploads to unstable. That&amp;rsquo;s about 176/day.
&lt;p&gt;So, tag2upload is already handling around 7.5% of uploads. This is very gratifying for a service which is advertised as still being in beta!
&lt;p&gt;Sean and I are very pleased both with the uptake, and with the way the system has been performing.
&lt;h3&gt;&lt;a name="recent-uiux-improvements"&gt;Recent UI/UX improvements&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;During this open beta period we have been hard at work. We have made many improvements to the user experience.
&lt;p&gt;Current &lt;code&gt;git-debpush&lt;/code&gt; in forky, or trixie-backports, is much better at detecting various problems ahead of time.
&lt;p&gt;When uploads do fail on the service the emailed error reports are now more informative. For example, anomalies involving orig tarballs, which by definition can&amp;rsquo;t be detected locally (since one point of tag2upload is not to have tarballs locally) now generally result in failure reports containing a diffstat, and instructions for a local repro.
&lt;h3&gt;&lt;a name="why-we-are-still-in-beta"&gt;Why we are still in beta&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;There are a few outstanding work items that we currently want to complete before we declare the end of the beta.
&lt;h4&gt;&lt;a name="retrying-on-salsa-side-failures"&gt;Retrying on Salsa-side failures&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;The biggest of these is that the service should be able to retry when Salsa fails. Sadly, Salsa isn&amp;rsquo;t wholly reliable, and right now if it breaks when the service is trying to handle your tag, your upload can fail.
&lt;p&gt;We think most of these failures could be avoided. Implementing retries is a fairly substantial task, but doesn&amp;rsquo;t pose any fundamental difficulties. We&amp;rsquo;re working on this right now.
&lt;h3&gt;&lt;a name="other-notable-ongoing-work"&gt;Other notable ongoing work&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;We want to support pristine-tar, so that pristine-tar users can do a new upstream release. Andrea Pappacoda is working on that with us. See &lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1106071"&gt;#1106071&lt;/a&gt;. (Note that we would generally &lt;strong&gt;recommend against use of pristine-tar&lt;/strong&gt; within Debian. But we want to support it.)
&lt;p&gt;We have been having conversations with &lt;a href="https://salsa.debian.org/freexian-team/debusine"&gt;Debusine&lt;/a&gt; folks about what integration between tag2upload and Debusine would look like. We&amp;rsquo;re &lt;a href="https://salsa.debian.org/freexian-team/debusine/-/issues/815#note_651533"&gt;making some progress&lt;/a&gt; there, but a lot is still up in the air.
&lt;p&gt;&lt;a href="https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/467#note_642152"&gt;We are considering&lt;/a&gt; how best to provide tag2upload pre-checks as part of Salsa CI. There are several problems detected by the tag2upload service that could be detected by Salsa CI too, but which can&amp;rsquo;t be detected by &lt;code&gt;git-debpush&lt;/code&gt;.
&lt;h3&gt;&lt;a name="common-problems"&gt;Common problems&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;We&amp;rsquo;ve been monitoring the service and until very recently we have investigated every service-side failure, to understand the root causes. This has given us insight into the kinds of things our users want, and the kinds of packaging and git practices that are common. We&amp;rsquo;ve been able to improve the system&amp;rsquo;s handling of various anomalies, and have also improved the documentation.
&lt;p&gt;Right now our failure rate is still rather high, at around 7%. Partly this is because people are trying out the system on packages that haven&amp;rsquo;t ever seen git tooling with such a level of rigour.
&lt;p&gt;There are two classes of problem that are responsible for the vast majority of the failures that we&amp;rsquo;re still seeing:
&lt;h4&gt;&lt;a name="reuse-of-version-numbers-and-attempts-to-re-tag"&gt;Reuse of version numbers, and attempts to re-tag&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;tag2upload, like git (and like &lt;code&gt;dgit&lt;/code&gt;), hates it when you reuse a version number, or try to pretend that a (perhaps busted) release never happened.
&lt;p&gt;git tags aren&amp;rsquo;t namespaced, and tend to spread about promiscuously. So replacing a signed git tag, with a different tag of the same name, is a bad idea. More generally, reusing the same version number for a different (signed!) package is poor practice. Likewise, it&amp;rsquo;s usually a bad idea to remove changelog entries for versions which were actually released, just because they were later deemed improper.
&lt;p&gt;We understand that many Debian contributors have gotten used to this kind of thing. Indeed, tools like &lt;code&gt;dcut&lt;/code&gt; encourage it. It does allow you to make things neat-looking, even if you&amp;rsquo;ve made mistakes - but really it does so by &lt;em&gt;covering up&lt;/em&gt; those mistakes!
&lt;p&gt;The bottom line is that tag2upload can&amp;rsquo;t support such history-rewriting. If you discover a mistake after you&amp;rsquo;ve signed the tag, please just &lt;strong&gt;burn the version number and add a new changelog stanza&lt;/strong&gt;.
&lt;p&gt;One bonus of tag2upload&amp;rsquo;s approach is that it will discover if you are accidentally overwriting an NMU, and report that as an error.
&lt;h4&gt;&lt;a name="discrepancies-between-git-and-orig-tarballs"&gt;Discrepancies between git and orig tarballs&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;tag2upload promises that the source package that it generates corresponds precisely to the git tree you tag and sign.
&lt;p&gt;Orig tarballs make this complicated. They aren&amp;rsquo;t present on your laptop when you &lt;code&gt;git-debpush&lt;/code&gt;. When you&amp;rsquo;re not uploading a new upstream version, the tag2upload service reuses existing orig tarballs from the archive. If your git and the archive&amp;rsquo;s orig don&amp;rsquo;t agree, the tag2upload service will report an error, rather than upload a package with contents that differ from your git tag.
&lt;p&gt;With the most common Debian workflows, everything is fine:
&lt;p&gt;If you base everything on upstream git, and make your orig tarballs with &lt;code&gt;git archive&lt;/code&gt; (or &lt;code&gt;git deborig&lt;/code&gt;), your orig tarballs are the same as the git, by construction. &lt;strong&gt;We recommend usually ignoring upstream tarballs&lt;/strong&gt;: most upstreams work in git, and their tarballs can contain weirdness that we don&amp;rsquo;t want. (At worst, the tarball can contain an attack that isn&amp;rsquo;t visible in git, as with &lt;code&gt;xz&lt;/code&gt;!)
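&lt;p&gt;As a throwaway sketch of why the correspondence holds by construction (the package name &lt;code&gt;hello&lt;/code&gt;, the version &lt;code&gt;2.10&lt;/code&gt;, and the tag name here are invented for illustration):

```shell
# Hypothetical demo: an orig tarball generated from a git tag is,
# by construction, exactly the tagged git tree.
set -e
cd "$(mktemp -d)"
git init -q hello
cd hello
git config user.email demo@example.org
git config user.name demo
echo 'int main(void) { return 0; }' > hello.c
git add hello.c
git commit -qm 'upstream release 2.10'
git tag upstream/2.10

# This is, in essence, what git-deborig automates:
git archive --format=tar --prefix=hello-2.10/ upstream/2.10 \
  | xz > ../hello_2.10.orig.tar.xz

# Listing the tarball shows precisely the tagged tree:
xz -dc ../hello_2.10.orig.tar.xz | tar tf -
```

&lt;p&gt;(For a real package you would normally just run &lt;code&gt;git deborig&lt;/code&gt; from devscripts, which picks the upstream tag and the compression for you.)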
&lt;p&gt;Alternatively, if you use &lt;code&gt;gbp import-orig&lt;/code&gt;, the differences (including an attack like Jia Tan&amp;rsquo;s) are &lt;em&gt;imported into&lt;/em&gt; git for you. Then, once again, your git and the orig tarball will correspond.
&lt;p&gt;But there are other workflows where this correspondence may not hold. Those workflows are hazardous, because the thing you&amp;rsquo;re probably working with locally for your routine development is the git view. Then, when you upload, your work is transplanted onto the orig tarball, which might be quite different - so what you upload isn&amp;rsquo;t what you&amp;rsquo;ve been working on!
&lt;p&gt;This situation is detected by tag2upload, precisely because tag2upload checks that it&amp;rsquo;s keeping its promise: the source package is identical to the git view. (&lt;code&gt;dgit push&lt;/code&gt; makes the same promise.)
&lt;h3&gt;&lt;a name="get-involved"&gt;Get involved&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Of course the easiest way to get involved is to &lt;a href="https://wiki.debian.org/tag2upload"&gt;start using tag2upload&lt;/a&gt;.
&lt;p&gt;We would love to have more contributors. There are some easy tasks to get started with, in &lt;a href="https://bugs.debian.org/cgi-bin/pkgreport.cgi?src=dgit;tag=newcomer"&gt;bugs we&amp;rsquo;ve tagged &amp;ldquo;newcomer&amp;rdquo;&lt;/a&gt; &amp;mdash; mostly UX improvements such as detecting certain problems earlier, in &lt;code&gt;git-debpush&lt;/code&gt;.
&lt;p&gt;More substantially, we are looking for help with &lt;code&gt;sbuild&lt;/code&gt;: we&amp;rsquo;d like it to be able to work directly from git, rather than needing to build source packages: &lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=868527"&gt;#868527&lt;/a&gt;.&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=20143" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:19879</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/19879.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=19879"/>
    <title>Free Software, internal politics, and governance</title>
    <published>2025-05-01T22:03:17Z</published>
    <updated>2025-05-01T22:15:54Z</updated>
    <category term="computers"/>
    <category term="politics"/>
    <dw:security>public</dw:security>
    <dw:reply-count>2</dw:reply-count>
    <content type="html">&lt;p&gt;There is a thread of opinion in some Free Software communities, that we shouldn&amp;rsquo;t be doing &amp;ldquo;politics&amp;rdquo;, and instead should just focus on technology.
&lt;p&gt;But that&amp;rsquo;s impossible. This approach is naive, harmful, and, ultimately, self-defeating, even on its own narrow terms.
&lt;ul&gt;&lt;li&gt;&lt;a href="#today-im-talking-about-small-p-politics"&gt;Today I&amp;rsquo;m talking about small-p politics&lt;/a&gt;
&lt;li&gt;&lt;a href="#many-people-working-together-always-entails-politics"&gt;Many people working together always entails politics&lt;/a&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="#consensus-is-great-but-always-requiring-it-is-harmful"&gt;Consensus is great but always requiring it is harmful&lt;/a&gt;
&lt;li&gt;&lt;a href="#governance-is-like-backups-we-need-to-practice-it"&gt;Governance is like backups: we need to practice it&lt;/a&gt;
&lt;li&gt;&lt;a href="#governance-should-usually-be-routine-and-boring"&gt;Governance should usually be routine and boring&lt;/a&gt;
&lt;li&gt;&lt;a href="#governance-means-deciding-not-just-mediating"&gt;Governance means deciding, not just mediating&lt;/a&gt;
&lt;li&gt;&lt;a href="#on-the-autonomy-of-the-programmer"&gt;On the autonomy of the programmer&lt;/a&gt;
&lt;li&gt;&lt;a href="#mitigate-the-consequences-of-decisions-retain-flexibility"&gt;Mitigate the consequences of decisions &amp;mdash; retain flexibility&lt;/a&gt;
&lt;li&gt;&lt;a href="#but-dont-do-decisionmaking-like-a-corporation"&gt;But don&amp;rsquo;t do decisionmaking like a corporation&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;

&lt;li&gt;&lt;a href="#if-you-wont-do-politics-politics-will-do-you"&gt;If you won&amp;rsquo;t do politics, politics will do you&lt;/a&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="#if-you-dont-see-the-politics-its-still-happening"&gt;If you don&amp;rsquo;t see the politics, it&amp;rsquo;s still happening&lt;/a&gt;
&lt;/li&gt;&lt;/ul&gt;

&lt;li&gt;&lt;a href="#conclusions"&gt;Conclusions&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;a name="cutid1"&gt;&lt;/a&gt;
&lt;h2&gt;&lt;a name="today-im-talking-about-small-p-politics"&gt;Today I&amp;rsquo;m talking about small-p politics&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;In this article I&amp;rsquo;m using &amp;ldquo;politics&amp;rdquo; in the very wide sense: us humans managing our disagreements with each other.
&lt;p&gt;I&amp;rsquo;m &lt;em&gt;not&lt;/em&gt; going to talk about culture wars, woke, racism, trans rights, and so on. I am &lt;em&gt;not&lt;/em&gt; going to talk about how Free Software has always had explicitly political goals; or how it&amp;rsquo;s impossible to be neutral because choosing not to take a stand is itself to take a stand.
&lt;p&gt;Those issues are all important, and Free Software definitely must engage with them. Many of the points I make are applicable there too. But those are not my focus today.
&lt;p&gt;Today I&amp;rsquo;m talking in more general terms about politics, power, and governance.
&lt;h2&gt;&lt;a name="many-people-working-together-always-entails-politics"&gt;Many people working together always entails politics&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Computers are incredibly complicated nowadays. Making software is a joint enterprise. Even if an individual program has only a single maintainer, it fits into an ecosystem of other software, maintained by countless other developers. Larger projects can have thousands of maintainers and hundreds of thousands of contributors.
&lt;p&gt;Humans don&amp;rsquo;t always agree about everything. This is natural. Indeed, it&amp;rsquo;s healthy: to write the best code, we need a wide range of knowledge and experience.
&lt;p&gt;When we can&amp;rsquo;t come to agreement, we need a way to deal with that: a way that lets us still make progress, but also leaves us able to work together afterwards. A way that feels OK for everyone.
&lt;p&gt;Providing a framework for disagreement is the job of a governance system. The rules say which people make which decisions, who must be consulted, how the decisions are made, and how, if at all, they can be reviewed.
&lt;p&gt;This is all politics.
&lt;h3&gt;&lt;a name="consensus-is-great-but-always-requiring-it-is-harmful"&gt;Consensus is great but always requiring it is harmful&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Ideally a discussion will converge to a synthesis that satisfies everyone, or at least a consensus.
&lt;p&gt;When consensus can&amp;rsquo;t be achieved, we can hope for compromise: something everyone can live with. Compromise is achieved through negotiation.
&lt;p&gt;If every decision requires consensus, then the proponents of any wide-ranging improvement have an almost insurmountable hurdle: those who are favoured by the status quo and find it convenient can always object. So there will never be consensus for change. If there is any objection at all, no matter how ill-founded, the status quo will always win.
&lt;p&gt;This is where governance comes in.
&lt;h3&gt;&lt;a name="governance-is-like-backups-we-need-to-practice-it"&gt;Governance is like backups: we need to practice it&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Governance processes are the backstop for when discussions, and then negotiations, fail, and people still don&amp;rsquo;t see eye to eye.
&lt;p&gt;In a healthy community, everyone needs to know how the governance works and what the rules are. The participants need to accept the system&amp;rsquo;s legitimacy. Everyone, including the losing side, must be prepared to accept and implement (or, at least not obstruct) whatever the decision is, and hopefully live with it and stay around.
&lt;p&gt;That means we need to &lt;em&gt;practice&lt;/em&gt; our governance processes. We can&amp;rsquo;t just leave them for the day we have a huge and controversial decision to make. If we do that, then when it comes to the crunch we&amp;rsquo;ll have toxic rows where no-one can agree the rules; where determined people bend the rules to fit their outcome; and where afterwards people feel like the whole thing was horrible and unfair.
&lt;p&gt;So our decisionmaking bodies and roles need to be making decisions, as a matter of routine, and we need to get used to that.
&lt;p&gt;First-line decisionmaking bodies should be making decisions &lt;em&gt;frequently&lt;/em&gt;. Last-line appeal mechanisms (large-scale votes, for example) are naturally going to be exercised more rarely, but they must &lt;em&gt;happen&lt;/em&gt;, be seen as legitimate, and their outcomes must be implemented in full.
&lt;h3&gt;&lt;a name="governance-should-usually-be-routine-and-boring"&gt;Governance should usually be routine and boring&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;When governance is working well it&amp;rsquo;s quite boring.
&lt;p&gt;People offer their input, and are heard. Angles are debated, and concerns are addressed. If agreement still isn&amp;rsquo;t reached, the committee, or elected leader, makes a decision.
&lt;p&gt;Hopefully everyone thinks the leadership is legitimate, and that it properly considered and heard their arguments, and made the decision for good reasons.
&lt;p&gt;Hopefully the losing side can still get their work done (and make their own computer work the way they want); so while they will be disappointed, they can live with the outcome.
&lt;p&gt;Many human institutions manage this most of the time. It does take some knowledge about principles of governance, and ideally some experience.
&lt;h3&gt;&lt;a name="governance-means-deciding-not-just-mediating"&gt;Governance means deciding, not just mediating&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;By &lt;em&gt;making decisions&lt;/em&gt; I mean exercising their authority to rule on an actual disagreement: one that wasn&amp;rsquo;t resolved by debate or negotiation. Governance processes by definition involve &lt;em&gt;deciding&lt;/em&gt;, not just mediating. It&amp;rsquo;s not governance if we&amp;rsquo;re advising or cajoling: in that case, we&amp;rsquo;re back to demanding consensus. Governance is necessary precisely when consensus is not achieved.
&lt;p&gt;If the governance systems are to mean anything, they must be able to &lt;em&gt;(over)rule&lt;/em&gt;; that means &lt;em&gt;(over)ruling&lt;/em&gt; must be &lt;em&gt;normal&lt;/em&gt; and &lt;em&gt;accepted&lt;/em&gt;.
&lt;p&gt;Otherwise, when we need to overrule, we&amp;rsquo;ll find that we can&amp;rsquo;t, because we lack the collective practice.
&lt;p&gt;To be legitimate (and seen as legitimate) decisions must usually be made based on the &lt;em&gt;merits&lt;/em&gt;, not on participants&amp;rsquo; status, and not only on process questions.
&lt;h3&gt;&lt;a name="on-the-autonomy-of-the-programmer"&gt;On the autonomy of the programmer&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Many programmers seem to find the very concept of governance, and binding decisionmaking, deeply uncomfortable.
&lt;p&gt;Ultimately, it means sometimes overruling someone&amp;rsquo;s technical decision. As programmers and maintainers we naturally see how this erodes our autonomy.
&lt;p&gt;But we have all seen projects where the maintainers are unpleasant, obstinate, or destructive. We have all found this frustrating. Software is all interconnected, and one programmer&amp;rsquo;s bad decisions can cause problems for many of the rest of us. We exclaim, exasperated, &amp;ldquo;why won&amp;rsquo;t they just do the right thing&amp;rdquo;. This is futile. People have never &amp;ldquo;just&amp;rdquo;ed and they&amp;rsquo;re not going to start &amp;ldquo;just&amp;rdquo;ing now. So often the boot is on the other foot.
&lt;p&gt;More broadly, as software developers, we have a responsibility to our users, and a duty to write code that does good rather than ill in the world. We &lt;em&gt;ought&lt;/em&gt; to be accountable. (And not just to capitalist bosses!)
&lt;p&gt;Governance mechanisms are the answer.
&lt;p&gt;(No, forking anything but the smallest project is very rarely a practical answer.)
&lt;h3&gt;&lt;a name="mitigate-the-consequences-of-decisions-retain-flexibility"&gt;Mitigate the consequences of decisions &amp;mdash; retain flexibility&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;In software, it is often possible to soften the bad social effects of a controversial decision, by retaining flexibility. With a bit of extra work, we can often provide hooks, non-default configuration options, or plugin arrangements.
&lt;p&gt;If we can convert the question from &amp;ldquo;how will the software always behave&amp;rdquo; into merely &amp;ldquo;what should the default be&amp;rdquo;, we can often save ourselves a lot of drama.
&lt;p&gt;So it is often worth keeping even suboptimal or untidy features or options, if people want to use them and are willing to maintain them.
&lt;p&gt;There is a tradeoff here, of course. But Free Software projects often significantly under-value the social benefits of keeping everyone happy. Wrestling with software &amp;mdash; even crusty or buggy software &amp;mdash; is a lot more fun than having unpleasant arguments.
&lt;h3&gt;&lt;a name="but-dont-do-decisionmaking-like-a-corporation"&gt;But don&amp;rsquo;t do decisionmaking like a corporation&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Many programmers&amp;rsquo; experience of formal decisionmaking is from their boss at work. But corporations are often a very bad example.
&lt;p&gt;They typically don&amp;rsquo;t have as much trouble actually &lt;em&gt;making&lt;/em&gt; decisions, but the decisions themselves are often terrible, and not just because corporations&amp;rsquo; goals are often bad.
&lt;p&gt;You get to be a decisionmaker in a corporation by spouting plausible nonsense, sounding confident, buttering up the even-more-vacuous people further up the chain, and sometimes by sabotaging your rivals. Corporate senior managers are hardly ever held accountable &amp;mdash; typically the effects of their tenure are only properly felt well after they&amp;rsquo;ve left to mess up somewhere else.
&lt;p&gt;We should select our leaders more wisely, and base decisions on substance.
&lt;h2&gt;&lt;a name="if-you-wont-do-politics-politics-will-do-you"&gt;If you won&amp;rsquo;t do politics, politics will do you&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;As a participant in a project, or a society, you can of course opt out of getting involved in politics.
&lt;p&gt;You can opt out of learning how to do politics generally, and opt out of understanding your project&amp;rsquo;s governance structures. You can opt out of making judgements about disputed questions, and tell yourself &amp;ldquo;&lt;a href="https://en.wikipedia.org/wiki/False_balance"&gt;there&amp;rsquo;s merit on both sides&lt;/a&gt;&amp;rdquo;.
&lt;p&gt;You can hate politicians indiscriminately, and criticise anyone you see doing politics.
&lt;p&gt;If you do this, then you are abdicating your decisionmaking authority, to those who are the most effective manipulators, or the most committed to getting their way. You&amp;rsquo;re tacitly supporting the existing power bases. You&amp;rsquo;re ceding power to the best liars, to those with the least scruples, and to the people who are most motivated by dominance. This is precisely the opposite of what you wanted.
&lt;p&gt;If enough people won&amp;rsquo;t do politics, and hate anyone who does, your discussion spaces will be reduced to a battleground of only the hardiest and the most toxic.
&lt;h3&gt;&lt;a name="if-you-dont-see-the-politics-its-still-happening"&gt;If you don&amp;rsquo;t see the politics, it&amp;rsquo;s still happening&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;If your governance systems don&amp;rsquo;t work, then there is no effective redress against bad or even malicious decisions. Your roleholders and subteams are unaccountable power centres.
&lt;p&gt;Power radically distorts every human relationship, and it takes great strength of character for an unaccountable power centre not to eventually become an unaccountable toxic cabal.
&lt;p&gt;So if you have a reasonably sized community, but don&amp;rsquo;t see your formal governance systems working &amp;mdash; people debating things, votes, leadership making explicit decisions &amp;mdash; that doesn&amp;rsquo;t mean everything is fine, and all the decisions are great, and there&amp;rsquo;s no politics happening.
&lt;p&gt;It just means that most of your community have given up on the official process. It also probably means that some parts of your project have formed toxic and unaccountable cabals. Those who won&amp;rsquo;t put up with that will leave.
&lt;p&gt;The same is true if the only governance actions that ever happen are massive drama. That means that only the most determined victim of a bad decision will even consider using such a process.
&lt;h2&gt;&lt;a name="conclusions"&gt;Conclusions&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;&lt;li&gt;&lt;p&gt;Respect and support the people who are trying to fix things with politics.

&lt;li&gt;&lt;p&gt;Be informed, and, where appropriate, involved.

&lt;li&gt;&lt;p&gt;If you are in a position of authority, be willing to &lt;em&gt;exercise&lt;/em&gt; that authority. Do more than just mediating to try to get consensus.

&lt;/p&gt;&lt;/li&gt;&lt;/p&gt;&lt;/li&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=19879" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:19480</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/19480.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=19480"/>
    <title>Rust is indeed woke</title>
    <published>2025-03-28T12:59:43Z</published>
    <updated>2025-03-28T17:09:57Z</updated>
    <category term="computers"/>
    <category term="rust"/>
    <category term="politics"/>
    <dw:security>public</dw:security>
    <dw:reply-count>3</dw:reply-count>
    <content type="html">&lt;p&gt;Rust, and resistance to it in some parts of the Linux community, has been in my feed recently. One undercurrent seems to be the notion that Rust is woke (and should therefore be rejected as part of culture wars).
&lt;p&gt;I&amp;rsquo;m going to argue that Rust, the language, &lt;em&gt;is&lt;/em&gt; woke. So the opponents are right, in that sense. Of course, as ever, dissing something for being woke is nasty and fascist-adjacent.
&lt;ul&gt;&lt;li&gt;&lt;a href="#community"&gt;Community&lt;/a&gt;
&lt;li&gt;&lt;a href="#technological-values---particularly-compared-to-cc"&gt;Technological values - particularly, compared to C/C++&lt;/a&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="#ostensible-values"&gt;Ostensible values&lt;/a&gt;
&lt;li&gt;&lt;a href="#attitude-to-the-programmers-mistakes"&gt;Attitude to the programmer&amp;rsquo;s mistakes&lt;/a&gt;
&lt;li&gt;&lt;a href="#the-ideology-of-the-hardcore-programmer"&gt;The ideology of the hardcore programmer&lt;/a&gt;
&lt;li&gt;&lt;a href="#memory-safety-as-a-power-struggle"&gt;Memory safety as a power struggle&lt;/a&gt;
&lt;li&gt;&lt;a href="#memory-safety-via-rust-as-a-power-struggle"&gt;Memory safety &lt;em&gt;via Rust&lt;/em&gt; as a power struggle&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;

&lt;li&gt;&lt;a href="#notes"&gt;Notes&lt;/a&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="#this-is-not-a-riir-manifesto"&gt;This is not a RIIR manifesto&lt;/a&gt;
&lt;li&gt;&lt;a href="#disclosure"&gt;Disclosure&lt;/a&gt;
&lt;li&gt;&lt;a href="#on-the-meaning-of-woke"&gt;On the meaning of &amp;ldquo;woke&amp;rdquo;&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;

&lt;li&gt;&lt;a href="#pithy-conclusion"&gt;Pithy conclusion&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;a name="cutid1"&gt;&lt;/a&gt;
&lt;h2&gt;&lt;a name="community"&gt;Community&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;The obvious way that Rust may seem woke is that it has the trappings, and many of the attitudes and outcomes, of a modern, nice, FLOSS community. Rust certainly does better than toxic environments like the Linux kernel or Debian. This is reflected in a higher proportion of contributors from various kinds of minoritised groups. But Rust is &lt;em&gt;not&lt;/em&gt; outstanding in this respect. It certainly has its problems. Many other projects do as well or better.
&lt;p&gt;And this is well-trodden ground. I have something more interesting to say:
&lt;h2&gt;&lt;a name="technological-values---particularly-compared-to-cc"&gt;Technological values - particularly, compared to C/C++&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Rust is &lt;em&gt;woke technology&lt;/em&gt; that embodies a &lt;em&gt;woke understanding&lt;/em&gt; of what it means to be a programming language.
&lt;h3&gt;&lt;a name="ostensible-values"&gt;Ostensible values&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Let&amp;rsquo;s start with Rust&amp;rsquo;s strapline:
&lt;blockquote&gt;&lt;p&gt;A language empowering everyone to build reliable and efficient software.
&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Surprisingly, this motto is not mere marketing puff. For Rustaceans, it is a key goal which strongly influences day-to-day decisions (big and small).
&lt;p&gt;&lt;strong&gt;Empowering everyone&lt;/strong&gt; is a key aspect of this, which aligns with my own personal values. In the Rust community, we care about &lt;em&gt;empowerment&lt;/em&gt;. We are trying to help liberate our users. And we want to empower &lt;em&gt;everyone&lt;/em&gt; because everyone is entitled to technological autonomy. (For a programming language, empowering individuals means empowering their communities, of course.)
&lt;p&gt;This is all very airy-fairy, but it has concrete consequences:
&lt;h3&gt;&lt;a name="attitude-to-the-programmers-mistakes"&gt;Attitude to the programmer&amp;rsquo;s mistakes&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;In Rust we consider it a key part of our job to help the programmer avoid mistakes; to limit the consequences of mistakes; and to guide programmers in useful directions.
&lt;p&gt;If you write a bug in your Rust program, Rust doesn&amp;rsquo;t blame you. Rust asks &amp;ldquo;how could the compiler have spotted that bug&amp;rdquo;.
&lt;p&gt;This is in sharp contrast to C (and C++). C nowadays is an insanely hostile programming environment. A C compiler relentlessly scours your program for any place where you may have violated C&amp;rsquo;s almost incomprehensible rules, so that it can compile your apparently-correct program into a buggy executable. And then the bug is considered your fault.
&lt;p&gt;These aren&amp;rsquo;t just attitudes implicitly embodied in the software. They are concrete opinions expressed by compiler authors, and also by language proponents. In other words:
&lt;p&gt;Rust sees programmers writing bugs as a systemic problem, which must be addressed by improvements to the environment and the system. The toxic parts of the C and C++ community see bugs as moral failings by individual programmers.
&lt;p&gt;Sound familiar?
&lt;h3&gt;&lt;a name="the-ideology-of-the-hardcore-programmer"&gt;The ideology of the hardcore programmer&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Programming has long suffered from the myth of the &amp;ldquo;rockstar&amp;rdquo;. Silicon Valley techbro culture loves this notion.
&lt;p&gt;In reality, though, modern information systems are far too complicated for a single person. Developing systems is a &lt;em&gt;team sport&lt;/em&gt;. Nontechnical, and technical-adjacent, skills are vital: clear but friendly communication; obtaining and incorporating the insights of every member of your team; willingness to be challenged. Community building. Collaboration. Governance.
&lt;p&gt;The hardcore C community embraces the rockstar myth: they imagine that a few super-programmers (or super-reviewers) are able to spot bugs, just by being so brilliant. Of course this doesn&amp;rsquo;t actually work at all, as we can see from the atrocious bugfest that is the Linux kernel.
&lt;p&gt;These &amp;ldquo;rockstars&amp;rdquo; want us to believe that there is a steep hierarchy in programming; that they are at the top of this hierarchy; and that being nice isn&amp;rsquo;t important.
&lt;p&gt;Sound familiar?
&lt;h3&gt;&lt;a name="memory-safety-as-a-power-struggle"&gt;Memory safety as a power struggle&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Much of the modern crisis of software reliability arises from memory-unsafe programming languages, mostly C and C++.
&lt;p&gt;Addressing this is a big job, requiring many changes. This threatens powerful interests; notably, corporations who want to keep shipping junk. (See also, conniptions over the EU Product Liability Directive.)
&lt;p&gt;The harms of this serious problem mostly fall on society at large, but the convenience of carrying on as before benefits existing powerful interests.
&lt;p&gt;Sound familiar?
&lt;h3&gt;&lt;a name="memory-safety-via-rust-as-a-power-struggle"&gt;Memory safety &lt;em&gt;via Rust&lt;/em&gt; as a power struggle&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Addressing this problem &lt;em&gt;via Rust&lt;/em&gt; is a direct threat to the power of established C programmers such as gatekeepers in the Linux kernel. Supplanting C means they will have to learn new things, and jostle for status against better Rustaceans, or &lt;a href="https://lwn.net/Articles/1011819/"&gt;be replaced&lt;/a&gt;. More broadly, Rust shows that it is practical to write fast, reliable software, and that this does not need (mythical) &amp;ldquo;rockstars&amp;rdquo;.
&lt;p&gt;So established C programmer &amp;ldquo;experts&amp;rdquo; are existing vested interests, whose power is undermined by (this approach to) tackling this serious problem.
&lt;p&gt;Sound familiar?
&lt;h2&gt;&lt;a name="notes"&gt;Notes&lt;/a&gt;&lt;/h2&gt;
&lt;h3&gt;&lt;a name="this-is-not-a-riir-manifesto"&gt;This is not a RIIR manifesto&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;I&amp;rsquo;m &lt;em&gt;not&lt;/em&gt; saying we should rewrite all the world&amp;rsquo;s C in Rust. We should not try to do that.
&lt;p&gt;Rust is often a good choice for new code, or when a rewrite or substantial overhaul is needed anyway. But we&amp;rsquo;re going to need other techniques to deal with all of our existing C. &lt;a href="https://en.wikipedia.org/wiki/Capability_Hardware_Enhanced_RISC_Instructions"&gt;CHERI&lt;/a&gt; is a very promising approach. Sandboxing, emulation and automatic translation are other possibilities. The problem is a big one and we need a toolkit, not a magic bullet.
&lt;p&gt;But as for Linux: it is a scandal that substantial new drivers and subsystems are still being written in C. We could have been using Rust for new code throughout Linux years ago, and avoided very many bugs. Those bugs are doing real harm. This is not OK.
&lt;h3&gt;&lt;a name="disclosure"&gt;Disclosure&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;I first learned C from K&amp;amp;R I in 1989. I spent the first three decades of my life as a working programmer writing lots and lots of C. I&amp;rsquo;ve written C++ too. I used to consider myself an expert C programmer, but nowadays my C is a bit rusty and out of date. Why is my C rusty? Because I found Rust, and immediately liked and adopted it (despite its many faults).
&lt;p&gt;I like Rust because I care that the software I write &lt;em&gt;actually works&lt;/em&gt;: I care that my code doesn&amp;rsquo;t do harm in the world.
&lt;h3&gt;&lt;a name="on-the-meaning-of-woke"&gt;On the meaning of &amp;ldquo;woke&amp;rdquo;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The original meaning of &amp;ldquo;woke&amp;rdquo; is something much more specific, to do with racism. For the avoidance of doubt, I don&amp;rsquo;t think Rust is particularly antiracist.
&lt;p&gt;I&amp;rsquo;m using &amp;ldquo;woke&amp;rdquo; (like Rust&amp;rsquo;s opponents are) in the much broader, and now much more prevalent, culture wars sense.
&lt;h2&gt;&lt;a name="pithy-conclusion"&gt;Pithy conclusion&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;If you&amp;rsquo;re a senior developer who knows only C/C++, doesn&amp;rsquo;t want their authority challenged, and doesn&amp;rsquo;t want to have to learn how to write better software, you should hate Rust.
&lt;p&gt;Also you should be fired.
&lt;hr&gt;
&lt;address&gt;
Edited 2025-03-28 17:10 UTC to fix minor problems and add a new note about the meaning of the word "woke".
&lt;/address&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=19480" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:19395</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/19395.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=19395"/>
    <title>derive-deftly 1.0.0 - Rust derive macros, the easy way</title>
    <published>2025-02-11T21:14:07Z</published>
    <updated>2025-02-11T21:16:33Z</updated>
    <category term="derive-deftly"/>
    <category term="computers"/>
    <category term="rust"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">&lt;p&gt;&lt;a href="https://docs.rs/derive-deftly/latest/derive_deftly/"&gt;derive-deftly&lt;/a&gt; 1.0 is released.
&lt;p&gt;derive-deftly is a template-based derive-macro facility for Rust. It has been a great success. Your codebase may benefit from it too!
&lt;p&gt;Rust programmers will appreciate its power, flexibility, and consistency, compared to &lt;code&gt;macro_rules&lt;/code&gt;; and its convenience and simplicity, compared to proc macros.
&lt;p&gt;Programmers coming to Rust from scripting languages will appreciate derive-deftly&amp;rsquo;s convenient automatic code generation, which works as a kind of compile-time introspection.
&lt;ul&gt;&lt;li&gt;&lt;a href="#rusts-two-main-macro-systems"&gt;Rust&amp;rsquo;s two main macro systems&lt;/a&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="#macro_rules"&gt;&lt;code&gt;macro_rules!&lt;/code&gt;&lt;/a&gt;
&lt;li&gt;&lt;a href="#proc-macros"&gt;proc macros&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;

&lt;li&gt;&lt;a href="#derive-deftly-to-the-rescue"&gt;derive-deftly to the rescue&lt;/a&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="#example"&gt;Example&lt;/a&gt;
&lt;li&gt;&lt;a href="#special-purpose-derive-macros-are-now-worthwhile"&gt;Special-purpose derive macros are now worthwhile!&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;

&lt;li&gt;&lt;a href="#stability-without-stagnation"&gt;Stability without stagnation&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;a name="cutid1"&gt;&lt;/a&gt;
&lt;p&gt;
&lt;h2&gt;&lt;a name="rusts-two-main-macro-systems"&gt;Rust&amp;rsquo;s two main macro systems&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;I&amp;rsquo;m often a fan of metaprogramming, including macros. They can help &lt;a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself"&gt;remove duplication&lt;/a&gt; and flab, which are often the enemy of correctness.
&lt;p&gt;Rust has two macro systems. derive-deftly offers much of the power of the more advanced (proc_macros), while beating the simpler one (macro_rules) at its own game for ease of use.
&lt;p&gt;(Side note: Rust has at least three other ways to do metaprogramming: generics; &lt;code&gt;build.rs&lt;/code&gt;; and, multiple module inclusion via &lt;code&gt;#[path=]&lt;/code&gt;. These are beyond the scope of this blog post.)
&lt;h3&gt;&lt;a name="macro_rules"&gt;&lt;code&gt;macro_rules!&lt;/code&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://doc.rust-lang.org/book/ch19-06-macros.html#declarative-macros-with-macro_rules-for-general-metaprogramming"&gt;&lt;code&gt;macro_rules!&lt;/code&gt;&lt;/a&gt; aka &amp;ldquo;pattern macros&amp;rdquo;, &amp;ldquo;declarative macros&amp;rdquo;, or sometimes &amp;ldquo;macros by example&amp;rdquo; are the simpler kind of Rust macro.
&lt;p&gt;They involve writing a sort-of-BNF pattern-matcher, and a template which is then expanded with substitutions from the actual input. If your macro wants to accept comma-separated lists, or other simple kinds of input, this is OK. But often we want to emulate a &lt;code&gt;#[derive(...)]&lt;/code&gt; macro: e.g., to define code based on a struct, handling each field. Doing that with macro_rules is very awkward:
&lt;p&gt;&lt;code&gt;macro_rules!&lt;/code&gt;&amp;rsquo;s pattern language doesn&amp;rsquo;t have a cooked way to match a data structure, so you have to hand-write a matcher for Rust syntax, in each macro. Writing such a matcher is very hard in the general case, because &lt;code&gt;macro_rules&lt;/code&gt; lacks features for matching important parts of Rust syntax (notably, generics). (If you &lt;em&gt;really&lt;/em&gt; need to, there&amp;rsquo;s a &lt;a href="https://fprijate.github.io/tlborm/pat-incremental-tt-munchers.html"&gt;horrible technique&lt;/a&gt; as a workaround.)
&lt;p&gt;And, the invocation syntax for the macro is awkward: you must enclose the whole of the struct in &lt;code&gt;my_macro! { }&lt;/code&gt;. This makes it hard to apply more than one macro to the same struct, and produces rightward drift.
&lt;p&gt;Enclosing the struct this way means the macro must reproduce its input - so it can have bugs where it mangles the input, perhaps subtly. This also means the reader cannot be sure precisely whether the macro modifies the struct itself. In Rust, the types and data structures are often the key places to go to understand a program, so this is a significant downside.
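&lt;p&gt;A minimal, hypothetical sketch of this pattern (the hand-written matcher below handles only plain structs with named &lt;code&gt;pub&lt;/code&gt; fields, and no generics or attributes, which is exactly the limitation at issue):

```rust
// Hypothetical sketch of a macro_rules "derive emulation": the struct
// must be wrapped in the macro invocation, and the macro must both match
// the struct with a hand-written pattern AND re-emit it verbatim.
macro_rules! with_field_count {
    (
        // Simplified matcher: named pub fields only, no generics,
        // no attributes -- illustrating how limited such matchers are.
        pub struct $name:ident { $( pub $field:ident : $fty:ty ),* $(,)? }
    ) => {
        // Re-emit the input, or the struct definition vanishes.
        pub struct $name { $( pub $field : $fty ),* }

        impl $name {
            // Count the fields by expanding one `+ 1` per field.
            pub const FIELD_COUNT: usize =
                0 $( + { let _ = stringify!($field); 1 } )*;
        }
    };
}

// The whole struct has to sit inside the macro invocation,
// so a second such macro cannot easily be applied to it:
with_field_count! {
    pub struct Point { pub x: i64, pub y: i64 }
}

fn main() {
    assert_eq!(Point::FIELD_COUNT, 2);
    println!("Point has {} fields", Point::FIELD_COUNT);
}
```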
&lt;p&gt;&lt;code&gt;macro_rules&lt;/code&gt; also has various other weird deficiencies too specific to list here.
&lt;p&gt;Overall, compared to (say) the C preprocessor, it&amp;rsquo;s great, but programmers used to the power of Lisp macros, or (say) metaprogramming in Tcl, will quickly become frustrated.
&lt;h3&gt;&lt;a name="proc-macros"&gt;proc macros&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Rust&amp;rsquo;s &lt;a href="https://doc.rust-lang.org/proc_macro/index.html"&gt;second macro system&lt;/a&gt; is much more advanced. It is a fully general system for processing and rewriting code. The macro&amp;rsquo;s implementation is Rust code, which takes the macro&amp;rsquo;s input as arguments, in the form of &lt;a href="https://doc.rust-lang.org/proc_macro/enum.TokenTree.html"&gt;Rust tokens&lt;/a&gt;, and returns Rust tokens to be inserted into the actual program.
&lt;p&gt;This approach is more similar to Common Lisp&amp;rsquo;s macros than to most other programming languages&amp;rsquo; macro systems. It is extremely powerful, and is used to implement many very widely used and powerful facilities. In particular, proc macros can be applied to data structures with &lt;code&gt;#[derive(...)]&lt;/code&gt;. The macro receives the data structure, in the form of Rust tokens, and returns the code for the new implementations, functions etc.
&lt;p&gt;This is used very heavily in the standard library for basic features like &lt;code&gt;#[derive(Debug)]&lt;/code&gt; and &lt;code&gt;Clone&lt;/code&gt;, and for important libraries like &lt;a href="https://serde.rs/"&gt;&lt;code&gt;serde&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://lib.rs/crates/strum"&gt;&lt;code&gt;strum&lt;/code&gt;&lt;/a&gt;.
&lt;p&gt;But, it is a complete pain in the backside to &lt;em&gt;write&lt;/em&gt; and &lt;em&gt;maintain&lt;/em&gt; a proc_macro.
&lt;p&gt;The Rust types and functions you deal with in your macro are very low level. You must manually handle every possible case, with runtime conditions and pattern-matching. Error handling and recovery is so nontrivial there are macro-writing &lt;a href="https://lib.rs/crates/proc-macro-error"&gt;libraries&lt;/a&gt; and even &lt;a href="https://lib.rs/crates/manyhow"&gt;more macros&lt;/a&gt; to help. Unlike a Lisp codewalker, a Rust proc macro must deal with Rust&amp;rsquo;s highly complex syntax. You will probably end up dealing with &lt;a href="https://lib.rs/crates/syn"&gt;syn&lt;/a&gt;, which is a complete Rust parsing library, separate from the compiler; syn is capable and comprehensive, but a proc macro must still contain a lot of often-intricate code.
&lt;p&gt;There are build/execution environment problems. The proc_macro code can&amp;rsquo;t live with your application; you have to put the proc macros in a separate cargo package, complicating your build arrangements. The proc macro package environment is weird: you can&amp;rsquo;t test it separately, without &lt;a href="https://lib.rs/crates/proc-macro2"&gt;jumping through hoops&lt;/a&gt;. Debugging can be awkward. Proper tests can only realistically be done with the help of complex &lt;a href="https://lib.rs/crates/macrotest"&gt;additional&lt;/a&gt; &lt;a href="https://lib.rs/crates/trybuild"&gt;tools&lt;/a&gt;, and will involve a pinned version of Nightly Rust.
&lt;h2&gt;&lt;a name="derive-deftly-to-the-rescue"&gt;derive-deftly to the rescue&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;derive-deftly lets you write a &lt;code&gt;#[derive(...)]&lt;/code&gt; macro, driven by a data structure, without wading into any of that stuff.
&lt;p&gt;Your macro definition is a template in a simple syntax, with predefined &lt;code&gt;$&lt;/code&gt;-substitutions for the various parts of the input data structure.
&lt;h3&gt;&lt;a name="example"&gt;Example&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Here&amp;rsquo;s a &lt;a href="https://salsa.debian.org/dgit-team/tag2upload-service-manager/-/blob/30bf4969b31df2802e7642330d607e61e73d5b5c/src/o2m_tracker.rs#L84"&gt;real-world&lt;/a&gt; &lt;a href="https://salsa.debian.org/dgit-team/tag2upload-service-manager/-/blob/30bf4969b31df2802e7642330d607e61e73d5b5c/src/db_data.rs#L83"&gt;example&lt;/a&gt; from a personal project:
&lt;pre&gt;&lt;code&gt;define_derive_deftly! {
    export UpdateWorkerReport:
    impl $ttype {
        pub fn update_worker_report(&amp;amp;self, wr: &amp;amp;mut WorkerReport) {
            $(
                ${when fmeta(worker_report)}
                wr.$fname = Some(self.$fname.clone()).into();
            )
        }
    }
}&lt;/code&gt;&lt;/pre&gt; &lt;pre&gt;&lt;code&gt;#[derive(Debug, Deftly, Clone)]
...
#[derive_deftly(UiMap, UpdateWorkerReport)]
pub struct JobRow {
    ...
    #[deftly(worker_report)]
    pub status: JobStatus,
    pub processing: NoneIsEmpty&amp;lt;ProcessingInfo&amp;gt;,
    #[deftly(worker_report)]
    pub info: String,
    pub duplicate_of: Option&amp;lt;JobId&amp;gt;,
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This is a nice example, also, of how using a macro can avoid bugs. Implementing this update by hand without a macro would involve a lot of cut-and-paste. When doing that cut-and-paste it can be very easy to accidentally write bugs where you forget to update some parts of each of the copies:
&lt;pre&gt;&lt;code&gt;    pub fn update_worker_report(&amp;amp;self, wr: &amp;amp;mut WorkerReport) {
        wr.status = Some(self.status.clone()).into();
        wr.info = Some(self.status.clone()).into();
    }&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Spot the mistake? We copy &lt;code&gt;status&lt;/code&gt; to &lt;code&gt;info&lt;/code&gt;. Bugs like this are extremely common, and not always found by the type system. derive-deftly can make it much easier to make them impossible.
&lt;h3&gt;&lt;a name="special-purpose-derive-macros-are-now-worthwhile"&gt;Special-purpose derive macros are now worthwhile!&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Because of the difficult and cumbersome nature of proc macros, very few projects have site-specific, special-purpose &lt;code&gt;#[derive(...)]&lt;/code&gt; macros.
&lt;p&gt;The &lt;a href="https://arti.torproject.org/"&gt;Arti&lt;/a&gt; &lt;a href="https://gitlab.torproject.org/tpo/core/arti"&gt;codebase&lt;/a&gt; has &lt;strong&gt;no&lt;/strong&gt; bespoke proc macros, across its 240kloc and 86 crates. (We did &lt;a href="https://lib.rs/crates/derive_builder_fork_arti"&gt;fork&lt;/a&gt; one upstream proc macro package to add a feature we needed.) I have only &lt;em&gt;one&lt;/em&gt; bespoke, case-specific, proc macro amongst all of my personal Rust projects; it predates derive-deftly.
&lt;p&gt;Since we have started using derive-deftly in Arti, it has become an important tool in our toolbox. We have &lt;strong&gt;37&lt;/strong&gt; bespoke derive macros, done with derive-deftly. Of these, 9 are exported for use by downstream crates. (For comparison there are 176 macro_rules macros.)
&lt;p&gt;In my most recent &lt;a href="https://salsa.debian.org/dgit-team/tag2upload-service-manager"&gt;personal Rust project&lt;/a&gt;, I have &lt;strong&gt;22&lt;/strong&gt; bespoke derive macros, done with derive-deftly, and 19 macro_rules macros.
&lt;p&gt;derive-deftly macros are easy and straightforward enough that they can be used as readily as macro_rules macros. Indeed, they are often &lt;em&gt;clearer&lt;/em&gt; than a macro_rules macro.
&lt;h2&gt;&lt;a name="stability-without-stagnation"&gt;Stability without stagnation&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;derive-deftly is already highly capable, and can solve many advanced problems.
&lt;p&gt;It is mature software, well tested, with excellent documentation, comprising both comprehensive &lt;a href="https://docs.rs/derive-deftly/latest/derive_deftly/index.html#overall-toc"&gt;reference material&lt;/a&gt; and the &lt;a href="https://diziet.pages.torproject.net/rust-derive-deftly/latest/guide/"&gt;walkthrough-structured user guide&lt;/a&gt;.
&lt;p&gt;But declaring it 1.0 doesn&amp;rsquo;t mean that it won&amp;rsquo;t improve further.
&lt;p&gt;Our &lt;a href="https://gitlab.torproject.org/Diziet/rust-derive-deftly/-/issues/?sort=updated_desc&amp;amp;state=opened&amp;amp;first_page_size=100"&gt;ticket tracker&lt;/a&gt; has a laundry list of possible features. We&amp;rsquo;ll sometimes be cautious about committing to these, so we&amp;rsquo;ve added a &lt;a href="https://docs.rs/derive-deftly/1.0.0/derive_deftly/doc_changelog/index.html#t:beta"&gt;&lt;code&gt;beta&lt;/code&gt;&lt;/a&gt; feature flag, for opting in to less-stable features, so that we can prototype things without painting ourselves into a corner. And, we intend to further develop the Guide.&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=19395" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:18695</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/18695.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=18695"/>
    <title>derive-deftly is nearing 1.x - call for review/testing</title>
    <published>2024-07-03T18:32:06Z</published>
    <updated>2024-07-03T18:32:57Z</updated>
    <category term="derive-adhoc"/>
    <category term="derive-deftly"/>
    <category term="computers"/>
    <category term="rust"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">&lt;p&gt;&lt;a href="https://docs.rs/derive-deftly/latest/derive_deftly/"&gt;&lt;code&gt;derive-deftly&lt;/code&gt;&lt;/a&gt;, the template-based derive-macro facility for Rust, has been a great success.
&lt;p&gt;It&amp;rsquo;s coming up to time to declare a stable 1.x version. If you&amp;rsquo;d like to try it out, and have final comments / observations, now is the time.
&lt;ul&gt;&lt;li&gt;&lt;a href="#introduction-to-derive-deftly"&gt;Introduction to derive-deftly&lt;/a&gt;
&lt;li&gt;&lt;a href="#status"&gt;Status&lt;/a&gt;
&lt;li&gt;&lt;a href="#history"&gt;History&lt;/a&gt;
&lt;li&gt;&lt;a href="#plans---call-for-reviewtesting"&gt;Plans - call for review/testing&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;a name="cutid1"&gt;&lt;/a&gt;
&lt;h3&gt;&lt;a name="introduction-to-derive-deftly"&gt;Introduction to derive-deftly&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Have you ever wished that you could write a new &lt;code&gt;derive&lt;/code&gt; macro without having to mess with procedural macros?
&lt;p&gt;You can!
&lt;p&gt;&lt;a href="https://docs.rs/derive-deftly/latest/derive_deftly"&gt;&lt;code&gt;derive-deftly&lt;/code&gt;&lt;/a&gt; lets you write a &lt;code&gt;#[derive]&lt;/code&gt; macro, using a template syntax which looks a lot like &lt;code&gt;macro_rules!&lt;/code&gt;:
&lt;pre&gt;&lt;code&gt;use derive_deftly::{define_derive_deftly, Deftly};

define_derive_deftly! {
    ListVariants:

    impl $ttype {
        fn list_variants() -&amp;gt; Vec&amp;lt;&amp;amp;&amp;#39;static str&amp;gt; {
            vec![ $( stringify!( $vname ) , ) ]
        }
    }
}

#[derive(Deftly)]
#[derive_deftly(ListVariants)]
enum Enum {
    UnitVariant,
    StructVariant { a: u8, b: u16 },
    TupleVariant(u8, u16),
}

assert_eq!(
    Enum::list_variants(),
    [&amp;quot;UnitVariant&amp;quot;, &amp;quot;StructVariant&amp;quot;, &amp;quot;TupleVariant&amp;quot;],
);&lt;/code&gt;&lt;/pre&gt;&lt;h3&gt;&lt;a name="status"&gt;Status&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;derive-deftly has a wide range of features, which can be used to easily write sophisticated and reliable derive macros. We&amp;rsquo;ve been using it in &lt;a href="http://arti.torproject.org/"&gt;Arti&lt;/a&gt;, the Tor Project&amp;rsquo;s reimplementation of Tor in Rust, and we&amp;rsquo;ve found it very useful.
&lt;p&gt;There is comprehensive &lt;a href="https://docs.rs/derive-deftly/latest/derive_deftly/doc_reference/index.html"&gt;reference documentation&lt;/a&gt;, and more discursive &lt;a href="https://diziet.pages.torproject.net/rust-derive-deftly/latest/guide/"&gt;User Guide&lt;/a&gt; for a more gentle introduction. Naturally, everything is fully tested.
&lt;h3&gt;&lt;a name="history"&gt;History&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;derive-deftly started out as a Tor Hackweek project. It used to be called &lt;code&gt;derive-adhoc&lt;/code&gt;. But we renamed it because we found that many of the most interesting use cases were really not very ad-hoc at all.
&lt;p&gt;Over the past months we&amp;rsquo;ve been ticking off our &amp;ldquo;1.0 blocker&amp;rdquo; tickets. We&amp;rsquo;ve taken the opportunity to improve syntax, terminology, and semantics. We hope we have now made the last breaking changes.
&lt;h3&gt;&lt;a name="plans---call-for-reviewtesting"&gt;Plans - call for review/testing&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;In the near future, we plan to declare version 1.0. After 1.x, we intend to make breaking changes very rarely.
&lt;p&gt;So, right now, we&amp;rsquo;d like last-minute feedback. Are there any wrinkles that need to be sorted out? Please file tickets or MRs &lt;a href="https://gitlab.torproject.org/Diziet/rust-derive-deftly"&gt;on our gitlab&lt;/a&gt;. Ideally, anything which might imply breaking changes would be submitted on or before the 13th of August.
&lt;p&gt;In the medium to long term, we have many ideas for how to make derive-deftly even more convenient, and even more powerful. But we are going to proceed cautiously, because we don&amp;rsquo;t want to introduce bad syntax or bad features, which will require difficult decisions in the future about forward compatibility.&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=18695" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:18122</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/18122.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=18122"/>
    <title>How to use Rust on Debian (and Ubuntu, etc.)</title>
    <published>2024-03-21T20:05:46Z</published>
    <updated>2024-03-21T21:47:04Z</updated>
    <category term="rust"/>
    <category term="debian"/>
    <category term="computers"/>
    <category term="nailing-cargo"/>
    <dw:security>public</dw:security>
    <dw:reply-count>4</dw:reply-count>
    <content type="html">&lt;p&gt;tl;dr: Don&amp;rsquo;t just &lt;code&gt;apt install rustc cargo&lt;/code&gt;. Either do that &lt;strong&gt;and make sure to use only Rust libraries from your distro&lt;/strong&gt; (with the tiresome config runes below); or, just use &lt;a href="https://www.rust-lang.org/learn/get-started"&gt;rustup&lt;/a&gt;.
&lt;ul&gt;&lt;li&gt;&lt;a href="#dont-do-the-obvious-thing-its-never-what-you-want"&gt;Don&amp;rsquo;t do the obvious thing; it&amp;rsquo;s never what you want&lt;/a&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="#q.-download-and-run-whatever-code-from-the-internet"&gt;Q. Download and run whatever code from the internet?&lt;/a&gt;
&lt;/li&gt;&lt;/ul&gt;

&lt;li&gt;&lt;a href="#option-1-wtf-no-i-dont-want-curlbash"&gt;Option 1: WTF, no I don&amp;rsquo;t want &lt;code&gt;curl|bash&lt;/code&gt;&lt;/a&gt;
&lt;li&gt;&lt;a href="#option-2-biting-the-curlbash-bullet"&gt;Option 2: Biting the &lt;code&gt;curl|bash&lt;/code&gt; bullet&lt;/a&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="#privilege-separation"&gt;Privilege separation&lt;/a&gt;
&lt;/li&gt;&lt;/ul&gt;

&lt;li&gt;&lt;a href="#omg-what-a-mess"&gt;OMG what a mess&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;a name="cutid1"&gt;&lt;/a&gt;
&lt;p&gt;
&lt;h3&gt;&lt;a name="dont-do-the-obvious-thing-its-never-what-you-want"&gt;Don&amp;rsquo;t do the obvious thing; it&amp;rsquo;s never what you want&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Debian ships a Rust compiler, and a large number of Rust libraries.
&lt;p&gt;But if you just do things the obvious &amp;ldquo;default&amp;rdquo; way, with &lt;code&gt;apt install rustc cargo&lt;/code&gt;, you will end up using Debian&amp;rsquo;s &lt;em&gt;compiler&lt;/em&gt; but &lt;em&gt;upstream&lt;/em&gt; libraries, directly and uncurated from crates.io.
&lt;p&gt;This is not what you want. There are about two reasonable things to do, depending on your preferences.
&lt;h4&gt;&lt;a name="q.-download-and-run-whatever-code-from-the-internet"&gt;Q. Download and run whatever code from the internet?&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;The key question is this:
&lt;p&gt;Are you comfortable downloading code, directly from hundreds of upstream Rust package maintainers, and running it?
&lt;p&gt;That&amp;rsquo;s what &lt;code&gt;cargo&lt;/code&gt; does. It&amp;rsquo;s one of the main things it&amp;rsquo;s &lt;em&gt;for&lt;/em&gt;. Debian&amp;rsquo;s &lt;code&gt;cargo&lt;/code&gt; behaves, in this respect, just like upstream&amp;rsquo;s. Let me say that again:
&lt;p&gt;&lt;strong&gt;Debian&amp;rsquo;s cargo promiscuously downloads code from crates.io&lt;/strong&gt; just like upstream cargo.
&lt;p&gt;So if you use Debian&amp;rsquo;s cargo in the most obvious way, you are &lt;em&gt;still&lt;/em&gt; downloading and running all those random libraries. The only thing you&amp;rsquo;re &lt;em&gt;avoiding&lt;/em&gt; downloading is the Rust compiler itself, which is precisely the part that is most carefully maintained, and of least concern.
&lt;p&gt;Debian&amp;rsquo;s cargo can even download from crates.io when you&amp;rsquo;re building official Debian source packages written in Rust: if you run &lt;code&gt;dpkg-buildpackage&lt;/code&gt;, the downloading is suppressed; but a plain &lt;code&gt;cargo build&lt;/code&gt; will try to obtain and use dependencies from the upstream ecosystem. (&amp;ldquo;Happily&amp;rdquo;, if you do this, it&amp;rsquo;s quite likely to bail out early due to version mismatches, before actually downloading anything.)
&lt;h3&gt;&lt;a name="option-1-wtf-no-i-dont-want-curlbash"&gt;Option 1: WTF, no I don&amp;rsquo;t want &lt;code&gt;curl|bash&lt;/code&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;OK, but then you must limit yourself to libraries available &lt;em&gt;within&lt;/em&gt; Debian. Each Debian release provides a curated set. It may or may not be sufficient for your needs. Many capable programs can be written using the packages in Debian.
&lt;p&gt;But any &lt;em&gt;upstream&lt;/em&gt; Rust project that you encounter is likely to be a pain to get working, unless their maintainers specifically intend to support this. (This is fairly rare, and the Rust tooling doesn&amp;rsquo;t make it easy.)
&lt;p&gt;To go with this plan, &lt;code&gt;apt install rustc cargo&lt;/code&gt; and &lt;strong&gt;put this in your configuration&lt;/strong&gt;, in &lt;code&gt;$HOME/.cargo/config.toml&lt;/code&gt;:
&lt;pre&gt;&lt;code&gt;[source.debian-packages]
directory = &amp;quot;/usr/share/cargo/registry&amp;quot;
[source.crates-io]
replace-with = &amp;quot;debian-packages&amp;quot;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This causes cargo to look in &lt;code&gt;/usr/share&lt;/code&gt; for dependencies, rather than downloading them from crates.io. You must then install the &lt;code&gt;librust-FOO-dev&lt;/code&gt; packages for each of your dependencies, with &lt;code&gt;apt&lt;/code&gt;.
&lt;p&gt;This will allow you to write your own program in Rust, and build it using &lt;code&gt;cargo build&lt;/code&gt;.
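&lt;p&gt;For illustration, the whole of this plan, end to end, might look like the following. (&lt;code&gt;serde&lt;/code&gt; and &lt;code&gt;rand&lt;/code&gt; are just example dependencies; substitute whatever crates your program actually uses.)
&lt;pre&gt;&lt;code&gt;# toolchain, plus one librust-FOO-dev package per dependency
apt install rustc cargo librust-serde-dev librust-rand-dev
# with the [source] stanzas above in ~/.cargo/config.toml, this
# resolves dependencies from /usr/share/cargo/registry,
# without touching the network
cargo build&lt;/code&gt;&lt;/pre&gt;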
&lt;h3&gt;&lt;a name="option-2-biting-the-curlbash-bullet"&gt;Option 2: Biting the &lt;code&gt;curl|bash&lt;/code&gt; bullet&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;If you want to build software that isn&amp;rsquo;t specifically targeted at Debian&amp;rsquo;s Rust, you will probably &lt;em&gt;need&lt;/em&gt; to use packages from crates.io, &lt;em&gt;not&lt;/em&gt; from Debian.
&lt;p&gt;If you&amp;rsquo;re going to do that, there is little point not using &lt;a href="https://www.rust-lang.org/learn/get-started"&gt;rustup&lt;/a&gt; to get the latest compiler. rustup&amp;rsquo;s install rune is alarming, but cargo will be doing exactly the same kind of thing, only worse (because it trusts many more people) and more hidden.
&lt;p&gt;So in this case: &lt;em&gt;do&lt;/em&gt; run the &lt;a href="https://www.rust-lang.org/learn/get-started"&gt;&lt;code&gt;curl|bash&lt;/code&gt; install rune&lt;/a&gt;.
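&lt;p&gt;At the time of writing, that rune is something like the following; but copy it from the rustup site itself rather than from here, since the URL and the recommended flags can change:
&lt;pre&gt;&lt;code&gt;curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh&lt;/code&gt;&lt;/pre&gt;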
&lt;p&gt;Hopefully the Rust project you are trying to build have shipped a &lt;code&gt;Cargo.lock&lt;/code&gt;; that contains hashes of all the dependencies that &lt;em&gt;they&lt;/em&gt; last used and tested. If you run &lt;code&gt;cargo build --locked&lt;/code&gt;, cargo will &lt;em&gt;only&lt;/em&gt; use those versions, which are hopefully OK.
&lt;p&gt;And you can run &lt;code&gt;cargo audit&lt;/code&gt; to see if there are any reported vulnerabilities or problems. But you&amp;rsquo;ll have to bootstrap this with &lt;code&gt;cargo install --locked cargo-audit&lt;/code&gt;; cargo-audit is from the &lt;a href="https://rustsec.org/"&gt;RUSTSEC&lt;/a&gt; folks who do care about these kind of things, so hopefully running their code (and their dependencies) is fine. Note the &lt;code&gt;--locked&lt;/code&gt; which is needed because &lt;a href="https://github.com/rust-lang/cargo/issues/7169"&gt;cargo&amp;rsquo;s default behaviour is wrong&lt;/a&gt;.
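&lt;p&gt;Putting that together, checking and building an upstream project might look like this (&lt;code&gt;some-project&lt;/code&gt; is a stand-in for whatever you downloaded):
&lt;pre&gt;&lt;code&gt;# bootstrap the auditor, itself built from pinned dependency versions
cargo install --locked cargo-audit
cd some-project        # must ship a Cargo.lock for --locked to work
cargo audit            # check Cargo.lock against the RUSTSEC advisory database
cargo build --locked   # build with exactly the versions upstream tested&lt;/code&gt;&lt;/pre&gt;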
&lt;h4&gt;&lt;a name="privilege-separation"&gt;Privilege separation&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;This approach is rather alarming. For my personal use, I wrote a privsep tool which allows me to run all this upstream Rust code as a separate user.
&lt;p&gt;That tool is &lt;a href="https://diziet.dreamwidth.org/8848.html"&gt;nailing-cargo&lt;/a&gt;. It&amp;rsquo;s not particularly well productised, or tested, but it does work for at least one person besides me. You may wish to try it out, or consider alternative arrangements. &lt;a href="https://salsa.debian.org/iwj/nailing-cargo"&gt;Bug reports and patches welcome&lt;/a&gt;.
&lt;h3&gt;&lt;a name="omg-what-a-mess"&gt;OMG what a mess&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Indeed. There are a large number of technical and social factors at play.
&lt;p&gt;cargo itself is deeply troubling, both in principle, and in detail. I often find myself severely disappointed with its maintainers&amp;rsquo; decisions. In mitigation, much of the wider Rust upstream community &lt;em&gt;does&lt;/em&gt; take this kind of thing very seriously, and often makes good choices. &lt;a href="https://rustsec.org/"&gt;RUSTSEC&lt;/a&gt; is one of the results.
&lt;p&gt;Debian&amp;rsquo;s technical arrangements for Rust packaging are quite dysfunctional, too: IMO the scheme is based on fundamentally wrong design principles. But, the Debian Rust packaging team is dynamic, constantly working the update treadmills; and the team is generally welcoming and helpful.
&lt;p&gt;Sadly last time I explored the possibility, the Debian Rust Team didn&amp;rsquo;t have the appetite for more fundamental changes to the &lt;a href="https://salsa.debian.org/rust-team/debcargo-conf/-/blob/master/README.rst"&gt;workflow&lt;/a&gt; (including, for example, &lt;a href="https://diziet.dreamwidth.org/10559.html"&gt;changes to dependency version handling&lt;/a&gt;). Significant improvements to upstream cargo&amp;rsquo;s approach seem unlikely, too; we can only hope that eventually someone might manage to supplant it.

&lt;address&gt;edited 2024-03-21 21:49 to add a cut tag&lt;/address&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=18122" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:17579</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/17579.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=17579"/>
    <title>Don’t use apt-get source; use dgit</title>
    <published>2023-12-04T15:10:52Z</published>
    <updated>2023-12-04T15:12:25Z</updated>
    <category term="dgit"/>
    <category term="debian"/>
    <category term="computers"/>
    <dw:security>public</dw:security>
    <dw:reply-count>1</dw:reply-count>
    <content type="html">&lt;p&gt;tl;dr:
&lt;p&gt;If you are a Debian user who knows git, &lt;strong&gt;don&amp;rsquo;t work with Debian source packages&lt;/strong&gt;. Don&amp;rsquo;t use &lt;code&gt;apt source&lt;/code&gt;, or &lt;code&gt;dpkg-source&lt;/code&gt;. Instead, &lt;strong&gt;use &lt;a href="https://manpages.debian.org/stable/dgit/dgit-user.7.en.html"&gt;dgit&lt;/a&gt; and work in git&lt;/strong&gt;.
&lt;p&gt;Also, &lt;strong&gt;don&amp;rsquo;t&lt;/strong&gt; use: &amp;ldquo;VCS&amp;rdquo; links on official Debian web pages, &lt;code&gt;debcheckout&lt;/code&gt;, or Debian&amp;rsquo;s (semi-)official gitlab, Salsa. These are suitable for Debian experts only; for most people they &lt;a href="https://diziet.dreamwidth.org/9556.html"&gt;can be beartraps&lt;/a&gt;. Instead, use &lt;a href="https://manpages.debian.org/stable/dgit/dgit-user.7.en.html"&gt;dgit&lt;/a&gt;.
&lt;ul&gt;&lt;li&gt;&lt;a href="#struggling-with-debian-source-packages"&gt;Struggling with Debian source packages?&lt;/a&gt;
&lt;li&gt;&lt;a href="#just-use-dgit"&gt;Just use dgit&lt;/a&gt;
&lt;li&gt;&lt;a href="#objections"&gt;Objections&lt;/a&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="#but-i-dont-want-to-learn-yet-another-tool"&gt;But I don&amp;rsquo;t want to learn &lt;em&gt;yet another&lt;/em&gt; tool&lt;/a&gt;
&lt;li&gt;&lt;a href="#shouldnt-i-be-using-official-debian-git-repos"&gt;Shouldn&amp;rsquo;t I be using &amp;ldquo;official&amp;rdquo; Debian git repos?&lt;/a&gt;
&lt;li&gt;&lt;a href="#gosh-is-debian-really-this-bad"&gt;Gosh, is Debian really this bad?&lt;/a&gt;
&lt;li&gt;&lt;a href="#im-a-debian-maintainer.-you-tell-me-dgit-is-something-totally-different"&gt;I&amp;rsquo;m a Debian maintainer. You tell &lt;em&gt;me&lt;/em&gt; dgit is something totally different!&lt;/a&gt;
&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;

&lt;/li&gt;&lt;/li&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;a name="cutid1"&gt;&lt;/a&gt;&amp;gt;
&lt;h3&gt;&lt;a name="struggling-with-debian-source-packages"&gt;Struggling with Debian source packages?&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;A friend of mine recently asked for help on IRC. They&amp;rsquo;re an experienced Debian administrator and user, and were trying to: make a change to a Debian package; build and install and run binary packages from it; and record that change for their future self, and their colleagues. They ended up trying to comprehend quilt.
&lt;p&gt;&lt;a href="https://manpages.debian.org/bookworm/quilt/quilt.1.en.html"&gt;quilt&lt;/a&gt; is an ancient utility for managing sets of source code patches, from well before the era of modern version control. It has many strange behaviours and footguns. Debian&amp;rsquo;s ancient and obsolete tarballs-and-patches &lt;a href="https://manpages.debian.org/bookworm/dpkg-dev/dpkg-source.1.en.html"&gt;source package format&lt;/a&gt; (which I designed the initial version of in 1993) nowadays uses quilt, at least for most packages.
&lt;p&gt;You don&amp;rsquo;t want to deal with any of this nonsense. You don&amp;rsquo;t want to learn quilt, and suffer its misbehaviours. You don&amp;rsquo;t want to learn about Debian source packages and wrestle dpkg-source.
&lt;p&gt;Happily, you don&amp;rsquo;t need to.
&lt;h3&gt;&lt;a name="just-use-dgit"&gt;Just use dgit&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;One of dgit&amp;rsquo;s main objectives is to minimise the amount of Debian craziness you need to learn. dgit aims to empower you to make changes to the software you&amp;rsquo;re running, conveniently and with a minimum of fuss.
&lt;p&gt;You can use dgit to get the source code to a Debian package, as a git tree, with &lt;code&gt;dgit clone&lt;/code&gt; (and &lt;code&gt;dgit fetch&lt;/code&gt;). The git tree can be made into a binary package directly.
&lt;p&gt;The only things you really need to know are:
&lt;ol type="1"&gt;&lt;li&gt;&lt;p&gt;By default dgit fetches from Debian unstable, the main work-in-progress branch. You may want something like &lt;code&gt;dgit clone PACKAGE bookworm,-security&lt;/code&gt; (yes, with a comma).

&lt;li&gt;&lt;p&gt;You probably want to edit &lt;code&gt;debian/changelog&lt;/code&gt; to make your packages have a different version number.

&lt;li&gt;&lt;p&gt;To build binaries, run &lt;code&gt;dpkg-buildpackage -uc -b&lt;/code&gt;.

&lt;li&gt;&lt;p&gt;Debian package builds are often disastrously messy: builds might modify source files; and the official &lt;code&gt;debian/rules clean&lt;/code&gt; can be inadequate, or crazy. Always commit before building, and use &lt;code&gt;git clean&lt;/code&gt; and &lt;code&gt;git reset --hard&lt;/code&gt; instead of running clean rules from the package.

&lt;/p&gt;&lt;/li&gt;&lt;/p&gt;&lt;/li&gt;&lt;/p&gt;&lt;/li&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;
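&lt;p&gt;Putting the above together, a typical cycle might look like this. (&lt;code&gt;hello&lt;/code&gt; is just an example package; &lt;code&gt;dch&lt;/code&gt;, from the &lt;code&gt;devscripts&lt;/code&gt; package, is merely a convenient way to do the changelog edit from item 2.)
&lt;pre&gt;&lt;code&gt;dgit clone hello bookworm,-security
cd hello
# ... edit the source ...
dch --local +local 'Fix the thing.'   # bump the version
git commit -a -m 'Fix the thing.'     # commit before building!
dpkg-buildpackage -uc -b
# clean up with git, not the package's clean rules:
git clean -xdff &amp;&amp; git reset --hard&lt;/code&gt;&lt;/pre&gt;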
&lt;p&gt;Don&amp;rsquo;t try to make a Debian source package. (Don&amp;rsquo;t read the &lt;code&gt;dpkg-source&lt;/code&gt; manual!) Instead, to preserve and share your work, use the git branch.
&lt;p&gt;&lt;code&gt;dgit pull&lt;/code&gt; or &lt;code&gt;dgit fetch&lt;/code&gt; can be used to get updates.
&lt;p&gt;There is a more comprehensive tutorial, with example runes, in the &lt;a href="https://manpages.debian.org/stable/dgit/dgit-user.7.en.html"&gt;dgit-user(7)&lt;/a&gt; manpage. (There is of course &lt;a href="https://manpages.debian.org/bookworm/dgit/dgit.1.en.html"&gt;complete reference documentation&lt;/a&gt;, but you don&amp;rsquo;t need to bother reading it.)
&lt;h3&gt;&lt;a name="objections"&gt;Objections&lt;/a&gt;&lt;/h3&gt;
&lt;h4&gt;&lt;a name="but-i-dont-want-to-learn-yet-another-tool"&gt;But I don&amp;rsquo;t want to learn &lt;em&gt;yet another&lt;/em&gt; tool&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;One of dgit&amp;rsquo;s main goals is to save people from learning things you don&amp;rsquo;t need to. It aims to be straightforward, convenient, and (so far as Debian permits) unsurprising.
&lt;p&gt;So: don&amp;rsquo;t &lt;em&gt;learn&lt;/em&gt; dgit. Just run it and it will be fine :-).
&lt;h4&gt;&lt;a name="shouldnt-i-be-using-official-debian-git-repos"&gt;Shouldn&amp;rsquo;t I be using &amp;ldquo;official&amp;rdquo; Debian git repos?&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Absolutely not.&lt;/strong&gt;
&lt;p&gt;Unless you are a Debian expert, these can be terrible beartraps. One possible outcome is that you might build an apparently working program &lt;em&gt;but without the security patches&lt;/em&gt;. Yikes!
&lt;p&gt;I discussed this in more detail in 2021 in &lt;a href="https://diziet.dreamwidth.org/9556.html"&gt;another blog post plugging dgit&lt;/a&gt;.
&lt;h4&gt;&lt;a name="gosh-is-debian-really-this-bad"&gt;Gosh, is Debian really this bad?&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;Yes. On behalf of the Debian Project, I apologise.
&lt;p&gt;Debian is a very conservative institution. Change usually comes very slowly. (And when rapid or radical change has been forced through, the results haven&amp;rsquo;t always been pretty, either technically or socially.)
&lt;p&gt;Sadly this means that sometimes much needed change can take a very long time, if it happens at all. But this tendency also provides the stability and reliability that people have come to rely on Debian for.
&lt;h4&gt;&lt;a name="im-a-debian-maintainer.-you-tell-me-dgit-is-something-totally-different"&gt;I&amp;rsquo;m a Debian maintainer. You tell &lt;em&gt;me&lt;/em&gt; dgit is something totally different!&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;dgit is, in fact, a general bidirectional gateway between the Debian archive and git.
&lt;p&gt;So yes, dgit is also a tool for Debian uploaders. You should use it to do your uploads, whenever you can. It&amp;rsquo;s more convenient and more reliable than &lt;code&gt;git-buildpackage&lt;/code&gt; and &lt;code&gt;dput&lt;/code&gt; runes, and produces better output for users. You too can start to forget how to deal with source packages!
&lt;p&gt;A full treatment of this is beyond the scope of this blog post.&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=17579" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:16771</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/16771.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=16771"/>
    <title>DigiSpark (ATTiny85) - Arduino, C, Rust, build systems</title>
    <published>2023-10-22T16:02:09Z</published>
    <updated>2023-10-22T16:04:48Z</updated>
    <category term="computers"/>
    <category term="rust"/>
    <dw:security>public</dw:security>
    <dw:reply-count>5</dw:reply-count>
    <content type="html">&lt;p&gt;Recently I completed a small project, including an embedded microcontroller. For me, using the popular Arduino IDE, and C, was a mistake. The experience with Rust was better, but still very exciting, and not in a good way.&lt;/p&gt;
&lt;p&gt;Here follows the rant.&lt;/p&gt;


&lt;ul&gt;
&lt;li&gt;&lt;a href="#introduction"&gt;Introduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#arduino-ide"&gt;Arduino IDE&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#writing-c-again"&gt;Writing C again&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#rust-on-the-digispark"&gt;Rust on the DigiSpark&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#riir-rewrite-it-in-rust"&gt;RIIR (Rewrite It In Rust)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#an-offer-of-help"&gt;An offer of help&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#conclusions"&gt;Conclusions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;a name="cutid1"&gt;&lt;/a&gt;
&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;In a recent project (I’ll write about the purpose, and the hardware in another post) I chose to use a &lt;a href="http://digistump.com/products/1"&gt;DigiSpark&lt;/a&gt; board. This is a small board with a USB-A tongue (but not a proper plug), and an &lt;a href="https://www.microchip.com/en-us/product/ATtiny85"&gt;ATTiny85&lt;/a&gt; microcontroller. This chip has 8 pins and is quite small really, but it was plenty for my application. By choosing something popular, I hoped for convenient hardware, and an uncomplicated experience.&lt;/p&gt;
&lt;p&gt;Convenient hardware, I got.&lt;/p&gt;
&lt;h3&gt;Arduino IDE&lt;/h3&gt;
&lt;p&gt;The usual way to program these boards is via an IDE. I thought I’d go with the flow and try that. I knew these were closely related to actual Arduinos and saw that the IDE package &lt;a href="https://packages.debian.org/bookworm/arduino"&gt;&lt;code&gt;arduino&lt;/code&gt;&lt;/a&gt; was in Debian.&lt;/p&gt;
&lt;p&gt;But it turns out that the Debian package’s version doesn’t support the DigiSpark. (AFAICT from the list it offered me, I’m not sure it supports &lt;em&gt;any&lt;/em&gt; ATTiny85 board.) Also, disturbingly, its “board manager” seemed to be offering to “install” board support, suggesting it would download “stuff” from the internet and run it. That wouldn’t be acceptable for my main laptop.&lt;/p&gt;
&lt;p&gt;I didn’t expect to be doing much programming or debugging, and the project didn’t have significant security requirements: the chip, in my circuit, has only a very narrow ability to do anything to the real world, and no network connection of any kind. So I thought it would be tolerable to do the project on my low-security “video laptop”. That’s the machine where I’m prepared to say “yes” to installing random software off the internet.&lt;/p&gt;
&lt;p&gt;So I went to the &lt;a href="https://www.arduino.cc/en/software/"&gt;upstream Arduino site&lt;/a&gt; and downloaded a tarball containing the Arduino IDE. After unpacking that in &lt;code&gt;/opt&lt;/code&gt; it ran and produced a pointy-clicky IDE, as expected. I had already found &lt;a href="https://startingelectronics.org/tutorials/arduino/digispark/digispark-linux-setup/"&gt;a 3rd-party tutorial&lt;/a&gt; saying I needed to add a magic URL (from the DigiSpark’s vendor) in the preferences. That indeed allowed it to download a whole pile of stuff. Compilers, bootloader clients, god knows what.&lt;/p&gt;
&lt;p&gt;However, my tiny test program didn’t make it to the board. Half-buried in a too-small window was an error message about the board’s bootloader (“Micronucleus”) being too new.&lt;/p&gt;
&lt;p&gt;The boards I had came pre-flashed with micronucleus 2.2, which is hardly new. But even so, the official Arduino IDE (or maybe the DigiSpark’s board package?) still contains an old version. So now we have all the downsides of &lt;code&gt;curl|bash&lt;/code&gt;-ware, but we’re lacking the “it’s up to date” and “it just works” upsides.&lt;/p&gt;
&lt;p&gt;Further digging found some &lt;a href="https://digistump.com/board/index.php/topic,1834.msg13109.html#msg13109"&gt;random forum posts&lt;/a&gt; which suggested simply downloading &lt;a href="https://github.com/micronucleus/micronucleus"&gt;a newer micronucleus&lt;/a&gt; and manually stuffing it into the right place: one overwrites a specific file, in the middle of the heaps of stuff that the Arduino IDE’s board support downloader squirrels away in your home directory. (In my case, the home directory of the untrusted shared user on the video laptop.)&lt;/p&gt;
&lt;p&gt;So, “whatever”. I did that. And it worked!&lt;/p&gt;
&lt;p&gt;Having demo’d my ability to run code on the board, I set about writing my program.&lt;/p&gt;
&lt;h3&gt;Writing C again&lt;/h3&gt;
&lt;p&gt;The programming language offered via the Arduino IDE is C.&lt;/p&gt;
&lt;p&gt;It’s been a little while since I started a new thing in C, after having spent so much of the last several years writing Rust. C’s primitiveness quickly started to grate, and the program couldn’t easily be as DRY as I wanted (Don’t Repeat Yourself, see &lt;a href="https://arxiv.org/abs/1210.0530"&gt;Wilson et al, 2012&lt;/a&gt;, §4, p.6). But, I carried on; after all, this was going to be quite a small job.&lt;/p&gt;
&lt;p&gt;Soon enough I had a program that looked right and compiled.&lt;/p&gt;
&lt;p&gt;Before testing it in circuit, I wanted to do some QA. So I wrote a simulator harness that &lt;code&gt;#include&lt;/code&gt;d my Arduino source file, and provided imitations of the few Arduino library calls my program used. As a side advantage, I could build and run the simulation on my main machine, in my normal development environment (Emacs, &lt;code&gt;make&lt;/code&gt;, etc.). The simulator runs confirmed the correct behaviour. (Perhaps there would have been some more faithful simulation tool, but the Arduino IDE didn’t seem to offer it, and I wasn’t inclined to go further down that kind of path.)&lt;/p&gt;
&lt;p&gt;So I got the video laptop out, and used the Arduino IDE to flash the program. It didn’t run properly. It hung almost immediately. Some very ad-hoc debugging via led-blinking (like printf debugging, only much worse) convinced me that my problem was as follows:&lt;/p&gt;
&lt;p&gt;Arduino C has 16-bit ints. My test harness was on my 64-bit Linux machine. C was autoconverting things (when building for the microcontroller). The way the Arduino IDE ran the compiler didn’t pass the warning options necessary to spot narrowing implicit conversions. Those warnings aren’t the default in C in general &lt;del&gt;because C compilers hate us all&lt;/del&gt; for compatibility reasons.&lt;/p&gt;
&lt;p&gt;I don’t know why those warnings are not the default in the Arduino IDE, but my guess is that they didn’t want to bother poor novice programmers with messages from the compiler explaining how their program is quite possibly wrong. After all, users don’t like error messages so we shouldn’t report errors. And novice programmers are especially fazed by error messages so it’s better to just let them struggle themselves with the arcane mysteries of undefined behaviour in C?&lt;/p&gt;
&lt;p&gt;The Arduino IDE does offer a dropdown for “compiler warnings”. The default is None. Setting it to All didn’t produce anything about my integer overflow bugs. And, the output was very hard to find anyway because the “log” window has a constant stream of strange messages from &lt;code&gt;javax.jmdns&lt;/code&gt;, with hex DNS packet dumps. WTF.&lt;/p&gt;
&lt;p&gt;Other things that were vexing about the Arduino IDE: it has fairly fixed notions (which don’t seem to be documented) about how your files and directories ought to be laid out, and magical machinery for finding things you put “nearby” its “sketch” (as it calls them) and sticking them in its ear, causing lossage. It has a tendency to become confused if you edit files under its feet (e.g. with &lt;code&gt;git checkout&lt;/code&gt;). It wasn’t really very suited to a workflow where principal development occurs elsewhere.&lt;/p&gt;
&lt;p&gt;And, important settings such as the project’s clock speed, or even the target board, or &lt;em&gt;the compiler warning settings to use&lt;/em&gt; weren’t stored in the project directory along with the actual code. I didn’t look too hard, but I presume they must be in a dotfile somewhere. This is madness.&lt;/p&gt;
&lt;p&gt;Apparently there is an Arduino CLI too. But I was already quite exasperated, and I didn’t like the idea of going so far off the beaten path, when the whole point of using all this was to stay with popular tooling and share fate with others. (How do these others cope? I have no idea.)&lt;/p&gt;
&lt;p&gt;As for the integer overflow bug:&lt;/p&gt;
&lt;p&gt;I didn’t seriously consider trying to figure out how to control in detail the C compiler options passed by the Arduino IDE. (Perhaps this is &lt;a href="https://forum.arduino.cc/t/compiler-options/468759"&gt;possible, but not really documented&lt;/a&gt;?) I did consider trying to run a cross-compiler myself from the command line, with appropriate warning options, but that would have involved providing (or stubbing, again) the Arduino/DigiSpark libraries (and bugs could easily lurk at that interface).&lt;/p&gt;
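&lt;p&gt;(For the record, a hand-rolled cross-compilation along those lines might have looked something like this; &lt;code&gt;stubs/&lt;/code&gt; and &lt;code&gt;program.c&lt;/code&gt; are hypothetical, and providing the Arduino library imitations would have been the hard part:)&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apt install gcc-avr avr-libc
# -Wconversion is the warning that would have caught the 16-bit int bug
avr-gcc -mmcu=attiny85 -DF_CPU=16500000UL -Os \
        -Wall -Wextra -Wconversion \
        -I stubs/ program.c -o program.elf&lt;/code&gt;&lt;/pre&gt;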
&lt;p&gt;Instead, I thought, “if only I had written the thing in Rust”. But that wasn’t possible, was it? Does Rust even support this board?&lt;/p&gt;
&lt;h3&gt;Rust on the DigiSpark&lt;/h3&gt;
&lt;p&gt;I did a cursory web search and found a very useful &lt;a href="https://dylan-garrett.com/blog/rust-digispark-attiny/"&gt;blog post by Dylan Garrett&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This encouraged me to think it might be a workable strategy. I looked at the instructions there. It seemed like I could run them via the &lt;a href="https://diziet.dreamwidth.org/tag/nailing-cargo"&gt;privsep arrangement&lt;/a&gt; I use to protect myself when developing using upstream cargo packages from &lt;code&gt;crates.io&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;I got surprisingly far surprisingly quickly. It did, rather startlingly, cause my &lt;code&gt;rustup&lt;/code&gt; to download a random recent Nightly Rust, but I have six of those already for other Reasons. Very quickly I got the “trinket” LED blink example, referenced by Dylan’s blog post, to compile. Manually copying the file to the video laptop allowed me to run the previously-downloaded micronucleus executable and successfully run the blink example on my board!&lt;/p&gt;
&lt;p&gt;I thought a more principled approach to the bootloader client might allow a more convenient workflow. I found the &lt;a href="https://github.com/micronucleus/micronucleus"&gt;upstream Micronucleus&lt;/a&gt; git releases and tags, and had a look over its source code, release dates, etc. It seemed plausible, so I compiled v2.6 from source. That was a success: now I could build and install a Rust program onto my board, from the command line, on my main machine. No more pratting about with the video laptop.&lt;/p&gt;
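&lt;p&gt;For anyone wanting to reproduce that: building the bootloader client from a release tag is roughly the following (the directory layout and tag name here are from memory, so check the upstream README):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;git clone https://github.com/micronucleus/micronucleus
cd micronucleus
git checkout v2.6
make -C commandline    # needs the libusb development headers
# then, with the board freshly plugged in:
./commandline/micronucleus --run program.hex&lt;/code&gt;&lt;/pre&gt;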
&lt;p&gt;I had got further, more quickly, with Rust, than with the Arduino IDE, and the outcome and workflow were superior.&lt;/p&gt;
&lt;p&gt;So, basking in my success, I copied the directory containing the example into my own project, renamed it, and adjusted the &lt;code&gt;path&lt;/code&gt; references.&lt;/p&gt;
&lt;p&gt;That didn’t work. Now it didn’t build. Even after I copied across &lt;code&gt;.cargo/config.toml&lt;/code&gt; and &lt;code&gt;rust-toolchain.toml&lt;/code&gt; it didn’t build, producing a variety of exciting messages, depending what precisely I tried. I don’t have detailed logs of my flailing: the instructions say to build it by &lt;code&gt;cd&lt;/code&gt;’ing to the subdirectory, and, given that what I was trying to do was to &lt;em&gt;not&lt;/em&gt; follow those instructions, it didn’t seem sensible to try to prepare a proper repro so I could file a ticket. I wasn’t optimistic about investigating it more deeply myself: I have some experience of fighting cargo, and it’s not usually fun. Looking at some of the build control files, things seemed quite complicated.&lt;/p&gt;
&lt;p&gt;Additionally, not all of the crates are on &lt;code&gt;crates.io&lt;/code&gt;. I have no idea why not. So, I would need to supply “local” copies of them anyway. I decided to just &lt;a href="https://diziet.dreamwidth.org/14666.html"&gt;&lt;code&gt;git subtree add&lt;/code&gt;&lt;/a&gt; the &lt;a href="https://github.com/Rahix/avr-hal/issues"&gt;&lt;code&gt;avr-hal&lt;/code&gt; git tree&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;(That seemed better than the approach taken by the avr-hal project’s &lt;a href="https://github.com/Rahix/avr-hal-template"&gt;cargo template&lt;/a&gt;, since that template involves a cargo dependency on a foreign &lt;code&gt;git&lt;/code&gt; repository. Perhaps it would be possible to turn them into &lt;code&gt;path&lt;/code&gt; dependencies, but given that I had evidence of file-location-sensitive behaviour, which I didn’t feel like I wanted to spend time investigating, using that seems like it would possibly have invited more trouble. Also, I don’t like package templates very much. They’re a form of clone-and-hack: you end up stuck with whatever bugs or oddities exist in the version of the template which was current when you started.)&lt;/p&gt;
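&lt;p&gt;(The subtree import itself is a one-off rune roughly like this; the branch name is an assumption, and &lt;code&gt;--squash&lt;/code&gt; is optional, depending on whether you want the whole upstream history in your repository:)&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;git subtree add --prefix=avr-hal \
    https://github.com/Rahix/avr-hal main --squash&lt;/code&gt;&lt;/pre&gt;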
&lt;p&gt;Since I couldn’t get things to build outside &lt;code&gt;avr-hal&lt;/code&gt;, I edited the example, within &lt;code&gt;avr-hal&lt;/code&gt;, to refer to my (one) &lt;code&gt;program.rs&lt;/code&gt; file &lt;em&gt;outside&lt;/em&gt; &lt;code&gt;avr-hal&lt;/code&gt;, with a &lt;a href="https://salsa.debian.org/iwj/coffee-machine-keepalive-firmware/-/blob/2aec3263053b7ec2a9451b536101fd4def764eca/avr-hal/examples/trinket/src/bin/trinket-blink.rs#L10"&gt;&lt;code&gt;#[path]&lt;/code&gt; instruction&lt;/a&gt;. That’s not pretty, but it worked.&lt;/p&gt;
&lt;p&gt;I also had to write a &lt;a href="https://salsa.debian.org/iwj/coffee-machine-keepalive-firmware/-/blob/2aec3263053b7ec2a9451b536101fd4def764eca/in-avr-hal"&gt;nasty shell script&lt;/a&gt; to work around the lack of good support in my &lt;a href="https://diziet.dreamwidth.org/8848.html"&gt;&lt;code&gt;nailing-cargo&lt;/code&gt;&lt;/a&gt; privsep tool for builds where &lt;code&gt;cargo&lt;/code&gt; must be invoked in a deep subdirectory, and/or &lt;code&gt;Cargo.lock&lt;/code&gt; isn’t where it expects, and/or the &lt;code&gt;target&lt;/code&gt; directory containing build products is in a weird place. It also has to filter the output from &lt;code&gt;cargo&lt;/code&gt; to adjust the pathnames in the error messages. Otherwise, running both &lt;code&gt;cd A; cargo build&lt;/code&gt; and &lt;code&gt;cd B; cargo build&lt;/code&gt; from a &lt;code&gt;Makefile&lt;/code&gt; produces confusing sets of error messages, some of which contain filenames relative to &lt;code&gt;A&lt;/code&gt; and some relative to &lt;code&gt;B&lt;/code&gt;, making it impossible for my Emacs to reliably find the right file.&lt;/p&gt;
&lt;h3&gt;RIIR (Rewrite It In Rust)&lt;/h3&gt;
&lt;p&gt;Having got my build tooling sorted out I could go back to my actual program.&lt;/p&gt;
&lt;p&gt;I translated the main program, and the simulator, from C to Rust, more or less line-by-line. I made the Rust version of the simulator produce the same output format as the C one. That let me check that the two programs had the same (simulated) behaviour. Which they did (after fixing a few glitches in the simulator log formatting).&lt;/p&gt;
&lt;p&gt;Emboldened, I flashed the Rust version of my program to the DigiSpark. It worked right away!&lt;/p&gt;
&lt;p&gt;RIIR had caused the bug to vanish. Of course, to rewrite the program in Rust, and get it to compile, it was necessary to be careful about the types of all the various integers, so that’s not so surprising. Indeed, it was the point. I was then able to refactor the program to be a bit more natural and DRY, and improve some internal interfaces. Rust’s greater power, compared to C, made those cleanups easier, so making them worthwhile.&lt;/p&gt;
&lt;p&gt;However, when doing real-world testing I found a weird problem: my timings were off. Measured, the real program was too fast by a factor of slightly more than 2. A bit of searching (and searching my memory) revealed the cause: I was using a board template for an Adafruit Trinket. The Trinket has a clock speed of 8MHz. But the DigiSpark runs at 16.5MHz. (This is discussed in a &lt;a href="https://github.com/SpenceKonde/ATTinyCore/issues/349"&gt;ticket&lt;/a&gt; against one of the C/C++ libraries supporting the ATTiny85 chip.)&lt;/p&gt;
&lt;p&gt;The Arduino IDE had offered me a choice of clock speeds. I have no idea how that dropdown menu took effect; I suspect it was adding prelude code to adjust the clock prescaler. But my attempts to mess with the CPU clock prescaler register by hand at the start of my Rust program didn’t bear fruit.&lt;/p&gt;
&lt;p&gt;So instead, I adopted a bodge: since my code has (for code structure reasons, amongst others) only one place where it dealt with the underlying hardware’s notion of time, I simply changed my &lt;code&gt;delay&lt;/code&gt; function to &lt;a href="https://salsa.debian.org/iwj/coffee-machine-keepalive-firmware/-/blob/2aec3263053b7ec2a9451b536101fd4def764eca/avr-hal/examples/trinket/src/bin/trinket-blink.rs#L22"&gt;adjust the passed-in delay values&lt;/a&gt;, compensating for the wrong clock speed.&lt;/p&gt;
&lt;p&gt;There was probably a more principled way. For example I could have (re)based my work on either of the two &lt;a href="https://github.com/Rahix/avr-hal/pull/367"&gt;unmerged&lt;/a&gt; &lt;a href="https://github.com/Rahix/avr-hal/pull/401"&gt;open&lt;/a&gt; MRs which added proper support for the DigiSpark board, rather than abusing the Adafruit Trinket definition. But, having a nearly-working setup, and an explanation for the behaviour, I preferred the narrower fix to reopening any cans of worms.&lt;/p&gt;
&lt;h3&gt;An offer of help&lt;/h3&gt;
&lt;p&gt;As will be obvious from this posting, I’m not an expert in dev tools for embedded systems. Far from it. This area seems like quite a deep swamp, and I’m probably not the person to help drain it. (Frankly, much of the improvement work ought to be done, and paid for, by hardware vendors.)&lt;/p&gt;
&lt;p&gt;But, as a full Member of the Debian Project, I have considerable gatekeeping authority there. I also have much experience of software packaging, build systems, and release management. If anyone wants to try to improve the situation with embedded tooling in Debian, and is willing to do the actual packaging work, I would be happy to advise, and to review and sponsor your contributions.&lt;/p&gt;
&lt;p&gt;An obvious candidate: it seems to me that &lt;code&gt;micronucleus&lt;/code&gt; could easily be in Debian. Possibly a DigiSpark board definition could be provided to go with the &lt;code&gt;arduino&lt;/code&gt; package.&lt;/p&gt;
&lt;p&gt;Unfortunately, IMO Debian’s Rust packaging tooling and workflows are very poor, and the first of my &lt;a href="https://diziet.dreamwidth.org/10559.html"&gt;suggestions for improvement&lt;/a&gt; wasn’t well received. So if you need help with improving Rust packages in Debian, please talk to the &lt;a href="https://lists.debian.org/debian-rust/"&gt;Debian Rust Team&lt;/a&gt; yourself.&lt;/p&gt;
&lt;h3&gt;Conclusions&lt;/h3&gt;
&lt;p&gt;Embedded programming is still rather a mess and probably always will be.&lt;/p&gt;
&lt;p&gt;Embedded build systems can be bizarre. Documentation is scant. You’re often expected to download “board support packages” full of mystery binaries, from the board vendor (or others).&lt;/p&gt;
&lt;p&gt;Dev tooling is maddening, especially if aimed at novice programmers. You want version control? Hermetic tracking of your project’s build and install configuration? Actually to be told by the compiler when you write obvious bugs? You’re way off the beaten track.&lt;/p&gt;
&lt;p&gt;As ever, Free Software is under-resourced and the maintainers are often busy, or (reasonably) have other things to do with their lives.&lt;/p&gt;
&lt;h4&gt;All is not lost&lt;/h4&gt;
&lt;p&gt;Rust can be a significantly better bet than C for embedded software:&lt;/p&gt;
&lt;p&gt;The Rust compiler will catch a good proportion of programming errors, and an experienced Rust programmer can arrange (by suitable internal architecture) to catch nearly all of them. When writing for a chip in the middle of some circuit, where debugging involves staring at an LED or a multimeter, that’s precisely what you want.&lt;/p&gt;
&lt;p&gt;Rust embedded dev tooling was, in this case, considerably better. Still quite chaotic and strange, and less mature, perhaps. But: significantly fewer mystery downloads, and significantly less-crazy deviations from the language’s normal build system. Overall, less bad software supply chain integrity.&lt;/p&gt;
&lt;p&gt;The ATTiny85 chip, and the DigiSpark board, served my hardware needs very well. (More about the hardware aspects of this project in a future posting.)&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=16771" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:16025</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/16025.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=16025"/>
    <title>DKIM: rotate and publish your keys</title>
    <published>2023-08-15T00:15:43Z</published>
    <updated>2023-09-30T23:20:34Z</updated>
    <category term="dkim-rotate"/>
    <category term="computers"/>
    <category term="chiark"/>
    <dw:security>public</dw:security>
    <dw:reply-count>6</dw:reply-count>
    <content type="html">&lt;p&gt;If you are an email system administrator, you are probably using DKIM to sign your outgoing emails. You should be rotating the key regularly and automatically, and publishing old private keys. I have just released &lt;a href="https://www.chiark.greenend.org.uk/pipermail/sgo-software-announce/2023/000086.html"&gt;dkim-rotate 1.0&lt;/a&gt;; dkim-rotate is a tool to do this key rotation and publication.&lt;/p&gt;
&lt;p&gt;If you are an email user, your email provider ought to be doing this. If this is not done, your emails are “non-repudiable”, meaning that if they are leaked, anyone (eg, journalists, haters) can verify that they are authentic, and prove that to others. This is not desirable (for you).&lt;/p&gt;
&lt;p&gt;&lt;a name="cutid1"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Non-repudiation of emails is undesirable&lt;/h3&gt;
&lt;p&gt;This problem was described at some length in Matthew Green’s article &lt;a href="https://blog.cryptographyengineering.com/2020/11/16/ok-google-please-publish-your-dkim-secret-keys/"&gt;Ok Google: please publish your DKIM secret keys&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Avoiding non-repudiation sounds a bit like lying. After all, I’m advising creating a situation where some people can’t verify that something is true, even though it is. So I’m advocating casting doubt. Crucially, though, it’s doubt about facts that ought to be private. When you send an email, that’s between you and the recipient. Normally you don’t intend for anyone, anywhere, who happens to get a copy, to be able to verify that it was really you that sent it.&lt;/p&gt;
&lt;p&gt;In practical terms, this verifiability has already been used by journalists to verify stolen emails. Associated Press provide &lt;a href="https://github.com/associatedpress/verify-dkim"&gt;a verification tool&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Advice for all email users&lt;/h3&gt;
&lt;p&gt;As a user, you probably don’t want your emails to be non-repudiable. (Other people might want to be able to prove you sent some email, but your email system ought to serve your interests, not theirs.)&lt;/p&gt;
&lt;p&gt;So, your email provider ought to be rotating their DKIM keys, and publishing their old ones. At a rough guess, your provider probably isn’t :-(.&lt;/p&gt;
&lt;h4&gt;How to tell by looking at email headers&lt;/h4&gt;
&lt;p&gt;A quick and dirty way to guess is to have a friend look at the email headers of a message you sent. (It is important that the friend uses a different email provider, since often DKIM signatures are not applied within a single email system.)&lt;/p&gt;
&lt;p&gt;If your friend sees a &lt;code&gt;DKIM-Signature&lt;/code&gt; header then the message is DKIM signed. If they don’t, then it wasn’t. Most email traversing the public internet is DKIM signed nowadays; so if they don’t see the header, they’re probably not looking with the right tools, or they’re actually on the same email system as you.&lt;/p&gt;
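&lt;p&gt;If your friend has saved the raw message to a file, this first check is a one-liner (the filename here is just an example):&lt;/p&gt;

```shell
# Suppose the raw message is saved as test-email.mbox; here we fabricate
# a tiny stand-in so the check can be demonstrated end-to-end.
printf 'DKIM-Signature: v=1; a=rsa-sha256; d=example.org; s=sel;\nSubject: test\n\nbody\n' > test-email.mbox

# The actual check: print the signature header if present,
# exit nonzero if the message is unsigned.
grep -i -m1 '^DKIM-Signature:' test-email.mbox
```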
&lt;p&gt;In messages signed by a system running dkim-rotate, there will &lt;em&gt;also&lt;/em&gt; be a header about the key rotation, to notify potential verifiers of the situation. Other systems that avoid non-repudiation-through-DKIM might do something similar. dkim-rotate’s header looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;DKIM-Signature-Warning: NOTE REGARDING DKIM KEY COMPROMISE
 https://www.chiark.greenend.org.uk/dkim-rotate/README.txt
 https://www.chiark.greenend.org.uk/dkim-rotate/ae/aeb689c2066c5b3fee673355309fe1c7.pem&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But an email system might do half of the job of dkim-rotate: regularly rotating the key would cause the signatures of old emails to fail to verify, which is a good start. In that case there probably won’t be such a header.&lt;/p&gt;
&lt;h4&gt;Testing verification of new and old messages&lt;/h4&gt;
&lt;p&gt;You can also try verifying the signatures. This isn’t entirely straightforward, especially if you don’t have access to low-level mail tooling. Your friend will need to be able to save emails as &lt;em&gt;raw whole headers and body&lt;/em&gt;, un-decoded, un-rendered.&lt;/p&gt;
&lt;p&gt;If your friend is using a traditional Unix mail program, they should save the message as an mbox file. Otherwise, ProPublica have &lt;a href="https://www.propublica.org/nerds/authenticating-email-using-dkim-and-arc-or-how-we-analyzed-the-kasowitz-emails"&gt;instructions for attaching and transferring and obtaining the raw email&lt;/a&gt;. (Scroll down to “How to Check DKIM and ARC”.)&lt;/p&gt;
&lt;h5&gt;Checking that recent emails &lt;em&gt;are&lt;/em&gt; verifiable&lt;/h5&gt;
&lt;p&gt;Firstly, have your friend test that they can in fact verify a DKIM signature. This will demonstrate that the next test, where the verification is supposed to fail, is working properly and fails for the right reasons.&lt;/p&gt;
&lt;p&gt;Send your friend a test email now, and have them do this on a Linux system:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    # save the message as test-email.mbox
    apt install libmail-dkim-perl # or equivalent on another distro
    dkimproxy-verify &amp;lt;test-email.mbox&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should see output containing something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: pass
    ...&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the output contains &lt;code&gt;verify result: fail (body has been altered)&lt;/code&gt; then probably your friend didn’t manage to faithfully save the unaltered raw message.&lt;/p&gt;
&lt;h5&gt;Checking &lt;em&gt;old&lt;/em&gt; emails &lt;em&gt;cannot&lt;/em&gt; be verified&lt;/h5&gt;
&lt;p&gt;When you both have that working, have your friend find an older email of yours, from (say) a month ago. Perform the same steps.&lt;/p&gt;
&lt;p&gt;Hopefully they will see something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: fail (bad RSA signature)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;or maybe&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    verify result: invalid (public key: not available)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This indicates that this old email can no longer be verified. That’s good: it means that anyone who steals a copy, can’t verify it either. If it’s leaked, the journalist who receives it won’t know it’s genuine and unmodified; they should then be suspicious.&lt;/p&gt;
&lt;p&gt;If your friend sees &lt;code&gt;verify result: pass&lt;/code&gt;, then they have verified that that old email of yours is genuine. Anyone who had a copy of the mail can do that. This is good for email thieves, but not for you.&lt;/p&gt;
&lt;h3&gt;For email admins: announcing dkim-rotate 1.0&lt;/h3&gt;
&lt;p&gt;I have been running dkim-rotate 0.4 on my infrastructure since last August, and I had entirely forgotten about it: it has run flawlessly for a year. I was reminded of the topic by seeing DKIM in other blog posts. Obviously, it is time to decree that &lt;a href="https://www.chiark.greenend.org.uk/pipermail/sgo-software-announce/2023/000086.html"&gt;dkim-rotate is 1.0&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you’re a mail system administrator, your users are best served if you use something like dkim-rotate. The package is available in Debian stable, and supports Exim out of the box, but other MTAs should be easy to support too, via some simple ad-hoc scripting.&lt;/p&gt;
&lt;h3&gt;Limitation of this approach&lt;/h3&gt;
&lt;p&gt;Even with this key rotation approach, emails remain non-repudiable for a short period after they’re sent - typically, a few days.&lt;/p&gt;
&lt;p&gt;Someone who obtains a leaked email very promptly, and shows it to the journalist (for example) right away, can still convince the journalist. This is not great, but at least it doesn’t apply to the vast bulk of your email archive.&lt;/p&gt;
&lt;p&gt;There are possible email protocol improvements which might help, but they’re quite out of scope for this article.&lt;/p&gt;
&lt;address&gt;Edited 2023-10-01 00:20 +01:00 to fix some grammar&lt;/address&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=16025" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:15336</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/15336.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=15336"/>
    <title>Installing Debian bookworm without systemd</title>
    <published>2023-07-19T13:25:46Z</published>
    <updated>2023-07-19T13:26:01Z</updated>
    <category term="computers"/>
    <category term="debian"/>
    <dw:security>public</dw:security>
    <dw:reply-count>10</dw:reply-count>
    <content type="html">&lt;h2&gt;Instructions&lt;/h2&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;p&gt;Get the official installation image from &lt;a href="https://www.debian.org/releases/bookworm/amd64/ch04s01.en.html"&gt;the usual locations&lt;/a&gt;. I got the netinst CD image via BitTorrent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Boot from the image and go through the installation in the normal way.&lt;/p&gt;
&lt;ol type="a"&gt;
&lt;li&gt;&lt;p&gt;You may want to select an alternative desktop environment (and unselect GNOME). These steps have been tested with MATE.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stop when you are asked to remove the installation media and reboot.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Press Alt + Right arrow to switch to the text VC. Hit return to activate the console and run the following commands (answering yes as appropriate):&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;chroot /target bash
apt-get install sysvinit-core elogind ntp dbus-x11
apt-get autoremove
exit&lt;/code&gt;&lt;/pre&gt;
&lt;ol start="4" type="1"&gt;
&lt;li&gt;&lt;p&gt;Observe the output from the &lt;code&gt;apt-get install&lt;/code&gt;. If your disk arrangements are unusual, that may generate some error messages from &lt;code&gt;update-initramfs&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go back to the installer VC with Alt + Left arrow. If there were no error messages above, you may tell it to reboot.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If there were error messages (for example, I found that if there was disk encryption, alarming messages were printed), tell the installer to go “Back”. Then ask it to “Install GRUB bootloader” (again). After that has completed, you may reboot.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enjoy your Debian system without systemd.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
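&lt;p&gt;As a quick sanity check after the reboot, you can confirm which init system is actually running by looking at process 1:&lt;/p&gt;

```shell
# PID 1's name identifies the init system:
# "init" under sysvinit, "systemd" under systemd
cat /proc/1/comm
```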
&lt;p&gt;&lt;a name="cutid1"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Discussion&lt;/h2&gt;
&lt;p&gt;This is pleasingly straightforward, albeit with an ugly wart. This recipe was not formally developed and tested; it’s just what happened when I tried to actually perform this task.&lt;/p&gt;
&lt;p&gt;The official installation guide has &lt;a href="https://wiki.debian.org/Init#Changing_the_init_system_-_at_installation_time"&gt;similar instructions&lt;/a&gt; although they don’t seem to have the initramfs workaround.&lt;/p&gt;
&lt;h3&gt;&lt;code&gt;update-initramfs&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;The need to go back and have the installer reinstall grub is because if your storage is not very straightforward, the &lt;code&gt;update-initramfs&lt;/code&gt; caused by &lt;code&gt;apt-get install&lt;/code&gt; apparently doesn’t have all the right context. I haven’t investigated this at all; indeed, I don’t even really know that the initramfs generated in step 3 above was broken, although the messages did suggest to me that important pieces of config might have been omitted. Instead, I simply chose to bet that it might be broken, but that the installer would know what to do. So I used the installer’s “install GRUB bootloader” option, which does regenerate the initramfs. So, I don’t know that step 6 is necessary.&lt;/p&gt;
&lt;p&gt;In principle it would be better to do the switch from systemd to sysvinit earlier in the installation process, and under the control of the installer. But by default the installer goes straight from the early setup questions through to the “set the time” or “reboot” questions, without stopping. One could use the expert mode, or modify the command line, or something, but all of those things are, in practice, a lot more typing and/or interaction. And as far as I’m aware the installer doesn’t have an option for avoiding systemd.&lt;/p&gt;
&lt;h3&gt;The &lt;code&gt;apt-get install&lt;/code&gt; line&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;sysvinit-core&lt;/code&gt; is the principal part of the sysvinit init system. Asking to install that causes the deinstallation of systemd’s init and ancillary packages.&lt;/p&gt;
&lt;p&gt;systemd refuses to allow itself to be deinstalled, if it is already running, so if you boot into the systemd system you can’t then switch init system. This is why the switch is best done at install time. If you’re too late, there are &lt;a href="https://wiki.debian.org/Init#Changing_the_init_system_-_on_a_running_system"&gt;instructions for changing init system post-installation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;elogind&lt;/code&gt; is a forked version of some of systemd’s user desktop session functionality. In practice modern desktop environments need this; without it, apt will want to remove things you probably want to keep. Even if you force it, you may find that your desktop environment can’t adjust the audio volume, etc.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ntp&lt;/code&gt; is needed because nowadays the default network time client is systemd-timesyncd (which is a bad idea even on systems &lt;em&gt;with&lt;/em&gt; systemd as init). We need to specify it because the package dependencies don’t automatically give you any replacement for systemd-timesyncd.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;dbus-x11&lt;/code&gt; is a glue component. In theory it ought to be installed automatically. However, there have been problems with the dependencies that meant that (for example) asking for emacs would try to switch the init system. Specifying &lt;code&gt;dbus-x11&lt;/code&gt; explicitly is a workaround for that, which I nowadays adopt out of caution. Perhaps it is no longer needed.&lt;/p&gt;
&lt;p&gt;(On existing systems, it may be necessary to manually install &lt;code&gt;orphan-sysvinit-scripts&lt;/code&gt;, which exists as a suboptimal technical workaround for the sociopolitical problems of hostile package maintainers and Debian’s governance failures. The recipe above seems to install this package automatically.)&lt;/p&gt;
&lt;h3&gt;usrmerge&lt;/h3&gt;
&lt;p&gt;This recipe results in a system which has merged-/usr via symlinks. This configuration is a bad one. Ideally usrmerge-via-symlinks would be avoided.&lt;/p&gt;
&lt;p&gt;The un-merged system is declared “not officially supported by Debian” and key packages try very hard to force it on users. However, merged-/usr-via-symlinks is full of bugs (mostly affecting package management) which are far too hard to fix (a project by some folks to try to do so has given up).&lt;/p&gt;
&lt;p&gt;I suspect un-merged systems will suffer from fewer bugs in practice. But I don’t know how to persuade d-i to make one.&lt;/p&gt;
&lt;h3&gt;Installer images&lt;/h3&gt;
&lt;p&gt;I think there is room in the market for an unofficial installer image which installs without systemd and perhaps without usrmerge. I don’t have the effort for making such a thing myself.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Installing Debian without systemd is fairly straightforward.&lt;/p&gt;
&lt;p&gt;Operating Debian without systemd is a pleasure and every time one of my friends has some systemd-induced lossage I get to feel smug.&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=15336" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:14666</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/14666.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=14666"/>
    <title>Never use git submodules</title>
    <published>2023-03-02T19:48:20Z</published>
    <updated>2023-03-02T19:48:20Z</updated>
    <category term="computers"/>
    <category term="git"/>
    <dw:security>public</dw:security>
    <dw:reply-count>3</dw:reply-count>
    <content type="html">&lt;h2&gt;tl;dr&lt;/h2&gt;
&lt;p&gt;git submodules are &lt;em&gt;always the wrong solution&lt;/em&gt;. Yes, even to the problem they were specifically invented to solve.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#what-is-wrong-with-git-submodules"&gt;What is wrong with git submodules&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#better-alternatives-to-git-submodules"&gt;Better alternatives to git submodules&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#use-git-subtree"&gt;Use git subtree&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#just-have-a-monorepo"&gt;Just have a monorepo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#use-a-package-management-system-and-explicit-dependencies"&gt;Use a package management system, and explicit dependencies&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#use-the-multiple-repository-tool-mr"&gt;Use the multiple repository tool &lt;code&gt;mr&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#have-your-build-expect-to-find-the-dependency-in-..-its-parent-dir"&gt;Have your build expect to find the dependency in &lt;code&gt;..&lt;/code&gt;, its parent dir&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#provide-an-ad-hoc-in-tree-script-to-download-the-dependency"&gt;Provide an ad-hoc in-tree script to download the dependency&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;a name="cutid1"&gt;&lt;/a&gt;
&lt;h2&gt;What is wrong with git submodules&lt;/h2&gt;
&lt;p&gt;There are two principal sets of reasons why they are terrible:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fundamentally wrong design. They break the git data model in multiple ways. Critical ways include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A git object in your repository is no longer necessarily resolvable/interpretable to meaningful data. (Shallow clones have the same issue but only with respect to history. git submodules do this for the contents of the tree.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;git submodules violate the usual rule that all URLs, hostnames, and so on, used by git, are provided by the git configuration and the user, rather than appearing in-tree.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;git submodules introduce completely new states your tree can be in, many of them strange or undesirable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wrong behaviour in detail. git’s behaviour with submodules is often buggy or bizarre. Some of these problems are implied by the design, but many of them are additional unforced errors. Some of the defects occur even if you don’t &lt;code&gt;git submodule init&lt;/code&gt;, so affect &lt;em&gt;all&lt;/em&gt; programs and users which interact with your tree.&lt;/p&gt;
&lt;p&gt;Just a few examples of lossage with submodules:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;git checkout no longer reliably switches branches&lt;/li&gt;
&lt;li&gt;editing files and trying to commit them no longer reliably works&lt;/li&gt;
&lt;li&gt;locally pulling a new version from main no longer reliably works&lt;/li&gt;
&lt;li&gt;git ls-files can disagree with git log and git cat-file&lt;/li&gt;
&lt;li&gt;URLs from .gitmodules: they can be malicious; they can end up cached in individual trees’ (individual users’) .git/config; etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Generally, normal git operations like git checkout and git pull can leave the submodule in a weird state where you have to run one of the git submodule commands to fix it up. Often the easiest way (especially for a non-expert) to get back to a normal state is to throw the whole tree away and re-clone it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Ultimately, this means that the author of a program which works with git has two options:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;p&gt;Don’t support submodules. Tell users of your program who file bugs involving submodules that they’re not supported.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Do an enormous amount of extra work: At every point you interact with git, experiment to see what bizarre behaviour submodules exhibit, and write code to deal with all the possibilities.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;As a result, a substantial subset of git tooling is broken in the presence of submodules. This is especially true of local automation and tooling, which is otherwise an effective way of improving your processes. But, of course this also applies to git itself! Which is one of the causes of the bugs that git itself has when working with submodules.&lt;/p&gt;
&lt;h2&gt;Better alternatives to git submodules&lt;/h2&gt;
&lt;p&gt;In my opinion git submodule is &lt;em&gt;never&lt;/em&gt; the right answer. Often, git submodule is the &lt;em&gt;worst&lt;/em&gt; answer and &lt;em&gt;any&lt;/em&gt; of the following would be better.&lt;/p&gt;
&lt;h3&gt;Use git subtree&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://manpages.debian.org/stable/git-man/git-subtree.1.en.html"&gt;git subtree&lt;/a&gt; solves many of the same problems as git submodule, but it does not violate the git data model.&lt;/p&gt;
&lt;p&gt;Use this when:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You want to track and use, in-tree, a separate project which ought to have its own identity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The separate project is of reasonable size (compared to your own).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With git subtree, people and programs that do not need to specifically interact with the upstream for the subtree, do not need to know that it even &lt;em&gt;is&lt;/em&gt; a subtree. They can make and switch branches, commit, and so on, as they like.&lt;/p&gt;
&lt;p&gt;git subtree can automatically separate out changes made in the downstream, for application to (or submission to) the upstream branch.&lt;/p&gt;
&lt;p&gt;I have used git subtree and found it capable and convenient, and pleasingly straightforward.&lt;/p&gt;
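&lt;p&gt;For a flavour of the workflow, here is a sketch (all paths, URLs, and names are hypothetical; see the manpage for the real details):&lt;/p&gt;

```shell
# Demonstration with two throwaway local repositories; with a real
# upstream you would use its URL instead of "$up".
tmp=$(mktemp -d)
up=$tmp/libfoo                          # stand-in for the upstream project
git init -q -b main "$up"
echo 'hello from upstream' > "$up/README"
git -C "$up" add README
git -C "$up" -c user.email=me@example.org -c user.name=Me commit -q -m 'upstream commit'

git init -q -b main "$tmp/myproject"    # your own project
cd "$tmp/myproject"
git -c user.email=me@example.org -c user.name=Me commit -q --allow-empty -m 'initial'

# The interesting part: import the upstream under vendor/libfoo,
# squashing its history into a single commit
git -c user.email=me@example.org -c user.name=Me \
    subtree add --prefix=vendor/libfoo "$up" main --squash

ls vendor/libfoo    # the upstream files are now part of this tree
```

&lt;p&gt;Later, &lt;code&gt;git subtree pull&lt;/code&gt; (with the same &lt;code&gt;--prefix&lt;/code&gt;) fetches new upstream changes, and &lt;code&gt;git subtree push&lt;/code&gt; splits out your local changes to the subtree for submission upstream.&lt;/p&gt;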
&lt;h3&gt;Just have a monorepo&lt;/h3&gt;
&lt;p&gt;If you are the upstream for all the pieces, it is often more convenient to merge the git trees into a single git tree with a single history.&lt;/p&gt;
&lt;p&gt;Use this when:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The maintenance of all the pieces is &lt;em&gt;organisationally&lt;/em&gt; and &lt;em&gt;politically&lt;/em&gt; cohesive enough that you can share a git history.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The whole monorepo would be of reasonable size.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Any long-running branches you need to make are for release channels, or similar, not for having separate versions of the internal dependencies for the different pieces in the monorepo.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Use a package management system, and explicit dependencies&lt;/h3&gt;
&lt;p&gt;Instead of subsuming the dependency’s tree into your own, give the dependency a proper API and reuse it via a package management system. (If necessary, maintain a proper downstream fork of the dependency.)&lt;/p&gt;
&lt;p&gt;The package manager might be:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;a distro-style package management system such as &lt;code&gt;apt&lt;/code&gt;+&lt;code&gt;dpkg&lt;/code&gt;+&lt;a href="https://manpages.debian.org/stable/sbuild/sbuild.1.en.html"&gt;&lt;code&gt;sbuild&lt;/code&gt;&lt;/a&gt; (or a proprietary/private dependency-managing build system); or&lt;/li&gt;
&lt;li&gt;a language specific package manager (eg &lt;code&gt;cargo&lt;/code&gt;).&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Use this when:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You are already using, or familiar with, a suitable package manager,&lt;/li&gt;
&lt;li&gt;The API provided by the dependency can be reasonably represented in that package manager (even if unstably).&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Use the multiple repository tool &lt;code&gt;mr&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://manpages.debian.org/stable/myrepos/mr.1.en.html"&gt;&lt;code&gt;mr(1)&lt;/code&gt;&lt;/a&gt; is a tool which lets you conveniently manage a possibly large number of trees, usually as sibling directories.&lt;/p&gt;
&lt;p&gt;I haven’t used this myself but it looks capable and straightforward. As I understand it, you’d usually use this in combination with the &lt;code&gt;..&lt;/code&gt;-based dependency expectation I describe below.&lt;/p&gt;
&lt;p&gt;It seems like it would be good when your project has a fair number of “foreign” dependencies.&lt;/p&gt;
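&lt;p&gt;For illustration, an &lt;code&gt;.mrconfig&lt;/code&gt; listing sibling trees looks something like this (paths and URLs are hypothetical, and since I haven’t used it myself, do check the manpage):&lt;/p&gt;

```ini
# ~/src/.mrconfig: each section names a directory (relative to this
# file), with the command used to create it initially
[myproject]
checkout = git clone https://example.org/myproject.git myproject

[libfoo]
checkout = git clone https://example.org/libfoo.git libfoo
```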
&lt;h3&gt;Have your build expect to find the dependency in &lt;code&gt;..&lt;/code&gt;, its parent dir&lt;/h3&gt;
&lt;p&gt;This is a very lightweight solution. Just have the files in your tree refer to the dependencies with &lt;code&gt;../dependency-name/&lt;/code&gt;. Expect users (and programs) to manually clone and update the right dependency version, alongside your project.&lt;/p&gt;
&lt;p&gt;Consider this when:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Your project is at an early stage and you want to get going quickly and worry about this build system stuff later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The dependency is disabled by default, and almost never needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Every program or human that wants to run a build that needs the dependency will need to know to clone the dependency, and keep it up to date. This will be a nuisance, and if you’re doing CI it will mean some custom CI scripting. But this is all probably still better than git submodules. At least it will be completely obvious to everyone what’s going on, how to make changes to the dependency, and so on.&lt;/p&gt;
&lt;h3&gt;Provide an ad-hoc in-tree script to download the dependency&lt;/h3&gt;
&lt;p&gt;As a last resort, you can embed the URL to find your dependency, and the instructions for downloading it, in your top-level package’s build system. This is clumsy and awkward, but, astonishingly, it is less painful than git submodules.&lt;/p&gt;
&lt;p&gt;Use this when:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Most people using/building your software won’t need the dependency at all.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In particular, most people won’t need to edit the dependency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;None of the other options are suitable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Usually the downstream build runes should git clone the dependency, and the downstream tree should name the precise commitid needed.&lt;/p&gt;
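&lt;p&gt;A minimal such rune might look like this sketch (the function name, layout, and parameterisation are my invention, purely for illustration):&lt;/p&gt;

```shell
# Ad-hoc dependency fetcher: clone the dependency and pin it to a
# precise commit.  A real build system would hardcode the URL and
# commit id; they are parameters here so the sketch stays generic.
fetch_dep() {
    url=$1; commit=$2; dest=$3
    test -d "$dest" || git clone -q "$url" "$dest"
    git -C "$dest" fetch -q origin
    git -C "$dest" checkout -q "$commit"
}
# usage (hypothetical): fetch_dep https://example.org/libfoo.git COMMITID build/libfoo
```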
&lt;p&gt;Try to avoid this situation. It’s not a good place to be. But:&lt;/p&gt;
&lt;h4&gt;Yes, really, git submodule is worse than ad-hoc Makefile runes&lt;/h4&gt;
&lt;p&gt;The ad-hoc shell script route feels very hacky. But it has some important advantages over git submodule. In particular, unlike with git submodule, this approach (like most of the others I suggest) means that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;All tooling that expects to clone your repository, make changes, do builds, track changes, etc., will work correctly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You are in precise control of when/whether the download occurs: ie, you can arrange to download the dependency precisely when it’s needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You are in precise control of your version management and checking of the dependency: your script controls what version of the dependency to use, and whether that should be “pinned” or dynamically updated.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I’m not advocating ad-hoc runes over git submodules because I like ad-hoc runes or think they’re a good idea. It’s just that git submodule is really so very very bad.&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=14666" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:14345</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/14345.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=14345"/>
    <title>derive-adhoc: powerful pattern-based derive macros for Rust</title>
    <published>2023-02-03T00:29:28Z</published>
    <updated>2023-02-03T00:34:28Z</updated>
    <category term="derive-adhoc"/>
    <category term="computers"/>
    <category term="rust"/>
    <dw:security>public</dw:security>
    <dw:reply-count>2</dw:reply-count>
    <content type="html">&lt;h2&gt;tl;dr&lt;/h2&gt;
&lt;p&gt;Have you ever wished that you could write a new &lt;code&gt;derive&lt;/code&gt; macro without having to mess with procedural macros?&lt;/p&gt;
&lt;p&gt;Now you can!&lt;/p&gt;
&lt;p&gt;&lt;a href="https://docs.rs/derive-adhoc/latest/derive_adhoc"&gt;&lt;code&gt;derive-adhoc&lt;/code&gt;&lt;/a&gt; lets you write a &lt;code&gt;#[derive]&lt;/code&gt; macro, using a template syntax which looks a lot like &lt;code&gt;macro_rules!&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;It’s still 0.x - so unstable, and maybe with sharp edges. We want feedback!&lt;/p&gt;
&lt;p&gt;And, the documentation is still very terse. It doesn’t &lt;em&gt;omit&lt;/em&gt; anything, but it is severely lacking in examples, motivation, and so on. It will suit readers who enjoy dense reference material.&lt;/p&gt;
&lt;a name="cutid1"&gt;&lt;/a&gt;
&lt;h2&gt;Background - Rust’s two (main) macro systems&lt;/h2&gt;
&lt;p&gt;Rust has two principal macro systems.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://doc.rust-lang.org/book/ch19-06-macros.html#declarative-macros-with-macro_rules-for-general-metaprogramming"&gt;&lt;code&gt;macro_rules!&lt;/code&gt;&lt;/a&gt;&lt;/strong&gt; (also known as “macros by example”) is relatively straightforward to use. You have some control over the argument syntax for your macro, and then you can generate output code using a pattern-style template.&lt;/p&gt;
&lt;p&gt;But, its power is limited. In particular, although you can specify a pattern to match the arguments to your macro, the pattern matching system has serious limitations (for example, it has a very hard time with Rust’s generic type parameters). Also, you can’t feed existing pieces of your program to a macro without passing them as arguments: so you must write them out twice, or have the macro re-generate its own arguments as a side-effect.&lt;/p&gt;
&lt;p&gt;Because of these limitations, code which makes heavy use of &lt;code&gt;macro_rules!&lt;/code&gt; macros can be less than clear.&lt;/p&gt;
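&lt;p&gt;To illustrate the write-it-twice problem (this example is mine, not from the &lt;code&gt;derive-adhoc&lt;/code&gt; docs): a &lt;code&gt;macro_rules!&lt;/code&gt; macro can stamp out code from a pattern-style template, but it cannot see the struct definition, so the field list has to be repeated at the call site:&lt;/p&gt;

```rust
// A macro_rules! macro in the pattern/template style: it generates an
// inherent method listing a struct's field names.  Because the macro
// cannot inspect the struct itself, the fields must be written out a
// second time as arguments.  (Illustrative example.)
macro_rules! impl_field_names {
    ($ty:ident: $($field:ident),+ $(,)?) => {
        impl $ty {
            fn field_names() -> &'static [&'static str] {
                &[$(stringify!($field)),+]
            }
        }
    };
}

#[allow(dead_code)]
struct Point {
    x: i64,
    y: i64,
}

// The field list appears a second time here - the duplication
// described above.
impl_field_names!(Point: x, y);

fn main() {
    assert_eq!(Point::field_names(), &["x", "y"]);
}
```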
&lt;p&gt;&lt;strong&gt;&lt;a href="https://doc.rust-lang.org/book/ch19-06-macros.html#procedural-macros-for-generating-code-from-attributes"&gt;Procedural macros&lt;/a&gt;&lt;/strong&gt; are an extremely powerful and almost fully general code-rewriting macro facility. They work by taking actual rust code (represented as a stream of tokens) as their input, running arbitrary computations, and then generating a new stream of tokens as an output. Procedural macros can be applied (with &lt;code&gt;derive&lt;/code&gt;) to Rust’s data structure definitions, to parse them, and autogenerate data-structure-dependent code. They are the basis of many extremely powerful facilities, both in standard Rust, and in popular Rust libraries.&lt;/p&gt;
&lt;p&gt;However, procedural macros are hard to write. You must deal with libraries for parsing Rust source code out of tokens. You must generate compile errors manually. You often end up matching on Rust syntax in excruciating detail. Procedural macros run in an inconvenient execution context and must live in a separate Rust package. And so on.&lt;/p&gt;
&lt;h2&gt;“Derive by example” with &lt;code&gt;derive-adhoc&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;derive-adhoc&lt;/code&gt; aims to provide much of the power of proc macros, with the convenience of &lt;code&gt;macro_rules!&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You write a template which is expanded for a data structure (for a &lt;code&gt;struct&lt;/code&gt;, say). &lt;code&gt;derive-adhoc&lt;/code&gt; takes care of parsing the struct, and gives you convenient expansion variables for use in your template.&lt;/p&gt;
&lt;h3&gt;A simple example - deriving &lt;code&gt;Clone&lt;/code&gt; without inferred trait bounds&lt;/h3&gt;
&lt;p&gt;Here is a simple example, taken from &lt;a href="https://gitlab.torproject.org/Diziet/rust-derive-adhoc/-/blob/8b307207263b97d8e12cddeeebf3cc27502ac331/tests/expand/clone.rs"&gt;&lt;code&gt;clone.rs&lt;/code&gt;&lt;/a&gt;, in &lt;code&gt;derive-adhoc&lt;/code&gt;’s &lt;a href="https://gitlab.torproject.org/Diziet/rust-derive-adhoc/-/tree/8b307207263b97d8e12cddeeebf3cc27502ac331/tests/expand"&gt;test suite&lt;/a&gt;; &lt;a href="https://gitlab.torproject.org/Diziet/rust-derive-adhoc/-/blob/8b307207263b97d8e12cddeeebf3cc27502ac331/tests/expand/clone.expanded.rs"&gt;&lt;code&gt;clone.expanded.rs&lt;/code&gt;&lt;/a&gt; shows the result of the macro expansion.&lt;/p&gt;
&lt;p&gt;This showcases very few of &lt;code&gt;derive-adhoc&lt;/code&gt;’s features, but it gives a flavour.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  // Very simple `Clone`
  //
  // Useful because it doesn&amp;#39;t infer Clone bounds on generic type
  // parameters, like std&amp;#39;s derive of Clone does.  Instead, it
  // unconditionally attempts to implement Clone.
  //
  // Only works on `struct { }` structs.
  //
  // (This does a small subset of what the educe crate&amp;#39;s `Clone` does.)
  define_derive_adhoc!{
      MyClone =

      impl&amp;lt;$tgens&amp;gt; Clone for $ttype {
          fn clone(&amp;amp;self) -&amp;gt; Self {
              Self {
                  $(
                      $fname: self.$fname.clone(),
                  )
              }
          }
      }
  }

  // If we were to `#[derive(Clone)]`, DecoratedError&amp;lt;io::Error&amp;gt; wouldn&amp;#39;t
  // be Clone, because io::Error isn&amp;#39;t, even though the Arc means we can clone.
  #[derive(Adhoc)]
  #[derive_adhoc(MyClone)]
  struct DecoratedError&amp;lt;E&amp;gt; {
      context: String,
      error: Arc&amp;lt;E&amp;gt;,
  }&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Replacing an existing bespoke proc macro - a more complex example&lt;/h3&gt;
&lt;p&gt;Recently, I thought I would try out derive-adhoc in &lt;a href="https://www.chiark.greenend.org.uk/~ianmdlvl/hippotat/current/docs/"&gt;Hippotat&lt;/a&gt;, a personal project of mine, which currently uses a project-specific proc macro. This was an enjoyable experience.&lt;/p&gt;
&lt;p&gt;I found the &lt;a href="https://salsa.debian.org/iwj/hippotat/-/blob/98b28d5d5dab0c68c5512448cc735f53828a6869/src/config.rs#L65"&gt;new code&lt;/a&gt; a huge improvement over &lt;a href="https://salsa.debian.org/iwj/hippotat/-/blob/a6ac94b4922602af64f22b86ba5347ad95fcda44/macros/macros.rs#L68"&gt;the old code&lt;/a&gt;. I intend to tidy up this branch and merge it into Hippotat’s mainline, at some suitable point in the release cycles of Hippotat and &lt;code&gt;derive-adhoc&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;I won’t copy the whole thing here, but now we have things like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  pub struct InstanceConfig {
    #[adhoc(special=&amp;quot;link&amp;quot;, skl=&amp;quot;SKL::None&amp;quot;)]  pub link: LinkName,
  ...

  derive_adhoc!{
    InstanceConfig:

    fn resolve_instance(rctx: &amp;amp;ResolveContext) -&amp;gt; InstanceConfig {
      InstanceConfig {
        $(
          $fname: rctx.
            ${if fmeta(special) {
              ${paste special_ ${fmeta(special)}}
            } else {&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Instead of this kind of awfulness:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;   } else if attr.path == parse_quote!{ special } {
     let meta = match attr.parse_meta().unwrap() {
       Meta::List(list) =&amp;gt; list,
       _ =&amp;gt; panic!(),
     };
     let (tmethod, tskl) = meta.nested.iter().collect_tuple().unwrap();
     fn get_path(meta: &amp;amp;NestedMeta) -&amp;gt; TokenStream {
       match meta {
         NestedMeta::Meta(Meta::Path(ref path)) =&amp;gt; path.to_token_stream(),
         _ =&amp;gt; panic!(),
       }
     }
     method = get_path(tmethod);
     *skl.borrow_mut() = Some(get_path(tskl));
   }&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;History and acknowledgements&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;derive-adhoc&lt;/code&gt; was my project proposal in last year’s Tor Project Hackweek. Thanks to Tor for giving us the space to do something like this.&lt;/p&gt;
&lt;p&gt;Nick Mathewson joined in, and has made important code contributions, also given invaluable opinions and feedback. Thanks very much to Nick - I look forward to working on this more with you. I take responsibility for all bugs, mistakes, and misjudgements of taste.&lt;/p&gt;
&lt;h2&gt;Future plans&lt;/h2&gt;
&lt;p&gt;We’re hoping &lt;code&gt;derive-adhoc&lt;/code&gt; will become a widely-used library, significantly improving Rust’s expressive power at the same time as improving the clarity of macro-using programs.&lt;/p&gt;
&lt;p&gt;We would like to see the wider Rust community experiment with it and give us feedback. Who knows? Maybe it will someday inspire a “Derive By Example” feature in standard Rust.&lt;/p&gt;
&lt;p&gt;Two words of warning, though:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The documentation is currently very terse. Many readers will find it far too dense and dry for easy comprehension. The examples are sparse, not well integrated, and not very well explained. We will need your patience - and your help - as we try to improve it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The library and its template syntax are still unstable. As more people try &lt;code&gt;derive_adhoc&lt;/code&gt;, we expect to find areas where the syntax and behavior need to improve. While still at &lt;code&gt;0.x&lt;/code&gt;, we’ll be keen to make those improvements, without much regard to backward compatibility. So, for now, expect breaking changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We hope to release a more-stable and better-documented version 1.x later this year.&lt;/p&gt;
&lt;p&gt;So, please try it out and let us know what you think.&lt;/p&gt;
&lt;h2&gt;Documentation and references about &lt;code&gt;derive-adhoc&lt;/code&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.rs/derive-adhoc/latest/derive_adhoc/index.html#"&gt;Top-level documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://lib.rs/crates/derive-adhoc"&gt;&lt;code&gt;derive-adhoc&lt;/code&gt; on lib.rs&lt;/a&gt; and &lt;a href="https://crates.io/crates/derive-adhoc"&gt;crates.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.rs/derive-adhoc/latest/derive_adhoc/doc_template_syntax/index.html"&gt;Template syntax reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gitlab.torproject.org/Diziet/rust-derive-adhoc/-/tree/8b307207263b97d8e12cddeeebf3cc27502ac331/tests/expand"&gt;Examples / test cases&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.rs/derive-adhoc/latest/derive_adhoc/doc_implementation/index.html"&gt;How it works&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;address&gt;edited 2023-02-03 00:34 Z to fix a typo&lt;/address&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=14345" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:14161</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/14161.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=14161"/>
    <title>SGO (and my) VPN and network access tools - in bookworm</title>
    <published>2023-01-14T00:41:35Z</published>
    <updated>2023-01-14T00:41:35Z</updated>
    <category term="hippotat"/>
    <category term="userv"/>
    <category term="chiark"/>
    <category term="secnet"/>
    <category term="debian"/>
    <category term="computers"/>
    <dw:security>public</dw:security>
    <dw:reply-count>1</dw:reply-count>
    <content type="html">&lt;p&gt;Recently, we managed to get secnet and hippotat into Debian. They are on track to go into Debian bookworm. This completes in Debian the set of VPN/networking tools I (and other &lt;a href="https://www.greenend.org.uk/"&gt;Greenend&lt;/a&gt; folks) have been using for many years.&lt;/p&gt;
&lt;p&gt;The Sinister Greenend Organisation’s suite of network access tools consists mainly of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;secnet&lt;/code&gt; - VPN.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hippotat&lt;/code&gt; - IP-over-HTTP (workaround for bad networks)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;userv ipif&lt;/code&gt; - user-created network interfaces&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;secnet&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://www.chiark.greenend.org.uk/pipermail/sgo-software-announce/2023/000081.html"&gt;secnet&lt;/a&gt; is our very mature VPN system.&lt;/p&gt;
&lt;p&gt;Its basic protocol idea is similar to that in Wireguard, but it’s much older. Differences from Wireguard include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Comes with some (rather clumsy) provisioning tooling, supporting almost any desired virtual network topology. In the SGO we have a complete mesh of fixed sites (servers), and a number of roaming hosts (clients), each of which can have one or more sites as its home.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No special kernel drivers required. Everything is userspace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An exciting “polypath” mode where packets are sent via multiple underlying networks in parallel, offering increased reliability for roaming hosts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Portable to non-Linux platforms.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A much older, and less well audited, codebase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Very flexible configuration arrangements, but things are also under-documented and to an extent under-productised.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hasn’t been ported to phones/tablets.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;secnet was originally written by Stephen Early, starting in 1996 or so. I inherited it some years ago and have been maintaining it since. It’s mostly written in C.&lt;/p&gt;
&lt;h2&gt;Hippotat&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://www.chiark.greenend.org.uk/pipermail/sgo-software-announce/2023/000082.html"&gt;Hippotat&lt;/a&gt; is best described by copying the intro from the docs:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hippotat is a system to allow you to use your normal VPN, ssh, and other applications, even in broken network environments that are only ever tested with “web stuff”.&lt;/p&gt;
&lt;p&gt;Packets are parcelled up into HTTP POST requests, resembling form submissions (or JavaScript XMLHttpRequest traffic), and the returned packets arrive via the HTTP response bodies.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It doesn’t rely on TLS tunnelling so can work even if the local network is trying to intercept TLS. I recently &lt;a href="https://diziet.dreamwidth.org/12934.html"&gt;rewrote Hippotat in Rust&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;&lt;code&gt;userv ipif&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;userv ipif&lt;/code&gt; is one of the &lt;a href="https://www.chiark.greenend.org.uk/ucgi/~ian/git?p=userv-utils.git;a=summary"&gt;userv utilities&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It allows safe delegation of network routing to unprivileged users. The delegation is of a specific address range, so different ranges can be delegated to different users, and the authorised user cannot interfere with other traffic.&lt;/p&gt;
&lt;p&gt;This is used in the default configuration of hippotat packages, so that an ordinary user can start up the hippotat client as needed.&lt;/p&gt;
&lt;p&gt;On &lt;a href="https://www.chiark.greenend.org.uk/"&gt;chiark&lt;/a&gt; userv-ipif is used to delegate networking to users, including administrators of allied VPN realms. So chiark actually runs at least 4 VPN-ish systems in production: secnet, hippotat, &lt;a href="https://vox.distorted.org.uk/mdw/"&gt;Mark Wooding&lt;/a&gt;’s Tripe, and still a few links managed by the now-superseded &lt;code&gt;udptunnel&lt;/code&gt; system.&lt;/p&gt;
&lt;h2&gt;&lt;code&gt;userv&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;userv ipif&lt;/code&gt; is a userv service. That is, it is a facility which uses &lt;code&gt;userv&lt;/code&gt; to bridge a privilege boundary.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.chiark.greenend.org.uk/doc/userv/spec.html/"&gt;&lt;code&gt;userv&lt;/code&gt;&lt;/a&gt; is perhaps my most under-appreciated program. userv can be used to straightforwardly bridge (local) privilege boundaries on Unix systems.&lt;/p&gt;
&lt;p&gt;So for example it can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Allow a sysadmin to provide a shell script to be called by unprivileged users, but which will run as root. &lt;code&gt;sudo&lt;/code&gt; can do this too but it has quite a few gotchas, and you have to be quite careful how you use it - and its security record isn’t great either.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Form the internal boundary in a privilege-separated system service. So, for example, the &lt;code&gt;hippotat&lt;/code&gt; client is a program you can run from the command line as a normal user, if the relevant network addresses have been delegated to you. On chiark, CGI programs run as the providing user - not using &lt;code&gt;suexec&lt;/code&gt; (which I don’t trust), but via userv.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;userv services can be defined by the &lt;em&gt;called user&lt;/em&gt;, not only by the system administrator. This allows a user to reconfigure or divert a system-provided default implementation, and even allows users to define and implement ad-hoc services of their own. (Although, the system administrator can override user config.)&lt;/p&gt;
&lt;h2&gt;Acknowledgements&lt;/h2&gt;
&lt;p&gt;Thanks for the help I had in this effort.&lt;/p&gt;
&lt;p&gt;In particular, thanks to Sean Whitton for encouragement, and the ftpmaster review; and to the Debian Rust Team for their help navigating the complexities of handling Rust packages within the Debian Rust Team workflow.&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=14161" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:13884</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/13884.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=13884"/>
    <title>Rust for the Polyglot Programmer, December 2022 edition</title>
    <published>2022-12-20T01:23:21Z</published>
    <updated>2022-12-20T01:47:52Z</updated>
    <category term="rust"/>
    <category term="computers"/>
    <category term="rust-polyglot"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">&lt;p&gt;I have reviewed, updated and revised my short book about the Rust programming language, &lt;a href="https://www.chiark.greenend.org.uk/~ianmdlvl/rust-polyglot/"&gt;Rust for the Polyglot Programmer&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It now covers some language improvements from the past year (noting which versions of Rust they’re available in), and has been updated for changes in the Rust library ecosystem.&lt;/p&gt;
&lt;p&gt;With (further) assistance from Mark Wooding, there is also a new &lt;a href="https://www.chiark.greenend.org.uk/%7Eianmdlvl/rust-polyglot/safety.html#integers-conversion-checking"&gt;table of recommendations for numerical conversion&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Recap about Rust for the Polyglot Programmer&lt;/h3&gt;
&lt;p&gt;There are many introductory materials about Rust. This one is rather different. Compared to much other information about Rust, Rust for the Polyglot Programmer is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Dense: I assume a lot of starting knowledge. Or to look at it another way: I expect my reader to be able to look up and digest non-Rust-specific words or concepts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Broad: I cover not just the language and tools, but also the library ecosystem, development approach, community ideology, and so on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Frank: much material about Rust has a tendency to gloss over or minimise the bad parts. I don’t do that. That also frees me to talk about strategies for dealing with the bad parts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Non-neutral: I’m not afraid to recommend particular libraries, for example. I’m not afraid to extol Rust’s virtues in the areas where it does well.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Terse, and sometimes shallow: I often gloss over what I see as unimportant or fiddly details; instead I provide links to appropriate reference materials.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After reading Rust for the Polyglot Programmer, you won’t know everything you need to know to use Rust for any project, but should know where to find it.&lt;/p&gt;
&lt;p&gt;Comments are welcome of course, via the Dreamwidth comments or &lt;a href="https://salsa.debian.org/iwj/rust-polyglot/"&gt;Salsa issue or MR&lt;/a&gt;. (If you’re making a contribution, please indicate your agreement with the &lt;a href="https://salsa.debian.org/iwj/rust-polyglot/-/raw/main/DEVELOPER-CERTIFICATE"&gt;Developer Certificate of Origin&lt;/a&gt;.)&lt;/p&gt;
&lt;address&gt;edited 2022-12-20 01:48 to fix a typo&lt;/address&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=13884" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:13657</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/13657.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=13657"/>
    <title>Rust needs #[throws]</title>
    <published>2022-12-16T18:55:09Z</published>
    <updated>2022-12-18T23:27:58Z</updated>
    <category term="rust"/>
    <category term="computers"/>
    <dw:security>public</dw:security>
    <dw:reply-count>5</dw:reply-count>
    <content type="html">&lt;h2&gt;tl;dr:&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;Ok&lt;/code&gt;-wrapping as needed in today’s Rust is a significant distraction, because there are multiple ways to do it. They are all slightly awkward in different ways, so are least-bad in different situations. You must choose a way for every fallible function, and sometimes change a function from one pattern to another.&lt;/p&gt;
&lt;p&gt;Rust really needs &lt;code&gt;#[throws]&lt;/code&gt; as a first-class language feature. Code using &lt;code&gt;#[throws]&lt;/code&gt; is simpler and clearer.&lt;/p&gt;
&lt;p&gt;Please try out withoutboats’s &lt;a href="https://crates.io/crates/fehler"&gt;&lt;code&gt;fehler&lt;/code&gt;&lt;/a&gt;. I think you will like it.&lt;/p&gt;
&lt;h2&gt;Contents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#a-recent-personal-experience-in-coding-style"&gt;A recent personal experience in coding style&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#what-is-ok-wrapping-intro-to-rust-error-handling"&gt;What is &lt;code&gt;Ok&lt;/code&gt; wrapping? Intro to Rust error handling&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#a-minor-inconvenience-or-a-significant-distraction"&gt;A minor inconvenience, or a significant distraction?&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#idioms-for-ok-wrapping---a-bestiary"&gt;Idioms for &lt;code&gt;Ok&lt;/code&gt;-wrapping - a bestiary&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#what-is-to-be-done-then"&gt;What is to be done, then?&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#limitations-of-fehler"&gt;Limitations of fehler&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#please-can-we-have-throws-in-the-rust-language"&gt;Please can we have &lt;code&gt;#[throws]&lt;/code&gt; in the Rust language&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#explicitness"&gt;“Explicitness”&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#appendix---examples-showning-code-with-ok-wrapping-is-worse-than-code-using-throws"&gt;Appendix - examples showning code with &lt;code&gt;Ok&lt;/code&gt; wrapping is worse than code using &lt;code&gt;#[throws]&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;A recent personal experience in coding style&lt;/h2&gt;
&lt;p&gt;Ever since I read withoutboats’s &lt;a href="https://without.boats/blog/failure-to-fehler/"&gt;2020 article&lt;/a&gt; about &lt;a href="https://github.com/withoutboats/fehler"&gt;&lt;code&gt;fehler&lt;/code&gt;&lt;/a&gt;, I have been using it in most of my personal projects.&lt;/p&gt;
&lt;p&gt;For &lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1025898"&gt;Reasons&lt;/a&gt; I recently had a go at eliminating the dependency on &lt;code&gt;fehler&lt;/code&gt; from &lt;a href="https://www.chiark.greenend.org.uk/~ianmdlvl/hippotat/current/docs/README.html"&gt;Hippotat&lt;/a&gt;. So, I made a branch, deleted the dependency and imports, and started on the whack-a-mole with the compiler errors.&lt;/p&gt;
&lt;p&gt;After about a half hour of this, I was starting to feel queasy.&lt;/p&gt;
&lt;p&gt;After an hour I had decided that basically everything I was doing was making the code worse. And, bizarrely, I kept having to make &lt;em&gt;individual decisions&lt;/em&gt; about what idiom to use in each place. I couldn’t face it any more.&lt;/p&gt;
&lt;p&gt;After sleeping on the question I decided that Hippotat would be in Debian &lt;em&gt;with&lt;/em&gt; &lt;code&gt;fehler&lt;/code&gt;, or not at all. Happily the Debian Rust Team generously helped me out, so the answer is that &lt;a href="https://packages.debian.org/search?keywords=librust-fehler"&gt;&lt;code&gt;fehler&lt;/code&gt;&lt;/a&gt; is now in Debian, so it’s fine.&lt;/p&gt;
&lt;p&gt;For me this experience, of trying to convert Rust-with-&lt;code&gt;#[throws]&lt;/code&gt; to Rust-without-&lt;code&gt;#[throws]&lt;/code&gt;, brought the &lt;code&gt;Ok&lt;/code&gt; wrapping problem into sharp focus.&lt;/p&gt;
&lt;h2&gt;What is &lt;code&gt;Ok&lt;/code&gt; wrapping? Intro to Rust error handling&lt;/h2&gt;
&lt;p&gt;(You can skip this section if you’re already a seasoned Rust programmer.)&lt;/p&gt;
&lt;p&gt;In Rust, fallibility is represented by functions that return &lt;code&gt;Result&amp;lt;SuccessValue, Error&amp;gt;&lt;/code&gt;: this is a generic type, representing either whatever &lt;code&gt;SuccessValue&lt;/code&gt; is (in the &lt;code&gt;Ok&lt;/code&gt; variant of the data-bearing enum) or some &lt;code&gt;Error&lt;/code&gt; (in the &lt;code&gt;Err&lt;/code&gt; variant). For example, &lt;code&gt;std::fs::read_to_string&lt;/code&gt;, which takes a filename and returns the contents of the named file, returns &lt;code&gt;Result&amp;lt;String, std::io::Error&amp;gt;&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This is a nice and typesafe formulation of, and generalisation of, the traditional C practice, where a function indicates in its return value whether it succeeded, and errors are indicated with an error code.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Result&lt;/code&gt; is part of the standard library and there are convenient facilities for checking for errors, extracting successful results, and so on. In particular, Rust has the postfix &lt;code&gt;?&lt;/code&gt; operator, which, when applied to a &lt;code&gt;Result&lt;/code&gt;, does one of two things: if the &lt;code&gt;Result&lt;/code&gt; was &lt;code&gt;Ok&lt;/code&gt;, it yields the inner successful value; if the &lt;code&gt;Result&lt;/code&gt; was &lt;code&gt;Err&lt;/code&gt;, it returns early from the current function, returning an &lt;code&gt;Err&lt;/code&gt; in turn to the caller.&lt;/p&gt;
&lt;p&gt;This means you can write things like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    let input_data = std::fs::read_to_string(input_file)?;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and the error handling is pretty automatic. You get a compiler warning, or a type error, if you forget the &lt;code&gt;?&lt;/code&gt;, so you can’t accidentally ignore errors.&lt;/p&gt;
&lt;p&gt;But, there is a downside. When you are returning a successful outcome from your function, you must convert it into a &lt;code&gt;Result&lt;/code&gt;. After all, your fallible function has return type &lt;code&gt;Result&amp;lt;SuccessValue, Error&amp;gt;&lt;/code&gt;, which is a different type to &lt;code&gt;SuccessValue&lt;/code&gt;. So, for example, inside &lt;code&gt;std::fs::read_to_string&lt;/code&gt;, we see this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;        let mut string = String::new();
        file.read_to_string(&amp;amp;mut string)?;
        Ok(string)
    }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;string&lt;/code&gt; has type &lt;code&gt;String&lt;/code&gt;; &lt;code&gt;fs::read_to_string&lt;/code&gt; must return &lt;code&gt;Result&amp;lt;String, ..&amp;gt;&lt;/code&gt;, so at the end of the function we must return &lt;code&gt;Ok(string)&lt;/code&gt;. This applies to &lt;code&gt;return&lt;/code&gt; statements, too: if you want an early successful return from a fallible function, you must write &lt;code&gt;return Ok(whatever)&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This is particularly annoying for functions that don’t actually return a nontrivial value. Normally, when you write a function that doesn’t return a value you don’t write the return type. The compiler interprets this as syntactic sugar for &lt;code&gt;-&amp;gt; ()&lt;/code&gt;, ie, that the function returns &lt;code&gt;()&lt;/code&gt;, the empty tuple, used in Rust as a dummy value in these kinds of situations. A block (&lt;code&gt;{ ... }&lt;/code&gt;) whose last statement ends in a &lt;code&gt;;&lt;/code&gt; has type &lt;code&gt;()&lt;/code&gt;. So, when you fall off the end of a function, the return value is &lt;code&gt;()&lt;/code&gt;, without you having to write it. So you simply leave out the stuff in your program about the return value, and your function doesn’t have one (i.e. it returns &lt;code&gt;()&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;But, a function which either fails with an error, or completes successfully without returning anything, has return type &lt;code&gt;Result&amp;lt;(), Error&amp;gt;&lt;/code&gt;. At the end of such a function, you must explicitly provide the success value. After all, if you just fall off the end of a block, it means the block has value &lt;code&gt;()&lt;/code&gt;, which is not of type &lt;code&gt;Result&amp;lt;(), Error&amp;gt;&lt;/code&gt;. So the fallible function must end with &lt;code&gt;Ok(())&lt;/code&gt;, as we see in &lt;a href="https://doc.rust-lang.org/std/fs/fn.read_to_string.html#examples"&gt;the example for &lt;code&gt;std::fs::read_to_string&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
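&lt;p&gt;A minimal illustration (my example, not from the post): even though this function “returns nothing”, its success path must still produce &lt;code&gt;Ok(())&lt;/code&gt;:&lt;/p&gt;

```rust
use std::fmt::Write as _;

// A fallible function with no interesting return value: its type is
// Result<(), Error>, so falling off the end with () would not typecheck.
// (append_greeting is an illustrative name, not from the post.)
fn append_greeting(out: &mut String, name: &str) -> Result<(), std::fmt::Error> {
    writeln!(out, "hello, {}", name)?; // ? propagates any write error
    Ok(())                             // explicit success value
}

fn main() {
    let mut s = String::new();
    append_greeting(&mut s, "world").unwrap();
    assert_eq!(s, "hello, world\n");
}
```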
&lt;h2&gt;A minor inconvenience, or a significant distraction?&lt;/h2&gt;
&lt;p&gt;I think the need for &lt;code&gt;Ok&lt;/code&gt;-wrapping on all success paths from fallible functions is generally regarded as just a minor inconvenience. Certainly the experienced Rust programmer gets very used to it. However, while trying to remove &lt;code&gt;fehler&lt;/code&gt;’s &lt;code&gt;#[throws]&lt;/code&gt; from Hippotat, I noticed something that is evident in codebases using “vanilla” Rust (without &lt;code&gt;fehler&lt;/code&gt;) but which goes un-remarked.&lt;/p&gt;
&lt;p&gt;There are &lt;strong&gt;multiple ways&lt;/strong&gt; to write the &lt;code&gt;Ok&lt;/code&gt;-wrapping, and the different ways are &lt;strong&gt;appropriate in different situations&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;See the following examples, all taken from a &lt;a href="https://gitlab.torproject.org/tpo/core/arti"&gt;real codebase&lt;/a&gt;. (And it’s not just me: I do all of these in different places, when I don’t have &lt;code&gt;fehler&lt;/code&gt; available, but all these examples are from code written by others.)&lt;/p&gt;
&lt;h3&gt;Idioms for &lt;code&gt;Ok&lt;/code&gt;-wrapping - a bestiary&lt;/h3&gt;
&lt;h4&gt;Wrap just a returned variable binding&lt;/h4&gt;
&lt;p&gt;If you have the return value in a variable, you can write &lt;code&gt;Ok(retval)&lt;/code&gt; at the end of the function, instead of &lt;code&gt;retval&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    pub fn take_until(&amp;amp;mut self, term: u8) -&amp;gt; Result&amp;lt;&amp;amp;&amp;#39;a [u8]&amp;gt; {
        // several lines of code
        Ok(result)
    }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the returned value is not already bound to a variable, making a function fallible might mean choosing to bind it to one.&lt;/p&gt;
&lt;h4&gt;Wrap a nontrivial return expression&lt;/h4&gt;
&lt;p&gt;Even if it’s not just a variable, you can wrap the expression which computes the returned value. This is often done if the returned value is a struct literal:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    fn take_from(r: &amp;amp;mut Reader&amp;lt;&amp;#39;_&amp;gt;) -&amp;gt; Result&amp;lt;Self&amp;gt; {
        // several lines of code
        Ok(AuthChallenge { challenge, methods })
    }&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Introduce &lt;code&gt;Ok(())&lt;/code&gt; at the end&lt;/h4&gt;
&lt;p&gt;For functions returning &lt;code&gt;Result&amp;lt;()&amp;gt;&lt;/code&gt;, you can write &lt;code&gt;Ok(())&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This is usual, but &lt;em&gt;not&lt;/em&gt; ubiquitous, since sometimes you can omit it.&lt;/p&gt;
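&lt;p&gt;For example (an illustrative sketch; the &lt;code&gt;Error&lt;/code&gt; type here is hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    fn check_nonempty(s: &amp;amp;str) -&amp;gt; Result&amp;lt;(), Error&amp;gt; {
        if s.is_empty() {
            return Err(Error::Empty);
        }
        Ok(())  // explicit success value; just falling off the end would have type ()
    }&lt;/code&gt;&lt;/pre&gt;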
&lt;h4&gt;Wrap the whole body&lt;/h4&gt;
&lt;p&gt;If you don’t have the return value in a variable, you can wrap the whole body of the function in &lt;code&gt;Ok(&lt;/code&gt;…&lt;code&gt;)&lt;/code&gt;. Whether this is a good idea depends on how big and complex the body is.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    fn from_str(s: &amp;amp;str) -&amp;gt; std::result::Result&amp;lt;Self, Self::Err&amp;gt; {
        Ok(match s {
            &amp;quot;Authority&amp;quot; =&amp;gt; RelayFlags::AUTHORITY,
            // many other branches
            _ =&amp;gt; RelayFlags::empty(),
        })
    }&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Omit the wrap when calling fallible sub-functions&lt;/h4&gt;
&lt;p&gt;If your function simply chains to another function call with the same return and error type, you don’t need to write the &lt;code&gt;Ok&lt;/code&gt; at all. Instead, you can call the sub-function as the final expression, without applying &lt;code&gt;?&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You can do this even if your function selects between a number of different sub-functions to call:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    fn fmt(&amp;amp;self, f: &amp;amp;mut std::fmt::Formatter&amp;lt;&amp;#39;_&amp;gt;) -&amp;gt; std::fmt::Result {
        if flags::unsafe_logging_enabled() {
            std::fmt::Display::fmt(&amp;amp;self.0, f)
        } else {
            self.0.display_redacted(f)
        }
    }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But this doesn’t work if the returned error type isn’t the same, but needs the autoconversion implied by the &lt;code&gt;?&lt;/code&gt; operator.&lt;/p&gt;
&lt;h4&gt;Convert a fallible sub-function error with &lt;code&gt;Ok( ... ?)&lt;/code&gt;&lt;/h4&gt;
&lt;p&gt;If the final thing a function does is chain to another fallible function, but with a different error type, the error must be converted somehow. This can be done with &lt;code&gt;?&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;     fn try_from(v: i32) -&amp;gt; Result&amp;lt;Self, Error&amp;gt; {
         Ok(Percentage::new(v.try_into()?))
     }&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Convert a fallible sub-function error with &lt;code&gt;.map_err&lt;/code&gt;&lt;/h4&gt;
&lt;p&gt;Or, rarely, people solve the same problem by converting explicitly with &lt;code&gt;.map_err&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;     pub fn create_unbootstrapped(self) -&amp;gt; Result&amp;lt;TorClient&amp;lt;R&amp;gt;&amp;gt; {
         // several lines of code
         TorClient::create_inner(
             // several parameters
         )
         .map_err(ErrorDetail::into)
     }&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;What is to be done, then?&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;fehler&lt;/code&gt; library is in excellent taste and has the answer. With &lt;code&gt;fehler&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Whether a function is fallible, and what its error type is, is specified in one place. It is not entangled with the main return value type, nor with the success return paths.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So the success paths out of a function are not specially marked with error handling boilerplate. The end of function return value, and the expression after &lt;code&gt;return&lt;/code&gt;, are automatically wrapped up in &lt;code&gt;Ok&lt;/code&gt;. So the body of a fallible function is just like the body of an infallible one, except for places where error handling is actually involved.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Error returns occur through &lt;code&gt;?&lt;/code&gt; error chaining, and with a new explicit syntax for error return.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We usually talk about the error we are possibly returning, and avoid talking about &lt;code&gt;Result&lt;/code&gt; unless we need to.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;code&gt;fehler&lt;/code&gt; provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An attribute macro &lt;code&gt;#[throws(ErrorType)]&lt;/code&gt; to make a function fallible in this way.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A macro &lt;code&gt;throw!(error)&lt;/code&gt; for explicitly failing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
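&lt;p&gt;Together, these look like this (a sketch based on &lt;code&gt;fehler&lt;/code&gt;’s documented interface, applied to a made-up function):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    use fehler::{throw, throws};

    #[throws(std::io::Error)]
    fn save_notes(path: &amp;amp;std::path::Path, notes: &amp;amp;str) {
        if notes.is_empty() {
            throw!(std::io::Error::new(std::io::ErrorKind::InvalidInput, &amp;quot;nothing to save&amp;quot;));
        }
        std::fs::write(path, notes)?;
        // no Ok(()) needed: we fall off the end, as in an infallible function
    }&lt;/code&gt;&lt;/pre&gt;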
&lt;p&gt;This is precisely correct. It is very ergonomic.&lt;/p&gt;
&lt;p&gt;Consequences include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;One does not need to decide where to put the &lt;code&gt;Ok&lt;/code&gt;-wrapping, since it’s automatic rather than explicitly written out.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Specifically, what idiom to adopt in the body (for example &lt;code&gt;{write!(...)?;}&lt;/code&gt; vs &lt;code&gt;{write!(...)}&lt;/code&gt; in a formatter) does not depend on whether the error needs converting, how complex the body is, and whether the final expression in the function is itself fallible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Making an infallible function fallible involves only adding &lt;code&gt;#[throws]&lt;/code&gt; to its definition, and &lt;code&gt;?&lt;/code&gt; to its call sites. One does not need to edit the body, or the return type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Changing the error returned by a function to a suitably compatible different error type does not involve changing the function body.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There is no need for a local &lt;code&gt;Result&lt;/code&gt; alias shadowing &lt;code&gt;std::result::Result&lt;/code&gt;, which means that &lt;em&gt;when&lt;/em&gt; one needs to speak of &lt;code&gt;Result&lt;/code&gt; explicitly, the code is clearer.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Limitations of &lt;code&gt;fehler&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;But, &lt;code&gt;fehler&lt;/code&gt; is a Rust procedural macro, so it cannot get everything right. Sadly there are some wrinkles.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You can’t write &lt;code&gt;#[throws]&lt;/code&gt; on a closure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sometimes you can get quite poor error messages if you have a sufficiently broken function body.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Code inside a macro call isn’t properly visible to &lt;code&gt;fehler&lt;/code&gt;, so sometimes &lt;code&gt;return&lt;/code&gt; statements inside macro calls are untreated. This will lead to a type error, so isn’t a correctness hazard, but it can be a nuisance if you like other syntax extensions, e.g. &lt;a href="https://lib.rs/crates/if_chain"&gt;&lt;code&gt;if_chain&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;#[must_use] #[throws(Error)] fn obtain() -&amp;gt; Thing;&lt;/code&gt; ought to mean that &lt;code&gt;Thing&lt;/code&gt; must be used, not the &lt;code&gt;Result&amp;lt;Thing, Error&amp;gt;&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But, Rust-with-&lt;code&gt;#[throws]&lt;/code&gt; is so much nicer a language than Rust-with-mandatory-&lt;code&gt;Ok&lt;/code&gt;-wrapping, that these are minor inconveniences.&lt;/p&gt;
&lt;h3&gt;Please can we have &lt;code&gt;#[throws]&lt;/code&gt; in the Rust language&lt;/h3&gt;
&lt;p&gt;This ought to be part of the language, not a macro library. In the compiler, it would be possible to get all the corner cases right. It would make the feature available to everyone, and it would quickly become idiomatic Rust throughout the community.&lt;/p&gt;
&lt;p&gt;It is evident from reading writings from the time, particularly those from withoutboats, that there were significant objections to automatic &lt;code&gt;Ok&lt;/code&gt;-wrapping. It seems to have become quite political, and some folks burned out on the topic.&lt;/p&gt;
&lt;p&gt;Perhaps, now, a couple of years later, we can revisit this area and &lt;strong&gt;solve this problem in the language itself&lt;/strong&gt; ?&lt;/p&gt;
&lt;h3&gt;“Explicitness”&lt;/h3&gt;
&lt;p&gt;An argument I have seen made against automatic &lt;code&gt;Ok&lt;/code&gt;-wrapping, and, in general, against any kind of useful language affordance, is that it makes things less explicit.&lt;/p&gt;
&lt;p&gt;But this argument is fundamentally wrong for &lt;code&gt;Ok&lt;/code&gt;-wrapping. Explicitness is not an unalloyed good. We humans have only limited attention. We need to focus that attention where it is actually needed. So explicitness is good in situations where what is going on is unusual; or would otherwise be hard to read; or is tricky or error-prone. Generally: explicitness is good for things where we need to direct humans’ attention.&lt;/p&gt;
&lt;p&gt;But &lt;code&gt;Ok&lt;/code&gt;-wrapping is ubiquitous in fallible Rust code. The compiler mechanisms and type system almost completely defend against mistakes. All but the most novice programmer knows what’s going on, and the very novice programmer doesn’t need to. Rust’s error handling arrangements are designed specifically so that we can avoid worrying about fallibility unless necessary — except for the &lt;code&gt;Ok&lt;/code&gt;-wrapping. Explicitness about &lt;code&gt;Ok&lt;/code&gt;-wrapping directs our attention away from whatever other things the code is doing: it is a distraction.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;So, explicitness about &lt;code&gt;Ok&lt;/code&gt;-wrapping is a &lt;em&gt;bad thing&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;Appendix - examples showing that code with &lt;code&gt;Ok&lt;/code&gt;-wrapping is worse than code using &lt;code&gt;#[throws]&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Observe these diffs, from my abandoned attempt to remove the &lt;code&gt;fehler&lt;/code&gt; dependency from Hippotat.&lt;/p&gt;
&lt;p&gt;I have a type alias &lt;code&gt;AE&lt;/code&gt; for the usual error type (&lt;code&gt;AE&lt;/code&gt; stands for &lt;code&gt;anyhow::Error&lt;/code&gt;). In the non-&lt;code&gt;#[throws]&lt;/code&gt; code, I end up with a type alias &lt;code&gt;AR&amp;lt;T&amp;gt;&lt;/code&gt; for &lt;code&gt;Result&amp;lt;T, AE&amp;gt;&lt;/code&gt;, which I think is more opaque — but at least that avoids typing out &lt;code&gt;-&amp;gt; Result&amp;lt; , AE&amp;gt;&lt;/code&gt; a thousand times. Some people like to have a local &lt;code&gt;Result&lt;/code&gt; alias, but that means that the standard &lt;code&gt;Result&lt;/code&gt; has to be referred to as &lt;code&gt;StdResult&lt;/code&gt; or &lt;code&gt;std::result::Result&lt;/code&gt;.&lt;/p&gt;
&lt;table rules="cols"&gt;
&lt;tr&gt;
&lt;th&gt;
With &lt;code&gt;fehler&lt;/code&gt; and &lt;code&gt;#[throws]&lt;/code&gt;
&lt;th&gt;
Vanilla Rust, &lt;code&gt;Result&amp;lt;&amp;gt;&lt;/code&gt;, mandatory &lt;code&gt;Ok&lt;/code&gt;-wrapping
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="2"&gt;
&lt;hr&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th colspan="2"&gt;
Return value clearer, error return less wordy:

&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;impl Parseable for Secret {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;impl Parseable for Secret {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&lt;strong&gt;#[throws(AE)]&lt;/strong&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;fn parse(s: Option&amp;lt;&amp;amp;str&amp;gt;) -&amp;gt; Self {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;fn parse(s: Option&amp;lt;&amp;amp;str&amp;gt;) -&amp;gt; &lt;strong&gt;AR&amp;lt;&lt;/strong&gt;Self&lt;strong&gt;&amp;gt;&lt;/strong&gt; {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;let s = s.value()?;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;let s = s.value()?;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if s.is_empty() { &lt;strong&gt;throw!&lt;/strong&gt;(anyhow!(&amp;quot;secret value cannot be empty&amp;quot;)) }&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if s.is_empty() { &lt;strong&gt;return Err&lt;/strong&gt;(anyhow!(&amp;quot;secret value cannot be empty&amp;quot;)) }&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Secret(s.into())&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;strong&gt;Ok(&lt;/strong&gt;Secret(s.into())&lt;strong&gt;)&lt;/strong&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;…&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;…&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th colspan="2"&gt;
No need to wrap whole match statement in &lt;code&gt;Ok( ):&lt;/code&gt;

&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&lt;strong&gt;#[throws(AE)]&lt;/strong&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;pub fn client&amp;lt;T&amp;gt;(&amp;amp;self, key: &amp;amp;&amp;#39;static str, skl: SKL) -&amp;gt; T&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;pub fn client&amp;lt;T&amp;gt;(&amp;amp;self, key: &amp;amp;&amp;#39;static str, skl: SKL) -&amp;gt; &lt;strong&gt;AR&amp;lt;&lt;/strong&gt;T&lt;strong&gt;&amp;gt;&lt;/strong&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;where T: Parseable + Default {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;where T: Parseable + Default {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;match self.end {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;strong&gt;Ok(&lt;/strong&gt;match self.end {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;LinkEnd::Client =&amp;gt; self.ordinary(key, skl)?,&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;LinkEnd::Client =&amp;gt; self.ordinary(key, skl)?,&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;LinkEnd::Server =&amp;gt; default(),&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;LinkEnd::Server =&amp;gt; default(),&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&lt;strong&gt;)&lt;/strong&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;…&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;…&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th colspan="2"&gt;
Return value and &lt;code&gt;Ok(())&lt;/code&gt; entirely replaced by &lt;code&gt;#[throws]&lt;/code&gt;:

&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;impl Display for Loc {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;impl Display for Loc {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&lt;strong&gt;#[throws(fmt::Error)]&lt;/strong&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;fn fmt(&amp;amp;self, f: &amp;amp;mut fmt::Formatter) {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;fn fmt(&amp;amp;self, f: &amp;amp;mut fmt::Formatter)&lt;strong&gt; -&amp;gt; fmt::Result&lt;/strong&gt; {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;write!(f, &amp;quot;{:?}:{}&amp;quot;, &amp;amp;self.file, self.lno)?;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;write!(f, &amp;quot;{:?}:{}&amp;quot;, &amp;amp;self.file, self.lno)?;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if let Some(s) = &amp;amp;self.section {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if let Some(s) = &amp;amp;self.section {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;write!(f, &amp;quot; &amp;quot;)?;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;write!(f, &amp;quot; &amp;quot;)?;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;…&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;…&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&lt;strong&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Ok(())&lt;/strong&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th colspan="2"&gt;
Call to &lt;code&gt;write!&lt;/code&gt; now looks the same as in more complex case shown above:

&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;impl Debug for Secret {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;impl Debug for Secret {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&lt;strong&gt;#[throws(fmt::Error)]&lt;/strong&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;fn fmt(&amp;amp;self, f: &amp;amp;mut fmt::Formatter) {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;fn fmt(&amp;amp;self, f: &amp;amp;mut fmt::Formatter) &lt;strong&gt;-&amp;gt; fmt::Result&lt;/strong&gt; {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;write!(f, &amp;quot;Secret(***)&amp;quot;)&lt;strong&gt;?;&lt;/strong&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;write!(f, &amp;quot;Secret(***)&amp;quot;)&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th colspan="2"&gt;
Much tiresome &lt;code&gt;return Ok()&lt;/code&gt; noise removed:

&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;impl FromStr for SectionName {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;impl FromStr for SectionName {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;type Err = AE;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;type Err = AE;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&lt;strong&gt;#[throws(AE)]&lt;/strong&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;fn from_str(s: &amp;amp;str) -&amp;gt; Self {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;fn from_str(s: &amp;amp;str) -&amp;gt; &lt;strong&gt;AR&amp;lt;&lt;/strong&gt;Self&lt;strong&gt;&amp;gt;&lt;/strong&gt; {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;match s {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;match s {&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;quot;COMMON&amp;quot; =&amp;gt; return SN::Common,&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;quot;COMMON&amp;quot; =&amp;gt; return &lt;strong&gt;Ok(&lt;/strong&gt;SN::Common&lt;strong&gt;)&lt;/strong&gt;,&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;quot;LIMIT&amp;quot; =&amp;gt; return SN::GlobalLimit,&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;quot;LIMIT&amp;quot; =&amp;gt; return &lt;strong&gt;Ok(&lt;/strong&gt;SN::GlobalLimit&lt;strong&gt;)&lt;/strong&gt;,&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;_ =&amp;gt; { }&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;_ =&amp;gt; { }&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;};&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;};&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if let Ok(n@ ServerName(_)) = s.parse() { return SN::Server(n) }&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if let Ok(n@ ServerName(_)) = s.parse() { return &lt;strong&gt;Ok(&lt;/strong&gt;SN::Server(n)&lt;strong&gt;)&lt;/strong&gt; }&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if let Ok(n@ ClientName(_)) = s.parse() { return SN::Client(n) }&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if let Ok(n@ ClientName(_)) = s.parse() { return &lt;strong&gt;Ok(&lt;/strong&gt;SN::Client(n)&lt;strong&gt;)&lt;/strong&gt; }&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;…&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;…&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if client == &amp;quot;LIMIT&amp;quot; { return SN::ServerLimit(server) }&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if client == &amp;quot;LIMIT&amp;quot; { return &lt;strong&gt;Ok(&lt;/strong&gt;SN::ServerLimit(server)&lt;strong&gt;)&lt;/strong&gt; }&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;let client = client.parse().context(&amp;quot;client name in link section name&amp;quot;)?;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;let client = client.parse().context(&amp;quot;client name in link section name&amp;quot;)?;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;SN::Link(LinkName { server, client })&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;strong&gt;Ok(&lt;/strong&gt;SN::Link(LinkName { server, client })&lt;strong&gt;)&lt;/strong&gt;&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;&amp;nbsp;&amp;nbsp;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;}&amp;nbsp;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;address&gt;
edited 2022-12-18 19:58 UTC to improve, and 2022-12-18 23:28 to fix, formatting&lt;/address&gt;</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:13476</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/13476.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=13476"/>
    <title>Stop writing Rust linked list libraries!</title>
    <published>2022-11-12T15:13:30Z</published>
    <updated>2022-11-16T23:55:43Z</updated>
    <category term="computers"/>
    <category term="rust"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">&lt;h3&gt;tl;dr:&lt;/h3&gt;
&lt;p&gt;Don’t write a Rust linked list library: they are hard to do well, and usually useless.&lt;/p&gt;
&lt;p&gt;Use &lt;code&gt;VecDeque&lt;/code&gt;, which is great. If you actually need more than &lt;code&gt;VecDeque&lt;/code&gt; can do, use one of the handful of libraries that actually offer a significantly more useful API.&lt;/p&gt;
&lt;p&gt;If you are writing your own data structure, check if someone has done it already, and consider &lt;code&gt;slotmap&lt;/code&gt; or &lt;code&gt;generational-arena&lt;/code&gt; (or maybe &lt;code&gt;Rc&lt;/code&gt;/&lt;code&gt;Arc&lt;/code&gt;).&lt;/p&gt;
&lt;h3&gt;Contents&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#survey-of-rust-linked-list-libraries"&gt;Survey of Rust linked list libraries&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#background"&gt;Background&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#results"&gt;Results&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#why-are-there-so-many-poor-rust-linked-list-libraries"&gt;Why are there so many poor Rust linked list libraries ?&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#double-ended-queues"&gt;Double-ended queues&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#the-cursor-concept"&gt;The &lt;code&gt;Cursor&lt;/code&gt; concept&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#rustic-approaches-to-pointers-to-and-between-nodes-data-structures"&gt;Rustic approaches to pointers-to-and-between-nodes data structures&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#the-alternative-for-nodey-data-structures-in-safe-rust-rcarc"&gt;The alternative for nodey data structures in safe Rust: &lt;code&gt;Rc&lt;/code&gt;/&lt;code&gt;Arc&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#rusts-package-ecosystem-demonstrating-softwares-nih-problem"&gt;Rust’s package ecosystem demonstrating software’s NIH problem&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#the-package-naming-paradox"&gt;The package naming paradox&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Survey of Rust linked list libraries&lt;/h3&gt;
&lt;p&gt;I have updated my &lt;a href="https://www.chiark.greenend.org.uk/~ianmdlvl/rc-dlist-deque/#other-doubly-linked-list-libraries"&gt;Survey of Rust linked list libraries&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;Background&lt;/h4&gt;
&lt;p&gt;In 2019 I was writing &lt;a href="https://www.chiark.greenend.org.uk/~ianmdlvl/plag-mangler/"&gt;plag-mangler&lt;/a&gt;, a tool for &lt;a href="https://diziet.dreamwidth.org/2706.html"&gt;planar graph layout&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I needed a data structure. Naturally I looked for a library to help. I didn’t find what I needed, so I wrote &lt;a href="https://lib.rs/crates/rc-dlist-deque"&gt;rc-dlist-deque&lt;/a&gt;. However, on the way I noticed an inordinate number of linked list libraries written in Rust. Almost all of these had no real reason for existing. Even the one in the Rust standard library is useless.&lt;/p&gt;
&lt;h4&gt;Results&lt;/h4&gt;
&lt;p&gt;Now I have redone the survey. The results are depressing. In 2019 there were 5 libraries which, in my opinion, were largely useless. In late 2022 there are now &lt;strong&gt;thirteen&lt;/strong&gt; linked list libraries that ought probably not ever to be used. And, a further eight libraries for which there are strictly superior alternatives. Many of these have the signs of projects whose authors are otherwise competent: proper documentation, extensive APIs, and so on.&lt;/p&gt;
&lt;p&gt;There is &lt;strong&gt;one&lt;/strong&gt; new library which is better for some applications than those available in 2019. (I’m referring to &lt;code&gt;generational_token_list&lt;/code&gt;, which makes a plausible alternative to &lt;code&gt;dlv-list&lt;/code&gt; which I already recommended in 2019.)&lt;/p&gt;
&lt;h3&gt;Why are there so many poor Rust linked list libraries ?&lt;/h3&gt;
&lt;p&gt;Linked lists and Rust do not go well together. But (and I’m guessing here) I presume many people are taught in programming school that a linked list is a fundamental data structure; people are often even asked to write one as a teaching exercise. This is a bad idea in Rust. Or maybe they’ve heard that writing linked lists in Rust is hard and want to prove they can do it.&lt;/p&gt;
&lt;h4&gt;Double-ended queues&lt;/h4&gt;
&lt;p&gt;One of the main applications for a linked list in a language like C is a queue, where you put items in at one end, and take them out at the other. The Rust standard library has a data structure for that, &lt;a href="https://doc.rust-lang.org/std/collections/struct.VecDeque.html"&gt;&lt;code&gt;VecDeque&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Five of the available libraries:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Have an API which is a subset of that of &lt;code&gt;VecDeque&lt;/code&gt;: basically, pushing and popping elements at the front and back.&lt;/li&gt;
&lt;li&gt;Have worse performance than &lt;code&gt;VecDeque&lt;/code&gt; for most applications.&lt;/li&gt;
&lt;li&gt;Are less mature, less available, less well tested, etc., than &lt;code&gt;VecDeque&lt;/code&gt;, simply because &lt;code&gt;VecDeque&lt;/code&gt; is in the Rust Standard Library.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For these you could, and should, just use &lt;code&gt;VecDeque&lt;/code&gt; instead.&lt;/p&gt;
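&lt;p&gt;A minimal sketch of that queue pattern, using only the standard library:&lt;/p&gt;

```rust
use std::collections::VecDeque;

fn main() {
    // Push at one end, take out at the other: the queue pattern
    // those libraries reimplement, straight from the standard library.
    let mut queue: VecDeque<i32> = VecDeque::new();
    queue.push_back(1);
    queue.push_back(2);

    assert_eq!(queue.pop_front(), Some(1));
    assert_eq!(queue.pop_front(), Some(2));
    assert_eq!(queue.pop_front(), None);
}
```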
&lt;h4&gt;The &lt;code&gt;Cursor&lt;/code&gt; concept&lt;/h4&gt;
&lt;p&gt;A proper linked list lets you identify and hold onto an element in the middle of the list, and cheaply insert and remove elements there.&lt;/p&gt;
&lt;p&gt;Rust’s ownership and borrowing rules make this awkward. One idea that people have many times reinvented and reimplemented, is to have a &lt;code&gt;Cursor&lt;/code&gt; type, derived from the list, which is a reference to an element, and permits insertion and removal there.&lt;/p&gt;
&lt;p&gt;Eight libraries have implemented this in the obvious way. However, there is a serious API limitation:&lt;/p&gt;
&lt;p&gt;To prevent a cursor being invalidated (e.g. by deletion of the entry it points to) you can’t modify the list while the cursor exists. You can only have one cursor (that can be used for modification) at a time.&lt;/p&gt;
&lt;p&gt;The practical effect of this is that you cannot retain cursors. You can make and use such a cursor for a particular operation, but you must dispose of it soon. Attempts to do otherwise will see you losing a battle with the borrow checker.&lt;/p&gt;
&lt;p&gt;If that’s good enough, then you could just use a &lt;code&gt;VecDeque&lt;/code&gt; and use array indices instead of the cursors. It’s true that deleting or adding elements in the middle involves a lot of copying, but your algorithm is O(n) even with the single-cursor list libraries, because it must first walk the cursor to the desired element.&lt;/p&gt;
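&lt;p&gt;A sketch of the indices-instead-of-cursors idea; a plain &lt;code&gt;usize&lt;/code&gt; index plays the role of the cursor, with no extra dependency:&lt;/p&gt;

```rust
use std::collections::VecDeque;

fn main() {
    let mut dq: VecDeque<u32> = VecDeque::from([10, 20, 40]);

    // The "cursor" is just an index we remember; nothing borrows
    // the deque, so we can hold as many of these as we like.
    let pos = 2; // points at the element 40

    // Insert before that element. This copies elements, but a
    // cursor-based list is O(n) too once you count walking the
    // cursor to the right place.
    dq.insert(pos, 30);
    assert_eq!(dq, VecDeque::from([10, 20, 30, 40]));

    // Remove an element in the middle, again by index.
    assert_eq!(dq.remove(1), Some(20));
    assert_eq!(dq, VecDeque::from([10, 30, 40]));
}
```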
&lt;p&gt;Formally, I believe any algorithm using these exclusive cursors can be rewritten, in an obvious way, to simply iterate and/or copy from the start or end (as one can do with &lt;code&gt;VecDeque&lt;/code&gt;) without changing the headline O() performance characteristics.&lt;/p&gt;
&lt;p&gt;IMO the savings available from avoiding extra copies etc. are not worth the additional dependency, unsafe code, and so on, especially as there are other ways of helping with that (e.g. boxing the individual elements).&lt;/p&gt;
&lt;p&gt;Even if you don’t find that convincing, &lt;code&gt;generational_token_list&lt;/code&gt; and &lt;code&gt;dlv-list&lt;/code&gt; are strictly superior since they offer a more flexible and convenient API and better performance, and rely on much less unsafe code.&lt;/p&gt;
&lt;h3&gt;Rustic approaches to pointers-to-and-between-nodes data structures&lt;/h3&gt;
&lt;p&gt;Most of the time a &lt;code&gt;VecDeque&lt;/code&gt; is great. But if you actually want to hold onto (perhaps many) references to the middle of the list, and later modify it through those references, you &lt;em&gt;do&lt;/em&gt; need something more. This is a specific case of a general class of problems where the naive approach (use Rust references to the data structure nodes) doesn’t work well.&lt;/p&gt;
&lt;p&gt;But there is a good solution:&lt;/p&gt;
&lt;p&gt;Keep all the nodes in an array (a &lt;code&gt;Vec&amp;lt;Option&amp;lt;T&amp;gt;&amp;gt;&lt;/code&gt; or similar) and use the index in the array as your node reference. This is fast, and quite ergonomic, and neatly solves most of the problems. If you are concerned that bare indices might cause confusion, as newly inserted elements would reuse indices, add a per-index generation count.&lt;/p&gt;
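&lt;p&gt;A minimal sketch of that scheme; the &lt;code&gt;Arena&lt;/code&gt; and &lt;code&gt;Key&lt;/code&gt; names here are illustrative, not the API of any particular crate:&lt;/p&gt;

```rust
// A node reference is (index, generation); the generation count
// detects stale keys after a slot has been reused.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Key { index: usize, generation: u64 }

struct Arena<T> {
    slots: Vec<(u64, Option<T>)>, // (generation, occupant)
}

impl<T> Arena<T> {
    fn new() -> Self { Arena { slots: Vec::new() } }

    fn insert(&mut self, value: T) -> Key {
        match self.slots.iter().position(|slot| slot.1.is_none()) {
            Some(index) => {
                // Reuse a free slot, bumping its generation so keys
                // to the old occupant stop working.
                let slot = &mut self.slots[index];
                slot.0 += 1;
                slot.1 = Some(value);
                Key { index, generation: slot.0 }
            }
            None => {
                self.slots.push((0, Some(value)));
                Key { index: self.slots.len() - 1, generation: 0 }
            }
        }
    }

    fn get(&self, key: Key) -> Option<&T> {
        let slot = self.slots.get(key.index)?;
        if slot.0 == key.generation { slot.1.as_ref() } else { None }
    }

    fn remove(&mut self, key: Key) -> Option<T> {
        let slot = self.slots.get_mut(key.index)?;
        if slot.0 == key.generation { slot.1.take() } else { None }
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.insert("hello");
    assert_eq!(arena.remove(a), Some("hello"));
    let b = arena.insert("world"); // reuses slot 0 with a new generation
    assert_eq!(arena.get(a), None); // stale key detected, not aliased
    assert_eq!(arena.get(b), Some(&"world"));
}
```

&lt;p&gt;The libraries mentioned below package up exactly this generation-checking pattern, with more care and a richer API.&lt;/p&gt;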
&lt;p&gt;These approaches have been neatly packaged up in libraries like &lt;a href="https://lib.rs/crates/slab"&gt;&lt;code&gt;slab&lt;/code&gt;&lt;/a&gt;, &lt;a href="https://lib.rs/crates/slotmap"&gt;&lt;code&gt;slotmap&lt;/code&gt;&lt;/a&gt;, &lt;a href="https://lib.rs/crates/generational-arena"&gt;&lt;code&gt;generational-arena&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://lib.rs/crates/thunderdome"&gt;&lt;code&gt;thunderdome&lt;/code&gt;&lt;/a&gt;. And they have been nicely applied to linked lists by the authors of &lt;a href="https://lib.rs/crates/generational_token_list"&gt;&lt;code&gt;generational_token_list&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://lib.rs/crates/dlv-list"&gt;&lt;code&gt;dlv-list&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;The alternative for nodey data structures in safe Rust: &lt;code&gt;Rc&lt;/code&gt;/&lt;code&gt;Arc&lt;/code&gt;&lt;/h4&gt;
&lt;p&gt;Of course, you can just use Rust’s “interior mutability” and reference counting smart pointers, to directly implement the data structure of your choice.&lt;/p&gt;
&lt;p&gt;In many applications, a single-threaded data structure is fine, in which case &lt;a href="https://doc.rust-lang.org/std/rc/index.html"&gt;&lt;code&gt;Rc&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://doc.rust-lang.org/std/cell/index.html"&gt;&lt;code&gt;Cell&lt;/code&gt;/&lt;code&gt;RefCell&lt;/code&gt;&lt;/a&gt; will let you write safe code, with cheap refcount updates and runtime checks inserted to defend against unexpected aliasing, use-after-free, etc.&lt;/p&gt;
&lt;p&gt;I took this approach in &lt;code&gt;rc-dlist-deque&lt;/code&gt;, because I wanted each node to be able to be on multiple lists.&lt;/p&gt;
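&lt;p&gt;A two-node sketch of the &lt;code&gt;Rc&lt;/code&gt;/&lt;code&gt;RefCell&lt;/code&gt; approach (not the actual &lt;code&gt;rc-dlist-deque&lt;/code&gt; implementation); &lt;code&gt;Weak&lt;/code&gt; back-pointers break the reference cycle so the nodes are actually freed:&lt;/p&gt;

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Rc gives shared ownership, RefCell gives runtime-checked interior
// mutability, and Weak prevents the prev/next cycle leaking memory.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>,
}

fn main() {
    let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
    let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

    // Link the nodes; RefCell checks at runtime that we never hold
    // two mutable borrows of the same node at once.
    first.borrow_mut().next = Some(Rc::clone(&second));
    second.borrow_mut().prev = Some(Rc::downgrade(&first));

    // Walk forward, then back via the weak pointer.
    let forward = first.borrow().next.as_ref().unwrap().borrow().value;
    assert_eq!(forward, 2);
    let back = second.borrow().prev.as_ref().unwrap().upgrade().unwrap();
    assert_eq!(back.borrow().value, 1);
}
```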
&lt;h3&gt;Rust’s package ecosystem demonstrating software’s NIH problem&lt;/h3&gt;
&lt;p&gt;The Rust ecosystem is full of &lt;a href="https://en.wikipedia.org/wiki/Not_invented_here"&gt;NIH&lt;/a&gt; libraries of all kinds. In my survey, there are: five good options; seven libraries which are plausible, but just not as good as the alternatives; and fourteen others.&lt;/p&gt;
&lt;p&gt;There is a whole rant I could have about how the whole software and computing community is pathologically neophilic. Often we seem to actively resist reusing ideas, let alone code; and are ignorant and dismissive of what has gone before. As a result, we keep solving the same problems, badly - making the same mistakes over and over again. In some subfields, working software, or nearly working software, is frequently replaced with something worse, maybe more than once.&lt;/p&gt;
&lt;p&gt;One aspect of this is a massive cultural bias towards rewriting rather than reusing, let alone fixing and using.&lt;/p&gt;
&lt;p&gt;Many people come out of a degree, trained to be a programmer, with &lt;em&gt;no&lt;/em&gt; formal training in selecting and evaluating software, even though working effectively with computers requires making good use of everyone else’s work.&lt;/p&gt;
&lt;p&gt;If one isn’t taught these skills (when and how to search for prior art, how to choose between dependencies, and so on) one must learn them on the job. The result is usually an ad-hoc and unsystematic approach, often dominated by fashion rather than engineering.&lt;/p&gt;
&lt;h4&gt;The package naming paradox&lt;/h4&gt;
&lt;p&gt;The more experienced and competent programmer is aware of all the other options that exist - after all they have evaluated other choices before writing their own library.&lt;/p&gt;
&lt;p&gt;So they will call their library something like &lt;code&gt;generational_token_list&lt;/code&gt; or &lt;code&gt;vecdeque-stableix&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Whereas the novice straight out of a pre-Rust programming course just thinks what they are doing is the one and only obvious thing (even though it’s a poor idea) and hasn’t even searched for a previous implementation. So they call their package something obvious like “linked list”.&lt;/p&gt;
&lt;p&gt;As a result, the most obvious names seem to refer to the least useful libraries.&lt;/p&gt;
&lt;hr&gt;
&lt;address&gt;
Edited 2022-11-16 23:55 UTC to update numbers of libraries in various categories following updates to the survey (including updates prompted by feedback received after this post was first published).
&lt;/address&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=13476" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:13087</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/13087.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=13087"/>
    <title>Skipping releases when upgrading Debian systems</title>
    <published>2022-10-10T15:48:27Z</published>
    <updated>2022-10-10T15:48:27Z</updated>
    <category term="computers"/>
    <category term="debian"/>
    <category term="chiark"/>
    <dw:security>public</dw:security>
    <dw:reply-count>1</dw:reply-count>
    <content type="html">&lt;p&gt;Debian does not officially support upgrading from earlier than the previous stable release: you’re not supposed to “skip” releases. Instead, you’re supposed to upgrade to each intervening major release in turn.&lt;/p&gt;
&lt;p&gt;However, skipping intervening releases does, in fact, often work quite well. Apparently, this is surprising to many people, even Debian insiders. I was encouraged to write about it some more.&lt;/p&gt;
&lt;h3&gt;My personal experience&lt;/h3&gt;
&lt;p&gt;I have three conventionally-managed personal server systems (by which I mean systems which aren’t reprovisioned by some kind of automation). Of these at least two have been skip upgraded at least once:&lt;/p&gt;
&lt;p&gt;The one I don’t think I’ve skip-upgraded (at least, not recently) is my house network manager (and now VM host) which I try to keep to a minimum in terms of functionality and which I keep quite up to date. It &lt;em&gt;was&lt;/em&gt; crossgraded from i386 (32-bit) to amd64 (64-bit) fairly recently, which is a thing that Debian isn’t sure it supports. The crossgrade was done in a hurry and without any planning, prompted by Spectre et al suddenly requiring big changes to Xen. But it went well enough.&lt;/p&gt;
&lt;p&gt;My home “does random stuff” server (media server, web cache, printing, DNS, backups etc.), has &lt;a href="https://manpages.debian.org/bullseye/etckeeper/etckeeper.8.en.html"&gt;&lt;code&gt;etckeeper&lt;/code&gt;&lt;/a&gt; records starting in 2015. I upgraded directly from jessie (Debian 8) to buster (Debian 10). I think it has probably had earlier skip upgrade(s): the oldest file in &lt;code&gt;/etc&lt;/code&gt; is from December 1996 and I have been doing occasional skip upgrades as long as I can remember.&lt;/p&gt;
&lt;p&gt;And of course there’s chiark, which is one of the oldest Debian installs in existence. I wrote about the &lt;a href="https://diziet.dreamwidth.org/11840.html"&gt;most recent upgrade&lt;/a&gt;, where I went directly from jessie i386 ELTS (32-bit Debian 8) to bullseye amd64 (64-bit Debian 11). That was a very extreme case which required significant planning and pre-testing, since the package dependencies were in no way sufficient for the proper ordering. But, I don’t normally go to such lengths. Normally, even on chiark, I just edit the &lt;code&gt;sources.list&lt;/code&gt; and see what apt proposes to do.&lt;/p&gt;
&lt;p&gt;I often skip upgrade chiark because I tend to defer risky-looking upgrades partly in the hope of others fixing the bugs while I wait :-), and partly just because change is disruptive and amortising it is very helpful both to me and my users. I have some records of chiark’s upgrades from my announcements to users. As well as the recent “skip skip up cross grade, direct”, I definitely did a skip upgrade of chiark from squeeze (Debian 6) to jessie (Debian 8). It appears that the previous skip upgrade on chiark was rex (Debian 1.2) to hamm (Debian 2.0).&lt;/p&gt;
&lt;p&gt;I don’t think it’s usual for me to choose to do a multi-release upgrade the “officially supported” way, in two (or more) stages, on a server. I &lt;em&gt;have&lt;/em&gt; done that on systems with a GUI desktop setup, but even then I usually skip the intermediate reboot(s).&lt;/p&gt;
&lt;h3&gt;When to skip upgrade (and what precautions to take)&lt;/h3&gt;
&lt;p&gt;I’m certainly not saying that everyone ought to be doing this routinely. Most users with a Debian install that is older than oldstable probably ought to reinstall it, or do the two-stage upgrade.&lt;/p&gt;
&lt;p&gt;Skip upgrading almost always runs into &lt;em&gt;some&lt;/em&gt; kind of trouble (albeit, usually trouble that isn’t particularly hard to fix if you know what you’re doing).&lt;/p&gt;
&lt;p&gt;However, officially supported &lt;em&gt;non&lt;/em&gt;-skip upgrades go wrong too. Doing a two-or-more-releases upgrade via the intermediate releases can expose you to significant bugs in the intermediate releases, which were later fixed. Because Debian’s users and downstreams are cautious, and Debian itself can be slow, it is common for bugs to appear for one release and then be fixed only in the next. Paradoxically, this seems to be especially true with the kind of big and scary changes where you’d naively think the upgrade mechanisms would break if you skipped the release where the change first came in.&lt;/p&gt;
&lt;p&gt;I would not recommend a skip upgrade to someone who is not a competent Debian administrator, with good familiarity with Debian package management, including use of &lt;code&gt;dpkg&lt;/code&gt; directly to fix things up. You should have a mental toolkit of manual bug workaround techniques. I always make sure that I have rescue media (and, in the case of a remote system, full remote access including the ability to boot a different image), although I don’t often need it.&lt;/p&gt;
&lt;p&gt;And, when considering a skip upgrade, you should be aware of the major changes that have occurred in Debian.&lt;/p&gt;
&lt;p&gt;Skip upgrading is more likely to be a good idea with a complex and highly customised system: a fairly vanilla install is not likely to encounter problems during a two-stage update. (And, a vanilla system can be “upgraded” by reinstalling.)&lt;/p&gt;
&lt;p&gt;I haven’t recently skip upgraded a laptop or workstation. I doubt I would attempt it; modern desktop software seems to take a much harder line about breaking things that are officially unsupported, and generally trying to force everyone into the preferred mold.&lt;/p&gt;
&lt;h3&gt;A request to Debian maintainers&lt;/h3&gt;
&lt;p&gt;I would like to encourage Debian maintainers to defer removing upgrade compatibility machinery until it is actually getting in the way, or has become hazardous, or many years obsolete.&lt;/p&gt;
&lt;p&gt;Examples of the kinds of things which it would be nice to keep, and which do not usually cause much inconvenience to retain, are dependency declarations (particularly, alternatives), and (many) transitional fragments in maintainer scripts.&lt;/p&gt;
&lt;p&gt;If you find yourself needing to either delete some compatibility feature, or refactor/reorganise it, I think it is probably best to delete it. If you modify it significantly, the resulting thing (which won’t be tested until someone uses it in anger) is quite likely to have bugs which make it go wrong more badly (or, more confusingly) than the breakage that would happen without it.&lt;/p&gt;
&lt;p&gt;Obviously this is all a judgement call.&lt;/p&gt;
&lt;p&gt;I’m not saying Debian should formally “support” skip upgrades, to the extent of (further) slowing down important improvements. Nor am I asking for any change to the routine approach to (for example) transitional packages (i.e. the technique for ensuring continuity of installation when a package name changes).&lt;/p&gt;
&lt;p&gt;We try to make release upgrades work perfectly; but skip upgrades don’t have to work perfectly to be valuable. Retaining compatibility code can also make it easier to provide official backports, and it probably helps downstreams with different release schedules.&lt;/p&gt;
&lt;p&gt;The fact that maintainers do in practice often defer removing compatibility code provides useful flexibility and options to at least some people. So it would be nice if you’d at least not go out of your way to break it.&lt;/p&gt;
&lt;h4&gt;Building on older releases&lt;/h4&gt;
&lt;p&gt;I would also like to encourage maintainers to provide source packages in Debian unstable that will still &lt;em&gt;build&lt;/em&gt; on older releases, where this isn’t too hard and the resulting binaries might be basically functional.&lt;/p&gt;
&lt;p&gt;Speaking personally, it’s not uncommon for me to rebuild packages from unstable and install them on much older releases. This is another thing that is not officially supported, but which often works well.&lt;/p&gt;
&lt;p&gt;I’m not saying to contort your build system, or delay progress. You’ll definitely want to depend on a recentish debhelper. But, for example, retaining old build-dependency alternatives is nice. Retaining them doesn’t constitute a promise that it works - it just makes life slightly easier for someone who is going off piste.&lt;/p&gt;
&lt;p&gt;If you know you have users on multiple distros or multiple releases, and wish to fully support them, you can go further, of course. Many of my own packages are directly buildable, or even directly installable, on older releases.&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=13087" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:12934</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/12934.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=12934"/>
    <title>Hippotat (IP over HTTP) - first advertised release</title>
    <published>2022-09-28T20:12:48Z</published>
    <updated>2022-09-28T20:12:48Z</updated>
    <category term="computers"/>
    <category term="rust"/>
    <category term="hippotat"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">&lt;p&gt;I have released version 1.0.0 of &lt;a href="https://www.chiark.greenend.org.uk/~ianmdlvl/hippotat/current/docs/"&gt;Hippotat&lt;/a&gt;, my IP-over-HTTP system. To quote the &lt;a href="https://www.chiark.greenend.org.uk/~ianmdlvl/hippotat/current/docs/README.html"&gt;README&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You’re in a cafe or a hotel, trying to use the provided wifi. But it’s not working. You discover that port 80 and port 443 are open, but the wifi forbids all other traffic.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Never mind, start up your hippotat client. Now you have connectivity. Your VPN and SSH and so on run over Hippotat. The result is not very efficient, but it does work.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Story&lt;/h3&gt;
&lt;p&gt;In early 2017 I was in a mountaintop cafeteria, hoping to do some work on my laptop. (For Reasons I couldn’t go skiing that day.) I found that the local wifi was badly broken: it had a severe port block. I had to use my port 443 SSH server to get anywhere. My usual arrangements punt everything over my VPN, which uses UDP of course, and I had to bodge several things. Using a web browser directly on the wifi worked normally, of course - otherwise the other guests would have complained. This was not the first experience like this I’d had, but this time I had nothing much else to do but fix it.&lt;/p&gt;
&lt;p&gt;In a few furious hacking sessions, I wrote Hippotat, a tool for making my traffic look enough like “ordinary web browsing” that it gets through most stupid firewalls. That Python version of Hippotat served me well for many years, despite being rather shonky, extremely inefficient in CPU (and therefore battery) terms and not very productised.&lt;/p&gt;
&lt;p&gt;But recently things have started to go wrong. I was using Twisted Python and there was what I think must be some kind of buffer handling bug, which started happening when I upgraded the OS (getting newer versions of Python and the Twisted libraries). The Hippotat code, and the Twisted APIs, were quite convoluted, and I didn’t fancy debugging it.&lt;/p&gt;
&lt;p&gt;So last year I rewrote it in Rust. The new Rust client did very well against my existing servers. To my shame, I didn’t get around to releasing it.&lt;/p&gt;
&lt;p&gt;However, more recently I upgraded the server hosts my Hippotat daemons run on to recent Debian releases. They started to be affected by the bug too, rendering my Rust client unuseable. I decided I had to deploy the Rust server code.&lt;/p&gt;
&lt;p&gt;This involved some packaging work. Having done that, it’s time to release it: &lt;a href="https://www.chiark.greenend.org.uk/pipermail/sgo-software-announce/2022/000077.html"&gt;Hippotat 1.0.0 is out&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;The &lt;a href="https://www.chiark.greenend.org.uk/~ianmdlvl/hippotat/current/docs/install.html#building"&gt;package build instructions&lt;/a&gt; are rather strange&lt;/h3&gt;
&lt;p&gt;My usual approach to releasing something like this would be to provide a git repository containing a proper Debian source package. I might also build binaries, using &lt;code&gt;sbuild&lt;/code&gt;, and I would consider actually uploading to Debian.&lt;/p&gt;
&lt;p&gt;However, despite me taking a fairly conservative approach to adding dependencies to Hippotat, still a couple of the (not very unusual) Rust packages that Hippotat depends on are not in Debian. Last year I considered tackling this head-on, but I got derailed by &lt;a href="https://diziet.dreamwidth.org/10559.html"&gt;difficulties with Rust packaging in Debian&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Furthermore, the version of the Rust compiler itself in Debian stable is incapable of dealing with recent versions of very many upstream Rust packages, because many packages’ most recent versions now require the 2021 Edition of Rust. Sadly, Rust’s package manager, cargo, has no mechanism for trying to choose dependency versions that are actually compatible with the available compiler; &lt;a href="https://github.com/rust-lang/rfcs/pull/2495"&gt;efforts to solve this problem&lt;/a&gt; have &lt;a href="https://github.com/rust-lang/cargo/issues/9930"&gt;still not borne the needed fruit&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The result is that, in practice, currently Hippotat has to be built with (a) a reasonably recent Rust toolchain such as found in Debian unstable or obtained from Rust upstream; (b) dependencies obtained from the upstream Rust repository.&lt;/p&gt;
&lt;p&gt;At least things aren’t &lt;em&gt;completely&lt;/em&gt; terrible: &lt;a href="https://rustup.rs/"&gt;Rustup&lt;/a&gt; itself, despite its alarming install rune, has a pretty good story around integrity, release key management and so on. And with the right build rune, cargo will check not just the versions, but the precise content hashes, of the dependencies to be obtained from crates.io, against the information I provide in the &lt;code&gt;Cargo.lock&lt;/code&gt; file. So at least when you build it you can be sure that the dependencies you’re getting are the same ones I used myself when I built and tested Hippotat. And there’s only 147 of them (counting indirect dependencies too), so what could possibly go wrong?&lt;/p&gt;
&lt;p&gt;Sadly the resulting package build system cannot work with Debian’s best tool for doing clean and controlled builds, &lt;a href="https://manpages.debian.org/bullseye/sbuild/sbuild.1.en.html"&gt;sbuild&lt;/a&gt;. Under the circumstances, I don’t feel I want to publish any binaries.&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=12934" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:12367</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/12367.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=12367"/>
    <title>prefork-interp - automatic startup time amortisation for all manner of scripts</title>
    <published>2022-08-22T23:37:44Z</published>
    <updated>2022-08-23T09:30:11Z</updated>
    <category term="chiark-utils"/>
    <category term="prefork-interp"/>
    <category term="computers"/>
    <category term="chiark"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">&lt;h3&gt;The problem I had - Mason, so, sadly, FastCGI&lt;/h3&gt;
&lt;p&gt;Since the update to current Debian stable, the website for &lt;a href="https://yarrg.chiark.net"&gt;YARRG&lt;/a&gt;, (a play-aid for Puzzle Pirates which I wrote some years ago), started to occasionally return “Internal Server Error”, apparently due to &lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1014617"&gt;bug(s)&lt;/a&gt; in some FastCGI libraries.&lt;/p&gt;
&lt;p&gt;I was using FastCGI because the website is written in &lt;a href="https://metacpan.org/pod/HTML::Mason"&gt;Mason&lt;/a&gt;, a Perl web framework, and I found that Mason CGI calls were slow. I’m using CGI - yes, trad CGI - via userv-cgi. Running Mason this way would “compile” the template for each HTTP request just when it was rendered, and then throw the compiled version away. The more modern approach of an application server doesn’t scale well to a system which has many web “applications” most of which are very small. The admin overhead of maintaining a daemon, and corresponding webserver config, for each such “service” would be prohibitive, even with some kind of autoprovisioning setup. FastCGI has an interpreter wrapper which seemed like it ought to solve this problem, but it’s quite inconvenient, and often flaky.&lt;/p&gt;
&lt;p&gt;I decided I could do better, and set out to eliminate FastCGI from my setup. The result seems to be a success; once I’d done all the hard work of writing &lt;code&gt;prefork-interp&lt;/code&gt;, I found the result very straightforward to deploy.&lt;/p&gt;
&lt;h3&gt;prefork-interp&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;prefork-interp&lt;/code&gt; is a small C program which wraps a script, plus a scripting language library to cooperate with the wrapper program. Together they achieve the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Startup cost of the script (loading modules it uses, precomputations, loading and processing of data files, etc.) is paid once, and reused for subsequent invocations of the same script.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;Minimal intervention to the script source code:
&lt;ul&gt;
&lt;li&gt;one new library to import&lt;/li&gt;
&lt;li&gt;one new call to make from that library, right after the script initialisation is complete&lt;/li&gt;
&lt;li&gt;change to the &lt;code&gt;#!&lt;/code&gt; line.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The new “initialisation complete” call turns the program into a little server (a daemon), and then returns once for each actual invocation, each time in a fresh grandchild process.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Features:&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Seamless reloading on changes to the script source code (automatic, and configurable).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Concurrency limiting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Options for distinguishing different configurations of the same script so that they get a server each.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can run the same script standalone, as a one-off execution, as well as under &lt;code&gt;prefork-interp&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Currently, a script-side library is provided for Perl. I’m pretty sure Python would be fairly straightforward.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Important properties not always satisfied by competing approaches:&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Error output (stderr) and exit status from both phases of the script code execution faithfully reproduced to the calling context. Environment, arguments, and stdin/stdout/stderr descriptors, passed through to each invocation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No polling, other than a long-term idle timeout, so good on laptops (or phones).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automatic lifetime management of the per-script server, including startup and cleanup. No integration needed with system startup machinery: No explicit management of daemons, init scripts, systemd units, cron jobs, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Useable right away without fuss for CGI programs but also for other kinds of program invocation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;(I believe) reliable handling of unusual states arising from failed invocations or races.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Swans paddling furiously&lt;/h3&gt;
&lt;p&gt;The implementation is much more complicated than the (apparent) interface.&lt;/p&gt;
&lt;p&gt;I won’t go into all the details here (there are some &lt;a href="https://www.chiark.greenend.org.uk/ucgi/~ian/git?p=chiark-utils.git;a=blob;f=cprogs/prefork-interp.c;h=56d6040d5d1f929fbef81a97d528d17b83d64d49;hb=HEAD#l38"&gt;terrifying diagrams&lt;/a&gt; in the source code if you really want), but some highlights:&lt;/p&gt;
&lt;p&gt;We use an &lt;code&gt;AF_UNIX&lt;/code&gt; socket (hopefully in &lt;code&gt;/run/user/UID&lt;/code&gt;, but in &lt;code&gt;~&lt;/code&gt; if not) for rendezvous. We can try to connect without locking, but we must protect the socket with a separate lockfile to avoid two concurrent restart attempts.&lt;/p&gt;
&lt;p&gt;We want stderr from the script setup (pre-initialisation) to be delivered to the caller, so the script ought to inherit our stderr and then will need to replace it later. Twice, in fact, because the daemonic server process can’t have a stderr.&lt;/p&gt;
&lt;p&gt;When a script is restarted for any reason, any old socket will be removed. We want the old server process to detect that and quit. (If it hung about, it would wait for the idle timeout; if this happened a lot - e.g., with a constantly changing set of services - we might end up running out of pids or something.) Spotting the socket disappearing, without polling, involves use of a library capable of using &lt;code&gt;inotify&lt;/code&gt; (or the equivalent elsewhere). Choosing a C library to do this is not so hard, but portable interfaces to this functionality can be hard to find in scripting languages, and also we don’t want every language binding to have to reimplement these checks. So for this purpose there’s a little watcher process, and associated IPC.&lt;/p&gt;
&lt;p&gt;When an invoking instance of &lt;code&gt;prefork-interp&lt;/code&gt; is killed, we must arrange for the executing service instance to stop reading from its stdin (and, ideally, writing its stdout). Otherwise it’s stealing input from &lt;code&gt;prefork-interp&lt;/code&gt;’s successors (maybe the user’s shell)!&lt;/p&gt;
&lt;p&gt;Cleanup ought not to depend on positive actions by failing processes, so each element of the system has to detect failures of its peers by means such as EOF on sockets/pipes.&lt;/p&gt;
&lt;h3&gt;Obtaining prefork-interp&lt;/h3&gt;
&lt;p&gt;I put this new tool in my chiark-utils package, which is a collection of useful miscellany. It’s available &lt;a href="https://www.chiark.greenend.org.uk/ucgi/~ian/git/chiark-utils.git/"&gt;from git&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Currently I make releases by &lt;a href="https://packages.debian.org/search?suite=default&amp;amp;section=all&amp;amp;arch=any&amp;amp;searchon=sourcenames&amp;amp;keywords=chiark-utils"&gt;uploading to Debian&lt;/a&gt;, where prefork-interp has just hit Debian unstable, in chiark-utils 7.0.0.&lt;/p&gt;
&lt;h3&gt;Support for other scripting languages&lt;/h3&gt;
&lt;p&gt;I would love Python to be supported. If any pythonistas reading this think you might like to help out, please get in touch. The specification for the protocol, and what the script library needs to do, is &lt;a href="https://www.chiark.greenend.org.uk/ucgi/~ian/git?p=chiark-utils.git;a=blob;f=cprogs/prefork-interp.c;h=56d6040d5d1f929fbef81a97d528d17b83d64d49;hb=HEAD#l266"&gt;documented in the source code&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Future plans for chiark-utils&lt;/h3&gt;
&lt;p&gt;chiark-utils as a whole is in need of some tidying up of its build system and packaging.&lt;/p&gt;
&lt;p&gt;I intend to try to do some reorganisation. Currently I think it would be better to organise the source tree more strictly, with a directory for each included facility, rather than grouping “compiled” and “scripts” together.&lt;/p&gt;
&lt;p&gt;The Debian binary packages should be reorganised more fully according to their dependencies, so that installing a program will ensure that it works.&lt;/p&gt;
&lt;p&gt;I should probably move the official git repo from my own git+gitweb to a forge (so we can have MRs and issues and so on).&lt;/p&gt;
&lt;p&gt;And there should be a lot more testing, including Debian autopkgtests.&lt;/p&gt;
&lt;address&gt;edited 2022-08-23 10:30 +01:00 to improve the formatting&lt;/address&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=12367" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:12191</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/12191.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=12191"/>
    <title>dkim-rotate - rotation and revocation of DKIM signing keys</title>
    <published>2022-08-08T00:20:56Z</published>
    <updated>2022-08-08T00:20:56Z</updated>
    <category term="chiark"/>
    <category term="computers"/>
    <category term="dkim-rotate"/>
    <dw:security>public</dw:security>
    <dw:reply-count>3</dw:reply-count>
    <content type="html">&lt;h1&gt;Background&lt;/h1&gt;
&lt;p&gt;Internet email is becoming more reliant on DKIM, a scheme for having mail servers cryptographically sign emails. The Big Email providers have started silently spambinning messages that lack either DKIM signatures, or SPF. DKIM is arguably less broken than SPF, so I wanted to deploy it.&lt;/p&gt;
&lt;p&gt;But it has a problem: if done in a naive way, it makes all your emails non-repudiable, forever. This is not really a desirable property - at least, not desirable for you, although it can be nice for someone who (for example) gets hold of leaked messages obtained by hacking mailboxes.&lt;/p&gt;
&lt;p&gt;This problem was described at some length in Matthew Green’s article &lt;a href="https://blog.cryptographyengineering.com/2020/11/16/ok-google-please-publish-your-dkim-secret-keys/"&gt;&lt;em&gt;Ok Google: please publish your DKIM secret keys&lt;/em&gt;&lt;/a&gt;. Following links from that article does get you to a short script to achieve key rotation but it had a number of problems, and wasn’t useable in my context.&lt;/p&gt;
&lt;h1&gt;dkim-rotate&lt;/h1&gt;
&lt;p&gt;So I have written my own software for rotating and revoking DKIM keys: &lt;a href="https://www.chiark.greenend.org.uk/~ianmdlvl/dkim-rotate/"&gt;dkim-rotate&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I think it is a good solution to this problem, and it ought to be deployable in many contexts (and readily adaptable to those it doesn’t already support).&lt;/p&gt;
&lt;p&gt;Here’s the feature list taken from the README:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Leaked emails become unattestable (plausibly deniable) within a few days — soon after the configured maximum email propagation time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mail domain DNS configuration can be static, and separated from operational DKIM key rotation. Domain owner delegates DKIM configuration to mailserver administrator, so that dkim-rotate does not need to edit your mail domain’s zone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When a single mail server handles multiple mail domains, only a single dkim-rotate instance is needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports situations where multiple mail servers may originate mails for a single mail domain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DNS zonefile remains small; old keys are published via a webserver, rather than DNS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports runtime (post-deployment) changes to tuning parameters and configuration settings. Careful handling of errors and out-of-course situations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Convenient outputs: a standard DNS zonefile; easily parseable settings for the MTA; and, a directory of old keys directly publishable by a webserver.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Complications&lt;/h1&gt;
&lt;p&gt;It seems like it should be a simple problem. Keep N keys, and every day (or whatever), generate and start using a new key, and deliberately leak the oldest private key.&lt;/p&gt;
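&lt;p&gt;That simple version of the scheme can be written down in a few lines. The names here are illustrative, not dkim-rotate’s:&lt;/p&gt;

```python
# The naive rotation from the paragraph above: keep N keys, retire and
# deliberately leak the oldest, and generate and sign with a fresh one.
from collections import deque

N = 4  # illustrative; in reality this is a tuning parameter

def rotate(keys, generate_key, publish_private_key):
    """keys: deque of key ids, oldest first.  Returns the id to sign with."""
    if len(keys) >= N:
        oldest = keys.popleft()
        publish_private_key(oldest)  # the deliberate leak
    keys.append(generate_key())     # start using a fresh key
    return keys[-1]
```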
&lt;p&gt;But, things are more complicated than that. Considerably more complicated, as it turns out.&lt;/p&gt;
&lt;p&gt;I didn’t want the DKIM key rotation software to have to edit the actual DNS zones for each relevant mail domain. That would tightly entangle the mail server administration with the DNS administration, and there are many contexts (including many of mine) where these roles are separated.&lt;/p&gt;
&lt;p&gt;The solution is to use DNS aliases (&lt;code&gt;CNAME&lt;/code&gt;). But, now we need a fixed, relatively small, set of &lt;code&gt;CNAME&lt;/code&gt; records for each mail domain. That means a fixed, relatively small set of key identifiers (“selectors” in DKIM terminology), which must be used in rotation.&lt;/p&gt;
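&lt;p&gt;Concretely, the domain owner installs a small, static set of records like the following, once; thereafter only the target zone (run by the mailserver administrator) ever changes. The selector names and domains here are invented for illustration:&lt;/p&gt;

```python
# Generate the fixed set of selector CNAMEs for a mail domain, pointing
# into a zone the mailserver admin controls.  All names are illustrative.
SELECTORS = [f"d{i}" for i in range(1, 5)]  # fixed, small set, used in rotation

def static_cnames(mail_domain, dkim_zone):
    return [
        f"{sel}._domainkey.{mail_domain}. CNAME {sel}.{dkim_zone}."
        for sel in SELECTORS
    ]
```

&lt;p&gt;For example, &lt;code&gt;static_cnames("example.org", "dkim.mail.example.net")&lt;/code&gt; would produce four &lt;code&gt;CNAME&lt;/code&gt; lines, one per selector, under &lt;code&gt;_domainkey.example.org&lt;/code&gt;.&lt;/p&gt;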
&lt;p&gt;We don’t want the private keys to be published &lt;em&gt;via the DNS&lt;/em&gt; because that makes an ever-growing DNS zone, which isn’t great for performance; and, because we want to place barriers in the way of processes which might &lt;em&gt;enumerate&lt;/em&gt; the set of keys we use (and the set of keys we have leaked) and keep records of what status each key had when. So we need a separate publication channel - for which a webserver was the obvious answer.&lt;/p&gt;
&lt;p&gt;We want the private keys to be readily noticeable and findable by someone who is verifying an alleged leaked email dump, but to be hard to enumerate. (One part of the strategy for this is to leave a note about it, with the prospective private key url, in the email headers.)&lt;/p&gt;
&lt;p&gt;The key rotation operations are more complicated than first appears, too. The short summary, above, neglects to consider the fact that DNS updates have a nonzero propagation time: if you change the DNS, not everyone on the Internet will experience the change immediately. So as well as a timeout for how long it might take an email to be delivered (ie, how long the DKIM signature remains valid), there is also a timeout for how long to wait after updating the DNS, before relying on everyone having got the memo. (This same timeout applies both before starting to sign emails with a new key, and before deliberately compromising a key which has been withdrawn and deadvertised.)&lt;/p&gt;
&lt;p&gt;Updating the DNS, and the MTA configuration, are fallible operations. So we need to cope with out-of-course situations, where a previous DNS or MTA update failed. In that case, we need to retry the failed update, and not proceed with key rotation. We mustn’t start the timer for the key rotation until the update has been implemented.&lt;/p&gt;
&lt;p&gt;The rotation script will usually be run by cron, but it might be run by hand, and when it is run by hand it ought not to “jump the gun” and do anything “too early” (ie, before the relevant timeout has expired). cron jobs don’t always run, and don’t always run at precisely the right time. (And there’s daylight saving time, to consider, too.)&lt;/p&gt;
&lt;p&gt;So overall, it’s not sufficient to drive the system via cron and have it proceed by one unit of rotation on each run.&lt;/p&gt;
&lt;p&gt;And, hardest of all, I wanted to support post-deployment configuration changes, while continuing to keep the whole system operational. Otherwise, you have to bake in all the timing parameters right at the beginning and can’t change anything ever. So for example, I wanted to be able to change the email and DNS propagation delays, and even the number of selectors to use, &lt;em&gt;without&lt;/em&gt; adversely affecting the delivery of already-sent emails, and without having to shut anything down.&lt;/p&gt;
&lt;p&gt;I think I have solved these problems.&lt;/p&gt;
&lt;p&gt;The resulting system is one which keeps track of the timing constraints, and the next event which might occur, on a per-key basis. It calculates on each run, which key(s) can be advanced to the next stage of their lifecycle, and performs the appropriate operations. The regular key update schedule is then an emergent property of the config parameters and cron job schedule. (I provide some example config.)&lt;/p&gt;
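&lt;p&gt;The per-key, timeout-driven approach can be sketched like this. The states, delays, and field names are invented for illustration; dkim-rotate’s real lifecycle is more involved:&lt;/p&gt;

```python
# Minimal sketch of a per-key lifecycle: a key may only advance to its next
# stage once the relevant delay since its last state change has expired, so
# the rotation schedule is an emergent property of the delays and of however
# often cron actually runs us.  All names and numbers are illustrative.
import time

STATES = ["advertised", "signing", "withdrawn", "revealed"]
DELAYS = {                    # seconds to wait before leaving each state
    "advertised": 3600,       # DNS propagation, before we may sign with it
    "signing": 86400,         # how long we sign with one key
    "withdrawn": 7 * 86400,   # max email propagation, before leaking it
}

def advance(key, now=None):
    """key: dict with 'state' and 'since'; advance at most one stage."""
    now = time.time() if now is None else now
    state = key["state"]
    if state == "revealed":
        return False
    if now - key["since"] < DELAYS[state]:
        return False  # never jump the gun, even when run by hand
    key["state"] = STATES[STATES.index(state) + 1]
    key["since"] = now
    return True
```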
&lt;h1&gt;Exim&lt;/h1&gt;
&lt;p&gt;Integrating dkim-rotate itself with Exim was fairly easy. The &lt;code&gt;lsearch&lt;/code&gt; lookup function can be used to fish information out of a suitable data file maintained by &lt;code&gt;dkim-rotate&lt;/code&gt;.&lt;/p&gt;
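&lt;p&gt;Exim’s &lt;code&gt;lsearch&lt;/code&gt; does a linear search of a file of &lt;code&gt;key: data&lt;/code&gt; lines. A rough emulation of the lookup, against invented file contents of the kind such a data file might contain, looks like this:&lt;/p&gt;

```python
# Emulate an lsearch-style lookup over "key: data" lines.  The file format
# shown in the test is invented for illustration, not dkim-rotate's actual
# output format.
def lsearch(lines, key):
    for line in lines:
        line = line.rstrip("\n")
        if not line or line.startswith("#"):
            continue
        k, _, data = line.partition(":")
        if k.strip() == key:
            return data.strip()
    return None
```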
&lt;p&gt;But a final awkwardness was getting Exim to make the right DKIM signatures, at the right time.&lt;/p&gt;
&lt;p&gt;When making a DKIM signature, one must choose a signing authority domain name: who should we claim to be? (This is the “SDID” in DKIM terms.) A mailserver that handles many different mail domains will be able to make good signatures on behalf of many of them. It seems to me that this should be the mail domain in the &lt;code&gt;From:&lt;/code&gt; header of the email. (The RFC doesn’t seem to be clear on what is expected.) Exim doesn’t seem to have anything builtin to do that.&lt;/p&gt;
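&lt;p&gt;As a sketch (using only Python’s stdlib email utilities; the fallback behaviour is an invented detail), picking the SDID from the &lt;code&gt;From:&lt;/code&gt; header might look like:&lt;/p&gt;

```python
# Choose the DKIM signing domain (SDID) from the From: header's address
# domain, falling back to a configured default if the header is unparseable.
from email.utils import parseaddr

def sdid_for(from_header, fallback_domain):
    addr = parseaddr(from_header)[1]
    _, sep, domain = addr.rpartition("@")
    return domain.lower() if sep else fallback_domain
```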
&lt;p&gt;And, you only want to DKIM-sign emails that are originated locally or from trustworthy sources. You don’t want to DKIM-sign messages that you received from the global Internet, and are sending out again (eg because of an email alias or mailing list). In theory if you verify DKIM on all incoming emails, you could avoid being fooled into signing bad emails, but rejecting all non-DKIM-verified email would be a very strong policy decision. Again, Exim doesn’t seem to have ready-made machinery for this.&lt;/p&gt;
&lt;p&gt;The resulting Exim configuration parameters run to &lt;em&gt;22 lines&lt;/em&gt;, and because they’re parameters to an existing config item (the &lt;code&gt;smtp&lt;/code&gt; transport) they can’t even easily be deployed as a drop-in file via Debian’s “split config” Exim configuration scheme.&lt;/p&gt;
&lt;p&gt;(I don’t know if the file written for Exim’s use by dkim-rotate would be suitable for other MTAs, but this part of dkim-rotate could easily be extended.)&lt;/p&gt;
&lt;h1&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;I have today &lt;a href="https://www.chiark.greenend.org.uk/pipermail/sgo-software-announce/2022/000076.html"&gt;released dkim-rotate 0.4&lt;/a&gt;, which is the first public release for general use.&lt;/p&gt;
&lt;p&gt;I have it deployed and working, but it’s new so there may well be bugs to work out.&lt;/p&gt;
&lt;p&gt;If you would like to try it out, you can get it &lt;a href="https://salsa.debian.org/iwj/dkim-rotate"&gt;via git from Debian Salsa&lt;/a&gt;. (Debian folks can also find it freshly in Debian unstable.)&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:11840</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/11840.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=11840"/>
    <title>chiark’s skip-skip-cross-up-grade</title>
    <published>2022-07-19T19:43:18Z</published>
    <updated>2022-07-30T11:27:58Z</updated>
    <category term="computers"/>
    <category term="debian"/>
    <category term="chiark"/>
    <dw:security>public</dw:security>
    <dw:reply-count>15</dw:reply-count>
    <content type="html">&lt;p&gt;Two weeks ago I upgraded chiark from Debian jessie i386 to bullseye amd64, after nearly 30 years running Debian i386. This went really quite well, in fact!&lt;/p&gt;
&lt;h1&gt;Background&lt;/h1&gt;
&lt;p&gt;chiark is my “colo” - a server I run, which lives in a data centre in London. It hosts ~200 users with shell accounts, various websites and mailing lists, moderators for a number of USENET newsgroups, and countless other services. chiark’s internal setup is designed to enable my users to do a maximum number of exciting things with a minimum of intervention from me.&lt;/p&gt;
&lt;p&gt;chiark’s OS install dates to 1993, when I installed Debian 0.93R5, the first version of Debian to advertise the ability to be upgraded without reinstalling. I think that makes it one of the oldest Debian installations in existence.&lt;/p&gt;
&lt;p&gt;Obviously it’s had several new hardware platforms too. (There was a prior install of Linux on the initial hardware, remnants of which can maybe still be seen in some obscure corners of chiark’s &lt;code&gt;/usr/local&lt;/code&gt;.)&lt;/p&gt;
&lt;p&gt;chiark’s install is also at the very high end of the scale of installation complexity and customisation: reinstalling it completely would be an enormous amount of work. And it’s unique.&lt;/p&gt;
&lt;h1&gt;chiark’s upgrade history&lt;/h1&gt;
&lt;p&gt;chiark’s last major OS upgrade was to jessie (Debian 8, released in April 2015). That was in 2016. Since then we have been relying on Debian’s excellent security support posture, and the &lt;a href="https://wiki.debian.org/LTS"&gt;Debian LTS&lt;/a&gt; and more recently Freexian’s &lt;a href="https://wiki.debian.org/LTS/Extended"&gt;Debian ELTS&lt;/a&gt; projects, and some local updates. The use of ELTS - which supports only a subset of packages - was particularly uncomfortable.&lt;/p&gt;
&lt;p&gt;Additionally, chiark was installed with 32-bit x86 Linux (Debian i386), since that was what was supported and available at the time. But 32-bit is looking very long in the tooth.&lt;/p&gt;
&lt;h1&gt;Why do a skip upgrade&lt;/h1&gt;
&lt;p&gt;So, I wanted to move to the fairly recent stable release - Debian 11 (bullseye), which is just short of a year old. And I wanted to “crossgrade” (as it’s called) to 64-bit.&lt;/p&gt;
&lt;p&gt;In the past, I have found I have had greater success by doing “direct” upgrades, skipping intermediate releases, rather than by following the officially-supported path of going via every intermediate release.&lt;/p&gt;
&lt;p&gt;Doing a skip upgrade avoids exposure to any packaging bugs which were present only in intermediate release(s). Debian does usually fix bugs, but Debian has many cautious users, so it is not uncommon for bugs to be found after release, and then not be fixed until the next one.&lt;/p&gt;
&lt;p&gt;A skip upgrade avoids the need to try to upgrade to already-obsolete releases (which can involve messing about with multiple snapshots from &lt;a href="https://snapshot.debian.org/"&gt;snapshot.debian.org&lt;/a&gt;). It is also significantly faster and simpler, which is important not only because it reduces downtime, but also because it removes opportunities (and reduces the time available) for things to go badly.&lt;/p&gt;
&lt;p&gt;One downside is that sometimes maintainers aggressively remove compatibility measures for older releases. (And compatibility packages are generally removed quite quickly by even cautious maintainers.) That means that the sysadmin who wants to skip-upgrade needs to do more manual fixing of things that haven’t been dealt with automatically. And occasionally one finds compatibility problems that show up only when mixing very old and very new software, that no-one else has seen.&lt;/p&gt;
&lt;h1&gt;Crossgrading&lt;/h1&gt;
&lt;p&gt;Crossgrading is fairly complex and hazardous. It is well supported by the low level tools (eg, dpkg) but the higher-level packaging tools (eg, apt) get very badly confused.&lt;/p&gt;
&lt;p&gt;Nowadays the system is so complex that downloading things by hand and manually feeding them to dpkg is impractical, other than as a very occasional last resort.&lt;/p&gt;
&lt;p&gt;The approach, generally, has been to set the system up to “want to” be the new architecture, run apt in a download-only mode, and do the package installation manually, with some fixing up and retrying, until the system is coherent enough for apt to work.&lt;/p&gt;
&lt;p&gt;This is the approach I took. (In current releases, there are tools that will help but they are only in recent releases and I wanted to go direct. I also doubted that they would work properly on chiark, since it’s so unusual.)&lt;/p&gt;
&lt;h1&gt;Peril and planning&lt;/h1&gt;
&lt;p&gt;Overall, this was a risky strategy to choose. The package dependencies wouldn’t necessarily express all of the sequencing needed. But it still seemed that if I could come up with a working recipe, I could do it.&lt;/p&gt;
&lt;p&gt;I restored most of one of chiark’s backups onto a scratch volume on my laptop. With the LVM snapshot tools and chroots, I was able to develop and test a set of scripts that would perform the upgrade. This was a very effective approach: my super-fast laptop, with local caches of the package repositories, was able to do many “edit, test, debug” cycles.&lt;/p&gt;
&lt;p&gt;My recipe made heavy use of snapshot.debian.org, to make sure that it wouldn’t rot between testing and implementation.&lt;/p&gt;
&lt;p&gt;When I had a working scheme, I told my users about the planned downtime. I warned everyone it might take even 2 or 3 days. I made sure that my access arrangements to the data centre were in place, in case I needed to visit in person. (I have remote serial console and power cycler access.)&lt;/p&gt;
&lt;h1&gt;Reality - the terrible rescue install&lt;/h1&gt;
&lt;p&gt;My first task on taking the service down was to check that the emergency rescue installation worked: chiark has an ancient USB stick in the back, which I can boot to from the BIOS. The idea being that many things that go wrong could be repaired from there.&lt;/p&gt;
&lt;p&gt;I found that that install was too old to understand chiark’s storage arrangements. mdadm tools gave very strange output. So I needed to upgrade it. After some experiments, I rebooted back into the main install, bringing chiark’s service back online.&lt;/p&gt;
&lt;p&gt;I then used the main install of chiark as a kind of meta-rescue-image for the rescue-image. The process of getting the rescue image upgraded (not even to amd64, but just to something not totally ancient) was fraught. Several times I had to rescue it by copying files in from the main install outside. And, the rescue install was on a truly ancient 2G USB stick which was terribly terribly slow, and also very small.&lt;/p&gt;
&lt;p&gt;I hadn’t done any significant planning for this subtask, because it was low-risk: there was little way to break the main install. Due to all these adverse factors, sorting out the rescue image took &lt;em&gt;five hours&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;If I had known how long it would take, at the beginning, I would have skipped it. 5 hours is more than it would have taken to go to London and fix something in person.&lt;/p&gt;
&lt;h1&gt;Reality - the actual core upgrade&lt;/h1&gt;
&lt;p&gt;I was able to start the actual upgrade in the mid-afternoon. I meticulously checked and executed the steps from my plan.&lt;/p&gt;
&lt;p&gt;The terrifying scripts which sequenced the critical package updates ran flawlessly. Within an hour or so I had a system which was running bullseye amd64, albeit with many important packages still missing or unconfigured.&lt;/p&gt;
&lt;p&gt;So I didn’t need the rescue image after all, nor to go to the datacentre.&lt;/p&gt;
&lt;h1&gt;Fixing all the things&lt;/h1&gt;
&lt;p&gt;Then I had to deal with all the inevitable fallout from an upgrade.&lt;/p&gt;
&lt;p&gt;Notable incidents:&lt;/p&gt;
&lt;h2&gt;exim4 has a new tainting system&lt;/h2&gt;
&lt;p&gt;This is to try to help the sysadmin avoid writing unsafe string interpolations. (&lt;a href="https://xkcd.com/327/"&gt;“Little Bobby Tables.”&lt;/a&gt;) This was done by Exim upstream in a great hurry as part of a security response process.&lt;/p&gt;
&lt;p&gt;The new checks meant that the mail configuration did not work at all. I had to turn off the taint check completely. I’m fairly confident that this is correct, because I am hyper-aware of quoting issues and all of my configuration is written to avoid the problems that tainting is supposed to avoid.&lt;/p&gt;
&lt;p&gt;One particular annoyance is that the approach taken for sqlite lookups makes it totally impossible to use more than one sqlite database. I think the sqlite quoting operator which one uses to interpolate values produces tainted output? I need to investigate this properly.&lt;/p&gt;
&lt;h2&gt;LVM now ignores PVs which are directly contained within LVs by default&lt;/h2&gt;
&lt;p&gt;chiark has LVM-on-RAID-on-LVM. This generally works really well.&lt;/p&gt;
&lt;p&gt;However, there was one edge case where I ended up without the intermediate RAID layer. The result is LVM-on-LVM.&lt;/p&gt;
&lt;p&gt;But recent versions of the LVM tools do not look at PVs inside LVs, by default. This is to help you avoid corrupting the state of any VMs you have on your system. I didn’t know that at the time, though. All I knew was that LVM was claiming my PV was “unusable”, and wouldn’t explain why.&lt;/p&gt;
&lt;p&gt;I was about to start on a thorough reading of the 15,000-word essay that is the commentary in the default &lt;code&gt;/etc/lvm/lvm.conf&lt;/code&gt; to try to see if anything was relevant, when I received a helpful tipoff on IRC pointing me to the &lt;code&gt;scan_lvs&lt;/code&gt; option.&lt;/p&gt;
&lt;p&gt;I need to file a bug asking for the LVM tools to explain &lt;em&gt;why&lt;/em&gt; they have declared a PV unuseable.&lt;/p&gt;
&lt;h2&gt;apache2’s default config no longer read one of my config files&lt;/h2&gt;
&lt;p&gt;I had to do a merge (of my changes vs the maintainers’ changes) for &lt;code&gt;/etc/apache2/apache2.conf&lt;/code&gt;. When doing this merge I failed to notice that the file &lt;code&gt;/etc/apache2/conf.d/httpd.conf&lt;/code&gt; was no longer included by default. My merge dropped that line. There were some important things in there, and until I found this the webserver was broken.&lt;/p&gt;
&lt;h2&gt;&lt;code&gt;dpkg --skip-same-version&lt;/code&gt; DTWT during a crossgrade&lt;/h2&gt;
&lt;p&gt;(This is not a “fix all the things” - I found it when developing my upgrade process.)&lt;/p&gt;
&lt;p&gt;When doing a crossgrade, one often wants to say to dpkg “install all these things, but don’t reinstall things that have already been done”. That’s what &lt;code&gt;--skip-same-version&lt;/code&gt; is for.&lt;/p&gt;
&lt;p&gt;However, the logic had not been updated as part of the work to support multiarch, so it was wrong. I prepared a patched version of dpkg, and inserted it in the appropriate point in my prepared crossgrade plan.&lt;/p&gt;
&lt;p&gt;The patch is now filed as &lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1014476"&gt;bug #1014476 against dpkg upstream&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Mailman&lt;/h2&gt;
&lt;p&gt;Mailman is no longer in bullseye. It’s only available in the previous release, buster.&lt;/p&gt;
&lt;p&gt;bullseye has Mailman 3, which is a totally different system - requiring, basically, a completely new install and configuration. To even preserve existing archive links (a very important requirement) is decidedly nontrivial.&lt;/p&gt;
&lt;p&gt;I decided to punt on this whole situation. Currently chiark is running buster’s version of Mailman. I will have to deal with this at some point and I’m not looking forward to it.&lt;/p&gt;
&lt;h2&gt;Python&lt;/h2&gt;
&lt;p&gt;Of course that Mailman is Python 2. The Python project’s extremely badly handled transition includes a recommendation to change the meaning of &lt;code&gt;#!/usr/bin/python&lt;/code&gt; from Python 2, to Python 3.&lt;/p&gt;
&lt;p&gt;But Python 3 is a new language, barely compatible with Python 2 even in the most recent iterations of both, and it is usual to need to coinstall them.&lt;/p&gt;
&lt;p&gt;Happily Debian have provided the &lt;code&gt;python-is-python2&lt;/code&gt; package to make things work sensibly, albeit with unpleasant imprecations in the package summary description.&lt;/p&gt;
&lt;h2&gt;USENET news&lt;/h2&gt;
&lt;p&gt;Oh my god. INN uses many non-portable data formats, which just depend on your C types. And there are complicated daemons, statically linked libraries which cache on-disk data, and much to go wrong.&lt;/p&gt;
&lt;p&gt;I had numerous problems with this, and several outages and malfunctions. I may write about that on a future occasion.&lt;/p&gt;
&lt;address&gt;(edited 2022-07-20 11:36 +01:00 and 2022-07-30 12:28+01:00 to fix typos)&lt;/address&gt;</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:11775</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/11775.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=11775"/>
    <title>Otter (game server) 1.0.0 released</title>
    <published>2022-04-02T15:39:52Z</published>
    <updated>2022-04-02T15:39:52Z</updated>
    <category term="board games"/>
    <category term="computers"/>
    <category term="rust"/>
    <category term="otter"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">&lt;p&gt;I have just &lt;a href="http://www.chiark.greenend.org.uk/pipermail/sgo-software-announce/2022/000066.html"&gt;released Otter 1.0.0&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Recap: what is Otter&lt;/h3&gt;
&lt;p&gt;Otter is my game server for arbitrary board games. Unlike most online game systems. It does not know (nor does it need to know) the rules of the game you are playing. Instead, it lets you and your friends play with common tabletop/boardgame elements such as hands of cards, boards, and so on. So it’s something like a “tabletop simulator” (but it does not have any 3D, or a physics engine, or anything like that).&lt;/p&gt;
&lt;p&gt;There are provided game materials and templates for Penultima, Mao, and card games in general.&lt;/p&gt;
&lt;p&gt;Otter also supports uploadable game bundles, which allows users to add support for additional games - and this can be done without programming.&lt;/p&gt;
&lt;p&gt;For more information, see the &lt;a href="https://www.chiark.greenend.org.uk/~ianmdlvl/otter/docs/"&gt;online documentation&lt;/a&gt;. There are a longer intro and some screenshots in &lt;a href="https://diziet.dreamwidth.org/8121.html"&gt;my 2021 introductory blog post about Otter&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Releasing 1.0.0&lt;/h3&gt;
&lt;p&gt;I’m calling this release 1.0.0 because I think I can now say that its quality, reliability and stability are suitable for general use. In particular, Otter now builds on Stable Rust, which makes it a lot easier to install and maintain.&lt;/p&gt;
&lt;h3&gt;Switching web framework, and async Rust&lt;/h3&gt;
&lt;p&gt;I switched Otter from the Rocket web framework to Actix. There are things to prefer about both systems, and I still have a soft spot for Rocket. But ultimately I needed a framework which was fully released and supported for use with Stable Rust.&lt;/p&gt;
&lt;p&gt;There are few if any Rust web frameworks that are not &lt;code&gt;async&lt;/code&gt;. This is rather a shame. Async Rust is a considerably more awkward programming environment than ordinary non-async Rust. I don’t want to digress into a litany of complaints, but suffice it to say that while I really love Rust, my views on async Rust are considerably more mixed.&lt;/p&gt;
&lt;h3&gt;Future plans&lt;/h3&gt;
&lt;p&gt;In the near future I plan to add a couple of features to better support some particular games: currency-like resources, and a better UI for dice-like randomness.&lt;/p&gt;
&lt;p&gt;In the longer term, Otter’s installation and account management arrangements are rather unsophisticated and un-webby. There is not currently any publicly available instance for you to try it out without installing it on a machine of your own. There aren’t even any provided binaries: you must build Otter yourself. I hope to be able to improve this situation, but it involves dealing with cloud CI and containers and so on, which can all be rather unpleasant.&lt;/p&gt;
&lt;p&gt;Users on chiark will find an instance of Otter there.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:11154</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/11154.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=11154"/>
    <title>Rooting an Eos Fairphone 4</title>
    <published>2022-02-23T23:44:11Z</published>
    <updated>2022-02-23T23:44:11Z</updated>
    <category term="phone"/>
    <category term="computers"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">&lt;p&gt;Last week I received (finally) my Fairphone 4, supplied with a de-googled operating system, which I had ordered from the &lt;a href="https://esolutions.shop/shop/murena-fairphone-4-eu/"&gt;E Foundation’s shop&lt;/a&gt; in December. (I’m am very hard on hardware and my venerable Fairphone 2 is really on its last legs.)&lt;/p&gt;
&lt;p&gt;I expect to have full control over the software on any computing device I own which is as complicated, capable, and therefore, hazardous, as a mobile phone. Unfortunately the Eos image (they prefer to spell it “/e/ os”, srsly!) doesn’t come with a way to get root without taking fairly serious measures including unlocking the bootloader. Unlocking the bootloader wouldn’t be desirable for me but I can’t live without root. So.&lt;/p&gt;
&lt;p&gt;I started with these helpful instructions: &lt;a href="https://forum.xda-developers.com/t/fairphone-4-root.4376421/" class="uri"&gt;https://forum.xda-developers.com/t/fairphone-4-root.4376421/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I found the whole process a bit of a trial, and I thought I would write down what I did. But, it’s not straightforward, at least for someone like me who only has a dim understanding of all this Android stuff. Unfortunately, due to the number of missteps and restarts, what I &lt;em&gt;actually&lt;/em&gt; did is not really a sensible procedure. So here is a retcon of a process I think will work:&lt;/p&gt;
&lt;h3&gt;Unlock the bootloader&lt;/h3&gt;
&lt;p&gt;The E Foundation provide instructions for unlocking the bootloader on a stock FP4, here &lt;a href="https://doc.e.foundation/devices/FP4/install" class="uri"&gt;https://doc.e.foundation/devices/FP4/install&lt;/a&gt; and they seem applicable to the “Murena” phone supplied with Eos pre-installed, too.&lt;/p&gt;
&lt;p&gt;NB that unlocking the bootloader &lt;strong&gt;wipes the phone&lt;/strong&gt;. So we do it first.&lt;/p&gt;
&lt;p&gt;So:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;Power on the phone, &lt;strong&gt;with no SIM installed&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;You get a welcome screen.&lt;/li&gt;
&lt;li&gt;Skip all things on startup &lt;strong&gt;including wifi&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Go to the very end of the settings, tap a gazillion times on the phone’s version until you’re a developer&lt;/li&gt;
&lt;li&gt;In the developer settings, allow usb debugging&lt;/li&gt;
&lt;li&gt;In the developer settings, allow oem bootloader unlocking&lt;/li&gt;
&lt;li&gt;Connect a computer via a USB cable, say yes on phone to USB debugging&lt;/li&gt;
&lt;li&gt;&lt;code&gt;adb reboot bootloader&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;The phone will reboot into a texty kind of screen, the bootloader&lt;/li&gt;
&lt;li&gt;&lt;code&gt;fastboot flashing unlock&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;The phone will reboot, back to the welcome screen&lt;/li&gt;
&lt;li&gt;Repeat steps 3-9 (maybe not all are necessary)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;fastboot flashing unlock_critical&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;The phone will reboot, back to the welcome screen&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Note that although you are running &lt;code&gt;fastboot&lt;/code&gt;, you must run this command with the phone in “bootloader” mode, &lt;em&gt;not&lt;/em&gt; “fastboot” (aka “fastbootd”) mode. If you run &lt;code&gt;fastboot flashing unlock&lt;/code&gt; from fastboot you just get a “don’t know what you’re talking about”. I found conflicting instructions on what kind of Vulcan nerve pinches could be used to get into which boot modes, and had poor experiences with those. &lt;code&gt;adb reboot bootloader&lt;/code&gt; always worked reliably for me.&lt;/p&gt;
&lt;p&gt;Some docs say to run &lt;code&gt;fastboot oem unlock&lt;/code&gt;; I used &lt;code&gt;flashing&lt;/code&gt;. Maybe this depends on the Android tools version.&lt;/p&gt;
&lt;h3&gt;Initial privacy prep and OTA update&lt;/h3&gt;
&lt;p&gt;We want to update the supplied phone OS. The build mine shipped with is too buggy to run Magisk, the application we are going to use to root the phone. (With the pre-installed phone OS, Magisk crashes at the “patch boot image” step.) But I didn’t want to let the phone talk to Google, even for the push notifications registration.&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;From the welcome screen, skip all things except location, date, time. Notably, &lt;strong&gt;do not set up wifi&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;In settings, “microg” section
&lt;ol type="1"&gt;
&lt;li&gt;turn off cloud messaging&lt;/li&gt;
&lt;li&gt;turn off google safetynet&lt;/li&gt;
&lt;li&gt;turn off google registration (NB you must do this after the other two, because their sliders become dysfunctional after you turn google registration off)&lt;/li&gt;
&lt;li&gt;turn off both location modules&lt;/li&gt;
&lt;/ol&gt;&lt;/li&gt;
&lt;li&gt;In settings, location section, turn off allowed location for browser and magic earth&lt;/li&gt;
&lt;li&gt;Now go into settings and enable wifi, giving it your wifi details&lt;/li&gt;
&lt;li&gt;Tell the phone to update its operating system. This is a big download.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Install &lt;a href="https://magiskmanager.com/"&gt;Magisk&lt;/a&gt;, the root manager&lt;/h3&gt;
&lt;p&gt;(As a starting point I used these instructions &lt;a href="https://www.xda-developers.com/how-to-install-magisk/" class="uri"&gt;https://www.xda-developers.com/how-to-install-magisk/&lt;/a&gt; and a lot of random forum posts.)&lt;/p&gt;
&lt;p&gt;You will need the official &lt;code&gt;boot.img&lt;/code&gt;. Bizarrely there doesn’t seem to be a way to obtain this from the phone. Instead, you must download it. You can find it by starting at &lt;a href="https://doc.e.foundation/devices/FP4/install" class="uri"&gt;https://doc.e.foundation/devices/FP4/install&lt;/a&gt; which links to &lt;a href="https://images.ecloud.global/stable/FP4/" class="uri"&gt;https://images.ecloud.global/stable/FP4/&lt;/a&gt;. At the time of writing, the most recent version, whose version number seemed to correspond to the OS update I installed above, was &lt;code&gt;IMG-e-0.21-r-20220123158735-stable-FP4.zip&lt;/code&gt;.&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;Download the giant zipfile to your computer&lt;/li&gt;
&lt;li&gt;Unzip it to extract boot.img&lt;/li&gt;
&lt;li&gt;Copy the file to your phone’s “storage”. Eg, via adb: with the phone booted into the main operating system, using USB debugging, &lt;code&gt;adb push boot.img /storage/self/primary/Download&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;On the phone, open the browser, and enter &lt;a href="https://f-droid.org"&gt;&lt;code&gt;https://f-droid.org&lt;/code&gt;&lt;/a&gt;. Click on the link to install f-droid. You will need to enable installing apps from the browser (follow the provided flow to the settings, change the setting, and then use Back, and you can do the install). If you wish, you can download the f-droid apk separately on a computer, and verify it with pgp.&lt;/li&gt;
&lt;li&gt;Using f-droid, install Magisk. You will need to enable installing apps from f-droid. (I installed Magisk from f-droid because 1. I was going to trust f-droid anyway 2. it has a shorter URL than Magisk’s.)&lt;/li&gt;
&lt;li&gt;Open the Magisk app. Tell Magisk to install (Magisk, not the app). There will be only one option: patch boot file. Tell it to patch the &lt;code&gt;boot.img&lt;/code&gt; file from before.&lt;/li&gt;
&lt;li&gt;Transfer the &lt;code&gt;magisk_patched-THING.img&lt;/code&gt; back to your computer (eg via &lt;code&gt;adb pull&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;adb reboot bootloader&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;fastboot boot magisk_patched-THING.img&lt;/code&gt; (again, NB, from bootloader mode, not from fastboot mode)&lt;/li&gt;
&lt;li&gt;In Magisk you’ll see it shows as installed. But it’s not really; you’ve just booted from an image with it. Ask to install Magisk with “Direct install”.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;After you have done all this, I believe that each time you do an over-the-air OS update, you must, between installing the update and rebooting the phone, ask Magisk to “Install to inactive slot (after OTA)”. Presumably if you forget you must do the &lt;code&gt;fastboot boot&lt;/code&gt; dance again.&lt;/p&gt;
&lt;p&gt;After all this, I was able to use &lt;code&gt;tsu&lt;/code&gt; in Termux. There’s a strange behaviour with the root prompt you get apropos Termux’s request for root; I found that it definitely worked if Termux wasn’t the foreground app…&lt;/p&gt;
&lt;p&gt;You have to leave the bootloader unlocked. However, as I understand it, the phone’s encryption will still prevent an attacker from hoovering the data out of your phone. The bootloader lock is to prevent someone tricking you into entering the decryption passkey into a trojaned device.&lt;/p&gt;
&lt;h3&gt;Other things to change&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Probably, after you’re done with this, disable installing apps from the Browser. I will install Signal before doing that, since that’s not in f-droid because of mutual distrust between the f-droid and Signal folks. The permission is called “Install unknown apps”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Turn off “instant apps” aka “open links in apps even if the app is not installed”. OMG WTF BBQ.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Turn off “wifi scanning even if wifi off”. WTF.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I turned off storage manager auto delete, on the grounds that I didn’t know what the phone might think of as “having been backed up”. I can manage my own space use, thanks very much.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There are probably other things to change. I have not yet transferred my Signal account from my old phone. It is possible that Signal will require me to re-enable the google push notifications, but I hope that having disabled them in microg it will be happy to use its own system, as it does on my old phone.&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=11154" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:10886</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/10886.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=10886"/>
    <title>EUDCC QR codes vs NHS “Travel” barcodes vs TAC Verify</title>
    <published>2022-02-04T19:43:37Z</published>
    <updated>2022-02-04T19:43:37Z</updated>
    <category term="eudcc"/>
    <category term="computers"/>
    <category term="covid"/>
    <dw:security>public</dw:security>
    <dw:reply-count>2</dw:reply-count>
    <content type="html">&lt;p&gt;The EU Digital Covid Certificate scheme is a format for (digitally signed) vaccination status certificates. Not only EU countries participate - the UK is now a participant in this scheme.&lt;/p&gt;
&lt;p&gt;I am currently on my way to go skiing in the French Alps. So I needed a certificate that would be accepted in France. AFAICT the official way to do this is to get the “international” certificate from the NHS, and take it to a French pharmacy who will convert it into something suitably French. (AIUI the NHS “international” barcode is the same regardless of whether you get it via the NHS website, the NHS app, or a paper letter. NB that there is one barcode per vaccine dose so you have to get the right one - probably that means your booster since there’s a 9 month rule!)&lt;/p&gt;
&lt;p&gt;I read on a forum somewhere that you could use the French TousAntiCovid app to convert the barcode. So I thought I would try that. TousAntiCovid is Free Software and on F-Droid, so I was happy to install and use it for this.&lt;/p&gt;
&lt;p&gt;I also used the French TAC Verify app to check to see what barcodes were accepted. (I found an official document addressed to French professionals recommending this as an option for verifying the status of visitors to one’s establishment.) Unfortunately this involves a googlified phone, but one could use a burner phone or ask a friend who’s bitten that bullet already.&lt;/p&gt;
&lt;p&gt;I discovered that, indeed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;My own NHS QR code is &lt;em&gt;not&lt;/em&gt; accepted by TAC Verify&lt;/li&gt;
&lt;li&gt;My own NHS QR code can be loaded into TousAntiCovid, and added to my “wallet” in the app&lt;/li&gt;
&lt;li&gt;If I get TousAntiCovid to display that certificate, it shows a &lt;em&gt;visually different&lt;/em&gt; QR code which TAC Verify &lt;em&gt;accepts&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This made me curious.&lt;/p&gt;
&lt;p&gt;I used a QR code reader to decode both barcodes. The decodings were identical! A long string of guff starting &lt;code&gt;HC1:&lt;/code&gt;. AIUI it is an encoded JWT. But there was a difference in the framing: Binary Eye reported that the NHS barcode used &lt;a href="https://en.wikipedia.org/wiki/QR_code#Error_correction"&gt;error correction level&lt;/a&gt; “M” (medium, aka 15%). The TousAntiCovid barcode used level “L” (low, 7%).&lt;/p&gt;
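&lt;p&gt;(For the curious: per the EUDCC spec, the &lt;code&gt;HC1:&lt;/code&gt; payload is base45-encoded, zlib-compressed CBOR. Here is a minimal pure-Python sketch of just the base45 layer; the alphabet and test vectors are from RFC 9285, but the function itself is my own illustration, not any official code.)&lt;/p&gt;

```python
# Decode RFC 9285 base45, the outermost layer of an EUDCC "HC1:" string.
# (The next layers would be zlib decompression and CBOR decoding.)
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

def b45decode(s):
    vals = [ALPHABET.index(c) for c in s]
    out = bytearray()
    # Each 3-char group encodes 2 bytes; a trailing 2-char group encodes 1 byte.
    for i in range(0, len(vals), 3):
        group = vals[i:i + 3]
        n = sum(v * 45 ** j for j, v in enumerate(group))  # little-endian base 45
        out += n.to_bytes(2 if len(group) == 3 else 1, "big")
    return bytes(out)
```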
&lt;p&gt;I had my QR code software &lt;em&gt;regenerate&lt;/em&gt; a QR code at level “M” for the data from the TousAntiCovid code. The result was a QR code which is &lt;em&gt;identical&lt;/em&gt; (pixel-wise) to the one from the NHS.&lt;/p&gt;
&lt;p&gt;So the only difference is the error correction level. Curiously, both “L” (low, generated by TousAntiCovid, accepted by TAC Verify) and “M” (medium, generated by NHS, rejected by TAC Verify) are lower than the “Q” (25%) &lt;a href="https://github.com/ehn-dcc-development/hcert-spec/blob/main/hcert_spec.md#422-qr-2d-barcode"&gt;recommended by what I think is the specification&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is all very odd. But the upshot is that I think you can convert the NHS “international” barcode into something that should work in France simply by passing it through any QR code software to re-encode it at error correction level “L” (7%). But if you’re happy to use the TousAntiCovid app it’s probably a good way to store them.&lt;/p&gt;
&lt;p&gt;I guess I’ll find out when I get to France if the converted NHS barcodes work in real establishments. Thanks to the folks behind sanipasse.fr for publishing &lt;a href="https://sanipasse.fr/french-health-pass"&gt;some helpful background info&lt;/a&gt; and operating a Free Software backed public verification service.&lt;/p&gt;
&lt;h3&gt;Footnote&lt;/h3&gt;
&lt;p&gt;To compare the QR codes pixelwise, I roughly cropped the NHS PDF image using a GUI tool, and then on each of the two images used &lt;code&gt;pnmcrop&lt;/code&gt; (to trim the border), &lt;code&gt;pnmscale&lt;/code&gt; (to rescale the one-pixel-per-pixel output from Binary Eye) and &lt;code&gt;pnmarith -difference&lt;/code&gt; to compare them (producing a pretty squirgly image showing just the pixel edges due to antialiasing).&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=diziet&amp;ditemid=10886" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2009-05-21:377446:10559</id>
    <link rel="alternate" type="text/html" href="https://diziet.dreamwidth.org/10559.html"/>
    <link rel="self" type="text/xml" href="https://diziet.dreamwidth.org/data/atom/?itemid=10559"/>
    <title>Debian’s approach to Rust - Dependency handling</title>
    <published>2022-01-03T18:35:37Z</published>
    <updated>2022-01-03T18:35:37Z</updated>
    <category term="rust"/>
    <category term="debian"/>
    <category term="computers"/>
    <dw:security>public</dw:security>
    <dw:reply-count>6</dw:reply-count>
    <content type="html">&lt;p&gt;tl;dr: Faithfully following upstream semver, in Debian package dependencies, is a bad idea.&lt;/p&gt;
&lt;h1&gt;Introduction&lt;/h1&gt;
&lt;p&gt;I have been involved in Debian for a very long time. And I’ve been working with Rust for a few years now. Late last year I had cause to try to work on &lt;a href="https://wiki.debian.org/Rust"&gt;Rust things within Debian&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When I did, I found it very difficult. The Debian Rust Team were very helpful. However, the workflow and tooling require very large amounts of manual clerical work - work which it is almost impossible to do correctly since the information required does not exist. I had wanted to package a fairly straightforward program I had written in Rust, partly as a learning exercise. But, unfortunately, after I got stuck in, it looked to me like the effort would be wildly greater than I was prepared for, so I gave up.&lt;/p&gt;
&lt;p&gt;Since then I’ve been thinking about what I learned about how Rust is packaged in Debian. I think I can see how to fix some of the problems. Although I don’t want to go charging in and try to tell everyone how to do things, I felt I ought at least to write up my ideas. Hence this blog post, which may become the first of a series.&lt;/p&gt;
&lt;p&gt;This post is going to be about &lt;a href="https://semver.org/"&gt;semver&lt;/a&gt; handling. I see problems with other aspects of dependency handling and source code management and traceability as well, and of course if my ideas find favour in principle, there are a lot of details that need to be worked out, including some kind of transition plan.&lt;/p&gt;
&lt;h2&gt;How Debian packages Rust, and build vs runtime dependencies&lt;/h2&gt;
&lt;p&gt;Today I will be discussing almost entirely &lt;em&gt;build&lt;/em&gt;-dependencies; Rust doesn’t (yet?) support dynamic linking, so built Rust binaries don’t have Rusty dependencies.&lt;/p&gt;
&lt;p&gt;However, things are a bit confusing because even the Debian “binary” packages for Rust libraries contain pure source code. So for a Rust library package, “building” the Debian binary package from the Debian source package does not involve running the Rust compiler; it’s just file-copying and format conversion. The library’s Rust dependencies do not need to be installed on the “build” machine for this.&lt;/p&gt;
&lt;p&gt;So I’m mostly going to be talking about &lt;a href="https://www.debian.org/doc/debian-policy/ch-relationships.html#binary-dependencies-depends-recommends-suggests-enhances-pre-depends"&gt;&lt;code&gt;Depends&lt;/code&gt;&lt;/a&gt; fields, which are Debian’s way of talking about &lt;em&gt;runtime&lt;/em&gt; dependencies, even though they are used only at build-time. The way this works is that some ultimate leaf package (which is supposed to produce actual executable code) &lt;a href="https://www.debian.org/doc/debian-policy/ch-relationships.html#relationships-between-source-and-binary-packages-build-depends-build-depends-indep-build-depends-arch-build-conflicts-build-conflicts-indep-build-conflicts-arch"&gt;&lt;code&gt;Build-Depends&lt;/code&gt;&lt;/a&gt; on the libraries it needs, and those &lt;code&gt;Depends&lt;/code&gt; on their under-libraries, so that everything needed is installed.&lt;/p&gt;
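&lt;p&gt;(To make the shape of that chain concrete, here is a hypothetical &lt;code&gt;debian/control&lt;/code&gt;-style fragment; every package name in it is invented for illustration.)&lt;/p&gt;

```
# Hypothetical leaf package, which produces an actual executable:
Source: some-leaf-program
Build-Depends: debhelper-compat (= 13), librust-foo-1-dev

# Hypothetical library package (its .deb contains only source code);
# its Depends pull in its under-libraries when the leaf is built:
Package: librust-foo-1-dev
Depends: librust-bar-2-dev, librust-baz-0.4-dev
```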
&lt;h1&gt;What do dependencies mean and what are they for anyway?&lt;/h1&gt;
&lt;p&gt;In systems where packages declare dependencies on other packages, it generally becomes necessary to support “versioned” dependencies. In all but the most simple systems, this involves an ordering (or similar) on version numbers and a way for a package A to specify that it depends on certain versions of B.&lt;/p&gt;
&lt;p&gt;Both Debian and Rust have this. Rust upstream crates have version numbers and can specify their dependencies according to semver. Debian’s dependency system can represent that.&lt;/p&gt;
&lt;p&gt;So it was natural for the designers of the scheme for packaging Rust code in Debian to simply translate the Rust version dependencies to Debian ones. However, while the two dependency schemes seem equivalent in the abstract, their concrete real-world semantics are totally different.&lt;/p&gt;
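&lt;p&gt;(For concreteness, here is a sketch in Python of roughly what such a translation computes for cargo’s common caret requirements. This is my illustration of the principle, not debcargo’s actual code.)&lt;/p&gt;

```python
def caret_bounds(req):
    """Return (inclusive minimum, exclusive maximum) for a Rust caret
    requirement such as "^1.2.3".  Cargo's rule is that the leftmost
    nonzero component may not change."""
    parts = [int(x) for x in req.lstrip("^").split(".")]
    while len(parts) != 3:
        parts.append(0)  # "^1.2" means "^1.2.0"
    major, minor, patch = parts
    if major:
        upper = str(major + 1)
    elif minor:
        upper = f"0.{minor + 1}"
    else:
        upper = f"0.0.{patch + 1}"
    return (f"{major}.{minor}.{patch}", upper)
```

&lt;p&gt;In Debian terms the pair becomes a versioned dependency with a lower bound and a strict upper bound.&lt;/p&gt;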
&lt;p&gt;These different package management systems have different practices and different meanings for dependencies. (Interestingly, &lt;a href="https://iscinumpy.dev/post/bound-version-constraints/"&gt;the Python world also has debates about the meaning and proper use of dependency versions&lt;/a&gt;.)&lt;/p&gt;
&lt;h2&gt;The epistemological problem&lt;/h2&gt;
&lt;p&gt;Consider some package A which is known to depend on B. In general, it is not trivial to know which versions of B will be satisfactory. I.e., whether a new B, with potentially-breaking changes, will actually break A.&lt;/p&gt;
&lt;p&gt;Sometimes tooling can be used which calculates this (eg, the Debian &lt;a href="https://manpages.debian.org/bullseye/dpkg-dev/dpkg-shlibdeps.1.en.html"&gt;&lt;code&gt;shlibdeps&lt;/code&gt;&lt;/a&gt; system for runtime dependencies) but this is unusual - especially for build-time dependencies. Which versions of B are OK can normally only be discovered by a human consideration of changelogs etc., or by having a computer try particular combinations.&lt;/p&gt;
&lt;p&gt;Few ecosystems with dependencies, in the Free Software community at least, make an attempt to precisely calculate the versions of B that are actually required to build some A. So it turns out that there are &lt;em&gt;three&lt;/em&gt; cases for a particular combination of A and B: it is believed to work; it is known not to work; and: it is not known whether it will work.&lt;/p&gt;
&lt;p&gt;And, I am not aware of any dependency system that has an explicit machine-readable representation for the “unknown” state, so that they can say something like “A is known to depend on B; versions of B before v1 are known to break; version v2 is known to work”. (Sometimes statements like that can be found in human-readable docs.)&lt;/p&gt;
&lt;p&gt;That leaves two possibilities for the semantics of a dependency &lt;em&gt;A depends B, version(s) V..W&lt;/em&gt;: &lt;strong&gt;Precise: A will definitely work if B matches V..W&lt;/strong&gt;, and &lt;strong&gt;Optimistic: We have no reason to think B breaks with any of V..W&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;At first sight the latter does not seem useful, since how would the package manager find a working combination? Taking Debian as an example, which uses optimistic version dependencies, the answer is as follows: The primary information about what package versions to use is not only the dependencies, but mostly in which Debian &lt;a href="https://en.wikipedia.org/wiki/Debian_releases"&gt;release&lt;/a&gt; is being targeted. (Other systems using optimistic version dependencies could use the date of the build, i.e. use only packages that are “current”.)&lt;/p&gt;
&lt;table rules="all"&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;th&gt;
Precise
&lt;th&gt;
&lt;p&gt;Optimistic&lt;/p&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;p&gt;People involved in version management&lt;/p&gt;
&lt;td&gt;
Package developers, &lt;br&gt; downstream developers/users.
&lt;td&gt;
&lt;p&gt;Package developers, &lt;br&gt; downstream developer/users, &lt;br&gt; distribution QA and release managers.&lt;/p&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;p&gt;Package developers declare versions V and dependency ranges V..W so that&lt;/p&gt;
&lt;td&gt;
It definitely works.
&lt;td&gt;
&lt;p&gt;A wide range of B can satisfy the declared requirement.&lt;/p&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;p&gt;The principal version data used by the package manager&lt;/p&gt;
&lt;td&gt;
Only dependency versions.
&lt;td&gt;
&lt;p&gt;Contextual, eg, Releases - set(s) of packages available.&lt;/p&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;p&gt;Version dependencies are for&lt;/p&gt;
&lt;td&gt;
Selecting working combinations (out of all that ever existed).
&lt;td&gt;
&lt;p&gt;Sequencing (ordering) of updates; QA.&lt;/p&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;p&gt;Expected use pattern by a downstream&lt;/p&gt;
&lt;td&gt;
Downstream can combine any&lt;br /&gt;
declared-good combination.
&lt;td&gt;
&lt;p&gt;Use a particular release of the whole system. Mixing-and-matching requires additional QA and remedial work.&lt;/p&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;p&gt;Downstreams are protected from breakage by&lt;/p&gt;
&lt;td&gt;
Pessimistically updating versions and dependencies whenever anything might go wrong.
&lt;td&gt;
&lt;p&gt;Whole-release QA.&lt;/p&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;p&gt;A substantial deployment will typically contain&lt;/p&gt;
&lt;td&gt;
Multiple versions of many packages.
&lt;td&gt;
&lt;p&gt;A single version of each package, except where there are actual incompatibilities which are too hard to fix.&lt;/p&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;p&gt;Package updates are driven by&lt;/p&gt;
&lt;td&gt;
Top-down: &lt;br&gt; Depending package updates the declared metadata.
&lt;td&gt;
Bottom-up: &lt;br&gt; Depended-on package is updated in the repository for the work-in-progress release.
&lt;/table&gt;
&lt;p&gt;So, while Rust and Debian have systems that look superficially similar, they contain fundamentally different kinds of information. Simply representing the Rust versions directly into Debian doesn’t work.&lt;/p&gt;
&lt;p&gt;What is currently done by the Debian Rust Team is to &lt;a href="https://salsa.debian.org/rust-team/debcargo-conf/-/raw/master/src/aes/debian/patches/relax-deps.patch"&gt;manually patch the dependency specifications&lt;/a&gt;, to relax them. This is very labour-intensive, and there is little automation supporting either decisionmaking or actually applying the resulting changes.&lt;/p&gt;
&lt;h1&gt;What to do&lt;/h1&gt;
&lt;h2&gt;Desired end goal&lt;/h2&gt;
&lt;p&gt;To update a Rust package in Debian that many things depend on, one need simply update that package.&lt;/p&gt;
&lt;p&gt;Debian’s &lt;a href="https://tracker.debian.org/"&gt;sophisticated build and CI infrastructure&lt;/a&gt; will try building all the reverse-dependencies against the new version. Packages that actually fail against the new dependency are flagged as suffering from release-critical problems.&lt;/p&gt;
&lt;p&gt;Debian Rust developers then update those other packages too. If the problems turn out to be too difficult, it is possible to roll back.&lt;/p&gt;
&lt;p&gt;If a problem with a depending package is not resolved in a timely fashion, priority is given to updating core packages, and the depending package falls by the wayside (since it is empirically unmaintainable, given available effort).&lt;/p&gt;
&lt;p&gt;There is no routine manual patching of dependency metadata (or of anything else).&lt;/p&gt;
&lt;h2&gt;Radical proposal&lt;/h2&gt;
&lt;p&gt;Debian should not precisely follow &lt;a href="https://doc.rust-lang.org/cargo/reference/semver.html"&gt;upstream Rust semver&lt;/a&gt; dependency information. Instead, Debian should optimistically try the combinations of packages that we want to have. The resulting breakages will be discovered by automated QA; they will have to be fixed by manual intervention of some kind, but usually, simply updating the depending package will be sufficient.&lt;/p&gt;
&lt;p&gt;This no longer ensures (unlike the upstream Rust scheme) that the result is expected to build and work if the dependencies are satisfied. But as discussed, we don’t really need that property in Debian. More important is the new property we gain: that we are able to mix and match versions that we find work in practice, without a great deal of manual effort.&lt;/p&gt;
&lt;p&gt;Or to put it another way, in Debian we should do as a Rust upstream maintainer does when they do the regular “update dependencies for new semvers” task: we should update everything, see what breaks, and fix those.&lt;/p&gt;
&lt;p&gt;(In theory a Rust upstream package maintainer is supposed to do some additional checks or something. But the practices are not standardised and any checks one does almost never reveal anything untoward, so in practice I think many Rust upstreams just update and see what happens. The Rust upstream community has other mechanisms - often, reactive ones - to deal with any problems. Debian should subscribe to those same information sources, eg &lt;a href="https://rustsec.org/"&gt;RustSec&lt;/a&gt;.)&lt;/p&gt;
&lt;h2&gt;Nobbling cargo&lt;/h2&gt;
&lt;p&gt;Somehow, when cargo is run to build Rust things against these Debian packages, cargo’s dependency system will have to be overridden so that the version of the package that is actually selected by Debian’s package manager is used by cargo without complaint.&lt;/p&gt;
&lt;p&gt;We probably don’t want to change the Rust version numbers of Debian Rust library packages, so this should be done by either presenting cargo with an automatically-massaged &lt;code&gt;Cargo.toml&lt;/code&gt; where the dependency version restrictions are relaxed, or by using a modified version of cargo which has special option(s) to relax certain dependencies.&lt;/p&gt;
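&lt;p&gt;(A sketch of the first option, in Python, assuming a crude textual rewrite is acceptable: it turns the shorthand requirements in the dependency sections, which cargo treats as caret requirements, into minimum-version requirements using cargo’s &gt;= syntax. A real implementation would also have to handle the table forms of dependency specification.)&lt;/p&gt;

```python
import re

DEP_SECTIONS = ("[dependencies]", "[build-dependencies]", "[dev-dependencies]")

def relax_cargo_toml(text):
    """Rewrite shorthand requirements like foo = "1.2.3" (implicitly a
    caret requirement) into foo = ">=1.2.3", dropping the upper bound."""
    out, in_deps = [], False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("["):
            in_deps = stripped in DEP_SECTIONS
        elif in_deps:
            line = re.sub(r'"(\d[^"]*)"', r'">=\1"', line)
        out.append(line)
    return "\n".join(out)
```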
&lt;h2&gt;Handling breakage&lt;/h2&gt;
&lt;p&gt;Rust packages in Debian should already be provided with &lt;a href="https://salsa.debian.org/ci-team/autopkgtest/raw/master/doc/README.package-tests.rst"&gt;autopkgtests&lt;/a&gt; so that &lt;a href="https://ci.debian.net/"&gt;ci.debian.net&lt;/a&gt; will detect build breakages. Build breakages will stop the updated dependency from migrating to the work-in-progress release, &lt;a href="https://release.debian.org/"&gt;Debian testing&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To resolve this, and allow forward progress, we will usually upload a new version of the dependency containing an appropriate &lt;a href="https://www.debian.org/doc/debian-policy/ch-relationships.html#packages-which-break-other-packages-breaks"&gt;&lt;code&gt;Breaks&lt;/code&gt;&lt;/a&gt;, and either file an &lt;a href="https://bugs.debian.org/release-critical/"&gt;RC bug&lt;/a&gt; against the depending package, or update it. This can be done after the upload of the base package.&lt;/p&gt;
&lt;p&gt;Thus, resolution of breakage due to incompatibilities will be done collaboratively within the Debian archive, rather than ad-hoc locally. And it can be done without blocking.&lt;/p&gt;
&lt;p&gt;My proposal prioritises the ability to make progress in the core, over stability and in particular over retaining leaf packages. This is not Debian’s usual approach but given the Rust ecosystem’s practical attitudes to API design, versioning, etc., I think the instability will be manageable. In practice fixing leaf packages is not usually really that hard, but it’s still work and the question is what happens if the work doesn’t get done. After all we are always short of effort - and we probably still will be, even if we get rid of the makework clerical work of patching dependency versions everywhere (so that usually no work is needed on depending packages).&lt;/p&gt;
&lt;h2&gt;Exceptions to the one-version rule&lt;/h2&gt;
&lt;p&gt;There will have to be some packages that we need to keep multiple versions of. We won’t want to update every depending package manually when this happens. Instead, we’ll probably want to set a version number split: rdepends which want version &amp;lt;X will get the old one.&lt;/p&gt;
&lt;h1&gt;Details - a sketch&lt;/h1&gt;
&lt;p&gt;I’m going to sketch out some of the details of a scheme I think would work. But I haven’t thought this through fully. This is still mostly at the handwaving stage. If my ideas find favour, we’ll have to do some detailed review and consider a whole bunch of edge cases I’m glossing over.&lt;/p&gt;
&lt;p&gt;The dependency specification consists of two halves: the depending &lt;code&gt;.deb&lt;/code&gt;‘s &lt;code&gt;Depends&lt;/code&gt; (or, for a leaf package, &lt;code&gt;Build-Depends&lt;/code&gt;) and the base &lt;code&gt;.deb&lt;/code&gt;’s &lt;code&gt;Version&lt;/code&gt; and perhaps &lt;code&gt;Breaks&lt;/code&gt; and &lt;a href="https://www.debian.org/doc/debian-policy/ch-relationships.html#virtual-packages-provides"&gt;&lt;code&gt;Provides&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Even though libraries vastly outnumber leaf packages, we still want to avoid updating leaf Debian source packages simply to bump dependencies.&lt;/p&gt;
&lt;h2&gt;Dependency encoding proposal&lt;/h2&gt;
&lt;p&gt;Compared to the existing scheme, I suggest we implement the dependency relaxation by changing the depended-on package, rather than the depending one.&lt;/p&gt;
&lt;p&gt;So we retain roughly the existing semver translation for &lt;code&gt;Depends&lt;/code&gt; fields. But we drop all local patching of dependency versions.&lt;/p&gt;
&lt;p&gt;Into every library source package we insert a new Debian-specific metadata file declaring the earliest version that we uploaded. When we translate a library source package to a &lt;code&gt;.deb&lt;/code&gt;, the “binary” package build adds &lt;code&gt;Provides&lt;/code&gt; for every previous version.&lt;/p&gt;
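&lt;p&gt;(A sketch of the idea, simplified to crates whose major version is at least 1, where the Debian package name encodes just the semver major. The naming convention follows the existing librust-*-dev scheme; the metadata file and the function here are invented for illustration.)&lt;/p&gt;

```python
def provides_list(crate, earliest_major, current_major):
    """Virtual package names a librust-CRATE-CURRENT-dev binary package
    would Provide, so that rdepends declared against any major version
    since our earliest upload remain satisfiable by the current package."""
    return [
        f"librust-{crate}-{major}-dev"
        for major in range(earliest_major, current_major)
    ]
```

&lt;p&gt;So if our earliest upload of &lt;code&gt;foo&lt;/code&gt; was major 2 and the current one is major 5, &lt;code&gt;librust-foo-5-dev&lt;/code&gt; would Provide the 2, 3 and 4 names; withdrawing one of those Provides is the “declare new incompatible API” operation discussed below.&lt;/p&gt;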
&lt;p&gt;The effect is that when one updates a base package, the usual behaviour is to simply try to use it to satisfy everything that depends on that base package. The Debian CI will report the build or test failures of all the depending packages which the API changes broke.&lt;/p&gt;
&lt;p&gt;We will have a choice, then:&lt;/p&gt;
&lt;h2&gt;Breakage handling - update broken depending packages individually&lt;/h2&gt;
&lt;p&gt;If there are only a few packages that are broken, for each broken dependency, we add an appropriate &lt;code&gt;Breaks&lt;/code&gt; to the base binary package. (The version field in the &lt;code&gt;Breaks&lt;/code&gt; should be chosen narrowly, so that it is possible to resolve it without changing the major version of the dependency, eg by making a minor source change.)&lt;/p&gt;
&lt;p&gt;We can then do one of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Update the dependency from upstream, to a version which works with the new base. (Assuming there is one.) This should be the usual response.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix the dependency source code so that it builds and works with the new base package. If this wasn’t just a backport of an upstream change, we should send our fix upstream. (We should prefer to update the whole package, rather than to backport an API adjustment.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;File an RC bug against the dependency (which will eventually trigger autoremoval), or preemptively ask for the Debian release managers to remove the dependency from the work-in-progress release.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Breakage handling - declare new incompatible API in Debian&lt;/h2&gt;
&lt;p&gt;If the API changes are widespread and many dependencies are affected, we should represent this by changing the in-Debian-source-package metadata to arrange for fewer &lt;code&gt;Provides&lt;/code&gt; lines to be generated - withdrawing the &lt;code&gt;Provides&lt;/code&gt; lines for earlier APIs.&lt;/p&gt;
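&lt;p&gt;Illustrated with invented package names, raising the declared floor in the source metadata would change the generated &lt;code&gt;Provides&lt;/code&gt; roughly like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Before the compat break (declared floor 1.2, current version 1.5):
Provides: librust-base-1.2-dev, librust-base-1.3-dev, librust-base-1.4-dev

# After raising the declared floor to 1.4:
Provides: librust-base-1.4-dev
&lt;/code&gt;&lt;/pre&gt;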
&lt;p&gt;Hopefully examination of the upstream changelog will show what the main compat break is, and therefore tell us which &lt;code&gt;Provides&lt;/code&gt; we still want to retain.&lt;/p&gt;
&lt;p&gt;This is like declaring &lt;code&gt;Breaks&lt;/code&gt; for &lt;em&gt;all&lt;/em&gt; the rdepends. We should do it if many rdepends are affected.&lt;/p&gt;
&lt;p&gt;Then, for each rdependency, we must choose one of the responses in the bullet points above. In practice this will often mean a mass bug filing campaign, or a large update campaign.&lt;/p&gt;
&lt;h2&gt;Breakage handling - multiple versions&lt;/h2&gt;
&lt;p&gt;Sometimes there will be a big API rewrite in some package, and we can’t easily update all of the rdependencies because the upstream ecosystem is fragmented and the work involved in reconciling it all is too substantial.&lt;/p&gt;
&lt;p&gt;When this happens we will bite the bullet and include multiple versions of the base package in Debian. The old version will become a new source package with a version number in its name.&lt;/p&gt;
&lt;p&gt;This is analogous to how key C/C++ libraries are handled.&lt;/p&gt;
&lt;h2&gt;Downsides of this scheme&lt;/h2&gt;
&lt;p&gt;The first obvious downside is that assembling some arbitrary set of Debian Rust library packages, that satisfy the dependencies declared by Debian, is no longer necessarily going to work. The combinations that Debian has tested - Debian releases - will work, though. And at least, any breakage will affect only people &lt;em&gt;building&lt;/em&gt; Rust code using Debian-supplied libraries.&lt;/p&gt;
&lt;p&gt;Another less obvious problem is that because there is no such thing as &lt;code&gt;Build-Breaks&lt;/code&gt; (in a Debian binary package), the per-package update scheme may result in no way to declare that a particular library update breaks the build of a particular leaf package. In other words, old source packages might no longer build when exposed to newer versions of their build-dependencies, taken from a newer Debian release. This is a thing that already happens in Debian, with source packages in other languages, though.&lt;/p&gt;
&lt;h2&gt;Semver violation&lt;/h2&gt;
&lt;p&gt;I am proposing that Debian should routinely compile Rust packages against dependencies in violation of the declared semver, and ship the results to Debian’s millions of users.&lt;/p&gt;
&lt;p&gt;This sounds quite alarming! But I think it will not in fact lead to shipping bad binaries, for the following reasons:&lt;/p&gt;
&lt;p&gt;The Rust community strongly values safety (in a broad sense) in its APIs. An API which is merely &lt;em&gt;capable of&lt;/em&gt; insecure (or other seriously bad) use is generally considered to be wrong. For example, such situations are regarded as vulnerabilities by the RustSec project, even if there is no suggestion that any actually-broken caller source code exists, let alone that actually-broken compiled code is likely.&lt;/p&gt;
&lt;p&gt;The Rust community also values alerting programmers to problems. Nontrivial semantic changes to APIs are typically accompanied not merely by a semver bump, but also by changes to names or types, precisely to ensure that broken combinations of code do not compile.&lt;/p&gt;
&lt;p&gt;Or to look at it another way, in Debian we would simply be doing what many Rust upstream developers routinely do: bump the versions of their dependencies, and throw it at the wall and hope it sticks. We can mitigate the risks the same way a Rust upstream maintainer would: when updating a package we should of course review the upstream changelog for any gotchas. We should look at RustSec and other upstream ecosystem tracking and authorship information.&lt;/p&gt;
&lt;h1&gt;Difficulties for another day&lt;/h1&gt;
&lt;p&gt;As I said, I see some other issues with Rust in Debian.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I think the library “&lt;a href="https://doc.rust-lang.org/cargo/reference/features.html"&gt;feature flag&lt;/a&gt;” &lt;a href="https://wiki.debian.org/Teams/RustPackaging/Policy"&gt;encoding scheme&lt;/a&gt; is unnecessary. I hope to explain this in a future essay.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I found Debian’s approach to handling the source code for its Rust packages quite awkward; and, it has some troubling properties. Again, I hope to write about this later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I get the impression that updating &lt;a href="https://tracker.debian.org/pkg/rustc"&gt;rustc in Debian&lt;/a&gt; is a very difficult process. I haven’t worked on this myself and I don’t feel qualified to have opinions about it. I hope others are thinking about how to make things easier.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Thanks all for your attention!&lt;/p&gt;</content>
  </entry>
</feed>
