author    Luke Shumaker <lukeshu@lukeshu.com>  2018-02-09 17:28:49 -0500
committer Luke Shumaker <lukeshu@lukeshu.com>  2018-02-09 17:29:55 -0500
commit    fe95d9386908cd2b199778db38eefdce1d054020 (patch)
tree      4aa5f12ef1c920c832efff93abf162918fba426c
parent    208b8bb7a31158fabb444931e957a2b7185bfd9f (diff)
parent    46238db12b6178ce3826665a1fea180dd28b0356 (diff)
make: Update em-dashes for pandoc > 1.8.2.1
-rw-r--r--  pandoc.rb                              |  29
-rw-r--r--  public/arch-systemd.html               |  16
-rw-r--r--  public/bash-arrays.html                |  42
-rw-r--r--  public/bash-redirection.html           |   6
-rw-r--r--  public/build-bash-1.html               |  20
-rw-r--r--  public/emacs-as-an-os.html             |   8
-rw-r--r--  public/emacs-shells.html               |  18
-rw-r--r--  public/fs-licensing-explanation.html   |  24
-rw-r--r--  public/git-go-pre-commit.html          |  10
-rw-r--r--  public/http-notes.html                 |  38
-rw-r--r--  public/index.atom                      | 502
-rw-r--r--  public/index.html                      |  11
-rw-r--r--  public/index.md                        |   1
-rw-r--r--  public/java-segfault-redux.html        |  44
-rw-r--r--  public/java-segfault-redux.md          |   4
-rw-r--r--  public/java-segfault.html              |  18
-rw-r--r--  public/java-segfault.md                |   4
-rw-r--r--  public/lp2015-videos.html              |   4
-rw-r--r--  public/make-memoize.html               |   8
-rw-r--r--  public/nginx-mediawiki.html            |   6
-rw-r--r--  public/nginx-mediawiki.md              |   2
-rw-r--r--  public/pacman-overview.html            |  22
-rw-r--r--  public/poor-system-documentation.html  |   6
-rw-r--r--  public/purdue-cs-login.html            |  26
-rw-r--r--  public/rails-improvements.html         |   6
-rw-r--r--  public/ryf-routers.html                |  10
-rw-r--r--  public/term-colors.html                |  10
-rw-r--r--  public/term-colors.md                  |   4
-rw-r--r--  public/what-im-working-on-fall-2014.html |  42
-rw-r--r--  public/x11-systemd.html                |  48
30 files changed, 539 insertions(+), 450 deletions(-)
diff --git a/pandoc.rb b/pandoc.rb
index 9c12351..8c51cb4 100644
--- a/pandoc.rb
+++ b/pandoc.rb
@@ -1,3 +1,4 @@
+# coding: utf-8
require 'open3'
require 'json'
@@ -37,7 +38,11 @@ module Pandoc
end
def [](key)
- Pandoc::AST::js2sane(@js[0]["unMeta"][key])
+ Pandoc::AST::js2sane(@js["meta"][key])
+ end
+
+ def js
+ @js
end
def to(format)
@@ -61,16 +66,30 @@ module Pandoc
return js
end
case js["t"]
+ when "MetaMap"
+ Hash[js["c"].map{|k,v| [k, js2sane(v)]}]
when "MetaList"
js["c"].map{|c| js2sane(c)}
- when "MetaInlines"
- js["c"].map{|c| js2sane(c)}.join()
- when "Space"
- " "
+ when "MetaBool"
+ js["c"]
when "MetaString"
js["c"]
+ when "MetaInlines"
+ js["c"].map{|c| js2sane(c)}.join()
+ when "MetaBlocks"
+ js["c"].map{|c| js2sane(c)}.join("\n")
when "Str"
js["c"]
+ when "Space"
+ " "
+ when "RawInline"
+ js["c"][1]
+ when "RawBlock"
+ js["c"][1]
+ when "Para"
+ js["c"].map{|c| js2sane(c)}.join()
+ else
+ raise "Unexpected AST node type '#{js["t"]}'"
end
end
end
diff --git a/public/arch-systemd.html b/public/arch-systemd.html
index d9507ae..662645c 100644
--- a/public/arch-systemd.html
+++ b/public/arch-systemd.html
@@ -9,19 +9,19 @@
<body>
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » arch-systemd</header>
<article>
-<h1 id="what-arch-linuxs-switch-to-systemd-means-for-users">What Arch Linux's switch to systemd means for users</h1>
+<h1 id="what-arch-linuxs-switch-to-systemd-means-for-users">What Arch Linux’s switch to systemd means for users</h1>
<p>This is based on a post on <a href="http://www.reddit.com/r/archlinux/comments/zoffo/systemd_we_will_keep_making_it_the_distro_we_like/c66nrcb">reddit</a>, published on 2012-09-11.</p>
<p>systemd is a replacement for UNIX System V-style init; instead of having <code>/etc/init.d/*</code> or <code>/etc/rc.d/*</code> scripts, systemd runs in the background to manage them.</p>
-<p>This has the <strong>advantages</strong> that there is proper dependency tracking, easing the life of the administrator and allowing for things to be run in parallel safely. It also uses &quot;targets&quot; instead of &quot;init levels&quot;, which just makes more sense. It also means that a target can be started or stopped on the fly, such as mounting or unmounting a drive, which has in the past only been done at boot up and shut down.</p>
-<p>The <strong>downside</strong> is that it is (allegedly) big, bloated<a href="#fn1" class="footnoteRef" id="fnref1"><sup>1</sup></a>, and does (arguably) more than it should. Why is there a dedicated systemd-fsck? Why does systemd encapsulate the functionality of syslog? That, and it means somebody is standing on my lawn.</p>
-<p>The <strong>changes</strong> an Arch user needs to worry about is that everything is being moved out of <code>/etc/rc.conf</code>. Arch users will still have the choice between systemd and SysV-init, but rc.conf is becoming the SysV-init configuration file, rather than the general system configuration file. If you will still be using SysV-init, basically the only thing in rc.conf will be <code>DAEMONS</code>.<a href="#fn2" class="footnoteRef" id="fnref2"><sup>2</sup></a> For now there is compatibility for the variables that used to be there, but that is going away.</p>
+<p>This has the <strong>advantages</strong> that there is proper dependency tracking, easing the life of the administrator and allowing for things to be run in parallel safely. It also uses “targets” instead of “init levels”, which just makes more sense. It also means that a target can be started or stopped on the fly, such as mounting or unmounting a drive, which has in the past only been done at boot up and shut down.</p>
+<p>The <strong>downside</strong> is that it is (allegedly) big, bloated<a href="#fn1" class="footnote-ref" id="fnref1"><sup>1</sup></a>, and does (arguably) more than it should. Why is there a dedicated systemd-fsck? Why does systemd encapsulate the functionality of syslog? That, and it means somebody is standing on my lawn.</p>
+<p>The <strong>changes</strong> an Arch user needs to worry about is that everything is being moved out of <code>/etc/rc.conf</code>. Arch users will still have the choice between systemd and SysV-init, but rc.conf is becoming the SysV-init configuration file, rather than the general system configuration file. If you will still be using SysV-init, basically the only thing in rc.conf will be <code>DAEMONS</code>.<a href="#fn2" class="footnote-ref" id="fnref2"><sup>2</sup></a> For now there is compatibility for the variables that used to be there, but that is going away.</p>
<section class="footnotes">
<hr />
<ol>
-<li id="fn1"><p><em>I</em> don't think it's bloated, but that is the criticism. Basically, I discount any argument that uses &quot;bloated&quot; without backing it up. I was trying to say that it takes a lot of heat for being bloated, and that there is be some truth to that (the systemd-fsck and syslog comments), but that these claims are largely unsubstantiated, and more along the lines of &quot;I would have done it differently&quot;. Maybe your ideas are better, but you haven't written the code.</p>
-<p>I personally don't have an opinion either way about SysV-init vs systemd. I recently migrated my boxes to systemd, but that was because the SysV init scripts for NFSv4 in Arch are problematic. I suppose this is another <strong>advantage</strong> I missed: <em>people generally consider systemd &quot;units&quot; to be more robust and easier to write than SysV &quot;scripts&quot;.</em></p>
-<p>I'm actually not a fan of either. If I had more time on my hands, I'd be running a <code>make</code>-based init system based on a research project IBM did a while ago. So I consider myself fairly objective; my horse isn't in this race.<a href="#fnref1">↩</a></p></li>
-<li id="fn2"><p>You can still have <code>USEDMRAID</code>, <code>USELVM</code>, <code>interface</code>, <code>address</code>, <code>netmask</code>, and <code>gateway</code>. But those are minor.<a href="#fnref2">↩</a></p></li>
+<li id="fn1"><p><em>I</em> don’t think it’s bloated, but that is the criticism. Basically, I discount any argument that uses “bloated” without backing it up. I was trying to say that it takes a lot of heat for being bloated, and that there is be some truth to that (the systemd-fsck and syslog comments), but that these claims are largely unsubstantiated, and more along the lines of “I would have done it differently”. Maybe your ideas are better, but you haven’t written the code.</p>
+<p>I personally don’t have an opinion either way about SysV-init vs systemd. I recently migrated my boxes to systemd, but that was because the SysV init scripts for NFSv4 in Arch are problematic. I suppose this is another <strong>advantage</strong> I missed: <em>people generally consider systemd “units” to be more robust and easier to write than SysV “scripts”.</em></p>
+<p>I’m actually not a fan of either. If I had more time on my hands, I’d be running a <code>make</code>-based init system based on a research project IBM did a while ago. So I consider myself fairly objective; my horse isn’t in this race.<a href="#fnref1" class="footnote-back">↩</a></p></li>
+<li id="fn2"><p>You can still have <code>USEDMRAID</code>, <code>USELVM</code>, <code>interface</code>, <code>address</code>, <code>netmask</code>, and <code>gateway</code>. But those are minor.<a href="#fnref2" class="footnote-back">↩</a></p></li>
</ol>
</section>
diff --git a/public/bash-arrays.html b/public/bash-arrays.html
index af76b18..8e424bb 100644
--- a/public/bash-arrays.html
+++ b/public/bash-arrays.html
@@ -10,24 +10,24 @@
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » bash-arrays</header>
<article>
<h1 id="bash-arrays">Bash arrays</h1>
-<p>Way too many people don't understand Bash arrays. Many of them argue that if you need arrays, you shouldn't be using Bash. If we reject the notion that one should never use Bash for scripting, then thinking you don't need Bash arrays is what I like to call &quot;wrong&quot;. I don't even mean real scripting; even these little stubs in <code>/usr/bin</code>:</p>
+<p>Way too many people don’t understand Bash arrays. Many of them argue that if you need arrays, you shouldn’t be using Bash. If we reject the notion that one should never use Bash for scripting, then thinking you don’t need Bash arrays is what I like to call “wrong”. I don’t even mean real scripting; even these little stubs in <code>/usr/bin</code>:</p>
<pre><code>#!/bin/sh
java -jar /…/something.jar $* # WRONG!</code></pre>
<p>Command line arguments are exposed as an array, that little <code>$*</code> is accessing it, and is doing the wrong thing (for the lazy, the correct thing is <code>-- &quot;$@&quot;</code>). Arrays in Bash offer a safe way preserve field separation.</p>
-<p>One of the main sources of bugs (and security holes) in shell scripts is field separation. That's what arrays are about.</p>
+<p>One of the main sources of bugs (and security holes) in shell scripts is field separation. That’s what arrays are about.</p>
<h2 id="what-field-separation">What? Field separation?</h2>
-<p>Field separation is just splitting a larger unit into a list of &quot;fields&quot;. The most common case is when Bash splits a &quot;simple command&quot; (in the Bash manual's terminology) into a list of arguments. Understanding how this works is an important prerequisite to understanding arrays, and even why they are important.</p>
-<p>Dealing with lists is something that is very common in Bash scripts; from dealing with lists of arguments, to lists of files; they pop up a lot, and each time, you need to think about how the list is separated. In the case of <code>$PATH</code>, the list is separated by colons. In the case of <code>$CFLAGS</code>, the list is separated by whitespace. In the case of actual arrays, it's easy, there's no special character to worry about, just quote it, and you're good to go.</p>
+<p>Field separation is just splitting a larger unit into a list of “fields”. The most common case is when Bash splits a “simple command” (in the Bash manual’s terminology) into a list of arguments. Understanding how this works is an important prerequisite to understanding arrays, and even why they are important.</p>
+<p>Dealing with lists is something that is very common in Bash scripts; from dealing with lists of arguments, to lists of files; they pop up a lot, and each time, you need to think about how the list is separated. In the case of <code>$PATH</code>, the list is separated by colons. In the case of <code>$CFLAGS</code>, the list is separated by whitespace. In the case of actual arrays, it’s easy, there’s no special character to worry about, just quote it, and you’re good to go.</p>
<h2 id="bash-word-splitting">Bash word splitting</h2>
-<p>When Bash reads a &quot;simple command&quot;, it splits the whole thing into a list of &quot;words&quot;. &quot;The first word specifies the command to be executed, and is passed as argument zero. The remaining words are passed as arguments to the invoked command.&quot; (to quote <code>bash(1)</code>)</p>
+<p>When Bash reads a “simple command”, it splits the whole thing into a list of “words”. “The first word specifies the command to be executed, and is passed as argument zero. The remaining words are passed as arguments to the invoked command.” (to quote <code>bash(1)</code>)</p>
<p>It is often hard for those unfamiliar with Bash to understand when something is multiple words, and when it is a single word that just contains a space or newline. To help gain an intuitive understanding, I recommend using the following command to print a bullet list of words, to see how Bash splits them up:</p>
<pre><code>printf ' -> %s\n' <var>words…</var><hr> -&gt; word one
-&gt; multiline
word
-&gt; third word
</code></pre>
-<p>In a simple command, in absence of quoting, Bash separates the &quot;raw&quot; input into words by splitting on spaces and tabs. In other places, such as when expanding a variable, it uses the same process, but splits on the characters in the <code>$IFS</code> variable (which has the default value of space/tab/newline). This process is, creatively enough, called &quot;word splitting&quot;.</p>
-<p>In most discussions of Bash arrays, one of the frequent criticisms is all the footnotes and &quot;gotchas&quot; about when to quote things. That's because they usually don't set the context of word splitting. <strong>Double quotes (<code>&quot;</code>) inhibit Bash from doing word splitting.</strong> That's it, that's all they do. Arrays are already split into words; without wrapping them in double quotes Bash re-word splits them, which is almost <em>never</em> what you want; otherwise, you wouldn't be working with an array.</p>
+<p>In a simple command, in absence of quoting, Bash separates the “raw” input into words by splitting on spaces and tabs. In other places, such as when expanding a variable, it uses the same process, but splits on the characters in the <code>$IFS</code> variable (which has the default value of space/tab/newline). This process is, creatively enough, called “word splitting”.</p>
+<p>In most discussions of Bash arrays, one of the frequent criticisms is all the footnotes and “gotchas” about when to quote things. That’s because they usually don’t set the context of word splitting. <strong>Double quotes (<code>&quot;</code>) inhibit Bash from doing word splitting.</strong> That’s it, that’s all they do. Arrays are already split into words; without wrapping them in double quotes Bash re-word splits them, which is almost <em>never</em> what you want; otherwise, you wouldn’t be working with an array.</p>
<h2 id="normal-array-syntax">Normal array syntax</h2>
<table>
<caption>
@@ -74,7 +74,7 @@ word
</tr>
</tbody>
</table>
-<p>It's really that simple—that covers most usages of arrays, and most of the mistakes made with them.</p>
+<p>It’s really that simple—that covers most usages of arrays, and most of the mistakes made with them.</p>
<p>To help you understand the difference between <code>@</code> and <code>*</code>, here is a sample of each:</p>
<table>
<tbody>
@@ -165,8 +165,8 @@ done</code></pre></td>
</tbody>
</table>
<h2 id="argument-array-syntax">Argument array syntax</h2>
-<p>Accessing the arguments is mostly that simple, but that array doesn't actually have a variable name. It's special. Instead, it is exposed through a series of special variables (normal variables can only start with letters and underscore), that <em>mostly</em> match up with the normal array syntax.</p>
-<p>Setting the arguments array, on the other hand, is pretty different. That's fine, because setting the arguments array is less useful anyway.</p>
+<p>Accessing the arguments is mostly that simple, but that array doesn’t actually have a variable name. It’s special. Instead, it is exposed through a series of special variables (normal variables can only start with letters and underscore), that <em>mostly</em> match up with the normal array syntax.</p>
+<p>Setting the arguments array, on the other hand, is pretty different. That’s fine, because setting the arguments array is less useful anyway.</p>
<table>
<caption>
<h1>Accessing the arguments array</h1>
@@ -204,7 +204,7 @@ done</code></pre></td>
<tr><td><code>array=("${array[0]}" "${array[@]:<var>n+1</var>}")</code></td><td><code>shift <var>n</var></code></td></tr>
</tbody>
</table>
-<p>Did you notice what was inconsistent? The variables <code>$*</code>, <code>$@</code>, and <code>$#</code> behave like the <var>n</var>=0 entry doesn't exist.</p>
+<p>Did you notice what was inconsistent? The variables <code>$*</code>, <code>$@</code>, and <code>$#</code> behave like the <var>n</var>=0 entry doesn’t exist.</p>
<table>
<caption>
<h1>Inconsistencies</h1>
@@ -233,11 +233,11 @@ done</code></pre></td>
</tr>
</tbody>
</table>
-<p>These make sense because argument 0 is the name of the script—we almost never want that when parsing arguments. You'd spend more code getting the values that it currently gives you.</p>
+<p>These make sense because argument 0 is the name of the script—we almost never want that when parsing arguments. You’d spend more code getting the values that it currently gives you.</p>
<p>Now, for an explanation of setting the arguments array. You cannot set argument <var>n</var>=0. The <code>set</code> command is used to manipulate the arguments passed to Bash after the fact—similarly, you could use <code>set -x</code> to make Bash behave like you ran it as <code>bash -x</code>; like most GNU programs, the <code>--</code> tells it to not parse any of the options as flags. The <code>shift</code> command shifts each entry <var>n</var> spots to the left, using <var>n</var>=1 if no value is specified; and leaving argument 0 alone.</p>
-<h2 id="but-you-mentioned-gotchas-about-quoting">But you mentioned &quot;gotchas&quot; about quoting!</h2>
-<p>But I explained that quoting simply inhibits word splitting, which you pretty much never want when working with arrays. If, for some odd reason, you do what word splitting, then that's when you don't quote. Simple, easy to understand.</p>
-<p>I think possibly the only case where you do want word splitting with an array is when you didn't want an array, but it's what you get (arguments are, by necessity, an array). For example:</p>
+<h2 id="but-you-mentioned-gotchas-about-quoting">But you mentioned “gotchas” about quoting!</h2>
+<p>But I explained that quoting simply inhibits word splitting, which you pretty much never want when working with arrays. If, for some odd reason, you do what word splitting, then that’s when you don’t quote. Simple, easy to understand.</p>
+<p>I think possibly the only case where you do want word splitting with an array is when you didn’t want an array, but it’s what you get (arguments are, by necessity, an array). For example:</p>
<pre><code># Usage: path_ls PATH1 PATH2…
# Description:
# Takes any number of PATH-style values; that is,
@@ -253,13 +253,13 @@ path_ls() {
find -L &quot;${dirs[@]}&quot; -maxdepth 1 -type f -executable \
-printf &#39;%f\n&#39; 2&gt;/dev/null | sort -u
}</code></pre>
-<p>Logically, there shouldn't be multiple arguments, just a single <code>$PATH</code> value; but, we can't enforce that, as the array can have any size. So, we do the robust thing, and just act on the entire array, not really caring about the fact that it is an array. Alas, there is still a field-separation bug in the program, with the output.</p>
-<h2 id="i-still-dont-think-i-need-arrays-in-my-scripts">I still don't think I need arrays in my scripts</h2>
+<p>Logically, there shouldn’t be multiple arguments, just a single <code>$PATH</code> value; but, we can’t enforce that, as the array can have any size. So, we do the robust thing, and just act on the entire array, not really caring about the fact that it is an array. Alas, there is still a field-separation bug in the program, with the output.</p>
+<h2 id="i-still-dont-think-i-need-arrays-in-my-scripts">I still don’t think I need arrays in my scripts</h2>
<p>Consider the common code:</p>
<pre><code>ARGS=&#39; -f -q&#39;
command $ARGS # unquoted variables are a bad code-smell anyway</code></pre>
-<p>Here, <code>$ARGS</code> is field-separated by <code>$IFS</code>, which we are assuming has the default value. This is fine, as long as <code>$ARGS</code> is known to never need an embedded space; which you do as long as it isn't based on anything outside of the program. But wait until you want to do this:</p>
+<p>Here, <code>$ARGS</code> is field-separated by <code>$IFS</code>, which we are assuming has the default value. This is fine, as long as <code>$ARGS</code> is known to never need an embedded space; which you do as long as it isn’t based on anything outside of the program. But wait until you want to do this:</p>
<pre><code>ARGS=&#39; -f -q&#39;
if [[ -f &quot;$filename&quot; ]]; then
@@ -267,7 +267,7 @@ if [[ -f &quot;$filename&quot; ]]; then
fi
command $ARGS</code></pre>
-<p>Now you're hosed if <code>$filename</code> contains a space! More than just breaking, it could have unwanted side effects, such as when someone figures out how to make <code>filename='foo --dangerous-flag'</code>.</p>
+<p>Now you’re hosed if <code>$filename</code> contains a space! More than just breaking, it could have unwanted side effects, such as when someone figures out how to make <code>filename='foo --dangerous-flag'</code>.</p>
<p>Compare that with the array version:</p>
<pre><code>ARGS=(-f -q)
@@ -280,7 +280,7 @@ command &quot;${ARGS[@]}&quot;</code></pre>
<p>Except for the little stubs that call another program with <code>&quot;$@&quot;</code> at the end, trying to write for multiple shells (including the ambiguous <code>/bin/sh</code>) is not a task for mere mortals. If you do try that, your best bet is probably sticking to POSIX. Arrays are not POSIX; except for the arguments array, which is; though getting subset arrays from <code>$@</code> and <code>$*</code> is not (tip: use <code>set --</code> to re-purpose the arguments array).</p>
<p>Writing for various versions of Bash, though, is pretty do-able. Everything here works all the way back in bash-2.0 (December 1996), with the following exceptions:</p>
<ul>
-<li><p>The <code>+=</code> operator wasn't added until Bash 3.1.</p>
+<li><p>The <code>+=</code> operator wasn’t added until Bash 3.1.</p>
<ul>
<li>As a work-around, use <code>array[${#array[*]}]=<var>word</var></code> to append a single element.</li>
</ul></li>
@@ -291,7 +291,7 @@ command &quot;${ARGS[@]}&quot;</code></pre>
<li>In Bash 4.1 and higher, it works in the way described in the main part of this document.</li>
</ul></li>
</ul>
-<p>Now, Bash 1.x doesn't have arrays at all. <code>$@</code> and <code>$*</code> work, but using <code>:</code> to select a range of elements from them doesn't. Good thing most boxes have been updated since 1996!</p>
+<p>Now, Bash 1.x doesn’t have arrays at all. <code>$@</code> and <code>$*</code> work, but using <code>:</code> to select a range of elements from them doesn’t. Good thing most boxes have been updated since 1996!</p>
</article>
<footer>
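[Editor's aside: the field-separation point the diffed post makes can be checked directly. A minimal sketch (hypothetical two-word filenames) comparing an unquoted `$*` against a quoted `"$@"`:]

```shell
#!/bin/bash
# Two arguments that contain spaces survive "$@" intact,
# but are re-word-split by a bare $*.
set -- 'file one' 'file two'           # the arguments array, n=2

unquoted=$(printf '%s\n' $*   | wc -l) # re-split: 4 fields
quoted=$(printf '%s\n' "$@" | wc -l)   # quoting inhibits splitting: 2 fields

echo "unquoted=$unquoted quoted=$quoted"
```

Running it prints `unquoted=4 quoted=2`, which is exactly the `$*`-vs-`"$@"` bug called out in the `/usr/bin` stub at the top of the post.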
diff --git a/public/bash-redirection.html b/public/bash-redirection.html
index d4616fe..adeb9c9 100644
--- a/public/bash-redirection.html
+++ b/public/bash-redirection.html
@@ -10,8 +10,8 @@
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » bash-redirection</header>
<article>
<h1 id="bash-redirection">Bash redirection</h1>
-<p>Apparently, too many people don't understand Bash redirection. They might get the basic syntax, but they think of the process as declarative; in Bourne-ish shells, it is procedural.</p>
-<p>In Bash, streams are handled in terms of &quot;file descriptors&quot; of &quot;FDs&quot;. FD 0 is stdin, FD 1 is stdout, and FD 2 is stderr. The equivalence (or lack thereof) between using a numeric file descriptor, and using the associated file in <code>/dev/*</code> and <code>/proc/*</code> is interesting, but beyond the scope of this article.</p>
+<p>Apparently, too many people don’t understand Bash redirection. They might get the basic syntax, but they think of the process as declarative; in Bourne-ish shells, it is procedural.</p>
+<p>In Bash, streams are handled in terms of “file descriptors” of “FDs”. FD 0 is stdin, FD 1 is stdout, and FD 2 is stderr. The equivalence (or lack thereof) between using a numeric file descriptor, and using the associated file in <code>/dev/*</code> and <code>/proc/*</code> is interesting, but beyond the scope of this article.</p>
<h2 id="step-1-pipes">Step 1: Pipes</h2>
<p>To quote the Bash manual:</p>
<pre><code>A &#39;pipeline&#39; is a sequence of simple commands separated by one of the
@@ -19,7 +19,7 @@ control operators &#39;|&#39; or &#39;|&amp;&#39;.
The format for a pipeline is
[time [-p]] [!] COMMAND1 [ [| or |&amp;] COMMAND2 ...]</code></pre>
-<p>Now, <code>|&amp;</code> is just shorthand for <code>2&gt;&amp;1 |</code>, the pipe part happens here, but the <code>2&gt;&amp;1</code> part doesn't happen until step 2.</p>
+<p>Now, <code>|&amp;</code> is just shorthand for <code>2&gt;&amp;1 |</code>, the pipe part happens here, but the <code>2&gt;&amp;1</code> part doesn’t happen until step 2.</p>
<p>First, if the command is part of a pipeline, the pipes are set up. For every instance of the <code>|</code> metacharacter, Bash creates a pipe (<code>pipe(3)</code>), and duplicates (<code>dup2(3)</code>) the write end of the pipe to FD 1 of the process on the left side of the <code>|</code>, and duplicate the read end of the pipe to FD 0 of the process on the right side.</p>
<h2 id="step-2-redirections">Step 2: Redirections</h2>
<p><em>After</em> the initial FD 0 and FD 1 fiddling by pipes is done, Bash looks at the redirections. <strong>This means that redirections can override pipes.</strong></p>
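[Editor's aside: because redirections are processed procedurally, left to right, their order matters. A small sketch (hypothetical `emit` helper) contrasting `>/dev/null 2>&1` with `2>&1 >/dev/null`:]

```shell
#!/bin/bash
# Each redirection acts on the FDs as they are *at that moment*.
emit() { echo out; echo err >&2; }

a=$(emit >/dev/null 2>&1)  # FD1 -> null, then FD2 -> (what FD1 is now, null):
                           # both streams discarded
b=$(emit 2>&1 >/dev/null)  # FD2 -> current FD1 (the capture pipe),
                           # then FD1 -> null: only "err" is captured

echo "a=[$a] b=[$b]"
```

This prints `a=[] b=[err]`: the second ordering is the expansion of `|&`, where stderr joins whatever stdout pointed at *before* any later redirection.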
diff --git a/public/build-bash-1.html b/public/build-bash-1.html
index 3c78a6d..f166d6e 100644
--- a/public/build-bash-1.html
+++ b/public/build-bash-1.html
@@ -12,9 +12,9 @@
<h1 id="building-bash-1.14.7-on-a-modern-system">Building Bash 1.14.7 on a modern system</h1>
<p>In a previous revision of my <a href="./bash-arrays.html">Bash arrays post</a>, I wrote:</p>
<blockquote>
-<p>Bash 1.x won't compile with modern GCC, so I couldn't verify how it behaves.</p>
+<p>Bash 1.x won’t compile with modern GCC, so I couldn’t verify how it behaves.</p>
</blockquote>
-<p>I recall spending a little time fighting with it, but apparently I didn't try very hard: getting Bash 1.14.7 to build on a modern box is mostly just adjusting it to use <code>stdarg</code> instead of the no-longer-implemented <code>varargs</code>. There's also a little fiddling with the pre-autoconf automatic configuration.</p>
+<p>I recall spending a little time fighting with it, but apparently I didn’t try very hard: getting Bash 1.14.7 to build on a modern box is mostly just adjusting it to use <code>stdarg</code> instead of the no-longer-implemented <code>varargs</code>. There’s also a little fiddling with the pre-autoconf automatic configuration.</p>
<h2 id="stdarg">stdarg</h2>
<p>Converting to <code>stdarg</code> is pretty simple: For each variadic function (functions that take a variable number of arguments), follow these steps:</p>
<ol type="1">
@@ -24,27 +24,27 @@
<li>Replace <code>va_start (args);</code> with <code>va_start (args, format);</code> in the function bodies.</li>
<li>Replace <code>function_name ();</code> with <code>function_name (char *, ...)</code> in header files and/or at the top of C files.</li>
</ol>
-<p>There's one function that uses the variable name <code>control</code> instead of <code>format</code>.</p>
-<p>I've prepared <a href="./bash-1.14.7-gcc4-stdarg.patch">a patch</a> that does this.</p>
+<p>There’s one function that uses the variable name <code>control</code> instead of <code>format</code>.</p>
+<p>I’ve prepared <a href="./bash-1.14.7-gcc4-stdarg.patch">a patch</a> that does this.</p>
<h2 id="configuration">Configuration</h2>
-<p>Instead of using autoconf-style tests to test for compiler and platform features, Bash 1 used the file <code>machines.h</code> that had <code>#ifdefs</code> and a huge database of of different operating systems for different platforms. It's gross. And quite likely won't handle your modern operating system.</p>
+<p>Instead of using autoconf-style tests to test for compiler and platform features, Bash 1 used the file <code>machines.h</code> that had <code>#ifdefs</code> and a huge database of of different operating systems for different platforms. It’s gross. And quite likely won’t handle your modern operating system.</p>
<p>I made these two small changes to <code>machines.h</code> to get it to work correctly on my box:</p>
<ol type="1">
<li>Replace <code>#if defined (i386)</code> with <code>#if defined (i386) || defined (__x86_64__)</code>. The purpose of this is obvious.</li>
-<li>Add <code>#define USE_TERMCAP_EMULATION</code> to the section for Linux [sic] on i386 (<code># if !defined (done386) &amp;&amp; (defined (__linux__) || defined (linux))</code>). What this does is tell it to link against libcurses to use curses termcap emulation, instead of linking against libtermcap (which doesn't exist on modern GNU/Linux systems).</li>
+<li>Add <code>#define USE_TERMCAP_EMULATION</code> to the section for Linux [sic] on i386 (<code># if !defined (done386) &amp;&amp; (defined (__linux__) || defined (linux))</code>). What this does is tell it to link against libcurses to use curses termcap emulation, instead of linking against libtermcap (which doesn’t exist on modern GNU/Linux systems).</li>
</ol>
-<p>Again, I've prepared <a href="./bash-1.14.7-machines-config.patch">a patch</a> that does this.</p>
+<p>Again, I’ve prepared <a href="./bash-1.14.7-machines-config.patch">a patch</a> that does this.</p>
<h2 id="building">Building</h2>
<p>With those adjustments, it should build, but with quite a few warnings. Making a couple of changes to <code>CFLAGS</code> should fix that:</p>
<pre><code>make CFLAGS=&#39;-O -g -Werror -Wno-int-to-pointer-cast -Wno-pointer-to-int-cast -Wno-deprecated-declarations -include stdio.h -include stdlib.h -include string.h -Dexp2=bash_exp2&#39;</code></pre>
-<p>That's a doozy! Let's break it down:</p>
+<p>That’s a doozy! Let’s break it down:</p>
<ul>
<li><code>-O -g</code> The default value for CFLAGS (defined in <code>cpp-Makefile</code>)</li>
<li><code>-Werror</code> Treat warnings as errors; force us to deal with any issues.</li>
<li><code>-Wno-int-to-pointer-cast -Wno-pointer-to-int-cast</code> Allow casting between integers and pointers. Unfortunately, the way this version of Bash was designed requires this.</li>
-<li><code>-Wno-deprecated-declarations</code> The <code>getwd</code> function in <code>unistd.h</code> is considered deprecated (use <code>getcwd</code> instead). However, if <code>getcwd</code> is available, Bash uses it's own <code>getwd</code> wrapper around <code>getcwd</code> (implemented in <code>general.c</code>), and only uses the signature from <code>unistd.h</code>, not the actuall implementation from libc.</li>
+<li><code>-Wno-deprecated-declarations</code> The <code>getwd</code> function in <code>unistd.h</code> is considered deprecated (use <code>getcwd</code> instead). However, if <code>getcwd</code> is available, Bash uses it’s own <code>getwd</code> wrapper around <code>getcwd</code> (implemented in <code>general.c</code>), and only uses the signature from <code>unistd.h</code>, not the actuall implementation from libc.</li>
<li><code>-include stdio.h -include stdlib.h -include string.h</code> Several files are missing these header file includes. If not for <code>-Werror</code>, the default function signature fallbacks would work.</li>
-<li><code>-Dexp2=bash_exp2</code> Avoid a conflict between the parser's <code>exp2</code> helper function and <code>math.h</code>'s base-2 exponential function.</li>
+<li><code>-Dexp2=bash_exp2</code> Avoid a conflict between the parser’s <code>exp2</code> helper function and <code>math.h</code>’s base-2 exponential function.</li>
</ul>
<p>Have fun, software archaeologists!</p>
diff --git a/public/emacs-as-an-os.html b/public/emacs-as-an-os.html
index 5403609..8a42e96 100644
--- a/public/emacs-as-an-os.html
+++ b/public/emacs-as-an-os.html
@@ -11,11 +11,11 @@
<article>
<h1 id="emacs-as-an-operating-system">Emacs as an operating system</h1>
<p>This was originally published on <a href="https://news.ycombinator.com/item?id=6292742">Hacker News</a> on 2013-08-29.</p>
-<p>Calling Emacs an OS is dubious, it certainly isn't a general purpose OS, and won't run on real hardware. But, let me make the case that Emacs is an OS.</p>
+<p>Calling Emacs an OS is dubious, it certainly isn’t a general purpose OS, and won’t run on real hardware. But, let me make the case that Emacs is an OS.</p>
<p>Emacs has two parts, the C part, and the Emacs Lisp part.</p>
-<p>The C part isn't just a Lisp interpreter, it is a Lisp Machine emulator. It doesn't particularly resemble any of the real Lisp machines. The TCP, Keyboard/Mouse, display support, and filesystem are done at the hardware level (the operations to work with these things are among the primitive operations provided by the hardware). Of these, the display being handled by the hardware isn't particularly uncommon, historically; the filesystem is a little stranger.</p>
-<p>The Lisp part of Emacs is the operating system that runs on that emulated hardware. It's not a particularly powerful OS, it not a multitasking system. It has many packages available for it (though not until recently was there a official package manager). It has reasonably powerful IPC mechanisms. It has shells, mail clients (MUAs and MSAs), web browsers, web servers and more, all written entirely in Emacs Lisp.</p>
-<p>You might say, &quot;but a lot of that is being done by the host operating system!&quot; Sure, some of it is, but all of it is sufficiently low level. If you wanted to share the filesystem with another OS running in a VM, you might do it by sharing it as a network filesystem; this is necessary when the VM OS is not designed around running in a VM. However, because Emacs OS will always be running in the Emacs VM, we can optimize it by having the Emacs VM include processor features mapping the native OS, and have the Emacs OS be aware of them. It would be slower and more code to do that all over the network.</p>
+<p>The C part isn’t just a Lisp interpreter, it is a Lisp Machine emulator. It doesn’t particularly resemble any of the real Lisp machines. The TCP, Keyboard/Mouse, display support, and filesystem are done at the hardware level (the operations to work with these things are among the primitive operations provided by the hardware). Of these, the display being handled by the hardware isn’t particularly uncommon, historically; the filesystem is a little stranger.</p>
+<p>The Lisp part of Emacs is the operating system that runs on that emulated hardware. It’s not a particularly powerful OS; it’s not a multitasking system. It has many packages available for it (though not until recently was there an official package manager). It has reasonably powerful IPC mechanisms. It has shells, mail clients (MUAs and MSAs), web browsers, web servers and more, all written entirely in Emacs Lisp.</p>
+<p>You might say, “but a lot of that is being done by the host operating system!” Sure, some of it is, but all of it is sufficiently low level. If you wanted to share the filesystem with another OS running in a VM, you might do it by sharing it as a network filesystem; this is necessary when the VM OS is not designed around running in a VM. However, because Emacs OS will always be running in the Emacs VM, we can optimize it by having the Emacs VM include processor features mapping the native OS, and have the Emacs OS be aware of them. It would be slower and more code to do that all over the network.</p>
</article>
<footer>
diff --git a/public/emacs-shells.html b/public/emacs-shells.html
index a168d0a..172eb91 100644
--- a/public/emacs-shells.html
+++ b/public/emacs-shells.html
@@ -9,19 +9,23 @@
<body>
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » emacs-shells</header>
<article>
-<h1 id="a-summary-of-emacs-bundled-shell-and-terminal-modes">A summary of Emacs' bundled shell and terminal modes</h1>
+<h1 id="a-summary-of-emacs-bundled-shell-and-terminal-modes">A summary of Emacs’ bundled shell and terminal modes</h1>
<p>This is based on a post on <a href="http://www.reddit.com/r/emacs/comments/1bzl8b/how_can_i_get_a_dumbersimpler_shell_in_emacs/c9blzyb">reddit</a>, published on 2013-04-09.</p>
-<p>Emacs comes bundled with a few different shell and terminal modes. It can be hard to keep them straight. What's the difference between <code>M-x term</code> and <code>M-x ansi-term</code>?</p>
-<p>Here's a good breakdown of the different bundled shells and terminals for Emacs, from dumbest to most Emacs-y.</p>
+<p>Emacs comes bundled with a few different shell and terminal modes. It can be hard to keep them straight. What’s the difference between <code>M-x term</code> and <code>M-x ansi-term</code>?</p>
+<p>Here’s a good breakdown of the different bundled shells and terminals for Emacs, from dumbest to most Emacs-y.</p>
<h2 id="term-mode">term-mode</h2>
<p>Your VT100-esque terminal emulator; it does what most terminal programs do. Ncurses-things work OK, but dumping large amounts of text can be slow. By default it asks you which shell to run, defaulting to the environmental variable <code>$SHELL</code> (<code>/bin/bash</code> for me). There are two modes of operation:</p>
<ul>
-<li>char mode: Keys are sent immediately to the shell (including keys that are normally Emacs keystrokes), with the following exceptions:</li>
+<li>char mode: Keys are sent immediately to the shell (including keys that are normally Emacs keystrokes), with the following exceptions:
+<ul>
<li><code>(term-escape-char) (term-escape-char)</code> sends <code>(term-escape-char)</code> to the shell (see above for what the default value is).</li>
<li><code>(term-escape-char) &lt;anything-else&gt;</code> equates to <code>C-x &lt;anything-else&gt;</code> in normal Emacs.</li>
<li><code>(term-escape-char) C-j</code> switches to line mode.</li>
-<li>line mode: Editing is done like in a normal Emacs buffer, <code>&lt;enter&gt;</code> sends the current line to the shell. This is useful for working with a program's output.</li>
+</ul></li>
+<li>line mode: Editing is done like in a normal Emacs buffer, <code>&lt;enter&gt;</code> sends the current line to the shell. This is useful for working with a program’s output.
+<ul>
<li><code>C-c C-k</code> switches to char mode.</li>
+</ul></li>
</ul>
<p>This mode is activated with</p>
<pre><code>; Creates or switches to an existing &quot;*terminal*&quot; buffer.
@@ -32,10 +36,10 @@ M-x term</code></pre>
; The default &#39;term-escape-char&#39; is &quot;C-c&quot; and &quot;C-x&quot;
M-x ansi-term</code></pre>
<h2 id="shell-mode">shell-mode</h2>
-<p>The name is a misnomer; shell-mode is a terminal emulator, not a shell; it's called that because it is used for running a shell (bash, zsh, …). The idea of this mode is to use an external shell, but make it Emacs-y. History is not handled by the shell, but by Emacs; <code>M-p</code> and <code>M-n</code> access the history, while arrows/<code>C-p</code>/<code>C-n</code> move the point (which is is consistent with other Emacs REPL-type interfaces). It ignores VT100-type terminal colors, and colorizes things itself (it inspects words to see if they are directories, in the case of <code>ls</code>). This has the benefit that it does syntax highlighting on the currently being typed command. Ncurses programs will of course not work. This mode is activated with:</p>
+<p>The name is a misnomer; shell-mode is a terminal emulator, not a shell; it’s called that because it is used for running a shell (bash, zsh, …). The idea of this mode is to use an external shell, but make it Emacs-y. History is not handled by the shell, but by Emacs; <code>M-p</code> and <code>M-n</code> access the history, while arrows/<code>C-p</code>/<code>C-n</code> move the point (which is consistent with other Emacs REPL-type interfaces). It ignores VT100-type terminal colors, and colorizes things itself (it inspects words to see if they are directories, in the case of <code>ls</code>). This has the benefit that it does syntax highlighting on the command currently being typed. Ncurses programs will of course not work. This mode is activated with:</p>
<pre><code>M-x shell</code></pre>
<h2 id="eshell-mode">eshell-mode</h2>
-<p>This is a shell+terminal, entirely written in Emacs lisp. (Interestingly, it doesn't set <code>$SHELL</code>, so that will be whatever it was when you launched Emacs). This won't even be running zsh or bash, it will be running &quot;esh&quot;, part of Emacs.</p>
+<p>This is a shell+terminal, entirely written in Emacs lisp. (Interestingly, it doesn’t set <code>$SHELL</code>, so that will be whatever it was when you launched Emacs). This won’t even be running zsh or bash, it will be running “esh”, part of Emacs.</p>
</article>
<footer>
diff --git a/public/fs-licensing-explanation.html b/public/fs-licensing-explanation.html
index d7eaf4b..4976e9e 100644
--- a/public/fs-licensing-explanation.html
+++ b/public/fs-licensing-explanation.html
@@ -9,25 +9,25 @@
<body>
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » fs-licensing-explanation</header>
<article>
-<h1 id="an-explanation-of-how-copyleft-licensing-works">An explanation of how &quot;copyleft&quot; licensing works</h1>
+<h1 id="an-explanation-of-how-copyleft-licensing-works">An explanation of how “copyleft” licensing works</h1>
<p>This is based on a post on <a href="http://www.reddit.com/r/freesoftware/comments/18xplw/can_software_be_free_gnu_and_still_be_owned_by_an/c8ixwq2">reddit</a>, published on 2013-02-21.</p>
<blockquote>
-<p>While reading the man page for readline I noticed the copyright section said &quot;Readline is Copyright (C) 1989-2011 Free Software Foundation Inc&quot;. How can software be both licensed under GNU and copyrighted to a single group? It was my understanding that once code became free it didn't belong to any particular group or individual.</p>
-<p>[LiveCode is GPLv3, but also sells non-free licenses] Can you really have the same code under two conflicting licenses? Once licensed under GPL3 wouldn't they too be required to adhere to its rules?</p>
+<p>While reading the man page for readline I noticed the copyright section said “Readline is Copyright (C) 1989-2011 Free Software Foundation Inc”. How can software be both licensed under GNU and copyrighted to a single group? It was my understanding that once code became free it didn’t belong to any particular group or individual.</p>
+<p>[LiveCode is GPLv3, but also sells non-free licenses] Can you really have the same code under two conflicting licenses? Once licensed under GPL3 wouldn’t they too be required to adhere to its rules?</p>
</blockquote>
-<p>I believe that GNU/the FSF has an FAQ that addresses this, but I can't find it, so here we go.</p>
+<p>I believe that GNU/the FSF has an FAQ that addresses this, but I can’t find it, so here we go.</p>
<h3 id="glossary">Glossary:</h3>
<ul>
-<li>&quot;<em>Copyright</em>&quot; is the right to control how copies are made of something.</li>
-<li>Something for which no one holds the copyright is in the &quot;<em>public domain</em>&quot;, because anyone (&quot;the public&quot;) is allowed to do <em>anything</em> with it.</li>
-<li>A &quot;<em>license</em>&quot; is basically a legal document that says &quot;I promise not to sue you if make copies in these specific ways.&quot;</li>
-<li>A &quot;<em>non-free</em>&quot; license basically says &quot;There are no conditions under which you can make copies that I won't sue you.&quot;</li>
-<li>A &quot;<em>permissive</em>&quot; (type of free) license basically says &quot;You can do whatever you want, BUT have to give me credit&quot;, and is very similar to the public domain. If the copyright holder didn't have the copyright, they couldn't sue you to make sure that you gave them credit, and nobody would have to give them credit.</li>
-<li>A &quot;<em>copyleft</em>&quot; (type of free) license basically says, &quot;You can do whatever you want, BUT anyone who gets a copy from you has to be able to do whatever they want too.&quot; If the copyright holder didn't have the copyright, they couldn't sue you to make sure that you gave the source to people go got it from you, and non-free versions of these programs would start to exist.</li>
+<li>“<em>Copyright</em>” is the right to control how copies are made of something.</li>
+<li>Something for which no one holds the copyright is in the “<em>public domain</em>”, because anyone (“the public”) is allowed to do <em>anything</em> with it.</li>
+<li>A “<em>license</em>” is basically a legal document that says “I promise not to sue you if you make copies in these specific ways.”</li>
+<li>A “<em>non-free</em>” license basically says “There are no conditions under which you can make copies that I won’t sue you over.”</li>
+<li>A “<em>permissive</em>” (type of free) license basically says “You can do whatever you want, BUT you have to give me credit”, and is very similar to the public domain. If the copyright holder didn’t have the copyright, they couldn’t sue you to make sure that you gave them credit, and nobody would have to give them credit.</li>
+<li>A “<em>copyleft</em>” (type of free) license basically says, “You can do whatever you want, BUT anyone who gets a copy from you has to be able to do whatever they want too.” If the copyright holder didn’t have the copyright, they couldn’t sue you to make sure that you gave the source to people who got it from you, and non-free versions of these programs would start to exist.</li>
</ul>
<h3 id="specific-questions">Specific questions:</h3>
-<p>Readline: The GNU GPL is a copyleft license. If you make a modified version of Readline, and give it to others without letting them have the source code, the FSF will sue you. They can do this because they have the copyright on Readline, and in the GNU GPL (the license they used) it only says that they won't sue you if you distribute the source with the modified version. If they didn't have the copyright, they couldn't sue you, and the GNU GPL would be worthless.</p>
-<p>LiveCode: The copyright holder for something is not required to obey the license—the license is only a promise not to sue you; of course they won't sue themselves. They can also offer different terms to different people. They can tell most people &quot;I won't sue you as long as you share the source,&quot; but if someone gave them a little money, they might say, &quot;I also promise not sue sue this guy, even if he doesn't give out the source.&quot;</p>
+<p>Readline: The GNU GPL is a copyleft license. If you make a modified version of Readline, and give it to others without letting them have the source code, the FSF will sue you. They can do this because they have the copyright on Readline, and in the GNU GPL (the license they used) it only says that they won’t sue you if you distribute the source with the modified version. If they didn’t have the copyright, they couldn’t sue you, and the GNU GPL would be worthless.</p>
+<p>LiveCode: The copyright holder for something is not required to obey the license—the license is only a promise not to sue you; of course they won’t sue themselves. They can also offer different terms to different people. They can tell most people “I won’t sue you as long as you share the source,” but if someone gave them a little money, they might say, “I also promise not to sue this guy, even if he doesn’t give out the source.”</p>
</article>
<footer>
diff --git a/public/git-go-pre-commit.html b/public/git-go-pre-commit.html
index 5c536f1..e0c29bf 100644
--- a/public/git-go-pre-commit.html
+++ b/public/git-go-pre-commit.html
@@ -11,9 +11,9 @@
<article>
<h1 id="a-git-pre-commit-hook-for-automatically-formatting-go-code">A git pre-commit hook for automatically formatting Go code</h1>
<p>One of the (many) wonderful things about the Go programming language is the <code>gofmt</code> tool, which formats your source in a canonical way. I thought it would be nice to integrate this in my <code>git</code> workflow by adding it in a pre-commit hook to automatically format my source code when I committed it.</p>
-<p>The Go distribution contains a git pre-commit hook that checks whether the source code is formatted, and aborts the commit if it isn't. I don't remember if I was aware of this at the time (or if it even existed at the time, or if it is new), but I wanted it to go ahead and format the code for me.</p>
-<p>I found a few solutions online, but they were all missing something—support for partial commits. I frequently use <code>git add -p</code>/<code>git gui</code> to commit a subset of the changes I've made to a file, the existing solutions would end up adding the entire set of changes to my commit.</p>
-<p>I ended up writing a solution that only formats the version of the that is staged for commit; here's my <code>.git/hooks/pre-commit</code>:</p>
+<p>The Go distribution contains a git pre-commit hook that checks whether the source code is formatted, and aborts the commit if it isn’t. I don’t remember if I was aware of this at the time (or if it even existed at the time, or if it is new), but I wanted it to go ahead and format the code for me.</p>
+<p>I found a few solutions online, but they were all missing something—support for partial commits. I frequently use <code>git add -p</code>/<code>git gui</code> to commit a subset of the changes I’ve made to a file; the existing solutions would end up adding the entire set of changes to my commit.</p>
+<p>I ended up writing a solution that only formats the version of the file that is staged for commit; here’s my <code>.git/hooks/pre-commit</code>:</p>
<pre><code>#!/bin/bash
# This would only loop over files that are already staged for commit.
@@ -31,8 +31,8 @@ for file in **/*.go; do
git add &quot;$file&quot;
mv &quot;$tmp&quot; &quot;$file&quot;
done</code></pre>
-<p>It's still not perfect. It will try to operate on every <code>*.go</code> file—which might do weird things if you have a file that hasn't been checked in at all. This also has the effect of formatting files that were checked in without being formatted, but weren't modified in this commit.</p>
-<p>I don't remember why I did that—as you can see from the comment, I knew how to only select files that were staged for commit. I haven't worked on any projects in Go in a while—if I return to one of them, and remember why I did that, I will update this page.</p>
+<p>It’s still not perfect. It will try to operate on every <code>*.go</code> file—which might do weird things if you have a file that hasn’t been checked in at all. This also has the effect of formatting files that were checked in without being formatted, but weren’t modified in this commit.</p>
+<p>I don’t remember why I did that—as you can see from the comment, I knew how to only select files that were staged for commit. I haven’t worked on any projects in Go in a while—if I return to one of them, and remember why I did that, I will update this page.</p>
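For comparison, selecting only the files that are actually staged can be sketched as below. This is a hypothetical demo in a throwaway repo, not the post's hook—and note that operating on the working-tree file this way would NOT preserve partial commits, which is exactly the problem the post's hook solves:

```shell
# Sketch (not the post's hook): list only the staged .go files,
# demonstrated in a throwaway repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
printf 'package main\n' > main.go
echo notes > notes.txt
git add main.go notes.txt
# Added/Copied/Modified files in the index, filtered to Go sources:
for file in $(git diff --cached --name-only --diff-filter=ACM | grep '\.go$'); do
    echo "would gofmt: $file"
done
```

Running this prints `would gofmt: main.go`—the staged `notes.txt` is correctly skipped.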
</article>
<footer>
diff --git a/public/http-notes.html b/public/http-notes.html
index 6b6c1b2..b99b643 100644
--- a/public/http-notes.html
+++ b/public/http-notes.html
@@ -10,15 +10,16 @@
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » http-notes</header>
<article>
<h1 id="notes-on-subtleties-of-http-implementation">Notes on subtleties of HTTP implementation</h1>
-<h1 id="why-the-absolute-form-used-for-proxy-requests">Why the absolute-form used for proxy requests</h1>
+<p>I may add to this as time goes on, but I’ve written up some notes on subtleties of the HTTP/1.1 message syntax as specified in RFC 7230.</p>
+<h2 id="why-the-absolute-form-is-used-for-proxy-requests">Why the absolute-form is used for proxy requests</h2>
<p><a href="https://tools.ietf.org/html/rfc7230#section-5.3.2">RFC7230§5.3.2</a> says that a (non-CONNECT) request to an HTTP proxy should look like</p>
<pre><code>GET http://authority/path HTTP/1.1</code></pre>
<p>rather than the usual</p>
<pre><code>GET /path HTTP/1.1
Host: authority</code></pre>
-<p>And doesn't give a hint as to why the message syntax is different here.</p>
-<p><a href="https://parsiya.net/blog/2016-07-28-thick-client-proxying---part-6-how-https-proxies-work/#3-1-1-why-not-use-the-host-header">A blog post by Parsia Hakimian</a> claims that the reason is that it's a legacy behavior inherited from HTTP/1.0, which had proxies, but not the Host header field. Which is mostly true. But we can also realize that the usual syntax does not allow specifying a URI scheme, which means that we cannot specify a transport. Sure, the only two HTTP transports we might expect to use today are TCP (scheme: http) and TLS (scheme: https), and TLS requires we use a CONNECT request to the proxy, meaning that the only option left is a TCP transport; but that is no reason to avoid building generality into the protocol.</p>
-<h1 id="on-taking-short-cuts-based-on-early-header-field-values">On taking short-cuts based on early header field values</h1>
+<p>And doesn’t give a hint as to why the message syntax is different here.</p>
+<p><a href="https://parsiya.net/blog/2016-07-28-thick-client-proxying---part-6-how-https-proxies-work/#3-1-1-why-not-use-the-host-header">A blog post by Parsia Hakimian</a> claims that the reason is that it’s a legacy behavior inherited from HTTP/1.0, which had proxies, but not the Host header field. Which is mostly true. But we can also realize that the usual syntax does not allow specifying a URI scheme, which means that we cannot specify a transport. Sure, the only two HTTP transports we might expect to use today are TCP (scheme: http) and TLS (scheme: https), and TLS requires we use a CONNECT request to the proxy, meaning that the only option left is a TCP transport; but that is no reason to avoid building generality into the protocol.</p>
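For illustration, the request-target forms can be told apart mechanically. This is a hypothetical helper (not from the RFC), simplified to ignore CONNECT's authority-form:

```shell
# Hypothetical sketch: classify an HTTP/1.1 request-target per
# RFC7230§5.3 (simplified; authority-form for CONNECT is omitted).
classify_target() {
    case "$1" in
        '*')   echo asterisk-form ;;  # OPTIONS * HTTP/1.1
        *://*) echo absolute-form ;;  # proxy requests carry the scheme, i.e. the transport
        *)     echo origin-form ;;    # the usual path, with a Host header field
    esac
}
classify_target 'http://authority/path'   # absolute-form
classify_target '/path'                   # origin-form
```

The point of the scheme in absolute-form is visible here: it is the only form that names a transport at all.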
+<h2 id="on-taking-short-cuts-based-on-early-header-field-values">On taking short-cuts based on early header field values</h2>
<p><a href="https://tools.ietf.org/html/rfc7230#section-3.2.2">RFC7230§3.2.2</a> says:</p>
<blockquote>
<pre><code>The order in which header fields with differing field names are
@@ -27,39 +28,36 @@ header fields that contain control data first, such as Host on
requests and Date on responses, so that implementations can decide
when not to handle a message as early as possible.</code></pre>
</blockquote>
-<p>I took that as a notice that I can use the first Host or similar header to quickly route along to my sub-component before I've parsed the entire header field set.</p>
-<p>However, it later states in <a href="https://tools.ietf.org/html/rfc7230#section-5.4">§5.4</a>:</p>
+<p>Which is great! We can make an optimization!</p>
+<p>This is only a valid optimization for deciding to <em>not handle</em> a message. You cannot use it to decide to route to a backend early based on this. Part of the reason is that <a href="https://tools.ietf.org/html/rfc7230#section-5.4">§5.4</a> tells us we must inspect the entire header field set to know if we need to respond with a 400 status code:</p>
<blockquote>
<pre><code>A server MUST respond with a 400 (Bad Request) status code to any
HTTP/1.1 request message that lacks a Host header field and to any
request message that contains more than one Host header field or a
Host header field with an invalid field-value.</code></pre>
</blockquote>
-<p>Which means that I must parse the entire header field set.</p>
-<p>However, if I look a bit closer at §3.2.2, I see that this short-cut is only valid for deciding to <em>not handle</em> a message; if I am handling it, I cannot use this short-cut.</p>
-<p>Except that if I decide not to handle a request based on the Host header field, the correct thing to do is to send a 404 status code. Which implies that I have parsed the remainder of the header field set to validate the message syntax. Oh no, what do I do?</p>
-<p>Well, there are a number of &quot;A server MUST respond with a XXX code if&quot; rules that can all be triggered on the same request. So we get to choose which to use.</p>
-<p>And fortunately for optimizing implementations, <a href="https://tools.ietf.org/html/rfc7230#section-3.2.5">§3.2.5</a> gave us:</p>
+<p>However, if I decide not to handle a request based on the Host header field, the correct thing to do is to send a 404 status code. Which implies that I have parsed the remainder of the header field set to validate the message syntax. We need to parse the entire field-set to know if we need to send a 400 or a 404. Did this just kill the possibility of using the optimization?</p>
+<p>Well, there are a number of “A server MUST respond with a XXX code if” rules that can all be triggered on the same request. So we get to choose which to use. And fortunately for optimizing implementations, <a href="https://tools.ietf.org/html/rfc7230#section-3.2.5">§3.2.5</a> gave us:</p>
<blockquote>
<pre><code>A server that receives a ... set of fields,
larger than it wishes to process MUST respond with an appropriate 4xx
(Client Error) status code.</code></pre>
</blockquote>
-<p>And since the header field set is longer than we want to process (since we want to short-cut processing), we are free to respond with whichever 4XX status code we like!</p>
-<h1 id="on-normalizing-target-uris">On normalizing target URIs</h1>
-<p>An implementer is tempted to normalize URIs all over the place, just for safety and sanitation. After all, <a href="https://tools.ietf.org/html/rfc3986#section-6.1">RFC3986§6.1</a> says it's safe!</p>
-<p>Unfortunately, most URI normalizers implementations will normalize an empty path to &quot;/&quot;. Which is not always save; <a href="https://tools.ietf.org/html/rfc7230#section-2.7.3">RFC7230§2.7.3</a>, which defines this &quot;equivalence&quot;, actually says:</p>
+<p>Since the header field set is longer than we want to process (since we want to short-cut processing), we are free to respond with whichever 4XX status code we like!</p>
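As a toy illustration of the §5.4 rule (a hypothetical check, not code from the RFC): detecting a duplicate Host field requires reading the entire header section before a 400 can be ruled out:

```shell
# Hypothetical sketch: a request with other than exactly one Host field
# must draw a 400, and knowing that means scanning every header field.
host_field_count() {
    printf '%s\n' "$1" | grep -c -i '^Host:'
}
req='GET / HTTP/1.1
Host: example.com
Host: other.example'
if [ "$(host_field_count "$req")" -ne 1 ]; then
    echo 'respond 400 Bad Request'
fi
```

This prints `respond 400 Bad Request`, since the request carries two Host fields.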
+<h2 id="on-normalizing-target-uris">On normalizing target URIs</h2>
+<p>An implementer is tempted to normalize URIs all over the place, just for safety and sanitation. After all, <a href="https://tools.ietf.org/html/rfc3986#section-6.1">RFC3986§6.1</a> says it’s safe!</p>
+<p>Unfortunately, most URI normalization implementations will normalize an empty path to “/”. Which is not always safe; <a href="https://tools.ietf.org/html/rfc7230#section-2.7.3">RFC7230§2.7.3</a>, which defines this “equivalence”, actually says:</p>
<blockquote>
<pre><code> When not being used in
absolute form as the request target of an OPTIONS request, an empty
path component is equivalent to an absolute path of &quot;/&quot;, so the
normal form is to provide a path of &quot;/&quot; instead.</code></pre>
</blockquote>
-<p>Which means we can't use the usual normalizer implementation if we are making an OPTIONS request!</p>
-<p>Why is that? Well, if we turn to <a href="https://tools.ietf.org/html/rfc7230#section-5.3.4">§5.3.4</a>, we find the answer. One of the special cases for when the request target is not a URI, is that we may use &quot;*&quot; as the target for an OPTIONS request to request information about the origin server itself, rather than a resource on that server.</p>
-<p>However, as discussed above, the target in a request to a proxy must be an absolute URI (and <a href="https://tools.ietf.org/html/rfc7230#section-5.3.2">§5.3.2</a> says that the origin server must also understand this syntax). So, we must define a way to map &quot;*&quot; to an absolute URI.</p>
-<p>Naively, one might be tempted to use &quot;/*&quot; as the path. But that would make it impossible to have a resource actually named &quot;/*&quot;. So, we must define a special case in the URI syntax that doesn't obstruct a real path.</p>
-<p>If we didn't have this special case in the URI normalizer, and we handled the &quot;/&quot; path as the same as empty in the OPTIONS handler of the last proxy server, then it would be impossible to request OPTIONS for the &quot;/&quot; resources, as it would get translated into &quot;*&quot; and treated as OPTIONS for the entire server.</p>
+<p>Which means we can’t use the usual normalization implementation if we are making an OPTIONS request!</p>
+<p>Why is that? Well, if we turn to <a href="https://tools.ietf.org/html/rfc7230#section-5.3.4">§5.3.4</a>, we find the answer. One of the special cases for when the request target is not a URI, is that we may use “*” as the target for an OPTIONS request to request information about the origin server itself, rather than a resource on that server.</p>
+<p>However, as discussed above, the target in a request to a proxy must be an absolute URI (and <a href="https://tools.ietf.org/html/rfc7230#section-5.3.2">§5.3.2</a> says that the origin server must also understand this syntax). So, we must define a way to map “*” to an absolute URI.</p>
+<p>Naively, one might be tempted to use “/*” as the path. But that would make it impossible to have a resource actually named “/*”. So, we must define a special case in the URI syntax that doesn’t obstruct a real path.</p>
+<p>If we didn’t have this special case in the URI normalization rules, and we handled the “/” path as the same as empty in the OPTIONS handler of the last proxy server, then it would be impossible to request OPTIONS for the “/” resources, as it would get translated into “*” and treated as OPTIONS for the entire server.</p>
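A toy normalizer (hypothetical, not from any real library) makes the pitfall concrete: the usual empty-path-to-“/” rewrite is exactly the step that changes meaning when the URI is the target of an OPTIONS request:

```shell
# Toy sketch: rewriting an empty path to "/" is the common RFC3986§6
# normalization -- but for an OPTIONS request-target, "" and "/" are
# NOT equivalent (RFC7230§2.7.3).
normalize_path() {
    if [ -z "$1" ]; then echo '/'; else echo "$1"; fi
}
normalize_path ''      # prints "/": fine for GET, wrong for an OPTIONS target
normalize_path '/a/b'  # prints "/a/b": left untouched
```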
</article>
<footer>
diff --git a/public/index.atom b/public/index.atom
index 1a8e9be..bc1b887 100644
--- a/public/index.atom
+++ b/public/index.atom
@@ -5,21 +5,83 @@
<link rel="self" type="application/atom+xml" href="./index.atom"/>
<link rel="alternate" type="text/html" href="./"/>
<link rel="alternate" type="text/markdown" href="./index.md"/>
- <updated>2016-08-27T19:26:20-04:00</updated>
+ <updated>2016-09-30T00:00:00+00:00</updated>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<id>https://lukeshu.com/blog/</id>
<entry xmlns="http://www.w3.org/2005/Atom">
+ <link rel="alternate" type="text/html" href="./http-notes.html"/>
+ <link rel="alternate" type="text/markdown" href="./http-notes.md"/>
+ <id>https://lukeshu.com/blog/http-notes.html</id>
+ <updated>2016-09-30T00:00:00+00:00</updated>
+ <published>2016-09-30T00:00:00+00:00</published>
+ <title>Notes on subtleties of HTTP implementation</title>
+ <content type="html">&lt;h1 id="notes-on-subtleties-of-http-implementation"&gt;Notes on subtleties of HTTP implementation&lt;/h1&gt;
+&lt;p&gt;I may add to this as time goes on, but I’ve written up some notes on subtleties of the HTTP/1.1 message syntax as specified in RFC 7230.&lt;/p&gt;
+&lt;h2 id="why-the-absolute-form-is-used-for-proxy-requests"&gt;Why the absolute-form is used for proxy requests&lt;/h2&gt;
+&lt;p&gt;&lt;a href="https://tools.ietf.org/html/rfc7230#section-5.3.2"&gt;RFC7230§5.3.2&lt;/a&gt; says that a (non-CONNECT) request to an HTTP proxy should look like&lt;/p&gt;
+&lt;pre&gt;&lt;code&gt;GET http://authority/path HTTP/1.1&lt;/code&gt;&lt;/pre&gt;
+&lt;p&gt;rather than the usual&lt;/p&gt;
+&lt;pre&gt;&lt;code&gt;GET /path HTTP/1.1
+Host: authority&lt;/code&gt;&lt;/pre&gt;
+&lt;p&gt;And doesn’t give a hint as to why the message syntax is different here.&lt;/p&gt;
+&lt;p&gt;&lt;a href="https://parsiya.net/blog/2016-07-28-thick-client-proxying---part-6-how-https-proxies-work/#3-1-1-why-not-use-the-host-header"&gt;A blog post by Parsia Hakimian&lt;/a&gt; claims that the reason is that it’s a legacy behavior inherited from HTTP/1.0, which had proxies, but not the Host header field. Which is mostly true. But we can also realize that the usual syntax does not allow specifying a URI scheme, which means that we cannot specify a transport. Sure, the only two HTTP transports we might expect to use today are TCP (scheme: http) and TLS (scheme: https), and TLS requires we use a CONNECT request to the proxy, meaning that the only option left is a TCP transport; but that is no reason to avoid building generality into the protocol.&lt;/p&gt;
+&lt;h2 id="on-taking-short-cuts-based-on-early-header-field-values"&gt;On taking short-cuts based on early header field values&lt;/h2&gt;
+&lt;p&gt;&lt;a href="https://tools.ietf.org/html/rfc7230#section-3.2.2"&gt;RFC7230§3.2.2&lt;/a&gt; says:&lt;/p&gt;
+&lt;blockquote&gt;
+&lt;pre&gt;&lt;code&gt;The order in which header fields with differing field names are
+received is not significant. However, it is good practice to send
+header fields that contain control data first, such as Host on
+requests and Date on responses, so that implementations can decide
+when not to handle a message as early as possible.&lt;/code&gt;&lt;/pre&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;Which is great! We can make an optimization!&lt;/p&gt;
+&lt;p&gt;This is only a valid optimization for deciding to &lt;em&gt;not handle&lt;/em&gt; a message. You cannot use it to decide to route to a backend early based on this. Part of the reason is that &lt;a href="https://tools.ietf.org/html/rfc7230#section-5.4"&gt;§5.4&lt;/a&gt; tells us we must inspect the entire header field set to know if we need to respond with a 400 status code:&lt;/p&gt;
+&lt;blockquote&gt;
+&lt;pre&gt;&lt;code&gt;A server MUST respond with a 400 (Bad Request) status code to any
+HTTP/1.1 request message that lacks a Host header field and to any
+request message that contains more than one Host header field or a
+Host header field with an invalid field-value.&lt;/code&gt;&lt;/pre&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;However, if I decide not to handle a request based on the Host header field, the correct thing to do is to send a 404 status code. Which implies that I have parsed the remainder of the header field set to validate the message syntax. We need to parse the entire field-set to know if we need to send a 400 or a 404. Did this just kill the possibility of using the optimization?&lt;/p&gt;
+&lt;p&gt;Well, there are a number of “A server MUST respond with a XXX code if” rules that can all be triggered on the same request. So we get to choose which to use. And fortunately for optimizing implementations, &lt;a href="https://tools.ietf.org/html/rfc7230#section-3.2.5"&gt;§3.2.5&lt;/a&gt; gave us:&lt;/p&gt;
+&lt;blockquote&gt;
+&lt;pre&gt;&lt;code&gt;A server that receives a ... set of fields,
+larger than it wishes to process MUST respond with an appropriate 4xx
+(Client Error) status code.&lt;/code&gt;&lt;/pre&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;Since the header field set is longer than we want to process (since we want to short-cut processing), we are free to respond with whichever 4XX status code we like!&lt;/p&gt;
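As a minimal sketch of that short-cut (a hypothetical `early_reject` helper; the hostname and chosen status code are illustrative): scan header fields in arrival order, and bail out as soon as the Host field names a site we don’t serve, without validating the rest of the field set.

```shell
# Read header fields from stdin until the empty line ending the header
# section; reject early on an unrecognized Host, per the RFC7230§3.2.5
# allowance to answer "an appropriate 4xx" when we decline to process more.
early_reject() {
	local field
	while IFS= read -r field; do
		field=${field%$'\r'}                 # strip the CR of CRLF
		[ -z "$field" ] && break             # end of header section
		case $field in
			Host:*|host:*)
				[ "${field#*: }" = example.com ] || {
					echo 'HTTP/1.1 404 Not Found'
					return
				}
				;;
		esac
	done
	echo 'HTTP/1.1 200 OK'
}
printf 'Host: other.net\r\n\r\n' | early_reject   # -> HTTP/1.1 404 Not Found
```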
+&lt;h2 id="on-normalizing-target-uris"&gt;On normalizing target URIs&lt;/h2&gt;
+&lt;p&gt;An implementer is tempted to normalize URIs all over the place, just for safety and sanitation. After all, &lt;a href="https://tools.ietf.org/html/rfc3986#section-6.1"&gt;RFC3986§6.1&lt;/a&gt; says it’s safe!&lt;/p&gt;
+&lt;p&gt;Unfortunately, most URI normalization implementations will normalize an empty path to “/”. Which is not always safe; &lt;a href="https://tools.ietf.org/html/rfc7230#section-2.7.3"&gt;RFC7230§2.7.3&lt;/a&gt;, which defines this “equivalence”, actually says:&lt;/p&gt;
+&lt;blockquote&gt;
+&lt;pre&gt;&lt;code&gt; When not being used in
+absolute form as the request target of an OPTIONS request, an empty
+path component is equivalent to an absolute path of &amp;quot;/&amp;quot;, so the
+normal form is to provide a path of &amp;quot;/&amp;quot; instead.&lt;/code&gt;&lt;/pre&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;Which means we can’t use the usual normalization implementation if we are making an OPTIONS request!&lt;/p&gt;
+&lt;p&gt;Why is that? Well, if we turn to &lt;a href="https://tools.ietf.org/html/rfc7230#section-5.3.4"&gt;§5.3.4&lt;/a&gt;, we find the answer. One of the special cases for when the request target is not a URI is that we may use “*” as the target for an OPTIONS request to request information about the origin server itself, rather than a resource on that server.&lt;/p&gt;
+&lt;p&gt;However, as discussed above, the target in a request to a proxy must be an absolute URI (and &lt;a href="https://tools.ietf.org/html/rfc7230#section-5.3.2"&gt;§5.3.2&lt;/a&gt; says that the origin server must also understand this syntax). So, we must define a way to map “*” to an absolute URI.&lt;/p&gt;
+&lt;p&gt;Naively, one might be tempted to use “/*” as the path. But that would make it impossible to have a resource actually named “/*”. So, we must define a special case in the URI syntax that doesn’t obstruct a real path.&lt;/p&gt;
+&lt;p&gt;If we didn’t have this special case in the URI normalization rules, and we handled the “/” path as the same as empty in the OPTIONS handler of the last proxy server, then it would be impossible to request OPTIONS for the “/” resources, as it would get translated into “*” and treated as OPTIONS for the entire server.&lt;/p&gt;
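The special case can be sketched as a tiny shell helper (a hypothetical `request_target` function, illustrating the rule rather than any implementation named in the notes):

```shell
# Normalize a request target's path per RFC7230§2.7.3 + §5.3.4:
# an empty path becomes "/" -- EXCEPT as the target of an OPTIONS request,
# where the empty path is the absolute-form spelling of "*" (whole server).
request_target() {
	local method=$1 path=$2
	if [ -z "$path" ]; then
		if [ "$method" = OPTIONS ]; then
			printf '*\n'   # asterisk-form: the server itself
		else
			printf '/\n'   # normal form for an empty path
		fi
	else
		printf '%s\n' "$path"   # a non-empty path is left alone
	fi
}
request_target GET ""       # -> /
request_target OPTIONS ""   # -> *
request_target OPTIONS /    # -> /   (a real resource named "/")
```

The last two calls are exactly the distinction that a blanket "empty path equals /" normalizer destroys.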
+</content>
+ <author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
+ <rights type="html">&lt;p&gt;The content of this page is Copyright © 2016 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;This page is licensed under the &lt;a href="https://creativecommons.org/licenses/by-sa/3.0/"&gt;CC BY-SA-3.0&lt;/a&gt; license.&lt;/p&gt;</rights>
+ </entry>
+
+ <entry xmlns="http://www.w3.org/2005/Atom">
<link rel="alternate" type="text/html" href="./x11-systemd.html"/>
<link rel="alternate" type="text/markdown" href="./x11-systemd.md"/>
<id>https://lukeshu.com/blog/x11-systemd.html</id>
- <updated>2016-08-27T19:26:20-04:00</updated>
+ <updated>2016-02-28T00:00:00+00:00</updated>
<published>2016-02-28T00:00:00+00:00</published>
<title>My X11 setup with systemd</title>
<content type="html">&lt;h1 id="my-x11-setup-with-systemd"&gt;My X11 setup with systemd&lt;/h1&gt;
-&lt;p&gt;Somewhere along the way, I decided to use systemd user sessions to manage the various parts of my X11 environment would be a good idea. If that was a good idea or not... we'll see.&lt;/p&gt;
-&lt;p&gt;I've sort-of been running this setup as my daily-driver for &lt;a href="https://lukeshu.com/git/dotfiles.git/commit/?id=a9935b7a12a522937d91cb44a0e138132b555e16"&gt;a bit over a year&lt;/a&gt;, continually tweaking it though.&lt;/p&gt;
+&lt;p&gt;Somewhere along the way, I decided that using systemd user sessions to manage the various parts of my X11 environment would be a good idea. If that was a good idea or not… we’ll see.&lt;/p&gt;
+&lt;p&gt;I’ve sort-of been running this setup as my daily-driver for &lt;a href="https://lukeshu.com/git/dotfiles.git/commit/?id=a9935b7a12a522937d91cb44a0e138132b555e16"&gt;a bit over a year&lt;/a&gt;, continually tweaking it though.&lt;/p&gt;
&lt;p&gt;My setup is substantially different than the one on &lt;a href="https://wiki.archlinux.org/index.php/Systemd/User"&gt;ArchWiki&lt;/a&gt;, because the ArchWiki solution assumes that there is only ever one X server for a user; I like the ability to run &lt;code&gt;Xorg&lt;/code&gt; on my real monitor, and also have &lt;code&gt;Xvnc&lt;/code&gt; running headless, or start my desktop environment on a remote X server. Though, I would like to figure out how to use systemd socket activation for the X server, as the ArchWiki solution does.&lt;/p&gt;
&lt;p&gt;This means that all of my graphical units take &lt;code&gt;DISPLAY&lt;/code&gt; as an &lt;code&gt;@&lt;/code&gt; argument. To get this to all work out, this goes in each &lt;code&gt;.service&lt;/code&gt; file, unless otherwise noted:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[Unit]
@@ -27,8 +89,8 @@ After=X11@%i.target
Requisite=X11@%i.target
[Service]
Environment=DISPLAY=%I&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;We'll get to &lt;code&gt;X11@.target&lt;/code&gt; later, what it says is &amp;quot;I should only be running if X11 is running&amp;quot;.&lt;/p&gt;
-&lt;p&gt;I eschew complex XDMs or &lt;code&gt;startx&lt;/code&gt; wrapper scripts, opting for the more simple &lt;code&gt;xinit&lt;/code&gt;, which I either run on login for some boxes (my media station), or type &lt;code&gt;xinit&lt;/code&gt; when I want X11 on others (most everything else). Essentially, what &lt;code&gt;xinit&lt;/code&gt; does is run &lt;code&gt;~/.xserverrc&lt;/code&gt; (or &lt;code&gt;/etc/X11/xinit/xserverrc&lt;/code&gt;) to start the server, then once the server is started (which it takes a substantial amount of magic to detect) it runs run &lt;code&gt;~/.xinitrc&lt;/code&gt; (or &lt;code&gt;/etc/X11/xinit/xinitrc&lt;/code&gt;) to start the clients. Once &lt;code&gt;.xinitrc&lt;/code&gt; finishes running, it stops the X server and exits. Now, when I say &amp;quot;run&amp;quot;, I don't mean execute, it passes each file to the system shell (&lt;code&gt;/bin/sh&lt;/code&gt;) as input.&lt;/p&gt;
+&lt;p&gt;We’ll get to &lt;code&gt;X11@.target&lt;/code&gt; later, what it says is “I should only be running if X11 is running”.&lt;/p&gt;
+&lt;p&gt;I eschew complex XDMs or &lt;code&gt;startx&lt;/code&gt; wrapper scripts, opting for the simpler &lt;code&gt;xinit&lt;/code&gt;, which I either run on login for some boxes (my media station), or type &lt;code&gt;xinit&lt;/code&gt; when I want X11 on others (most everything else). Essentially, what &lt;code&gt;xinit&lt;/code&gt; does is run &lt;code&gt;~/.xserverrc&lt;/code&gt; (or &lt;code&gt;/etc/X11/xinit/xserverrc&lt;/code&gt;) to start the server, then once the server is started (which it takes a substantial amount of magic to detect) it runs &lt;code&gt;~/.xinitrc&lt;/code&gt; (or &lt;code&gt;/etc/X11/xinit/xinitrc&lt;/code&gt;) to start the clients. Once &lt;code&gt;.xinitrc&lt;/code&gt; finishes running, it stops the X server and exits. Now, when I say “run”, I don’t mean execute; it passes each file to the system shell (&lt;code&gt;/bin/sh&lt;/code&gt;) as input.&lt;/p&gt;
&lt;p&gt;Xorg requires a TTY to run on; if we log in to a TTY with &lt;code&gt;logind&lt;/code&gt;, it will give us the &lt;code&gt;XDG_VTNR&lt;/code&gt; variable to tell us which one we have, so I pass this to &lt;code&gt;X&lt;/code&gt; in &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/serverrc"&gt;my &lt;code&gt;.xserverrc&lt;/code&gt;&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/hint/sh
if [ -z &amp;quot;$XDG_VTNR&amp;quot; ]; then
@@ -36,8 +98,8 @@ if [ -z &amp;quot;$XDG_VTNR&amp;quot; ]; then
else
exec /usr/bin/X -nolisten tcp &amp;quot;$@&amp;quot; vt$XDG_VTNR
fi&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;This was the default for &lt;a href="https://projects.archlinux.org/svntogit/packages.git/commit/trunk/xserverrc?h=packages/xorg-xinit&amp;amp;id=f9f5de58df03aae6c8a8c8231a83327d19b943a1"&gt;a while&lt;/a&gt; in Arch, to support &lt;code&gt;logind&lt;/code&gt;, but was &lt;a href="https://projects.archlinux.org/svntogit/packages.git/commit/trunk/xserverrc?h=packages/xorg-xinit&amp;amp;id=5a163ddd5dae300e7da4b027e28c37ad3b535804"&gt;later removed&lt;/a&gt; in part because &lt;code&gt;startx&lt;/code&gt; (which calls &lt;code&gt;xinit&lt;/code&gt;) started adding it as an argument as well, so &lt;code&gt;vt$XDG_VTNR&lt;/code&gt; was being listed as an argument twice, which is an error. IMO, that was a problem in &lt;code&gt;startx&lt;/code&gt;, and they shouldn't have removed it from the default system &lt;code&gt;xserverrc&lt;/code&gt;, but that's just me. So I copy/pasted it into my user &lt;code&gt;xserverrc&lt;/code&gt;.&lt;/p&gt;
-&lt;p&gt;That's the boring part, though. Where the magic starts happening is in &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/clientrc"&gt;my &lt;code&gt;.xinitrc&lt;/code&gt;&lt;/a&gt;:&lt;/p&gt;
+&lt;p&gt;This was the default for &lt;a href="https://projects.archlinux.org/svntogit/packages.git/commit/trunk/xserverrc?h=packages/xorg-xinit&amp;amp;id=f9f5de58df03aae6c8a8c8231a83327d19b943a1"&gt;a while&lt;/a&gt; in Arch, to support &lt;code&gt;logind&lt;/code&gt;, but was &lt;a href="https://projects.archlinux.org/svntogit/packages.git/commit/trunk/xserverrc?h=packages/xorg-xinit&amp;amp;id=5a163ddd5dae300e7da4b027e28c37ad3b535804"&gt;later removed&lt;/a&gt; in part because &lt;code&gt;startx&lt;/code&gt; (which calls &lt;code&gt;xinit&lt;/code&gt;) started adding it as an argument as well, so &lt;code&gt;vt$XDG_VTNR&lt;/code&gt; was being listed as an argument twice, which is an error. IMO, that was a problem in &lt;code&gt;startx&lt;/code&gt;, and they shouldn’t have removed it from the default system &lt;code&gt;xserverrc&lt;/code&gt;, but that’s just me. So I copy/pasted it into my user &lt;code&gt;xserverrc&lt;/code&gt;.&lt;/p&gt;
+&lt;p&gt;That’s the boring part, though. Where the magic starts happening is in &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/clientrc"&gt;my &lt;code&gt;.xinitrc&lt;/code&gt;&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/hint/sh
if [ -z &amp;quot;$XDG_RUNTIME_DIR&amp;quot; ]; then
@@ -53,15 +115,15 @@ cat &amp;lt; &amp;quot;${XDG_RUNTIME_DIR}/x11-wm@${_DISPLAY}&amp;quot; &amp;amp;
systemctl --user start &amp;quot;X11@${_DISPLAY}.target&amp;quot; &amp;amp;
wait
systemctl --user stop &amp;quot;X11@${_DISPLAY}.target&amp;quot;&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;There are two contracts/interfaces here: the &lt;code&gt;X11@DISPLAY.target&lt;/code&gt; systemd target, and the &lt;code&gt;${XDG_RUNTIME_DIR}/x11-wm@DISPLAY&lt;/code&gt; named pipe. The systemd &lt;code&gt;.target&lt;/code&gt; should be pretty self explanatory; the most important part is that it starts the window manager. The named pipe is just a hacky way of blocking until the window manager exits (&amp;quot;traditional&amp;quot; &lt;code&gt;.xinitrc&lt;/code&gt; files end with the line &lt;code&gt;exec your-window-manager&lt;/code&gt;, so this mimics that behavior). It works by assuming that the window manager will open the pipe at startup, and keep it open (without necessarily writing anything to it); when the window manager exits, the pipe will get closed, sending EOF to the &lt;code&gt;wait&lt;/code&gt;ed-for &lt;code&gt;cat&lt;/code&gt;, allowing it to exit, letting the script resume. The window manager (WMII) is made to have the pipe opened by executing it this way in &lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/wmii@.service"&gt;its &lt;code&gt;.service&lt;/code&gt; file&lt;/a&gt;:&lt;/p&gt;
+&lt;p&gt;There are two contracts/interfaces here: the &lt;code&gt;X11@DISPLAY.target&lt;/code&gt; systemd target, and the &lt;code&gt;${XDG_RUNTIME_DIR}/x11-wm@DISPLAY&lt;/code&gt; named pipe. The systemd &lt;code&gt;.target&lt;/code&gt; should be pretty self explanatory; the most important part is that it starts the window manager. The named pipe is just a hacky way of blocking until the window manager exits (“traditional” &lt;code&gt;.xinitrc&lt;/code&gt; files end with the line &lt;code&gt;exec your-window-manager&lt;/code&gt;, so this mimics that behavior). It works by assuming that the window manager will open the pipe at startup, and keep it open (without necessarily writing anything to it); when the window manager exits, the pipe will get closed, sending EOF to the &lt;code&gt;wait&lt;/code&gt;ed-for &lt;code&gt;cat&lt;/code&gt;, allowing it to exit, letting the script resume. The window manager (WMII) is made to have the pipe opened by executing it this way in &lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/wmii@.service"&gt;its &lt;code&gt;.service&lt;/code&gt; file&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ExecStart=/usr/bin/env bash -c &amp;#39;exec 8&amp;gt;${XDG_RUNTIME_DIR}/x11-wm@%I; exec wmii&amp;#39;&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;which just opens the file on file descriptor 8, then launches the window manager normally. The only further logic required by the window manager with regard to the pipe is that in the window manager &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/config.sh"&gt;configuration&lt;/a&gt;, I should close that file descriptor after forking any process that isn't &amp;quot;part of&amp;quot; the window manager:&lt;/p&gt;
+&lt;p&gt;which just opens the file on file descriptor 8, then launches the window manager normally. The only further logic required by the window manager with regard to the pipe is that in the window manager &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/config.sh"&gt;configuration&lt;/a&gt;, I should close that file descriptor after forking any process that isn’t “part of” the window manager:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;runcmd() (
...
exec 8&amp;gt;&amp;amp;- # xinit/systemd handshake
...
)&lt;/code&gt;&lt;/pre&gt;
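The whole named-pipe handshake can be demonstrated stand-alone (a minimal sketch; the fifo path and the `sleep` standing in for the window manager are illustrative):

```shell
# A toy version of the xinit/systemd handshake: a background job plays the
# window manager (opens fd 8 on the fifo and holds it), while cat blocks
# until the last writer closes the pipe -- i.e. until the "WM" exits.
fifo=${TMPDIR:-/tmp}/x11-wm-demo.$$
mkfifo "$fifo"
( exec 8>"$fifo"; sleep 0.2 ) &   # stand-in WM: open fd 8, keep it open, exit
wm=$!
cat <"$fifo"                      # blocks; gets EOF when fd 8 is closed
wait "$wm"
rm -f "$fifo"
echo "wm exited, session can tear down"
```

Nothing is ever written to the pipe; only the open/close lifetime of fd 8 carries information, which is why any child that outlives the WM must close fd 8 as shown above.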
-&lt;p&gt;So, back to the &lt;code&gt;X11@DISPLAY.target&lt;/code&gt;; I configure what it &amp;quot;does&amp;quot; with symlinks in the &lt;code&gt;.requires&lt;/code&gt; and &lt;code&gt;.wants&lt;/code&gt; directories:&lt;/p&gt;
+&lt;p&gt;So, back to the &lt;code&gt;X11@DISPLAY.target&lt;/code&gt;; I configure what it “does” with symlinks in the &lt;code&gt;.requires&lt;/code&gt; and &lt;code&gt;.wants&lt;/code&gt; directories:&lt;/p&gt;
&lt;ul class="tree"&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user"&gt;.config/systemd/user/&lt;/a&gt;&lt;/p&gt;
@@ -81,13 +143,13 @@ systemctl --user stop &amp;quot;X11@${_DISPLAY}.target&amp;quot;&lt;/code&gt;&lt
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;code&gt;.requires&lt;/code&gt; directory is how I configure which window manager it starts. This would allow me to configure different window managers on different displays, by creating a &lt;code&gt;.requires&lt;/code&gt; directory with the &lt;code&gt;DISPLAY&lt;/code&gt; included, e.g. &lt;code&gt;X11@:2.requires&lt;/code&gt;.&lt;/p&gt;
-&lt;p&gt;The &lt;code&gt;.wants&lt;/code&gt; directory is for general X display setup; it's analogous to &lt;code&gt;/etc/X11/xinit/xinitrc.d/&lt;/code&gt;. All of the files in it are simple &lt;code&gt;Type=oneshot&lt;/code&gt; service files. The &lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xmodmap@.service"&gt;xmodmap&lt;/a&gt; and &lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xresources@.service"&gt;xresources&lt;/a&gt; files are pretty boring, they're just systemd versions of the couple lines that just about every traditional &lt;code&gt;.xinitrc&lt;/code&gt; contains, the biggest difference being that they look at &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/modmap"&gt;&lt;code&gt;~/.config/X11/modmap&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/resources"&gt;&lt;code&gt;~/.config/X11/resources&lt;/code&gt;&lt;/a&gt; instead of the traditional locations &lt;code&gt;~/.xmodmap&lt;/code&gt; and &lt;code&gt;~/.Xresources&lt;/code&gt;.&lt;/p&gt;
-&lt;p&gt;What's possibly of note is &lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xresources-dpi@.service"&gt;&lt;code&gt;xresources-dpi@.service&lt;/code&gt;&lt;/a&gt;. In X11, there are two sources of DPI information, the X display resolution, and the XRDB &lt;code&gt;Xft.dpi&lt;/code&gt; setting. It isn't defined which takes precedence (to my knowledge), and even if it were (is), application authors wouldn't be arsed to actually do the right thing. For years, Firefox (well, Iceweasel) happily listened to the X display resolution, but recently it decided to only look at &lt;code&gt;Xft.dpi&lt;/code&gt;, which objectively seems a little silly, since the X display resolution is always present, but &lt;code&gt;Xft.dpi&lt;/code&gt; isn't. Anyway, Mozilla's change drove me to to create a &lt;a href="https://lukeshu.com/git/dotfiles/tree/.local/bin/xrdb-set-dpi"&gt;script&lt;/a&gt; to make the &lt;code&gt;Xft.dpi&lt;/code&gt; setting match the X display resolution. Disclaimer: I have no idea if it works if the X server has multiple displays (with possibly varying resolution).&lt;/p&gt;
+&lt;p&gt;The &lt;code&gt;.wants&lt;/code&gt; directory is for general X display setup; it’s analogous to &lt;code&gt;/etc/X11/xinit/xinitrc.d/&lt;/code&gt;. All of the files in it are simple &lt;code&gt;Type=oneshot&lt;/code&gt; service files. The &lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xmodmap@.service"&gt;xmodmap&lt;/a&gt; and &lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xresources@.service"&gt;xresources&lt;/a&gt; files are pretty boring, they’re just systemd versions of the couple lines that just about every traditional &lt;code&gt;.xinitrc&lt;/code&gt; contains, the biggest difference being that they look at &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/modmap"&gt;&lt;code&gt;~/.config/X11/modmap&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/resources"&gt;&lt;code&gt;~/.config/X11/resources&lt;/code&gt;&lt;/a&gt; instead of the traditional locations &lt;code&gt;~/.xmodmap&lt;/code&gt; and &lt;code&gt;~/.Xresources&lt;/code&gt;.&lt;/p&gt;
+&lt;p&gt;What’s possibly of note is &lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xresources-dpi@.service"&gt;&lt;code&gt;xresources-dpi@.service&lt;/code&gt;&lt;/a&gt;. In X11, there are two sources of DPI information: the X display resolution, and the XRDB &lt;code&gt;Xft.dpi&lt;/code&gt; setting. It isn’t defined which takes precedence (to my knowledge), and even if it were (is), application authors wouldn’t be arsed to actually do the right thing. For years, Firefox (well, Iceweasel) happily listened to the X display resolution, but recently it decided to only look at &lt;code&gt;Xft.dpi&lt;/code&gt;, which objectively seems a little silly, since the X display resolution is always present, but &lt;code&gt;Xft.dpi&lt;/code&gt; isn’t. Anyway, Mozilla’s change drove me to create a &lt;a href="https://lukeshu.com/git/dotfiles/tree/.local/bin/xrdb-set-dpi"&gt;script&lt;/a&gt; to make the &lt;code&gt;Xft.dpi&lt;/code&gt; setting match the X display resolution. Disclaimer: I have no idea if it works if the X server has multiple displays (with possibly varying resolution).&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/usr/bin/env bash
dpi=$(LC_ALL=C xdpyinfo|sed -rn &amp;#39;s/^\s*resolution:\s*(.*) dots per inch$/\1/p&amp;#39;)
xrdb -merge &amp;lt;&amp;lt;&amp;lt;&amp;quot;Xft.dpi: ${dpi}&amp;quot;&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Since we want XRDB to be set up before any other programs launch, we give both of the &lt;code&gt;xresources&lt;/code&gt; units &lt;code&gt;Before=X11@%i.target&lt;/code&gt; (instead of &lt;code&gt;After=&lt;/code&gt; like everything else). Also, two programs writing to &lt;code&gt;xrdb&lt;/code&gt; at the same time has the same problem as two programs writing to the same file; one might trash the other's changes. So, I stuck &lt;code&gt;Conflicts=xresources@:i.service&lt;/code&gt; into &lt;code&gt;xresources-dpi.service&lt;/code&gt;.&lt;/p&gt;
-&lt;p&gt;And that's the &amp;quot;core&amp;quot; of my X11 systemd setup. But, you generally want more things running than just the window manager, like a desktop notification daemon, a system panel, and an X composition manager (unless your window manager is bloated and has a composition manager built in). Since these things are probably window-manager specific, I've stuck them in a directory &lt;code&gt;wmii@.service.wants&lt;/code&gt;:&lt;/p&gt;
+&lt;p&gt;Since we want XRDB to be set up before any other programs launch, we give both of the &lt;code&gt;xresources&lt;/code&gt; units &lt;code&gt;Before=X11@%i.target&lt;/code&gt; (instead of &lt;code&gt;After=&lt;/code&gt; like everything else). Also, two programs writing to &lt;code&gt;xrdb&lt;/code&gt; at the same time have the same problem as two programs writing to the same file; one might trash the other’s changes. So, I stuck &lt;code&gt;Conflicts=xresources@%i.service&lt;/code&gt; into &lt;code&gt;xresources-dpi.service&lt;/code&gt;.&lt;/p&gt;
+&lt;p&gt;And that’s the “core” of my X11 systemd setup. But, you generally want more things running than just the window manager, like a desktop notification daemon, a system panel, and an X composition manager (unless your window manager is bloated and has a composition manager built in). Since these things are probably window-manager specific, I’ve stuck them in a directory &lt;code&gt;wmii@.service.wants&lt;/code&gt;:&lt;/p&gt;
&lt;ul class="tree"&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user"&gt;.config/systemd/user/&lt;/a&gt;&lt;/p&gt;
@@ -105,13 +167,13 @@ xrdb -merge &amp;lt;&amp;lt;&amp;lt;&amp;quot;Xft.dpi: ${dpi}&amp;quot;&lt;/code
&lt;/ul&gt;
&lt;p&gt;For the window manager &lt;code&gt;.service&lt;/code&gt;, I &lt;em&gt;could&lt;/em&gt; just say &lt;code&gt;Type=simple&lt;/code&gt; and call it a day (and I did for a while). But, I like to have &lt;code&gt;lxpanel&lt;/code&gt; show up on all of my WMII tags (desktops), so I have &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/config.sh"&gt;my WMII configuration&lt;/a&gt; stick this in the WMII &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/rules"&gt;&lt;code&gt;/rules&lt;/code&gt;&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/panel/ tags=/.*/ floating=always&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Unfortunately, for this to work, &lt;code&gt;lxpanel&lt;/code&gt; must be started &lt;em&gt;after&lt;/em&gt; that gets inserted into WMII's rules. That wasn't a problem pre-systemd, because &lt;code&gt;lxpanel&lt;/code&gt; was started by my WMII configuration, so ordering was simple. For systemd to get this right, I must have a way of notifying systemd that WMII's fully started, and it's safe to start &lt;code&gt;lxpanel&lt;/code&gt;. So, I stuck this in &lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/wmii@.service"&gt;my WMII &lt;code&gt;.service&lt;/code&gt; file&lt;/a&gt;:&lt;/p&gt;
+&lt;p&gt;Unfortunately, for this to work, &lt;code&gt;lxpanel&lt;/code&gt; must be started &lt;em&gt;after&lt;/em&gt; that gets inserted into WMII’s rules. That wasn’t a problem pre-systemd, because &lt;code&gt;lxpanel&lt;/code&gt; was started by my WMII configuration, so ordering was simple. For systemd to get this right, I must have a way of notifying systemd that WMII’s fully started, and it’s safe to start &lt;code&gt;lxpanel&lt;/code&gt;. So, I stuck this in &lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/wmii@.service"&gt;my WMII &lt;code&gt;.service&lt;/code&gt; file&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# This assumes that you write READY=1 to $NOTIFY_SOCKET in wmiirc
Type=notify
NotifyAccess=all&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and this in &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/wmiirc"&gt;my WMII configuration&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;systemd-notify --ready || true&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Now, this setup means that &lt;code&gt;NOTIFY_SOCKET&lt;/code&gt; is set for all the children of &lt;code&gt;wmii&lt;/code&gt;; I'd rather not have it leak into the applications that I start from the window manager, so I also stuck &lt;code&gt;unset NOTIFY_SOCKET&lt;/code&gt; after forking a process that isn't part of the window manager:&lt;/p&gt;
+&lt;p&gt;Now, this setup means that &lt;code&gt;NOTIFY_SOCKET&lt;/code&gt; is set for all the children of &lt;code&gt;wmii&lt;/code&gt;; I’d rather not have it leak into the applications that I start from the window manager, so I also stuck &lt;code&gt;unset NOTIFY_SOCKET&lt;/code&gt; after forking a process that isn’t part of the window manager:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;runcmd() (
...
unset NOTIFY_SOCKET # systemd
@@ -119,14 +181,14 @@ NotifyAccess=all&lt;/code&gt;&lt;/pre&gt;
exec 8&amp;gt;&amp;amp;- # xinit/systemd handshake
...
)&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Unfortunately, because of a couple of &lt;a href="https://github.com/systemd/systemd/issues/2739"&gt;bugs&lt;/a&gt; and &lt;a href="https://github.com/systemd/systemd/issues/2737"&gt;race conditions&lt;/a&gt; in systemd, &lt;code&gt;systemd-notify&lt;/code&gt; isn't reliable. If systemd can't receive the &lt;code&gt;READY=1&lt;/code&gt; signal from my WMII configuration, there are two consequences:&lt;/p&gt;
+&lt;p&gt;Unfortunately, because of a couple of &lt;a href="https://github.com/systemd/systemd/issues/2739"&gt;bugs&lt;/a&gt; and &lt;a href="https://github.com/systemd/systemd/issues/2737"&gt;race conditions&lt;/a&gt; in systemd, &lt;code&gt;systemd-notify&lt;/code&gt; isn’t reliable. If systemd can’t receive the &lt;code&gt;READY=1&lt;/code&gt; signal from my WMII configuration, there are two consequences:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;code&gt;lxpanel&lt;/code&gt; will never start, because it will always be waiting for &lt;code&gt;wmii&lt;/code&gt; to be ready, which will never happen.&lt;/li&gt;
-&lt;li&gt;After a couple of minutes, systemd will consider &lt;code&gt;wmii&lt;/code&gt; to be timed out, which is a failure, so then it will kill &lt;code&gt;wmii&lt;/code&gt;, and exit my X11 session. That's no good!&lt;/li&gt;
+&lt;li&gt;After a couple of minutes, systemd will consider &lt;code&gt;wmii&lt;/code&gt; to be timed out, which is a failure, so then it will kill &lt;code&gt;wmii&lt;/code&gt;, and exit my X11 session. That’s no good!&lt;/li&gt;
&lt;/ol&gt;
-&lt;p&gt;Using &lt;code&gt;socat&lt;/code&gt; to send the message to systemd instead of &lt;code&gt;systemd-notify&lt;/code&gt; &amp;quot;should&amp;quot; always work, because it tries to read from both ends of the bi-directional stream, and I can't imagine that getting EOF from the &lt;code&gt;UNIX-SENDTO&lt;/code&gt; end will ever be faster than the systemd manager from handling the datagram that got sent. Which is to say, &amp;quot;we work around the race condition by being slow and shitty.&amp;quot;&lt;/p&gt;
+&lt;p&gt;Using &lt;code&gt;socat&lt;/code&gt; to send the message to systemd instead of &lt;code&gt;systemd-notify&lt;/code&gt; “should” always work, because it tries to read from both ends of the bi-directional stream, and I can’t imagine that getting EOF from the &lt;code&gt;UNIX-SENDTO&lt;/code&gt; end will ever be faster than the systemd manager handling the datagram that got sent. Which is to say, “we work around the race condition by being slow and shitty.”&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;socat STDIO UNIX-SENDTO:&amp;quot;$NOTIFY_SOCKET&amp;quot; &amp;lt;&amp;lt;&amp;lt;READY=1 || true&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;But, I don't like that. I'd rather write my WMII configuration to the world as I wish it existed, and have workarounds encapsulated elsewhere; &lt;a href="http://blog.robertelder.org/interfaces-most-important-software-engineering-concept/"&gt;&amp;quot;If you have to cut corners in your project, do it inside the implementation, and wrap a very good interface around it.&amp;quot;&lt;/a&gt;. So, I wrote a &lt;code&gt;systemd-notify&lt;/code&gt; compatible &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/workarounds.sh"&gt;function&lt;/a&gt; that ultimately calls &lt;code&gt;socat&lt;/code&gt;:&lt;/p&gt;
+&lt;p&gt;But, I don’t like that. I’d rather write my WMII configuration to the world as I wish it existed, and have workarounds encapsulated elsewhere; &lt;a href="http://blog.robertelder.org/interfaces-most-important-software-engineering-concept/"&gt;“If you have to cut corners in your project, do it inside the implementation, and wrap a very good interface around it.”&lt;/a&gt;. So, I wrote a &lt;code&gt;systemd-notify&lt;/code&gt; compatible &lt;a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/workarounds.sh"&gt;function&lt;/a&gt; that ultimately calls &lt;code&gt;socat&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;##
# Just like systemd-notify(1), but slower, which is a shitty
# workaround for a race condition in systemd.
@@ -167,7 +229,7 @@ systemd-notify() {
printf -v n &amp;#39;%s\n&amp;#39; &amp;quot;${our_env[@]}&amp;quot;
socat STDIO UNIX-SENDTO:&amp;quot;$NOTIFY_SOCKET&amp;quot; &amp;lt;&amp;lt;&amp;lt;&amp;quot;$n&amp;quot;
}&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;So, one day when the systemd bugs have been fixed (and presumably the Linux kernel supports passing the cgroup of a process as part of its credentials), I can remove that from &lt;code&gt;workarounds.sh&lt;/code&gt;, and not have to touch anything else in my WMII configuration (I do use &lt;code&gt;systemd-notify&lt;/code&gt; in a couple of other, non-essential, places too; this wasn't to avoid having to change just 1 line).&lt;/p&gt;
+&lt;p&gt;So, one day when the systemd bugs have been fixed (and presumably the Linux kernel supports passing the cgroup of a process as part of its credentials), I can remove that from &lt;code&gt;workarounds.sh&lt;/code&gt;, and not have to touch anything else in my WMII configuration (I do use &lt;code&gt;systemd-notify&lt;/code&gt; in a couple of other, non-essential, places too; this wasn’t to avoid having to change just 1 line).&lt;/p&gt;
&lt;p&gt;So, now that &lt;code&gt;wmii@.service&lt;/code&gt; properly has &lt;code&gt;Type=notify&lt;/code&gt;, I can just stick &lt;code&gt;After=wmii@.service&lt;/code&gt; into my &lt;code&gt;lxpanel@.service&lt;/code&gt;, right? Wrong! Well, I &lt;em&gt;could&lt;/em&gt;, but my &lt;code&gt;lxpanel&lt;/code&gt; service has nothing to do with WMII; why should I couple them? Instead, I create &lt;a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/wm-running@.target"&gt;&lt;code&gt;wm-running@.target&lt;/code&gt;&lt;/a&gt; that can be used as a synchronization point:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# wmii@.service
Before=wm-running@%i.target
@@ -175,7 +237,7 @@ Before=wm-running@%i.target
# lxpanel@.service
After=X11@%i.target wm-running@%i.target
Requires=wm-running@%i.target&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Finally, I have my desktop started and running. Now, I'd like for programs that aren't part of the window manager to not dump their stdout and stderr into WMII's part of the journal, like to have a record of which graphical programs crashed, and like to have a prettier cgroup/process graph. So, I use &lt;code&gt;systemd-run&lt;/code&gt; to run external programs from the window manager:&lt;/p&gt;
+&lt;p&gt;Finally, I have my desktop started and running. Now, I’d like for programs that aren’t part of the window manager to not dump their stdout and stderr into WMII’s part of the journal, like to have a record of which graphical programs crashed, and like to have a prettier cgroup/process graph. So, I use &lt;code&gt;systemd-run&lt;/code&gt; to run external programs from the window manager:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;runcmd() (
...
unset NOTIFY_SOCKET # systemd
@@ -183,9 +245,9 @@ Requires=wm-running@%i.target&lt;/code&gt;&lt;/pre&gt;
exec 8&amp;gt;&amp;amp;- # xinit/systemd handshake
exec systemd-run --user --scope -- sh -c &amp;quot;$*&amp;quot;
)&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;I run them as a scope instead of a service so that they inherit environment variables, and don't have to mess with getting &lt;code&gt;DISPLAY&lt;/code&gt; or &lt;code&gt;XAUTHORITY&lt;/code&gt; into their units (as I &lt;em&gt;don't&lt;/em&gt; want to make them global variables in my systemd user session).&lt;/p&gt;
-&lt;p&gt;I'd like to get &lt;code&gt;lxpanel&lt;/code&gt; to also use &lt;code&gt;systemd-run&lt;/code&gt; when launching programs, but it's a low priority because I don't really actually use &lt;code&gt;lxpanel&lt;/code&gt; to launch programs, I just have the menu there to make sure that I didn't break the icons for programs that I package (I did that once back when I was Parabola's packager for Iceweasel and IceCat).&lt;/p&gt;
-&lt;p&gt;And that's how I use systemd with X11.&lt;/p&gt;
+&lt;p&gt;I run them as a scope instead of a service so that they inherit environment variables, and don’t have to mess with getting &lt;code&gt;DISPLAY&lt;/code&gt; or &lt;code&gt;XAUTHORITY&lt;/code&gt; into their units (as I &lt;em&gt;don’t&lt;/em&gt; want to make them global variables in my systemd user session).&lt;/p&gt;
+&lt;p&gt;I’d like to get &lt;code&gt;lxpanel&lt;/code&gt; to also use &lt;code&gt;systemd-run&lt;/code&gt; when launching programs, but it’s a low priority because I don’t actually use &lt;code&gt;lxpanel&lt;/code&gt; to launch programs; I just have the menu there to make sure that I didn’t break the icons for programs that I package (I did that once back when I was Parabola’s packager for Iceweasel and IceCat).&lt;/p&gt;
+&lt;p&gt;And that’s how I use systemd with X11.&lt;/p&gt;
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2016 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -196,34 +258,34 @@ Requires=wm-running@%i.target&lt;/code&gt;&lt;/pre&gt;
<link rel="alternate" type="text/html" href="./java-segfault-redux.html"/>
<link rel="alternate" type="text/markdown" href="./java-segfault-redux.md"/>
<id>https://lukeshu.com/blog/java-segfault-redux.html</id>
- <updated>2016-05-02T02:28:19-04:00</updated>
+ <updated>2016-02-28T00:00:00+00:00</updated>
<published>2016-02-28T00:00:00+00:00</published>
<title>My favorite bug: segfaults in Java (redux)</title>
<content type="html">&lt;h1 id="my-favorite-bug-segfaults-in-java-redux"&gt;My favorite bug: segfaults in Java (redux)&lt;/h1&gt;
-&lt;p&gt;Two years ago, I &lt;a href="./java-segfault.html"&gt;wrote&lt;/a&gt; about one of my favorite bugs that I'd squashed two years before that. About a year after that, someone posted it &lt;a href="https://news.ycombinator.com/item?id=9283571"&gt;on Hacker News&lt;/a&gt;.&lt;/p&gt;
-&lt;p&gt;There was some fun discussion about it, but also some confusion. After finishing a season of mentoring team 4272, I've decided that it would be fun to re-visit the article, and dig up the old actual code, instead of pseudo-code, hopefully improving the clarity (and providing a light introduction for anyone wanting to get into modifying the current SmartDashbaord).&lt;/p&gt;
+&lt;p&gt;Two years ago, I &lt;a href="./java-segfault.html"&gt;wrote&lt;/a&gt; about one of my favorite bugs that I’d squashed two years before that. About a year after that, someone posted it &lt;a href="https://news.ycombinator.com/item?id=9283571"&gt;on Hacker News&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;There was some fun discussion about it, but also some confusion. After finishing a season of mentoring team 4272, I’ve decided that it would be fun to re-visit the article, and dig up the old actual code, instead of pseudo-code, hopefully improving the clarity (and providing a light introduction for anyone wanting to get into modifying the current SmartDashboard).&lt;/p&gt;
&lt;h2 id="the-context"&gt;The context&lt;/h2&gt;
-&lt;p&gt;In 2012, I was a high school senior, and lead programmer programmer on the FIRST Robotics Competition team 1024. For the unfamiliar, the relevant part of the setup is that there are 2 minute and 15 second matches in which you have a 120 pound robot that sometimes runs autonomously, and sometimes is controlled over WiFi from a person at a laptop running stock &amp;quot;driver station&amp;quot; software and modifiable &amp;quot;dashboard&amp;quot; software.&lt;/p&gt;
+&lt;p&gt;In 2012, I was a high school senior, and lead programmer on the FIRST Robotics Competition team 1024. For the unfamiliar, the relevant part of the setup is that there are 2 minute and 15 second matches in which you have a 120 pound robot that sometimes runs autonomously, and sometimes is controlled over WiFi from a person at a laptop running stock “driver station” software and modifiable “dashboard” software.&lt;/p&gt;
&lt;p&gt;That year, we mostly used the dashboard software to allow the human driver and operator to monitor sensors on the robot, one of them being a video feed from a web-cam mounted on it. This was really easy because the new standard dashboard program had a click-and-drag interface to add stock widgets; you just had to make sure the code on the robot was actually sending the data.&lt;/p&gt;
-&lt;p&gt;That's great, until when debugging things, the dashboard would suddenly vanish. If it was run manually from a terminal (instead of letting the driver station software launch it), you would see a core dump indicating a segmentation fault.&lt;/p&gt;
-&lt;p&gt;This wasn't just us either; I spoke with people on other teams, everyone who was streaming video had this issue. But, because it only happened every couple of minutes, and a match is only 2:15, it didn't need to run very long, they just crossed their fingers and hoped it didn't happen during a match.&lt;/p&gt;
-&lt;p&gt;The dashboard was written in Java, and the source was available (under a 3-clause BSD license) via read-only SVN at &lt;code&gt;http://firstforge.wpi.edu/svn/repos/smart_dashboard/trunk&lt;/code&gt; (which is unfortunately no longer online, fortunately I'd posted some snapshots on the web). So I dove in, hunting for the bug.&lt;/p&gt;
+&lt;p&gt;That’s great, until, while debugging things, the dashboard would suddenly vanish. If it was run manually from a terminal (instead of letting the driver station software launch it), you would see a core dump indicating a segmentation fault.&lt;/p&gt;
+&lt;p&gt;This wasn’t just us either; I spoke with people on other teams; everyone who was streaming video had this issue. But, because it only happened every couple of minutes, and a match is only 2:15, it didn’t need to run very long; they just crossed their fingers and hoped it didn’t happen during a match.&lt;/p&gt;
+&lt;p&gt;The dashboard was written in Java, and the source was available (under a 3-clause BSD license) via read-only SVN at &lt;code&gt;http://firstforge.wpi.edu/svn/repos/smart_dashboard/trunk&lt;/code&gt; (which is unfortunately no longer online; fortunately, I’d posted some snapshots on the web). So I dove in, hunting for the bug.&lt;/p&gt;
&lt;p&gt;The repository was divided into several NetBeans projects (not exhaustively listed):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://gitorious.org/absfrc/sources.git/?p=absfrc:sources.git;a=blob_plain;f=smartdashboard-client-2012-1-any.src.tar.xz;hb=HEAD"&gt;&lt;code&gt;client/smartdashboard&lt;/code&gt;&lt;/a&gt;: The main dashboard program, has a plugin architecture.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gitorious.org/absfrc/sources.git/?p=absfrc:sources.git;a=blob_plain;f=wpijavacv-208-1-any.src.tar.xz;hb=HEAD"&gt;&lt;code&gt;WPIJavaCV&lt;/code&gt;&lt;/a&gt;: A higher-level wrapper around JavaCV, itself a Java Native Interface (JNI) wrapper to talk to OpenCV (C and C++).&lt;/li&gt;
&lt;li&gt;&lt;a href="https://gitorious.org/absfrc/sources.git/?p=absfrc:sources.git;a=blob_plain;f=smartdashboard-extension-wpicameraextension-210-1-any.src.tar.xz;hb=HEAD"&gt;&lt;code&gt;extensions/camera/WPICameraExtension&lt;/code&gt;&lt;/a&gt;: The standard camera feed plugin, processes the video through WPIJavaCV.&lt;/li&gt;
&lt;/ul&gt;
-&lt;p&gt;I figured that the bug must be somewhere in the C or C++ code that was being called by JavaCV, because that's the language where segfaults happen. It was especially a pain to track down the pointers that were causing the issue, because it was hard with native debuggers to see through all of the JVM stuff to the OpenCV code, and the OpenCV stuff is opaque to Java debuggers.&lt;/p&gt;
-&lt;p&gt;Eventually the issue lead me back into the WPICameraExtension, then into WPIJavaCV--there was a native pointer being stored in a Java variable; Java code called the native routine to &lt;code&gt;free()&lt;/code&gt; the structure, but then tried to feed it to another routine later. This lead to difficulty again--tracking objects with Java debuggers was hard because they don't expect the program to suddenly segfault; it's Java code, Java doesn't segfault, it throws exceptions!&lt;/p&gt;
-&lt;p&gt;With the help of &lt;code&gt;println()&lt;/code&gt; I was eventually able to see that some code was executing in an order that straight didn't make sense.&lt;/p&gt;
+&lt;p&gt;I figured that the bug must be somewhere in the C or C++ code that was being called by JavaCV, because that’s the language where segfaults happen. It was especially a pain to track down the pointers that were causing the issue, because it was hard with native debuggers to see through all of the JVM stuff to the OpenCV code, and the OpenCV stuff is opaque to Java debuggers.&lt;/p&gt;
+&lt;p&gt;Eventually the issue led me back into the WPICameraExtension, then into WPIJavaCV—there was a native pointer being stored in a Java variable; Java code called the native routine to &lt;code&gt;free()&lt;/code&gt; the structure, but then tried to feed it to another routine later. This led to difficulty again—tracking objects with Java debuggers was hard because they don’t expect the program to suddenly segfault; it’s Java code, Java doesn’t segfault, it throws exceptions!&lt;/p&gt;
+&lt;p&gt;With the help of &lt;code&gt;println()&lt;/code&gt; I was eventually able to see that some code was executing in an order that straight didn’t make sense.&lt;/p&gt;
&lt;h2 id="the-bug"&gt;The bug&lt;/h2&gt;
-&lt;p&gt;The basic flow of WPIJavaCV is you have a &lt;code&gt;WPICamera&lt;/code&gt;, and you call &lt;code&gt;.getNewImage()&lt;/code&gt; on it, which gives you a &lt;code&gt;WPIImage&lt;/code&gt;, which you could do all kinds of fancy OpenCV things on, but then ultimately call &lt;code&gt;.getBufferedImage()&lt;/code&gt;, which gives you a &lt;code&gt;java.awt.image.BufferedImage&lt;/code&gt; that you can pass to Swing to draw on the screen. You do this every for frame. Which is exactly what &lt;code&gt;WPICameraExtension.java&lt;/code&gt; did, except that &amp;quot;all kinds of fancy OpenCV things&amp;quot; consisted only of:&lt;/p&gt;
+&lt;p&gt;The basic flow of WPIJavaCV is you have a &lt;code&gt;WPICamera&lt;/code&gt;, and you call &lt;code&gt;.getNewImage()&lt;/code&gt; on it, which gives you a &lt;code&gt;WPIImage&lt;/code&gt;, which you could do all kinds of fancy OpenCV things on, but then ultimately call &lt;code&gt;.getBufferedImage()&lt;/code&gt;, which gives you a &lt;code&gt;java.awt.image.BufferedImage&lt;/code&gt; that you can pass to Swing to draw on the screen. You do this for every frame. Which is exactly what &lt;code&gt;WPICameraExtension.java&lt;/code&gt; did, except that “all kinds of fancy OpenCV things” consisted only of:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public WPIImage processImage(WPIColorImage rawImage) {
return rawImage;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The idea was that you would extend the class, overriding that one method, if you wanted to do anything fancy.&lt;/p&gt;
-&lt;p&gt;One of the neat things about WPIJavaCV was that every OpenCV object class extended had a &lt;code&gt;finalize()&lt;/code&gt; method (via inheriting from the abstract class &lt;code&gt;WPIDisposable&lt;/code&gt;) that freed the underlying C/C++ memory, so you didn't have to worry about memory leaks like in plain JavaCV. To inherit from &lt;code&gt;WPIDisposable&lt;/code&gt;, you had to write a &lt;code&gt;disposed()&lt;/code&gt; method that actually freed the memory. This was better than writing &lt;code&gt;finalize()&lt;/code&gt; directly, because it did some safety with NULL pointers and idempotency if you wanted to manually free something early.&lt;/p&gt;
+&lt;p&gt;One of the neat things about WPIJavaCV was that every OpenCV object class had a &lt;code&gt;finalize()&lt;/code&gt; method (via inheriting from the abstract class &lt;code&gt;WPIDisposable&lt;/code&gt;) that freed the underlying C/C++ memory, so you didn’t have to worry about memory leaks like in plain JavaCV. To inherit from &lt;code&gt;WPIDisposable&lt;/code&gt;, you had to write a &lt;code&gt;disposed()&lt;/code&gt; method that actually freed the memory. This was better than writing &lt;code&gt;finalize()&lt;/code&gt; directly, because it added NULL-pointer safety and idempotency if you wanted to manually free something early.&lt;/p&gt;
&lt;p&gt;Now, &lt;code&gt;edu.wpi.first.WPIImage.disposed()&lt;/code&gt; called &lt;code&gt;&lt;a href="https://github.com/bytedeco/javacv/blob/svn/src/com/googlecode/javacv/cpp/opencv_core.java#L398"&gt;com.googlecode.javacv.cpp.opencv_core.IplImage&lt;/a&gt;.release()&lt;/code&gt;, which called (via JNI) &lt;code&gt;IplImage::release()&lt;/code&gt;, which called libc &lt;code&gt;free()&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;@Override
protected void disposed() {
@@ -240,12 +302,12 @@ public BufferedImage getBufferedImage() {
return image.getBufferedImage();
}&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;The &lt;code&gt;println()&lt;/code&gt; output I saw that didn't make sense was that &lt;code&gt;someFrame.finalize()&lt;/code&gt; was running before &lt;code&gt;someFrame.getBuffereImage()&lt;/code&gt; had returned!&lt;/p&gt;
-&lt;p&gt;You see, if it is waiting for the return value of a method &lt;code&gt;m()&lt;/code&gt; of object &lt;code&gt;a&lt;/code&gt;, and code in &lt;code&gt;m()&lt;/code&gt; that is yet to be executed doesn't access any other methods or properties of &lt;code&gt;a&lt;/code&gt;, then it will go ahead and consider &lt;code&gt;a&lt;/code&gt; eligible for garbage collection before &lt;code&gt;m()&lt;/code&gt; has finished running.&lt;/p&gt;
-&lt;p&gt;Put another way, &lt;code&gt;this&lt;/code&gt; is passed to a method just like any other argument. If a method is done accessing &lt;code&gt;this&lt;/code&gt;, then it's &amp;quot;safe&amp;quot; for the JVM to go ahead and garbage collect it.&lt;/p&gt;
-&lt;p&gt;That is normally a safe &amp;quot;optimization&amp;quot; to make… except for when a destructor method (&lt;code&gt;finalize()&lt;/code&gt;) is defined for the object; the destructor can have side effects, and Java has no way to know whether it is safe for them to happen before &lt;code&gt;m()&lt;/code&gt; has finished running.&lt;/p&gt;
-&lt;p&gt;I'm not entirely sure if this is a &amp;quot;bug&amp;quot; in the compiler or the language specification, but I do believe that it's broken behavior.&lt;/p&gt;
-&lt;p&gt;Anyway, in this case it's unsafe with WPI's code.&lt;/p&gt;
+&lt;p&gt;The &lt;code&gt;println()&lt;/code&gt; output I saw that didn’t make sense was that &lt;code&gt;someFrame.finalize()&lt;/code&gt; was running before &lt;code&gt;someFrame.getBufferedImage()&lt;/code&gt; had returned!&lt;/p&gt;
+&lt;p&gt;You see, if the JVM is waiting for the return value of a method &lt;code&gt;m()&lt;/code&gt; of object &lt;code&gt;a&lt;/code&gt;, and the code in &lt;code&gt;m()&lt;/code&gt; that is yet to be executed doesn’t access any other methods or properties of &lt;code&gt;a&lt;/code&gt;, then it will go ahead and consider &lt;code&gt;a&lt;/code&gt; eligible for garbage collection before &lt;code&gt;m()&lt;/code&gt; has finished running.&lt;/p&gt;
+&lt;p&gt;Put another way, &lt;code&gt;this&lt;/code&gt; is passed to a method just like any other argument. If a method is done accessing &lt;code&gt;this&lt;/code&gt;, then it’s “safe” for the JVM to go ahead and garbage collect it.&lt;/p&gt;
+&lt;p&gt;That is normally a safe “optimization” to make… except for when a destructor method (&lt;code&gt;finalize()&lt;/code&gt;) is defined for the object; the destructor can have side effects, and Java has no way to know whether it is safe for them to happen before &lt;code&gt;m()&lt;/code&gt; has finished running.&lt;/p&gt;
+&lt;p&gt;I’m not entirely sure if this is a “bug” in the compiler or the language specification, but I do believe that it’s broken behavior.&lt;/p&gt;
+&lt;p&gt;Anyway, in this case it’s unsafe with WPI’s code.&lt;/p&gt;
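The premature-finalization hazard described above is real enough that Java 9 later added an explicit remedy: `java.lang.ref.Reference.reachabilityFence()`. It did not exist in 2012 and is not what the article's fix uses; this is a minimal modern sketch, with `NativeImage` and its fields as hypothetical stand-ins for a native-backed class like `WPIImage`:

```java
import java.lang.ref.Reference;

// Hypothetical stand-in for a native-backed object: finalize() "frees"
// the native resource, so `this` must stay reachable while a method is
// still using that resource.
class NativeImage {
    private long nativePtr = 42; // stand-in for a real C/C++ pointer

    int readPixel() {
        try {
            // In the real bug, the JVM could decide `this` is unreachable
            // here and run finalize(), freeing nativePtr out from under us.
            return (int) nativePtr;
        } finally {
            // Java 9+: guarantees `this` stays strongly reachable until
            // this point, so finalize() cannot run before the body is done.
            Reference.reachabilityFence(this);
        }
    }

    @Override
    protected void finalize() {
        nativePtr = 0; // stand-in for free(nativePtr)
    }
}
```

Unlike a bogus method call, the fence is specified to never be optimized away, which is exactly the guarantee the workaround below has to approximate.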
&lt;h2 id="my-work-around"&gt;My work-around&lt;/h2&gt;
&lt;p&gt;My work-around was to change this function in &lt;code&gt;WPIImage&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public BufferedImage getBufferedImage() {
@@ -253,7 +315,7 @@ public BufferedImage getBufferedImage() {
return image.getBufferedImage(); // `this` may get garbage collected before it returns!
}&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;In the above code, &lt;code&gt;this&lt;/code&gt; is a &lt;code&gt;WPIImage&lt;/code&gt;, and it may get garbage collected between the time that &lt;code&gt;image.getBufferedImage()&lt;/code&gt; is dispatched, and the time that &lt;code&gt;image.getBufferedImage()&lt;/code&gt; accesses native memory. When it is garbage collected, it calls &lt;code&gt;image.release()&lt;/code&gt;, which &lt;code&gt;free()&lt;/code&gt;s that native memory. That seems pretty unlikely to happen; that's a very small gap of time. However, running 30 times a second, eventually bad luck with the garbage collector happens, and the program crashes.&lt;/p&gt;
+&lt;p&gt;In the above code, &lt;code&gt;this&lt;/code&gt; is a &lt;code&gt;WPIImage&lt;/code&gt;, and it may get garbage collected between the time that &lt;code&gt;image.getBufferedImage()&lt;/code&gt; is dispatched, and the time that &lt;code&gt;image.getBufferedImage()&lt;/code&gt; accesses native memory. When it is garbage collected, it calls &lt;code&gt;image.release()&lt;/code&gt;, which &lt;code&gt;free()&lt;/code&gt;s that native memory. That seems pretty unlikely to happen; that’s a very small gap of time. However, running 30 times a second, eventually bad luck with the garbage collector happens, and the program crashes.&lt;/p&gt;
&lt;p&gt;The work-around was to insert a bogus call that keeps &lt;code&gt;this&lt;/code&gt; around until after we are also done with &lt;code&gt;image&lt;/code&gt;, changing it to this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public BufferedImage getBufferedImage() {
@@ -262,10 +324,10 @@ public BufferedImage getBufferedImage() {
getWidth(); // bogus call to keep `this` around
return ret;
}&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Yeah. After spending weeks wading through though thousands of lines of Java, C, and C++, a bogus call to a method I didn't care about was the fix.&lt;/p&gt;
-&lt;p&gt;TheLoneWolfling on Hacker News noted that they'd be worried about the JVM optimizing out the call to &lt;code&gt;getWidth()&lt;/code&gt;. I'm not, because &lt;code&gt;WPIImage.getWidth()&lt;/code&gt; calls &lt;code&gt;IplImage.width()&lt;/code&gt;, which is declared as &lt;code&gt;native&lt;/code&gt;; the JVM must run it because it might have side effects. On the other hand, looking back, I think I just shrunk the window for things to go wrong: it may be possible for the garbage collection to trigger in the time between &lt;code&gt;getWidth()&lt;/code&gt; being dispatched and &lt;code&gt;width()&lt;/code&gt; running. Perhaps there was something in the C/C++ code that made it safe, I don't recall, and don't care quite enough to dig into OpenCV internals again. Or perhaps I'm mis-remembering the fix (which I don't actually have a file of), and I called some other method that &lt;em&gt;could&lt;/em&gt; get optimized out (though I &lt;em&gt;do&lt;/em&gt; believe that it was either &lt;code&gt;getWidth()&lt;/code&gt; or &lt;code&gt;getHeight()&lt;/code&gt;).&lt;/p&gt;
-&lt;h2 id="wpis-fix"&gt;WPI's fix&lt;/h2&gt;
-&lt;p&gt;Four years later, the SmartDashboard is still being used! But it no longer has this bug, and it's not using my workaround. So, how did the WPILib developers fix it?&lt;/p&gt;
+&lt;p&gt;Yeah. After spending weeks wading through thousands of lines of Java, C, and C++, a bogus call to a method I didn’t care about was the fix.&lt;/p&gt;
+&lt;p&gt;TheLoneWolfling on Hacker News noted that they’d be worried about the JVM optimizing out the call to &lt;code&gt;getWidth()&lt;/code&gt;. I’m not, because &lt;code&gt;WPIImage.getWidth()&lt;/code&gt; calls &lt;code&gt;IplImage.width()&lt;/code&gt;, which is declared as &lt;code&gt;native&lt;/code&gt;; the JVM must run it because it might have side effects. On the other hand, looking back, I think I just shrunk the window for things to go wrong: it may be possible for the garbage collection to trigger in the time between &lt;code&gt;getWidth()&lt;/code&gt; being dispatched and &lt;code&gt;width()&lt;/code&gt; running. Perhaps there was something in the C/C++ code that made it safe, I don’t recall, and don’t care quite enough to dig into OpenCV internals again. Or perhaps I’m mis-remembering the fix (which I don’t actually have a file of), and I called some other method that &lt;em&gt;could&lt;/em&gt; get optimized out (though I &lt;em&gt;do&lt;/em&gt; believe that it was either &lt;code&gt;getWidth()&lt;/code&gt; or &lt;code&gt;getHeight()&lt;/code&gt;).&lt;/p&gt;
+&lt;h2 id="wpis-fix"&gt;WPI’s fix&lt;/h2&gt;
+&lt;p&gt;Four years later, the SmartDashboard is still being used! But it no longer has this bug, and it’s not using my workaround. So, how did the WPILib developers fix it?&lt;/p&gt;
&lt;p&gt;Well, the code now lives &lt;a href="https://usfirst.collab.net/gerrit/#/admin/projects/"&gt;in git at collab.net&lt;/a&gt;, so I decided to take a look.&lt;/p&gt;
&lt;p&gt;They stripped out WPIJavaCV from the main video feed widget, and now use a purely Java implementation of MPJPEG streaming.&lt;/p&gt;
&lt;p&gt;However, the old video feed widget is still available as an extension (so that you can still do cool things with &lt;code&gt;processImage&lt;/code&gt;), and it also no longer has this bug. Their fix was to put a mutex around all accesses to &lt;code&gt;image&lt;/code&gt;, which should have been the obvious solution to me.&lt;/p&gt;
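The lock-based fix can be sketched like this (a hypothetical class, not the actual WPILib source): every access to the native pointer, including disposal, takes the same lock, so a `free()` can never land in the middle of a use, and a repeated dispose is a harmless no-op.

```java
// Hypothetical sketch of the mutex fix: all accesses to the native
// pointer, including disposal from the finalizer, go through one lock.
class GuardedImage {
    private final Object lock = new Object();
    private long nativePtr = 7; // stand-in for the real native pointer

    int use() {
        synchronized (lock) {
            if (nativePtr == 0) {
                throw new IllegalStateException("already disposed");
            }
            return (int) nativePtr; // safe: dispose() cannot run mid-call
        }
    }

    void dispose() {
        synchronized (lock) {
            nativePtr = 0; // "free" at most once; idempotent on repeat calls
        }
    }

    @Override
    protected void finalize() {
        dispose();
    }
}
```

Even if the garbage collector finalizes the object while `use()` is executing, the finalizer blocks on the lock until the in-flight call finishes, which is why this approach closes the race that the bogus-call trick only narrows.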
@@ -279,13 +341,13 @@ public BufferedImage getBufferedImage() {
<link rel="alternate" type="text/html" href="./nginx-mediawiki.html"/>
<link rel="alternate" type="text/markdown" href="./nginx-mediawiki.md"/>
<id>https://lukeshu.com/blog/nginx-mediawiki.html</id>
- <updated>2015-05-19T23:53:52-06:00</updated>
+ <updated>2015-05-19T00:00:00+00:00</updated>
<published>2015-05-19T00:00:00+00:00</published>
<title>An Nginx configuration for MediaWiki</title>
<content type="html">&lt;h1 id="an-nginx-configuration-for-mediawiki"&gt;An Nginx configuration for MediaWiki&lt;/h1&gt;
-&lt;p&gt;There are &lt;a href="http://wiki.nginx.org/MediaWiki"&gt;several&lt;/a&gt; &lt;a href="https://wiki.archlinux.org/index.php/MediaWiki#Nginx"&gt;example&lt;/a&gt; &lt;a href="https://www.mediawiki.org/wiki/Manual:Short_URL/wiki/Page_title_--_nginx_rewrite--root_access"&gt;Nginx&lt;/a&gt; &lt;a href="https://www.mediawiki.org/wiki/Manual:Short_URL/Page_title_-_nginx,_Root_Access,_PHP_as_a_CGI_module"&gt;configurations&lt;/a&gt; &lt;a href="http://wiki.nginx.org/RHEL_5.4_%2B_Nginx_%2B_Mediawiki"&gt;for&lt;/a&gt; &lt;a href="http://stackoverflow.com/questions/11080666/mediawiki-on-nginx"&gt;MediaWiki&lt;/a&gt; floating around the web. Many of them don't block the user from accessing things like &lt;code&gt;/serialized/&lt;/code&gt;. Many of them also &lt;a href="https://labs.parabola.nu/issues/725"&gt;don't correctly handle&lt;/a&gt; a wiki page named &lt;code&gt;FAQ&lt;/code&gt;, since that is a name of a file in the MediaWiki root! In fact, the configuration used on the official Nginx Wiki has both of those issues!&lt;/p&gt;
+&lt;p&gt;There are &lt;a href="http://wiki.nginx.org/MediaWiki"&gt;several&lt;/a&gt; &lt;a href="https://wiki.archlinux.org/index.php/MediaWiki#Nginx"&gt;example&lt;/a&gt; &lt;a href="https://www.mediawiki.org/wiki/Manual:Short_URL/wiki/Page_title_--_nginx_rewrite--root_access"&gt;Nginx&lt;/a&gt; &lt;a href="https://www.mediawiki.org/wiki/Manual:Short_URL/Page_title_-_nginx,_Root_Access,_PHP_as_a_CGI_module"&gt;configurations&lt;/a&gt; &lt;a href="http://wiki.nginx.org/RHEL_5.4_%2B_Nginx_%2B_Mediawiki"&gt;for&lt;/a&gt; &lt;a href="http://stackoverflow.com/questions/11080666/mediawiki-on-nginx"&gt;MediaWiki&lt;/a&gt; floating around the web. Many of them don’t block the user from accessing things like &lt;code&gt;/serialized/&lt;/code&gt;. Many of them also &lt;a href="https://labs.parabola.nu/issues/725"&gt;don’t correctly handle&lt;/a&gt; a wiki page named &lt;code&gt;FAQ&lt;/code&gt;, since that is a name of a file in the MediaWiki root! In fact, the configuration used on the official Nginx Wiki has both of those issues!&lt;/p&gt;
&lt;p&gt;This is because most of the configurations floating around basically try to pass all requests through, and blacklist certain requests, either denying them, or passing them through to &lt;code&gt;index.php&lt;/code&gt;.&lt;/p&gt;
-&lt;p&gt;It's my view that blacklisting is inferior to whitelisting in situations like this. So, I developed the following configuration that instead works by whitelisting certain paths.&lt;/p&gt;
+&lt;p&gt;It’s my view that blacklisting is inferior to whitelisting in situations like this. So, I developed the following configuration that instead works by whitelisting certain paths.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;root /path/to/your/mediawiki; # obviously, change this line
index index.php;
@@ -319,7 +381,7 @@ location @php {
fastcgi_pass unix:/run/php-fpm/wiki.sock;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We are now using this configuration on &lt;a href="https://wiki.parabola.nu/"&gt;ParabolaWiki&lt;/a&gt;, but with an alias for &lt;code&gt;location = /favicon.ico&lt;/code&gt; to the correct file in the skin, and with FastCGI caching for PHP.&lt;/p&gt;
-&lt;p&gt;The only thing I don't like about this is the &lt;code&gt;try_files /var/emtpy&lt;/code&gt; bits--surely there is a better way to have it go to one of the &lt;code&gt;@&lt;/code&gt; location blocks, but I couldn't figure it out.&lt;/p&gt;
+&lt;p&gt;The only thing I don’t like about this is the &lt;code&gt;try_files /var/emtpy&lt;/code&gt; bits—surely there is a better way to have it go to one of the &lt;code&gt;@&lt;/code&gt; location blocks, but I couldn’t figure it out.&lt;/p&gt;
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2015 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -330,14 +392,14 @@ location @php {
<link rel="alternate" type="text/html" href="./lp2015-videos.html"/>
<link rel="alternate" type="text/markdown" href="./lp2015-videos.md"/>
<id>https://lukeshu.com/blog/lp2015-videos.html</id>
- <updated>2015-03-22T07:52:39-04:00</updated>
+ <updated>2015-03-22T00:00:00+00:00</updated>
<published>2015-03-22T00:00:00+00:00</published>
<title>I took some videos at LibrePlanet</title>
<content type="html">&lt;h1 id="i-took-some-videos-at-libreplanet"&gt;I took some videos at LibrePlanet&lt;/h1&gt;
-&lt;p&gt;I'm at &lt;a href="https://libreplanet.org/2015/"&gt;LibrePlanet&lt;/a&gt;, and have been loving the talks. For most of yesterday, there was a series of short &amp;quot;lightning&amp;quot; talks in room 144. I decided to hang out in that room for the later part of the day, because while most of the talks were live streamed and recorded, there were no cameras in room 144; so I couldn't watch them later.&lt;/p&gt;
+&lt;p&gt;I’m at &lt;a href="https://libreplanet.org/2015/"&gt;LibrePlanet&lt;/a&gt;, and have been loving the talks. For most of yesterday, there was a series of short “lightning” talks in room 144. I decided to hang out in that room for the later part of the day, because while most of the talks were live streamed and recorded, there were no cameras in room 144; so I couldn’t watch them later.&lt;/p&gt;
&lt;p&gt;Way too late in the day, I remembered that I have the capability to record videos, so I caught the last two talks in 144.&lt;/p&gt;
&lt;p&gt;I apologize for the changing orientation.&lt;/p&gt;
-&lt;p&gt;&lt;a href="https://lukeshu.com/dump/lp-2015-last-2-short-talks.ogg"&gt;Here's the video I took&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;a href="https://lukeshu.com/dump/lp-2015-last-2-short-talks.ogg"&gt;Here’s the video I took&lt;/a&gt;.&lt;/p&gt;
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2015 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -348,15 +410,15 @@ location @php {
<link rel="alternate" type="text/html" href="./build-bash-1.html"/>
<link rel="alternate" type="text/markdown" href="./build-bash-1.md"/>
<id>https://lukeshu.com/blog/build-bash-1.html</id>
- <updated>2016-05-02T02:20:41-04:00</updated>
+ <updated>2015-03-18T00:00:00+00:00</updated>
<published>2015-03-18T00:00:00+00:00</published>
<title>Building Bash 1.14.7 on a modern system</title>
<content type="html">&lt;h1 id="building-bash-1.14.7-on-a-modern-system"&gt;Building Bash 1.14.7 on a modern system&lt;/h1&gt;
&lt;p&gt;In a previous revision of my &lt;a href="./bash-arrays.html"&gt;Bash arrays post&lt;/a&gt;, I wrote:&lt;/p&gt;
&lt;blockquote&gt;
-&lt;p&gt;Bash 1.x won't compile with modern GCC, so I couldn't verify how it behaves.&lt;/p&gt;
+&lt;p&gt;Bash 1.x won’t compile with modern GCC, so I couldn’t verify how it behaves.&lt;/p&gt;
&lt;/blockquote&gt;
-&lt;p&gt;I recall spending a little time fighting with it, but apparently I didn't try very hard: getting Bash 1.14.7 to build on a modern box is mostly just adjusting it to use &lt;code&gt;stdarg&lt;/code&gt; instead of the no-longer-implemented &lt;code&gt;varargs&lt;/code&gt;. There's also a little fiddling with the pre-autoconf automatic configuration.&lt;/p&gt;
+&lt;p&gt;I recall spending a little time fighting with it, but apparently I didn’t try very hard: getting Bash 1.14.7 to build on a modern box is mostly just adjusting it to use &lt;code&gt;stdarg&lt;/code&gt; instead of the no-longer-implemented &lt;code&gt;varargs&lt;/code&gt;. There’s also a little fiddling with the pre-autoconf automatic configuration.&lt;/p&gt;
&lt;h2 id="stdarg"&gt;stdarg&lt;/h2&gt;
&lt;p&gt;Converting to &lt;code&gt;stdarg&lt;/code&gt; is pretty simple: For each variadic function (functions that take a variable number of arguments), follow these steps:&lt;/p&gt;
&lt;ol type="1"&gt;
@@ -366,27 +428,27 @@ location @php {
&lt;li&gt;Replace &lt;code&gt;va_start (args);&lt;/code&gt; with &lt;code&gt;va_start (args, format);&lt;/code&gt; in the function bodies.&lt;/li&gt;
&lt;li&gt;Replace &lt;code&gt;function_name ();&lt;/code&gt; with &lt;code&gt;function_name (char *, ...)&lt;/code&gt; in header files and/or at the top of C files.&lt;/li&gt;
&lt;/ol&gt;
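The four steps above can be sketched with a small, self-contained example. This is a hypothetical printf-style helper, not a function from Bash itself; it shows the post-conversion shape: `char *format, ...` in the signature, `va_list` declared in the body, and `va_start` given both arguments.

```c
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper showing the stdarg-converted form.
 * Under the old varargs interface this would have been declared
 * "format_message (buf, size, format, va_alist) va_dcl" with a
 * bare "va_start (args);" in the body. */
int format_message (char *buf, int size, char *format, ...)
{
  va_list args;
  int n;

  va_start (args, format);   /* step 3: was va_start (args); */
  n = vsnprintf (buf, size, format, args);
  va_end (args);
  return n;                  /* number of characters formatted */
}
```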
-&lt;p&gt;There's one function that uses the variable name &lt;code&gt;control&lt;/code&gt; instead of &lt;code&gt;format&lt;/code&gt;.&lt;/p&gt;
-&lt;p&gt;I've prepared &lt;a href="./bash-1.14.7-gcc4-stdarg.patch"&gt;a patch&lt;/a&gt; that does this.&lt;/p&gt;
+&lt;p&gt;There’s one function that uses the variable name &lt;code&gt;control&lt;/code&gt; instead of &lt;code&gt;format&lt;/code&gt;.&lt;/p&gt;
+&lt;p&gt;I’ve prepared &lt;a href="./bash-1.14.7-gcc4-stdarg.patch"&gt;a patch&lt;/a&gt; that does this.&lt;/p&gt;
&lt;h2 id="configuration"&gt;Configuration&lt;/h2&gt;
-&lt;p&gt;Instead of using autoconf-style tests to test for compiler and platform features, Bash 1 used the file &lt;code&gt;machines.h&lt;/code&gt; that had &lt;code&gt;#ifdefs&lt;/code&gt; and a huge database of of different operating systems for different platforms. It's gross. And quite likely won't handle your modern operating system.&lt;/p&gt;
+&lt;p&gt;Instead of using autoconf-style tests to test for compiler and platform features, Bash 1 used the file &lt;code&gt;machines.h&lt;/code&gt; that had &lt;code&gt;#ifdefs&lt;/code&gt; and a huge database of different operating systems for different platforms. It’s gross. And quite likely won’t handle your modern operating system.&lt;/p&gt;
&lt;p&gt;I made these two small changes to &lt;code&gt;machines.h&lt;/code&gt; to get it to work correctly on my box:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;Replace &lt;code&gt;#if defined (i386)&lt;/code&gt; with &lt;code&gt;#if defined (i386) || defined (__x86_64__)&lt;/code&gt;. The purpose of this is obvious.&lt;/li&gt;
-&lt;li&gt;Add &lt;code&gt;#define USE_TERMCAP_EMULATION&lt;/code&gt; to the section for Linux [sic] on i386 (&lt;code&gt;# if !defined (done386) &amp;amp;&amp;amp; (defined (__linux__) || defined (linux))&lt;/code&gt;). What this does is tell it to link against libcurses to use curses termcap emulation, instead of linking against libtermcap (which doesn't exist on modern GNU/Linux systems).&lt;/li&gt;
+&lt;li&gt;Add &lt;code&gt;#define USE_TERMCAP_EMULATION&lt;/code&gt; to the section for Linux [sic] on i386 (&lt;code&gt;# if !defined (done386) &amp;amp;&amp;amp; (defined (__linux__) || defined (linux))&lt;/code&gt;). What this does is tell it to link against libcurses to use curses termcap emulation, instead of linking against libtermcap (which doesn’t exist on modern GNU/Linux systems).&lt;/li&gt;
&lt;/ol&gt;
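The effect of change 1 can be seen in a minimal sketch (the function name here is made up; the real `machines.h` guards a block of machine-description defines rather than a function):

```c
/* Sketch of the widened machines.h-style guard.  The stock guard
 * only matched 32-bit x86 ("i386"), so a build on x86_64 fell
 * through as an unknown machine. */
static int known_machine (void)
{
#if defined (i386) || defined (__x86_64__)
  return 1;   /* matched: x86 family, 32- or 64-bit */
#else
  return 0;   /* would still need its own machines.h entry */
#endif
}
```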
-&lt;p&gt;Again, I've prepared &lt;a href="./bash-1.14.7-machines-config.patch"&gt;a patch&lt;/a&gt; that does this.&lt;/p&gt;
+&lt;p&gt;Again, I’ve prepared &lt;a href="./bash-1.14.7-machines-config.patch"&gt;a patch&lt;/a&gt; that does this.&lt;/p&gt;
&lt;h2 id="building"&gt;Building&lt;/h2&gt;
&lt;p&gt;With those adjustments, it should build, but with quite a few warnings. Making a couple of changes to &lt;code&gt;CFLAGS&lt;/code&gt; should fix that:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;make CFLAGS=&amp;#39;-O -g -Werror -Wno-int-to-pointer-cast -Wno-pointer-to-int-cast -Wno-deprecated-declarations -include stdio.h -include stdlib.h -include string.h -Dexp2=bash_exp2&amp;#39;&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;That's a doozy! Let's break it down:&lt;/p&gt;
+&lt;p&gt;That’s a doozy! Let’s break it down:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;-O -g&lt;/code&gt; The default value for CFLAGS (defined in &lt;code&gt;cpp-Makefile&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-Werror&lt;/code&gt; Treat warnings as errors; force us to deal with any issues.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-Wno-int-to-pointer-cast -Wno-pointer-to-int-cast&lt;/code&gt; Allow casting between integers and pointers. Unfortunately, the way this version of Bash was designed requires this.&lt;/li&gt;
-&lt;li&gt;&lt;code&gt;-Wno-deprecated-declarations&lt;/code&gt; The &lt;code&gt;getwd&lt;/code&gt; function in &lt;code&gt;unistd.h&lt;/code&gt; is considered deprecated (use &lt;code&gt;getcwd&lt;/code&gt; instead). However, if &lt;code&gt;getcwd&lt;/code&gt; is available, Bash uses it's own &lt;code&gt;getwd&lt;/code&gt; wrapper around &lt;code&gt;getcwd&lt;/code&gt; (implemented in &lt;code&gt;general.c&lt;/code&gt;), and only uses the signature from &lt;code&gt;unistd.h&lt;/code&gt;, not the actuall implementation from libc.&lt;/li&gt;
+&lt;li&gt;&lt;code&gt;-Wno-deprecated-declarations&lt;/code&gt; The &lt;code&gt;getwd&lt;/code&gt; function in &lt;code&gt;unistd.h&lt;/code&gt; is considered deprecated (use &lt;code&gt;getcwd&lt;/code&gt; instead). However, if &lt;code&gt;getcwd&lt;/code&gt; is available, Bash uses its own &lt;code&gt;getwd&lt;/code&gt; wrapper around &lt;code&gt;getcwd&lt;/code&gt; (implemented in &lt;code&gt;general.c&lt;/code&gt;), and only uses the signature from &lt;code&gt;unistd.h&lt;/code&gt;, not the actual implementation from libc.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;-include stdio.h -include stdlib.h -include string.h&lt;/code&gt; Several files are missing these header file includes. If not for &lt;code&gt;-Werror&lt;/code&gt;, the default function signature fallbacks would work.&lt;/li&gt;
-&lt;li&gt;&lt;code&gt;-Dexp2=bash_exp2&lt;/code&gt; Avoid a conflict between the parser's &lt;code&gt;exp2&lt;/code&gt; helper function and &lt;code&gt;math.h&lt;/code&gt;'s base-2 exponential function.&lt;/li&gt;
+&lt;li&gt;&lt;code&gt;-Dexp2=bash_exp2&lt;/code&gt; Avoid a conflict between the parser’s &lt;code&gt;exp2&lt;/code&gt; helper function and &lt;code&gt;math.h&lt;/code&gt;’s base-2 exponential function.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Have fun, software archaeologists!&lt;/p&gt;
</content>
@@ -399,14 +461,14 @@ location @php {
<link rel="alternate" type="text/html" href="./purdue-cs-login.html"/>
<link rel="alternate" type="text/markdown" href="./purdue-cs-login.md"/>
<id>https://lukeshu.com/blog/purdue-cs-login.html</id>
- <updated>2016-03-21T02:34:10-04:00</updated>
+ <updated>2015-02-06T00:00:00+00:00</updated>
<published>2015-02-06T00:00:00+00:00</published>
<title>Customizing your login on Purdue CS computers (WIP, but updated)</title>
<content type="html">&lt;h1 id="customizing-your-login-on-purdue-cs-computers-wip-but-updated"&gt;Customizing your login on Purdue CS computers (WIP, but updated)&lt;/h1&gt;
&lt;blockquote&gt;
-&lt;p&gt;This article is currently a Work-In-Progress. Other than the one place where I say &amp;quot;I'm not sure&amp;quot;, the GDM section is complete. The network shares section is a mess, but has some good information.&lt;/p&gt;
+&lt;p&gt;This article is currently a Work-In-Progress. Other than the one place where I say “I’m not sure”, the GDM section is complete. The network shares section is a mess, but has some good information.&lt;/p&gt;
&lt;/blockquote&gt;
-&lt;p&gt;Most CS students at Purdue spend a lot of time on the lab boxes, but don't know a lot about them. This document tries to fix that.&lt;/p&gt;
+&lt;p&gt;Most CS students at Purdue spend a lot of time on the lab boxes, but don’t know a lot about them. This document tries to fix that.&lt;/p&gt;
&lt;p&gt;The lab boxes all run Gentoo.&lt;/p&gt;
&lt;h2 id="gdm-the-gnome-display-manager"&gt;GDM, the Gnome Display Manager&lt;/h2&gt;
&lt;p&gt;The boxes run &lt;code&gt;gdm&lt;/code&gt; (Gnome Display Manager) 2.20.11 for the login screen. This is an old version, and has a couple behaviors that are slightly different than new versions, but here are the important bits:&lt;/p&gt;
@@ -417,14 +479,14 @@ location @php {
&lt;/ul&gt;
&lt;p&gt;User configuration:&lt;/p&gt;
&lt;ul&gt;
-&lt;li&gt;&lt;code&gt;~/.dmrc&lt;/code&gt; (more recent versions use &lt;code&gt;~/.desktop&lt;/code&gt;, but Purdue boxes aren't running more recent versions)&lt;/li&gt;
+&lt;li&gt;&lt;code&gt;~/.dmrc&lt;/code&gt; (more recent versions use &lt;code&gt;~/.desktop&lt;/code&gt;, but Purdue boxes aren’t running more recent versions)&lt;/li&gt;
&lt;/ul&gt;
-&lt;h3 id="purdues-gdm-configuration"&gt;Purdue's GDM configuration&lt;/h3&gt;
+&lt;h3 id="purdues-gdm-configuration"&gt;Purdue’s GDM configuration&lt;/h3&gt;
&lt;p&gt;Now, &lt;code&gt;custom.conf&lt;/code&gt; sets&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;BaseXsession=/usr/local/share/xsessions/Xsession
SessionDesktopDir=/usr/local/share/xsessions/&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;This is important, because there are &lt;em&gt;multiple&lt;/em&gt; locations that look like these files; I take it that they were used at sometime in the past. Don't get tricked into thinking that it looks at &lt;code&gt;/etc/X11/gdm/Xsession&lt;/code&gt; (which exists, and is where it would look by default).&lt;/p&gt;
-&lt;p&gt;If you look at the GDM login screen, it has a &amp;quot;Sessions&amp;quot; button that opens a prompt where you can select any of several sessions:&lt;/p&gt;
+&lt;p&gt;This is important, because there are &lt;em&gt;multiple&lt;/em&gt; locations that look like these files; I take it that they were used at some time in the past. Don’t get tricked into thinking that it looks at &lt;code&gt;/etc/X11/gdm/Xsession&lt;/code&gt; (which exists, and is where it would look by default).&lt;/p&gt;
+&lt;p&gt;If you look at the GDM login screen, it has a “Sessions” button that opens a prompt where you can select any of several sessions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Last session&lt;/li&gt;
&lt;li&gt;1. MATE (&lt;code&gt;mate.desktop&lt;/code&gt;; &lt;code&gt;Exec=mate-session&lt;/code&gt;)&lt;/li&gt;
@@ -437,8 +499,8 @@ SessionDesktopDir=/usr/local/share/xsessions/&lt;/code&gt;&lt;/pre&gt;
&lt;li&gt;Failsafe Terminal (&lt;code&gt;ShowXtermFailsafeSession=true&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The main 6 are configured by the &lt;code&gt;.desktop&lt;/code&gt; files in &lt;code&gt;SessionDesktopDir=/usr/local/share/xsessions&lt;/code&gt;; the last 2 are auto-generated. The reason &lt;code&gt;ShowGnomeFailsafeSession&lt;/code&gt; correctly generates a Mate session instead of a Gnome session is because of the patch &lt;code&gt;/p/portage/*/overlay/gnome-base/gdm/files/gdm-2.20.11-mate.patch&lt;/code&gt;.&lt;/p&gt;
-&lt;p&gt;I'm not sure why Gnome shows up as &lt;code&gt;gnome.desktop&lt;/code&gt; instead of &lt;code&gt;GNOME&lt;/code&gt; as specified by &lt;code&gt;gnome.desktop:Name&lt;/code&gt;. I imagine it might be something related to the aforementioned patch, but I can't find anything in the patch that looks like it would screw that up; at least not without a better understanding of GDM's code.&lt;/p&gt;
-&lt;p&gt;Which of the main 6 is used by default (&amp;quot;Last Session&amp;quot;) is configured with &lt;code&gt;~/.dmrc:Session&lt;/code&gt;, which contains the basename of the associated &lt;code&gt;.desktop&lt;/code&gt; file (that is, without any directory part or file extension).&lt;/p&gt;
+&lt;p&gt;I’m not sure why Gnome shows up as &lt;code&gt;gnome.desktop&lt;/code&gt; instead of &lt;code&gt;GNOME&lt;/code&gt; as specified by &lt;code&gt;gnome.desktop:Name&lt;/code&gt;. I imagine it might be something related to the aforementioned patch, but I can’t find anything in the patch that looks like it would screw that up; at least not without a better understanding of GDM’s code.&lt;/p&gt;
+&lt;p&gt;Which of the main 6 is used by default (“Last Session”) is configured with &lt;code&gt;~/.dmrc:Session&lt;/code&gt;, which contains the basename of the associated &lt;code&gt;.desktop&lt;/code&gt; file (that is, without any directory part or file extension).&lt;/p&gt;
&lt;p&gt;Every one of the &lt;code&gt;.desktop&lt;/code&gt; files sets &lt;code&gt;Type=XSession&lt;/code&gt;, which means that instead of running the argument in &lt;code&gt;Exec=&lt;/code&gt; directly, it passes it as arguments to the &lt;code&gt;Xsession&lt;/code&gt; program (in the location configured by &lt;code&gt;BaseXsession&lt;/code&gt;).&lt;/p&gt;
&lt;h4 id="xsession"&gt;Xsession&lt;/h4&gt;
&lt;p&gt;So, now we get to read &lt;code&gt;/usr/local/share/xsessions/Xsession&lt;/code&gt;.&lt;/p&gt;
@@ -450,7 +512,7 @@ SessionDesktopDir=/usr/local/share/xsessions/&lt;/code&gt;&lt;/pre&gt;
&lt;li&gt;&lt;code&gt;xsetroot -default&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Fiddles with the maximum number of processes.&lt;/li&gt;
&lt;/ol&gt;
-&lt;p&gt;After that, it handles these 3 &amp;quot;special&amp;quot; arguments that were given to it by various &lt;code&gt;.desktop&lt;/code&gt; &lt;code&gt;Exec=&lt;/code&gt; lines:&lt;/p&gt;
+&lt;p&gt;After that, it handles these 3 “special” arguments that were given to it by various &lt;code&gt;.desktop&lt;/code&gt; &lt;code&gt;Exec=&lt;/code&gt; lines:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;failsafe&lt;/code&gt;: Runs a single xterm window. NB: This is NOT run by either of the failsafe options. It is likely a vestige from a prior configuration.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;startkde&lt;/code&gt;: Displays a message saying KDE is no longer available.&lt;/li&gt;
@@ -466,16 +528,16 @@ SessionDesktopDir=/usr/local/share/xsessions/&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;custom&lt;/code&gt;: Executes &lt;code&gt;~/.xsession&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;default&lt;/code&gt;: Executes &lt;code&gt;~/.Xrc.cs&lt;/code&gt;.&lt;/li&gt;
-&lt;li&gt;&lt;code&gt;mate-session&lt;/code&gt;: It has this whole script to start DBus, run the &lt;code&gt;mate-session&lt;/code&gt; command, then cleanup when it's done.&lt;/li&gt;
+&lt;li&gt;&lt;code&gt;mate-session&lt;/code&gt;: It has this whole script to start DBus, run the &lt;code&gt;mate-session&lt;/code&gt; command, then cleanup when it’s done.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;*&lt;/code&gt; (&lt;code&gt;fvwm2&lt;/code&gt;): Runs &lt;code&gt;eval exec &amp;quot;$@&amp;quot;&lt;/code&gt;, which results in it executing the &lt;code&gt;fvwm2&lt;/code&gt; command.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="network-shares"&gt;Network Shares&lt;/h2&gt;
&lt;p&gt;Your data is on various hosts. I believe most undergrads have their data on &lt;code&gt;data.cs.purdue.edu&lt;/code&gt; (or just &lt;a href="https://en.wikipedia.org/wiki/Data_%28Star_Trek%29"&gt;&lt;code&gt;data&lt;/code&gt;&lt;/a&gt;). Others have theirs on &lt;a href="http://swfanon.wikia.com/wiki/Antor"&gt;&lt;code&gt;antor&lt;/code&gt;&lt;/a&gt; or &lt;a href="https://en.wikipedia.org/wiki/Tux"&gt;&lt;code&gt;tux&lt;/code&gt;&lt;/a&gt; (that I know of).&lt;/p&gt;
-&lt;p&gt;Most of the boxes with tons of storage have many network cards; each with a different IP; a single host's IPs are mostly the same, but with varying 3rd octets. For example, &lt;code&gt;data&lt;/code&gt; is 128.10.X.13. If you need a particular value of X, but don't want to remember the other octets; they are individually addressed with &lt;code&gt;BASENAME-NUMBER.cs.purdue.edu&lt;/code&gt;. For example, &lt;code&gt;data-25.cs.purdu.edu&lt;/code&gt; is 128.10.25.13.&lt;/p&gt;
+&lt;p&gt;Most of the boxes with tons of storage have many network cards, each with a different IP; a single host’s IPs are mostly the same, but with varying 3rd octets. For example, &lt;code&gt;data&lt;/code&gt; is 128.10.X.13. If you need a particular value of X, but don’t want to remember the other octets, they are individually addressed with &lt;code&gt;BASENAME-NUMBER.cs.purdue.edu&lt;/code&gt;. For example, &lt;code&gt;data-25.cs.purdue.edu&lt;/code&gt; is 128.10.25.13.&lt;/p&gt;
&lt;p&gt;They use &lt;a href="https://www.kernel.org/pub/linux/daemons/autofs/"&gt;AutoFS&lt;/a&gt; quite extensively. The maps are generated dynamically by &lt;code&gt;/etc/autofs/*.map&lt;/code&gt;, which are all symlinks to &lt;code&gt;/usr/libexec/amd2autofs&lt;/code&gt;. As far as I can tell, &lt;code&gt;amd2autofs&lt;/code&gt; is custom to Purdue. Its source lives in &lt;code&gt;/p/portage/*/overlay/net-fs/autofs/files/amd2autofs.c&lt;/code&gt;. The name appears to be a misnomer; it seems to claim to dynamically translate from the configuration of &lt;a href="http://www.am-utils.org/"&gt;Auto Mounter Daemon (AMD)&lt;/a&gt; to AutoFS, but it actually talks to NIS. It does so using the &lt;code&gt;yp&lt;/code&gt; interface, which is in Glibc for compatibility, but is undocumented. For documentation for that interface, look at one of the BSDs, or Mac OS X. From the comments in the file, it appears that it once did look at the AMD configuration, but has since been changed.&lt;/p&gt;
&lt;p&gt;There are 3 mountpoints using AutoFS: &lt;code&gt;/homes&lt;/code&gt;, &lt;code&gt;/p&lt;/code&gt;, and &lt;code&gt;/u&lt;/code&gt;. &lt;code&gt;/homes&lt;/code&gt; creates symlinks on-demand from &lt;code&gt;/homes/USERNAME&lt;/code&gt; to &lt;code&gt;/u/BUCKET/USERNAME&lt;/code&gt;. &lt;code&gt;/u&lt;/code&gt; mounts NFS shares to &lt;code&gt;/u/SERVERNAME&lt;/code&gt; on-demand, and creates symlinks from &lt;code&gt;/u/BUCKET&lt;/code&gt; to &lt;code&gt;/u/SERVERNAME/BUCKET&lt;/code&gt; on-demand. &lt;code&gt;/p&lt;/code&gt; mounts on-demand various NFS shares that are organized by topic; the Xinu/MIPS tools are in &lt;code&gt;/p/xinu&lt;/code&gt;, the Portage tree is in &lt;code&gt;/p/portage&lt;/code&gt;.&lt;/p&gt;
-&lt;p&gt;I'm not sure how &lt;code&gt;scratch&lt;/code&gt; works; it seems to be heterogenous between different servers and families of lab boxes. Sometimes it's in &lt;code&gt;/u&lt;/code&gt;, sometimes it isn't.&lt;/p&gt;
-&lt;p&gt;This 3rd-party documentation was very helpful to me: &lt;a href="http://www.linux-consulting.com/Amd_AutoFS/" class="uri"&gt;http://www.linux-consulting.com/Amd_AutoFS/&lt;/a&gt; It's where Gentoo points for the AutoFS homepage, as it doesn't have a real homepage. Arch just points to FreshMeat. Debian points to kernel.org.&lt;/p&gt;
+&lt;p&gt;I’m not sure how &lt;code&gt;scratch&lt;/code&gt; works; it seems to be heterogeneous between different servers and families of lab boxes. Sometimes it’s in &lt;code&gt;/u&lt;/code&gt;, sometimes it isn’t.&lt;/p&gt;
+&lt;p&gt;This 3rd-party documentation was very helpful to me: &lt;a href="http://www.linux-consulting.com/Amd_AutoFS/" class="uri"&gt;http://www.linux-consulting.com/Amd_AutoFS/&lt;/a&gt; It’s where Gentoo points for the AutoFS homepage, as it doesn’t have a real homepage. Arch just points to FreshMeat. Debian points to kernel.org.&lt;/p&gt;
&lt;h3 id="lore"&gt;Lore&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/List_of_Star_Trek:_The_Next_Generation_characters#Lore"&gt;&lt;code&gt;lore&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Lore is a SunOS 5.10 box running on Sun-Fire V445 (sun4u) hardware. SunOS is NOT GNU/Linux, and sun4u is NOT x86.&lt;/p&gt;
@@ -490,14 +552,14 @@ SessionDesktopDir=/usr/local/share/xsessions/&lt;/code&gt;&lt;/pre&gt;
<link rel="alternate" type="text/html" href="./make-memoize.html"/>
<link rel="alternate" type="text/markdown" href="./make-memoize.md"/>
<id>https://lukeshu.com/blog/make-memoize.html</id>
- <updated>2016-03-21T02:34:10-04:00</updated>
+ <updated>2014-11-20T00:00:00+00:00</updated>
<published>2014-11-20T00:00:00+00:00</published>
<title>A memoization routine for GNU Make functions</title>
<content type="html">&lt;h1 id="a-memoization-routine-for-gnu-make-functions"&gt;A memoization routine for GNU Make functions&lt;/h1&gt;
-&lt;p&gt;I'm a big fan of &lt;a href="https://www.gnu.org/software/make/"&gt;GNU Make&lt;/a&gt;. I'm pretty knowledgeable about it, and was pretty active on the help-make mailing list for a while. Something that many experienced make-ers know of is John Graham-Cumming's &amp;quot;GNU Make Standard Library&amp;quot;, or &lt;a href="http://gmsl.sourceforge.net/"&gt;GMSL&lt;/a&gt;.&lt;/p&gt;
-&lt;p&gt;I don't like to use it, as I'm capable of defining macros myself as I need them instead of pulling in a 3rd party dependency (and generally like to stay away from the kind of Makefile that would lean heavily on something like GMSL).&lt;/p&gt;
+&lt;p&gt;I’m a big fan of &lt;a href="https://www.gnu.org/software/make/"&gt;GNU Make&lt;/a&gt;. I’m pretty knowledgeable about it, and was pretty active on the help-make mailing list for a while. Something that many experienced make-ers know of is John Graham-Cumming’s “GNU Make Standard Library”, or &lt;a href="http://gmsl.sourceforge.net/"&gt;GMSL&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;I don’t like to use it, as I’m capable of defining macros myself as I need them instead of pulling in a 3rd party dependency (and generally like to stay away from the kind of Makefile that would lean heavily on something like GMSL).&lt;/p&gt;
&lt;p&gt;However, one really neat thing that GMSL offers is a way to memoize expensive functions (such as those that shell out). I was considering pulling in GMSL for one of my projects, almost just for the &lt;code&gt;memoize&lt;/code&gt; function.&lt;/p&gt;
-&lt;p&gt;However, John's &lt;code&gt;memoize&lt;/code&gt; has a couple short-comings that made it unsuitable for my needs.&lt;/p&gt;
+&lt;p&gt;However, John’s &lt;code&gt;memoize&lt;/code&gt; has a couple short-comings that made it unsuitable for my needs.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Only allows functions that take one argument.&lt;/li&gt;
&lt;li&gt;Considers empty values to be unset; for my needs, an empty string is a valid value that should be cached.&lt;/li&gt;
@@ -536,7 +598,7 @@ _main = $(_$0_main)
_hash = __memoized_$(_$0_hash)
memoized = $(if $($(_hash)),,$(eval $(_hash) := _ $(_main)))$(call rest,$($(_hash)))&lt;/pre&gt;
&lt;p&gt;&lt;/code&gt;&lt;/p&gt;
-&lt;p&gt;Now, I'm pretty sure that should work, but I have only actually tested the first version.&lt;/p&gt;
+&lt;p&gt;Now, I’m pretty sure that should work, but I have only actually tested the first version.&lt;/p&gt;
&lt;h2 id="tldr"&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;Avoid doing things in Make that would make you lean on complex solutions like an external memoize function.&lt;/p&gt;
&lt;p&gt;However, if you do end up needing a more flexible memoize routine, I wrote one that you can use.&lt;/p&gt;
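The two behaviors the Make routine fixes (keying the cache on &lt;em&gt;all&lt;/em&gt; arguments, and treating an empty result as a valid cached value) can be illustrated with a hypothetical C analogue. Nothing here is a real Bash or GMSL API; the separate "filled" flag plays the same role as the `_ ` prefix trick in the Make version.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical memoization sketch mirroring the Make routine. */
struct memo { char key[128]; char val[128]; int filled; };
static struct memo cache[64];   /* fixed capacity, for the sketch */
static size_t cache_len = 0;
static int call_count = 0;      /* counts real invocations */

/* Stand-in for the expensive function (e.g. one that shells out). */
static const char *expensive (const char *a, const char *b)
{
  static char out[128];
  call_count++;
  snprintf (out, sizeof out, "%s/%s", a, b);
  return out;
}

static const char *memoized (const char *a, const char *b)
{
  char key[128];
  size_t i;
  /* Key on ALL arguments, joined with a separator unlikely to
   * appear in the values. */
  snprintf (key, sizeof key, "%s\x1f%s", a, b);
  for (i = 0; i < cache_len; i++)
    if (cache[i].filled && strcmp (cache[i].key, key) == 0)
      return cache[i].val;                  /* hit: skip the real call */
  snprintf (cache[cache_len].key, sizeof cache[cache_len].key, "%s", key);
  snprintf (cache[cache_len].val, sizeof cache[cache_len].val, "%s",
            expensive (a, b));
  cache[cache_len].filled = 1;              /* even "" is a valid value */
  return cache[cache_len++].val;
}
```

Calling `memoized` twice with the same arguments only invokes `expensive` once; an empty string result would be cached just the same, because the hit test checks `filled`, not the value.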
@@ -550,18 +612,18 @@ memoized = $(if $($(_hash)),,$(eval $(_hash) := _ $(_main)))$(call rest,$($(_has
<link rel="alternate" type="text/html" href="./ryf-routers.html"/>
<link rel="alternate" type="text/markdown" href="./ryf-routers.md"/>
<id>https://lukeshu.com/blog/ryf-routers.html</id>
- <updated>2014-09-12T00:21:05-04:00</updated>
+ <updated>2014-09-12T00:00:00+00:00</updated>
<published>2014-09-12T00:00:00+00:00</published>
<title>I'm excited about the new RYF-certified routers from ThinkPenguin</title>
- <content type="html">&lt;h1 id="im-excited-about-the-new-ryf-certified-routers-from-thinkpenguin"&gt;I'm excited about the new RYF-certified routers from ThinkPenguin&lt;/h1&gt;
+ <content type="html">&lt;h1 id="im-excited-about-the-new-ryf-certified-routers-from-thinkpenguin"&gt;I’m excited about the new RYF-certified routers from ThinkPenguin&lt;/h1&gt;
&lt;p&gt;I just learned that on Wednesday, the FSF &lt;a href="https://www.fsf.org/resources/hw/endorsement/thinkpenguin"&gt;awarded&lt;/a&gt; the &lt;abbr title="Respects Your Freedom"&gt;RYF&lt;/abbr&gt; certification to the &lt;a href="https://www.thinkpenguin.com/TPE-NWIFIROUTER"&gt;Think Penguin TPE-NWIFIROUTER&lt;/a&gt; wireless router.&lt;/p&gt;
-&lt;p&gt;I didn't find this information directly published up front, but simply: It is a re-branded &lt;strong&gt;TP-Link TL-841ND&lt;/strong&gt; modded to be running &lt;a href="http://librecmc.com/"&gt;libreCMC&lt;/a&gt;.&lt;/p&gt;
-&lt;p&gt;I've been a fan of the TL-841/740 line of routers for several years now. They are dirt cheap (if you go to Newegg and sort by &amp;quot;cheapest,&amp;quot; it's frequently the TL-740N), are extremely reliable, and run OpenWRT like a champ. They are my go-to routers.&lt;/p&gt;
+&lt;p&gt;I didn’t find this information directly published up front, but simply: It is a re-branded &lt;strong&gt;TP-Link TL-841ND&lt;/strong&gt; modded to be running &lt;a href="http://librecmc.com/"&gt;libreCMC&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;I’ve been a fan of the TL-841/740 line of routers for several years now. They are dirt cheap (if you go to Newegg and sort by “cheapest,” it’s frequently the TL-740N), are extremely reliable, and run OpenWRT like a champ. They are my go-to routers.&lt;/p&gt;
&lt;p&gt;(And they sure beat the snot out of the Arris TG862 that it seems like everyone has in their homes now. I hate that thing, it even has buggy packet scheduling.)&lt;/p&gt;
&lt;p&gt;So this announcement is &lt;del&gt;doubly&lt;/del&gt;triply exciting for me:&lt;/p&gt;
&lt;ul&gt;
-&lt;li&gt;I have a solid recommendation for a router that doesn't require me or them to manually install an after-market firmware (buy it from ThinkPenguin).&lt;/li&gt;
-&lt;li&gt;If it's for me, or someone technical, I can cut costs by getting a stock TP-Link from Newegg, installing libreCMC ourselves.&lt;/li&gt;
+&lt;li&gt;I have a solid recommendation for a router that doesn’t require me or them to manually install an after-market firmware (buy it from ThinkPenguin).&lt;/li&gt;
+&lt;li&gt;If it’s for me, or someone technical, I can cut costs by getting a stock TP-Link from Newegg, installing libreCMC ourselves.&lt;/li&gt;
&lt;li&gt;I can install a 100% libre distribution on my existing routers (until recently, they were not supported by any of the libre distributions, not for technical reasons, but lack of manpower).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I hope to get libreCMC installed on my boxes this weekend!&lt;/p&gt;
@@ -575,40 +637,40 @@ memoized = $(if $($(_hash)),,$(eval $(_hash) := _ $(_main)))$(call rest,$($(_has
<link rel="alternate" type="text/html" href="./what-im-working-on-fall-2014.html"/>
<link rel="alternate" type="text/markdown" href="./what-im-working-on-fall-2014.md"/>
<id>https://lukeshu.com/blog/what-im-working-on-fall-2014.html</id>
- <updated>2016-02-28T07:12:18-05:00</updated>
+ <updated>2014-09-11T00:00:00+00:00</updated>
<published>2014-09-11T00:00:00+00:00</published>
<title>What I'm working on (Fall 2014)</title>
- <content type="html">&lt;h1 id="what-im-working-on-fall-2014"&gt;What I'm working on (Fall 2014)&lt;/h1&gt;
-&lt;p&gt;I realized today that I haven't updated my log in a while, and I don't have any &amp;quot;finished&amp;quot; stuff to show off right now, but I should just talk about all the cool stuff I'm working on right now.&lt;/p&gt;
+ <content type="html">&lt;h1 id="what-im-working-on-fall-2014"&gt;What I’m working on (Fall 2014)&lt;/h1&gt;
+&lt;p&gt;I realized today that I haven’t updated my log in a while, and I don’t have any “finished” stuff to show off right now, but I should just talk about all the cool stuff I’m working on right now.&lt;/p&gt;
&lt;h2 id="static-parsing-of-subshells"&gt;Static parsing of subshells&lt;/h2&gt;
&lt;p&gt;Last year I wrote a shell (for my Systems Programming class); however, I went above-and-beyond and added some really novel features. In my opinion, the most significant is that it parses arbitrarily deep subshells in one pass, instead of deferring them until execution. No shell that I know of does this.&lt;/p&gt;
-&lt;p&gt;At first this sounds like a really difficult, but minor feature. Until you think about scripting, and maintenance of those scripts. Being able to do a full syntax check of a script is &lt;em&gt;crucial&lt;/em&gt; for long-term maintenance, yet it's something that is missing from every major shell. I'd love to get this code merged into bash. It would be incredibly useful for &lt;a href="/git/mirror/parabola/packages/libretools.git"&gt;some software I maintain&lt;/a&gt;.&lt;/p&gt;
-&lt;p&gt;Anyway, I'm trying to publish this code, but because of a recent kerfuffle with a student publishing all of his projects on the web (and other students trying to pass it off as their own), I'm being cautious with this and making sure Purdue is alright with what I'm putting online.&lt;/p&gt;
+&lt;p&gt;At first this sounds like a really difficult but minor feature. Until you think about scripting, and maintenance of those scripts. Being able to do a full syntax check of a script is &lt;em&gt;crucial&lt;/em&gt; for long-term maintenance, yet it’s something that is missing from every major shell. I’d love to get this code merged into bash. It would be incredibly useful for &lt;a href="/git/mirror/parabola/packages/libretools.git"&gt;some software I maintain&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;Anyway, I’m trying to publish this code, but because of a recent kerfuffle with a student publishing all of his projects on the web (and other students trying to pass it off as their own), I’m being cautious with this and making sure Purdue is alright with what I’m putting online.&lt;/p&gt;
&lt;h2 id="stateless-user-configuration-for-pamnss"&gt;&lt;a href="https://lukeshu.com/git/mirror/parabola/hackers.git/log/?h=lukeshu/restructure"&gt;Stateless user configuration for PAM/NSS&lt;/a&gt;&lt;/h2&gt;
-&lt;p&gt;Parabola GNU/Linux-libre users know that over this summer, we had a &lt;em&gt;mess&lt;/em&gt; with server outages. One of the servers is still out (due to things out of our control), and we don't have some of the data on it (because volunteer developers are terrible about back-ups, apparently).&lt;/p&gt;
+&lt;p&gt;Parabola GNU/Linux-libre users know that over this summer, we had a &lt;em&gt;mess&lt;/em&gt; with server outages. One of the servers is still out (due to things out of our control), and we don’t have some of the data on it (because volunteer developers are terrible about back-ups, apparently).&lt;/p&gt;
&lt;p&gt;This has caused us to look at how we manage our servers, back-ups, and several other things.&lt;/p&gt;
-&lt;p&gt;One thing that I've taken on as my pet project is making sure that if a server goes down, or we need to migrate (for example, Jon is telling us that he wants us to hurry up and switch to the new 64 bit hardware so he can turn off the 32 bit box), we can spin up a new server from scratch pretty easily. Part of that is making configurations stateless, and dynamic based on external data; having data be located in one place instead of duplicated across 12 config files and 3 databases... on the same box.&lt;/p&gt;
-&lt;p&gt;Right now, that's looking like some custom software interfacing with OpenLDAP and OpenSSH via sockets (OpenLDAP being a middle-man between us and PAM (Linux) and NSS (libc)). However, the OpenLDAP documentation is... inconsistent and frustrating. I might end up hacking up the LDAP modules for NSS and PAM to talk to our system directly, and cut OpenLDAP out of the picture. We'll see!&lt;/p&gt;
+&lt;p&gt;One thing that I’ve taken on as my pet project is making sure that if a server goes down, or we need to migrate (for example, Jon is telling us that he wants us to hurry up and switch to the new 64-bit hardware so he can turn off the 32-bit box), we can spin up a new server from scratch pretty easily. Part of that is making configurations stateless, and dynamic based on external data; having data be located in one place instead of duplicated across 12 config files and 3 databases… on the same box.&lt;/p&gt;
+&lt;p&gt;Right now, that’s looking like some custom software interfacing with OpenLDAP and OpenSSH via sockets (OpenLDAP being a middle-man between us and PAM (Linux) and NSS (libc)). However, the OpenLDAP documentation is… inconsistent and frustrating. I might end up hacking up the LDAP modules for NSS and PAM to talk to our system directly, and cut OpenLDAP out of the picture. We’ll see!&lt;/p&gt;
&lt;p&gt;PS: Pablo says that tomorrow we should be getting out-of-band access to the drive of the server that is down, so that we can finally restore those services on a different server.&lt;/p&gt;
&lt;h2 id="project-leaguer"&gt;&lt;a href="https://lukeshu.com/git/mirror/leaguer.git/"&gt;Project Leaguer&lt;/a&gt;&lt;/h2&gt;
-&lt;p&gt;Last year, some friends and I began writing some &amp;quot;eSports tournament management software&amp;quot;, primarily targeting League of Legends (though it has a module system that will allow it to support tons of different data sources). We mostly got it done last semester, but it had some rough spots and sharp edges we need to work out. Because we were all out of communication for the summer, we didn't work on it very much (but we did a little!). It's weird that I care about this, because I'm not a gamer. Huh, I guess coding with friends is just fun.&lt;/p&gt;
-&lt;p&gt;Anyway, this year, &lt;a href="https://github.com/AndrewMurrell"&gt;Andrew&lt;/a&gt;, &lt;a href="https://github.com/DavisLWebb"&gt;Davis&lt;/a&gt;, and I are planning to get it to a polished state by the end of the semester. We could probably do it faster, but we'd all also like to focus on classes and other projects a little more.&lt;/p&gt;
+&lt;p&gt;Last year, some friends and I began writing some “eSports tournament management software”, primarily targeting League of Legends (though it has a module system that will allow it to support tons of different data sources). We mostly got it done last semester, but it had some rough spots and sharp edges we need to work out. Because we were all out of communication for the summer, we didn’t work on it very much (but we did a little!). It’s weird that I care about this, because I’m not a gamer. Huh, I guess coding with friends is just fun.&lt;/p&gt;
+&lt;p&gt;Anyway, this year, &lt;a href="https://github.com/AndrewMurrell"&gt;Andrew&lt;/a&gt;, &lt;a href="https://github.com/DavisLWebb"&gt;Davis&lt;/a&gt;, and I are planning to get it to a polished state by the end of the semester. We could probably do it faster, but we’d all also like to focus on classes and other projects a little more.&lt;/p&gt;
&lt;h2 id="c1"&gt;C+=1&lt;/h2&gt;
-&lt;p&gt;People tend to lump C and C++ together, which upsets me, because I love C, but have a dislike for C++. That's not to say that C++ is entirely bad; it has some good features. My &amp;quot;favorite&amp;quot; code is actually code that is basically C, but takes advantage of a couple C++ features, while still being idiomatic C, not C++.&lt;/p&gt;
-&lt;p&gt;Anyway, with the perspective of history (what worked and what didn't), and a slightly opinionated view on language design (I'm pretty much a Rob Pike fan-boy), I thought I'd try to tackle &amp;quot;object-oriented C&amp;quot; with roughly the same design criteria as Stroustrup had when designing C++. I'm calling mine C+=1, for obvious reasons.&lt;/p&gt;
-&lt;p&gt;I haven't published anything yet, because calling it &amp;quot;working&amp;quot; would be stretching the truth. But I am using it for my assignments in CS 334 (Intro to Graphics), so it should move along fairly quickly, as my grade depends on it.&lt;/p&gt;
-&lt;p&gt;I'm not taking it too seriously; I don't expect it to be much more than a toy language, but it is an excuse to dive into the GCC internals.&lt;/p&gt;
-&lt;h2 id="projects-that-ive-put-on-the-back-burner"&gt;Projects that I've put on the back-burner&lt;/h2&gt;
-&lt;p&gt;I've got several other projects that I'm putting on hold for a while.&lt;/p&gt;
+&lt;p&gt;People tend to lump C and C++ together, which upsets me, because I love C, but have a dislike for C++. That’s not to say that C++ is entirely bad; it has some good features. My “favorite” code is actually code that is basically C, but takes advantage of a couple C++ features, while still being idiomatic C, not C++.&lt;/p&gt;
+&lt;p&gt;Anyway, with the perspective of history (what worked and what didn’t), and a slightly opinionated view on language design (I’m pretty much a Rob Pike fan-boy), I thought I’d try to tackle “object-oriented C” with roughly the same design criteria as Stroustrup had when designing C++. I’m calling mine C+=1, for obvious reasons.&lt;/p&gt;
+&lt;p&gt;I haven’t published anything yet, because calling it “working” would be stretching the truth. But I am using it for my assignments in CS 334 (Intro to Graphics), so it should move along fairly quickly, as my grade depends on it.&lt;/p&gt;
+&lt;p&gt;I’m not taking it too seriously; I don’t expect it to be much more than a toy language, but it is an excuse to dive into the GCC internals.&lt;/p&gt;
+&lt;h2 id="projects-that-ive-put-on-the-back-burner"&gt;Projects that I’ve put on the back-burner&lt;/h2&gt;
+&lt;p&gt;I’ve got several other projects that I’m putting on hold for a while.&lt;/p&gt;
&lt;ul&gt;
-&lt;li&gt;&lt;code&gt;maven-dist&lt;/code&gt; (was hosted with Parabola, apparently I haven't pushed it anywhere except the server that is down): A tool to build Apache Maven from source. That sounds easy, it's open source, right? Well, except that Maven is the build system from hell. It doesn't support cyclic dependencies, yet uses them internally to build itself. It &lt;em&gt;loves&lt;/em&gt; to just get binaries from Maven Central to &amp;quot;optimize&amp;quot; the build process. It depends on code that depends on compiler bugs that no longer exist (which I guess means that &lt;em&gt;no one&lt;/em&gt; has tried to build it from source after it was originally published). I've been working on-and-off on this for more than a year. My favorite part of it was writing a &lt;a href="/dump/jflex2jlex.sed.txt"&gt;sed script&lt;/a&gt; that translates a JFlex grammar specification into a JLex grammar, which is used to bootstrap JFlex; its both gross and delightful at the same time.&lt;/li&gt;
-&lt;li&gt;Integration between &lt;code&gt;dbscripts&lt;/code&gt; and &lt;code&gt;abslibre&lt;/code&gt;. If you search IRC logs, mailing lists, and ParabolaWiki, you can find numerous rants by me against &lt;a href="/git/mirror/parabola/dbscripts.git/tree/db-sync"&gt;&lt;code&gt;dbscripts:db-sync&lt;/code&gt;&lt;/a&gt;. I just hate the data-flow, it is almost designed to make things get out of sync, and broken. I mean, does &lt;a href="/dump/parabola-data-flow.svg"&gt;this&lt;/a&gt; look like a simple diagram? For contrast, &lt;a href="/dump/parabola-data-flow-xbs.svg"&gt;here's&lt;/a&gt; a rough (slightly incomplete) diagram of what I want to replace it with.&lt;/li&gt;
-&lt;li&gt;Git backend for MediaWiki (or, pulling out the rendering module of MediaWiki). I've made decent progress on that front, but there is &lt;em&gt;crazy&lt;/em&gt; de-normalization going on in the MediaWiki schema that makes this very difficult. I'm sure some of it is for historical reasons, and some of it for performance, but either way it is a mess for someone trying to neatly gut that part of the codebase.&lt;/li&gt;
+&lt;li&gt;&lt;code&gt;maven-dist&lt;/code&gt; (was hosted with Parabola, apparently I haven’t pushed it anywhere except the server that is down): A tool to build Apache Maven from source. That sounds easy, it’s open source, right? Well, except that Maven is the build system from hell. It doesn’t support cyclic dependencies, yet uses them internally to build itself. It &lt;em&gt;loves&lt;/em&gt; to just get binaries from Maven Central to “optimize” the build process. It depends on code that depends on compiler bugs that no longer exist (which I guess means that &lt;em&gt;no one&lt;/em&gt; has tried to build it from source after it was originally published). I’ve been working on-and-off on this for more than a year. My favorite part of it was writing a &lt;a href="/dump/jflex2jlex.sed.txt"&gt;sed script&lt;/a&gt; that translates a JFlex grammar specification into a JLex grammar, which is used to bootstrap JFlex; it’s both gross and delightful at the same time.&lt;/li&gt;
+&lt;li&gt;Integration between &lt;code&gt;dbscripts&lt;/code&gt; and &lt;code&gt;abslibre&lt;/code&gt;. If you search IRC logs, mailing lists, and ParabolaWiki, you can find numerous rants by me against &lt;a href="/git/mirror/parabola/dbscripts.git/tree/db-sync"&gt;&lt;code&gt;dbscripts:db-sync&lt;/code&gt;&lt;/a&gt;. I just hate the data-flow, it is almost designed to make things get out of sync, and broken. I mean, does &lt;a href="/dump/parabola-data-flow.svg"&gt;this&lt;/a&gt; look like a simple diagram? For contrast, &lt;a href="/dump/parabola-data-flow-xbs.svg"&gt;here’s&lt;/a&gt; a rough (slightly incomplete) diagram of what I want to replace it with.&lt;/li&gt;
+&lt;li&gt;Git backend for MediaWiki (or, pulling out the rendering module of MediaWiki). I’ve made decent progress on that front, but there is &lt;em&gt;crazy&lt;/em&gt; de-normalization going on in the MediaWiki schema that makes this very difficult. I’m sure some of it is for historical reasons, and some of it for performance, but either way it is a mess for someone trying to neatly gut that part of the codebase.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="other"&gt;Other&lt;/h2&gt;
-&lt;p&gt;I should consider doing a write-up of deterministic-&lt;code&gt;tar&lt;/code&gt; behavior (something that I've been implementing in Parabola for a while, meanwhile the Debian people have also been working on it).&lt;/p&gt;
-&lt;p&gt;I should also consider doing a &amp;quot;post-mortem&amp;quot; of &lt;a href="https://lukeshu.com/git/mirror/parabola/packages/pbs-tools.git/"&gt;PBS&lt;/a&gt;, which never actually got used, but launched XBS (part of the &lt;code&gt;dbscripts&lt;/code&gt;/&lt;code&gt;abslibre&lt;/code&gt; integration mentioned above), as well as serving as a good test-bed for features that did get implemented.&lt;/p&gt;
-&lt;p&gt;I over-use the word &amp;quot;anyway.&amp;quot;&lt;/p&gt;
+&lt;p&gt;I should consider doing a write-up of deterministic-&lt;code&gt;tar&lt;/code&gt; behavior (something that I’ve been implementing in Parabola for a while, meanwhile the Debian people have also been working on it).&lt;/p&gt;
+&lt;p&gt;I should also consider doing a “post-mortem” of &lt;a href="https://lukeshu.com/git/mirror/parabola/packages/pbs-tools.git/"&gt;PBS&lt;/a&gt;, which never actually got used, but launched XBS (part of the &lt;code&gt;dbscripts&lt;/code&gt;/&lt;code&gt;abslibre&lt;/code&gt; integration mentioned above), as well as serving as a good test-bed for features that did get implemented.&lt;/p&gt;
+&lt;p&gt;I over-use the word “anyway.”&lt;/p&gt;
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2014 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -619,13 +681,13 @@ memoized = $(if $($(_hash)),,$(eval $(_hash) := _ $(_main)))$(call rest,$($(_has
<link rel="alternate" type="text/html" href="./rails-improvements.html"/>
<link rel="alternate" type="text/markdown" href="./rails-improvements.md"/>
<id>https://lukeshu.com/blog/rails-improvements.html</id>
- <updated>2016-02-28T07:12:18-05:00</updated>
+ <updated>2014-05-08T00:00:00+00:00</updated>
<published>2014-05-08T00:00:00+00:00</published>
<title>Miscellaneous ways to improve your Rails experience</title>
<content type="html">&lt;h1 id="miscellaneous-ways-to-improve-your-rails-experience"&gt;Miscellaneous ways to improve your Rails experience&lt;/h1&gt;
-&lt;p&gt;Recently, I've been working on &lt;a href="https://github.com/LukeShu/leaguer"&gt;a Rails web application&lt;/a&gt;, that's really the baby of a friend of mine. Anyway, through its development, I've come up with a couple things that should make your interactions with Rails more pleasant.&lt;/p&gt;
+&lt;p&gt;Recently, I’ve been working on &lt;a href="https://github.com/LukeShu/leaguer"&gt;a Rails web application&lt;/a&gt;, that’s really the baby of a friend of mine. Anyway, through its development, I’ve come up with a couple things that should make your interactions with Rails more pleasant.&lt;/p&gt;
&lt;h2 id="auto-reload-classes-from-other-directories-than-app"&gt;Auto-(re)load classes from other directories than &lt;code&gt;app/&lt;/code&gt;&lt;/h2&gt;
-&lt;p&gt;The development server automatically loads and reloads files from the &lt;code&gt;app/&lt;/code&gt; directory, which is extremely nice. However, most web applications are going to involve modules that aren't in that directory; and editing those files requires re-starting the server for the changes to take effect.&lt;/p&gt;
+&lt;p&gt;The development server automatically loads and reloads files from the &lt;code&gt;app/&lt;/code&gt; directory, which is extremely nice. However, most web applications are going to involve modules that aren’t in that directory; and editing those files requires re-starting the server for the changes to take effect.&lt;/p&gt;
&lt;p&gt;Adding the following lines to your &lt;a href="https://github.com/LukeShu/leaguer/blob/c846cd71411ec3373a5229cacafe0df6b3673543/config/application.rb#L15"&gt;&lt;code&gt;config/application.rb&lt;/code&gt;&lt;/a&gt; will allow it to automatically load and reload files from the &lt;code&gt;lib/&lt;/code&gt; directory. You can of course change this to whichever directory/ies you like.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;module YourApp
class Application &amp;lt; Rails::Application
@@ -669,7 +731,7 @@ module ActionView
end
end
end&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;I'll probably update this page as I tweak other things I don't like.&lt;/p&gt;
+&lt;p&gt;I’ll probably update this page as I tweak other things I don’t like.&lt;/p&gt;
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2014 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -680,12 +742,12 @@ end&lt;/code&gt;&lt;/pre&gt;
<link rel="alternate" type="text/html" href="./bash-redirection.html"/>
<link rel="alternate" type="text/markdown" href="./bash-redirection.md"/>
<id>https://lukeshu.com/blog/bash-redirection.html</id>
- <updated>2014-05-08T14:36:47-04:00</updated>
+ <updated>2014-02-13T00:00:00+00:00</updated>
<published>2014-02-13T00:00:00+00:00</published>
<title>Bash redirection</title>
<content type="html">&lt;h1 id="bash-redirection"&gt;Bash redirection&lt;/h1&gt;
-&lt;p&gt;Apparently, too many people don't understand Bash redirection. They might get the basic syntax, but they think of the process as declarative; in Bourne-ish shells, it is procedural.&lt;/p&gt;
-&lt;p&gt;In Bash, streams are handled in terms of &amp;quot;file descriptors&amp;quot; of &amp;quot;FDs&amp;quot;. FD 0 is stdin, FD 1 is stdout, and FD 2 is stderr. The equivalence (or lack thereof) between using a numeric file descriptor, and using the associated file in &lt;code&gt;/dev/*&lt;/code&gt; and &lt;code&gt;/proc/*&lt;/code&gt; is interesting, but beyond the scope of this article.&lt;/p&gt;
+&lt;p&gt;Apparently, too many people don’t understand Bash redirection. They might get the basic syntax, but they think of the process as declarative; in Bourne-ish shells, it is procedural.&lt;/p&gt;
+&lt;p&gt;In Bash, streams are handled in terms of “file descriptors” or “FDs”. FD 0 is stdin, FD 1 is stdout, and FD 2 is stderr. The equivalence (or lack thereof) between using a numeric file descriptor, and using the associated file in &lt;code&gt;/dev/*&lt;/code&gt; and &lt;code&gt;/proc/*&lt;/code&gt; is interesting, but beyond the scope of this article.&lt;/p&gt;
&lt;h2 id="step-1-pipes"&gt;Step 1: Pipes&lt;/h2&gt;
&lt;p&gt;To quote the Bash manual:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;A &amp;#39;pipeline&amp;#39; is a sequence of simple commands separated by one of the
@@ -693,7 +755,7 @@ control operators &amp;#39;|&amp;#39; or &amp;#39;|&amp;amp;&amp;#39;.
The format for a pipeline is
[time [-p]] [!] COMMAND1 [ [| or |&amp;amp;] COMMAND2 ...]&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Now, &lt;code&gt;|&amp;amp;&lt;/code&gt; is just shorthand for &lt;code&gt;2&amp;gt;&amp;amp;1 |&lt;/code&gt;, the pipe part happens here, but the &lt;code&gt;2&amp;gt;&amp;amp;1&lt;/code&gt; part doesn't happen until step 2.&lt;/p&gt;
+&lt;p&gt;Now, &lt;code&gt;|&amp;amp;&lt;/code&gt; is just shorthand for &lt;code&gt;2&amp;gt;&amp;amp;1 |&lt;/code&gt;, the pipe part happens here, but the &lt;code&gt;2&amp;gt;&amp;amp;1&lt;/code&gt; part doesn’t happen until step 2.&lt;/p&gt;
&lt;p&gt;First, if the command is part of a pipeline, the pipes are set up. For every instance of the &lt;code&gt;|&lt;/code&gt; metacharacter, Bash creates a pipe (&lt;code&gt;pipe(3)&lt;/code&gt;), and duplicates (&lt;code&gt;dup2(3)&lt;/code&gt;) the write end of the pipe to FD 1 of the process on the left side of the &lt;code&gt;|&lt;/code&gt;, and duplicates the read end of the pipe to FD 0 of the process on the right side.&lt;/p&gt;
&lt;h2 id="step-2-redirections"&gt;Step 2: Redirections&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;After&lt;/em&gt; the initial FD 0 and FD 1 fiddling by pipes is done, Bash looks at the redirections. &lt;strong&gt;This means that redirections can override pipes.&lt;/strong&gt;&lt;/p&gt;
@@ -710,25 +772,25 @@ cmd &amp;gt;file 2&amp;gt;&amp;amp;1 # both stdout and stderr go to file&lt;/cod
<link rel="alternate" type="text/html" href="./java-segfault.html"/>
<link rel="alternate" type="text/markdown" href="./java-segfault.md"/>
<id>https://lukeshu.com/blog/java-segfault.html</id>
- <updated>2016-02-28T07:12:18-05:00</updated>
+ <updated>2014-01-13T00:00:00+00:00</updated>
<published>2014-01-13T00:00:00+00:00</published>
<title>My favorite bug: segfaults in Java</title>
<content type="html">&lt;h1 id="my-favorite-bug-segfaults-in-java"&gt;My favorite bug: segfaults in Java&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;Update: Two years later, I wrote a more detailed version of this article: &lt;a href="./java-segfault-redux.html"&gt;My favorite bug: segfaults in Java (redux)&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
-&lt;p&gt;I've told this story orally a number of times, but realized that I have never written it down. This is my favorite bug story; it might not be my hardest bug, but it is the one I most like to tell.&lt;/p&gt;
+&lt;p&gt;I’ve told this story orally a number of times, but realized that I have never written it down. This is my favorite bug story; it might not be my hardest bug, but it is the one I most like to tell.&lt;/p&gt;
&lt;h2 id="the-context"&gt;The context&lt;/h2&gt;
-&lt;p&gt;In 2012, I was a Senior programmer on the FIRST Robotics Competition team 1024. For the unfamiliar, the relevant part of the setup is that there are 2 minute and 15 second matches in which you have a 120 pound robot that sometimes runs autonomously, and sometimes is controlled over WiFi from a person at a laptop running stock &amp;quot;driver station&amp;quot; software and modifiable &amp;quot;dashboard&amp;quot; software.&lt;/p&gt;
+&lt;p&gt;In 2012, I was a Senior programmer on the FIRST Robotics Competition team 1024. For the unfamiliar, the relevant part of the setup is that there are 2 minute and 15 second matches in which you have a 120 pound robot that sometimes runs autonomously, and sometimes is controlled over WiFi from a person at a laptop running stock “driver station” software and modifiable “dashboard” software.&lt;/p&gt;
&lt;p&gt;That year, we mostly used the dashboard software to allow the human driver and operator to monitor sensors on the robot, one of them being a video feed from a web-cam mounted on it. This was really easy because the new standard dashboard program had a click-and-drag interface to add stock widgets; you just had to make sure the code on the robot was actually sending the data.&lt;/p&gt;
-&lt;p&gt;That's great, until when debugging things, the dashboard would suddenly vanish. If it was run manually from a terminal (instead of letting the driver station software launch it), you would see a core dump indicating a segmentation fault.&lt;/p&gt;
-&lt;p&gt;This wasn't just us either; I spoke with people on other teams, everyone who was streaming video had this issue. But, because it only happened every couple of minutes, and a match is only 2:15, it didn't need to run very long, they just crossed their fingers and hoped it didn't happen during a match.&lt;/p&gt;
+&lt;p&gt;That’s great, until when debugging things, the dashboard would suddenly vanish. If it was run manually from a terminal (instead of letting the driver station software launch it), you would see a core dump indicating a segmentation fault.&lt;/p&gt;
+&lt;p&gt;This wasn’t just us either; I spoke with people on other teams, everyone who was streaming video had this issue. But, because it only happened every couple of minutes, and a match is only 2:15, it didn’t need to run very long, they just crossed their fingers and hoped it didn’t happen during a match.&lt;/p&gt;
&lt;p&gt;The dashboard was written in Java, and the source was available (under a 3-clause BSD license), so I dove in, hunting for the bug. Now, the program did use Java Native Interface to talk to OpenCV, which the video ran through; so I figured that it must be a bug in the C/C++ code that was being called. It was especially a pain to track down the pointers that were causing the issue, because it was hard with native debuggers to see through all of the JVM stuff to the OpenCV code, and the OpenCV stuff is opaque to Java debuggers.&lt;/p&gt;
-&lt;p&gt;Eventually the issue lead me back into the Java code--there was a native pointer being stored in a Java variable; Java code called the native routine to &lt;code&gt;free()&lt;/code&gt; the structure, but then tried to feed it to another routine later. This lead to difficulty again--tracking objects with Java debuggers was hard because they don't expect the program to suddenly segfault; it's Java code, Java doesn't segfault, it throws exceptions!&lt;/p&gt;
-&lt;p&gt;With the help of &lt;code&gt;println()&lt;/code&gt; I was eventually able to see that some code was executing in an order that straight didn't make sense.&lt;/p&gt;
+&lt;p&gt;Eventually the issue led me back into the Java code—there was a native pointer being stored in a Java variable; Java code called the native routine to &lt;code&gt;free()&lt;/code&gt; the structure, but then tried to feed it to another routine later. This led to difficulty again—tracking objects with Java debuggers was hard because they don’t expect the program to suddenly segfault; it’s Java code, Java doesn’t segfault, it throws exceptions!&lt;/p&gt;
+&lt;p&gt;With the help of &lt;code&gt;println()&lt;/code&gt; I was eventually able to see that some code was executing in an order that straight didn’t make sense.&lt;/p&gt;
&lt;h2 id="the-bug"&gt;The bug&lt;/h2&gt;
&lt;p&gt;The issue was that Java was making an unsafe optimization (I never bothered to figure out if it is the compiler or the JVM making the mistake, I was satisfied once I had a work-around).&lt;/p&gt;
-&lt;p&gt;Java was doing something similar to tail-call optimization with regard to garbage collection. You see, if it is waiting for the return value of a method &lt;code&gt;m()&lt;/code&gt; of object &lt;code&gt;o&lt;/code&gt;, and code in &lt;code&gt;m()&lt;/code&gt; that is yet to be executed doesn't access any other methods or properties of &lt;code&gt;o&lt;/code&gt;, then it will go ahead and consider &lt;code&gt;o&lt;/code&gt; eligible for garbage collection before &lt;code&gt;m()&lt;/code&gt; has finished running.&lt;/p&gt;
+&lt;p&gt;Java was doing something similar to tail-call optimization with regard to garbage collection. You see, if it is waiting for the return value of a method &lt;code&gt;m()&lt;/code&gt; of object &lt;code&gt;o&lt;/code&gt;, and code in &lt;code&gt;m()&lt;/code&gt; that is yet to be executed doesn’t access any other methods or properties of &lt;code&gt;o&lt;/code&gt;, then it will go ahead and consider &lt;code&gt;o&lt;/code&gt; eligible for garbage collection before &lt;code&gt;m()&lt;/code&gt; has finished running.&lt;/p&gt;
&lt;p&gt;That is normally a safe optimization to make… except for when a destructor method (&lt;code&gt;finalize()&lt;/code&gt;) is defined for the object; the destructor can have side effects, and Java has no way to know whether it is safe for them to happen before &lt;code&gt;m()&lt;/code&gt; has finished running.&lt;/p&gt;
&lt;h2 id="the-work-around"&gt;The work-around&lt;/h2&gt;
&lt;p&gt;The routine that the segmentation fault was occurring in was something like:&lt;/p&gt;
@@ -738,7 +800,7 @@ cmd &amp;gt;file 2&amp;gt;&amp;amp;1 # both stdout and stderr go to file&lt;/cod
// `this` may now be garbage collected
return child.somethingElse(var); // segfault comes here
}&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Where the destructor method of &lt;code&gt;this&lt;/code&gt; calls a method that will &lt;code&gt;free()&lt;/code&gt; native memory that is also accessed by &lt;code&gt;child&lt;/code&gt;; if &lt;code&gt;this&lt;/code&gt; is garbage collected before &lt;code&gt;child.somethingElse()&lt;/code&gt; runs, the backing native code will try to access memory that has been &lt;code&gt;free()&lt;/code&gt;ed, and receive a segmentation fault. That usually didn't happen, as the routines were pretty fast. However, running 30 times a second, eventually bad luck with the garbage collector happens, and the program crashes.&lt;/p&gt;
+&lt;p&gt;Where the destructor method of &lt;code&gt;this&lt;/code&gt; calls a method that will &lt;code&gt;free()&lt;/code&gt; native memory that is also accessed by &lt;code&gt;child&lt;/code&gt;; if &lt;code&gt;this&lt;/code&gt; is garbage collected before &lt;code&gt;child.somethingElse()&lt;/code&gt; runs, the backing native code will try to access memory that has been &lt;code&gt;free()&lt;/code&gt;ed, and receive a segmentation fault. That usually didn’t happen, as the routines were pretty fast. However, running 30 times a second, eventually bad luck with the garbage collector happens, and the program crashes.&lt;/p&gt;
&lt;p&gt;The work-around was to insert a bogus call to this to keep &lt;code&gt;this&lt;/code&gt; around until after we were also done with &lt;code&gt;child&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public type1 getFrame() {
type2 child = this.getChild();
@@ -747,7 +809,7 @@ cmd &amp;gt;file 2&amp;gt;&amp;amp;1 # both stdout and stderr go to file&lt;/cod
this.getSize(); // bogus call to keep `this` around
return ret;
}&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Yeah. After spending weeks wading through though thousands of lines of Java, C, and C++, a bogus call to a method I didn't care about was the fix.&lt;/p&gt;
+&lt;p&gt;Yeah. After spending weeks wading through thousands of lines of Java, C, and C++, a bogus call to a method I didn’t care about was the fix.&lt;/p&gt;
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2014 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -758,28 +820,28 @@ cmd &amp;gt;file 2&amp;gt;&amp;amp;1 # both stdout and stderr go to file&lt;/cod
<link rel="alternate" type="text/html" href="./bash-arrays.html"/>
<link rel="alternate" type="text/markdown" href="./bash-arrays.md"/>
<id>https://lukeshu.com/blog/bash-arrays.html</id>
- <updated>2016-02-28T07:12:18-05:00</updated>
+ <updated>2013-10-13T00:00:00+00:00</updated>
<published>2013-10-13T00:00:00+00:00</published>
<title>Bash arrays</title>
<content type="html">&lt;h1 id="bash-arrays"&gt;Bash arrays&lt;/h1&gt;
-&lt;p&gt;Way too many people don't understand Bash arrays. Many of them argue that if you need arrays, you shouldn't be using Bash. If we reject the notion that one should never use Bash for scripting, then thinking you don't need Bash arrays is what I like to call &amp;quot;wrong&amp;quot;. I don't even mean real scripting; even these little stubs in &lt;code&gt;/usr/bin&lt;/code&gt;:&lt;/p&gt;
+&lt;p&gt;Way too many people don’t understand Bash arrays. Many of them argue that if you need arrays, you shouldn’t be using Bash. If we reject the notion that one should never use Bash for scripting, then thinking you don’t need Bash arrays is what I like to call “wrong”. I don’t even mean real scripting; even these little stubs in &lt;code&gt;/usr/bin&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/sh
java -jar /…/something.jar $* # WRONG!&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Command line arguments are exposed as an array, that little &lt;code&gt;$*&lt;/code&gt; is accessing it, and is doing the wrong thing (for the lazy, the correct thing is &lt;code&gt;-- &amp;quot;$@&amp;quot;&lt;/code&gt;). Arrays in Bash offer a safe way to preserve field separation.&lt;/p&gt;
-&lt;p&gt;One of the main sources of bugs (and security holes) in shell scripts is field separation. That's what arrays are about.&lt;/p&gt;
+&lt;p&gt;One of the main sources of bugs (and security holes) in shell scripts is field separation. That’s what arrays are about.&lt;/p&gt;
&lt;h2 id="what-field-separation"&gt;What? Field separation?&lt;/h2&gt;
-&lt;p&gt;Field separation is just splitting a larger unit into a list of &amp;quot;fields&amp;quot;. The most common case is when Bash splits a &amp;quot;simple command&amp;quot; (in the Bash manual's terminology) into a list of arguments. Understanding how this works is an important prerequisite to understanding arrays, and even why they are important.&lt;/p&gt;
-&lt;p&gt;Dealing with lists is something that is very common in Bash scripts; from dealing with lists of arguments, to lists of files; they pop up a lot, and each time, you need to think about how the list is separated. In the case of &lt;code&gt;$PATH&lt;/code&gt;, the list is separated by colons. In the case of &lt;code&gt;$CFLAGS&lt;/code&gt;, the list is separated by whitespace. In the case of actual arrays, it's easy, there's no special character to worry about, just quote it, and you're good to go.&lt;/p&gt;
+&lt;p&gt;Field separation is just splitting a larger unit into a list of “fields”. The most common case is when Bash splits a “simple command” (in the Bash manual’s terminology) into a list of arguments. Understanding how this works is an important prerequisite to understanding arrays, and even why they are important.&lt;/p&gt;
+&lt;p&gt;Dealing with lists is something that is very common in Bash scripts; from dealing with lists of arguments, to lists of files; they pop up a lot, and each time, you need to think about how the list is separated. In the case of &lt;code&gt;$PATH&lt;/code&gt;, the list is separated by colons. In the case of &lt;code&gt;$CFLAGS&lt;/code&gt;, the list is separated by whitespace. In the case of actual arrays, it’s easy, there’s no special character to worry about, just quote it, and you’re good to go.&lt;/p&gt;
&lt;h2 id="bash-word-splitting"&gt;Bash word splitting&lt;/h2&gt;
-&lt;p&gt;When Bash reads a &amp;quot;simple command&amp;quot;, it splits the whole thing into a list of &amp;quot;words&amp;quot;. &amp;quot;The first word specifies the command to be executed, and is passed as argument zero. The remaining words are passed as arguments to the invoked command.&amp;quot; (to quote &lt;code&gt;bash(1)&lt;/code&gt;)&lt;/p&gt;
+&lt;p&gt;When Bash reads a “simple command”, it splits the whole thing into a list of “words”. “The first word specifies the command to be executed, and is passed as argument zero. The remaining words are passed as arguments to the invoked command.” (to quote &lt;code&gt;bash(1)&lt;/code&gt;)&lt;/p&gt;
&lt;p&gt;It is often hard for those unfamiliar with Bash to understand when something is multiple words, and when it is a single word that just contains a space or newline. To help gain an intuitive understanding, I recommend using the following command to print a bullet list of words, to see how Bash splits them up:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;printf ' -&gt; %s\n' &lt;var&gt;words…&lt;/var&gt;&lt;hr&gt; -&amp;gt; word one
-&amp;gt; multiline
word
-&amp;gt; third word
&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;In a simple command, in absence of quoting, Bash separates the &amp;quot;raw&amp;quot; input into words by splitting on spaces and tabs. In other places, such as when expanding a variable, it uses the same process, but splits on the characters in the &lt;code&gt;$IFS&lt;/code&gt; variable (which has the default value of space/tab/newline). This process is, creatively enough, called &amp;quot;word splitting&amp;quot;.&lt;/p&gt;
-&lt;p&gt;In most discussions of Bash arrays, one of the frequent criticisms is all the footnotes and &amp;quot;gotchas&amp;quot; about when to quote things. That's because they usually don't set the context of word splitting. &lt;strong&gt;Double quotes (&lt;code&gt;&amp;quot;&lt;/code&gt;) inhibit Bash from doing word splitting.&lt;/strong&gt; That's it, that's all they do. Arrays are already split into words; without wrapping them in double quotes Bash re-word splits them, which is almost &lt;em&gt;never&lt;/em&gt; what you want; otherwise, you wouldn't be working with an array.&lt;/p&gt;
+&lt;p&gt;In a simple command, in absence of quoting, Bash separates the “raw” input into words by splitting on spaces and tabs. In other places, such as when expanding a variable, it uses the same process, but splits on the characters in the &lt;code&gt;$IFS&lt;/code&gt; variable (which has the default value of space/tab/newline). This process is, creatively enough, called “word splitting”.&lt;/p&gt;
+&lt;p&gt;In most discussions of Bash arrays, one of the frequent criticisms is all the footnotes and “gotchas” about when to quote things. That’s because they usually don’t set the context of word splitting. &lt;strong&gt;Double quotes (&lt;code&gt;&amp;quot;&lt;/code&gt;) inhibit Bash from doing word splitting.&lt;/strong&gt; That’s it, that’s all they do. Arrays are already split into words; without wrapping them in double quotes Bash re-word splits them, which is almost &lt;em&gt;never&lt;/em&gt; what you want; otherwise, you wouldn’t be working with an array.&lt;/p&gt;
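A quick way to see word splitting and the effect of double quotes for yourself (an illustrative sketch of my own, using `set --` purely to count the resulting words):

```shell
#!/usr/bin/env bash
# Sketch: double quotes inhibit word splitting; that's all they do.

msg='multiline
word'                       # a single value containing a newline

set -- $msg                 # unquoted: re-split on $IFS (space/tab/newline)
unquoted_words=$#           # -> 2 words

set -- "$msg"               # quoted: no word splitting
quoted_words=$#             # -> 1 word, newline intact

echo "unquoted: $unquoted_words, quoted: $quoted_words"
```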
&lt;h2 id="normal-array-syntax"&gt;Normal array syntax&lt;/h2&gt;
&lt;table&gt;
&lt;caption&gt;
@@ -826,7 +888,7 @@ word
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
-&lt;p&gt;It's really that simple—that covers most usages of arrays, and most of the mistakes made with them.&lt;/p&gt;
+&lt;p&gt;It’s really that simple—that covers most usages of arrays, and most of the mistakes made with them.&lt;/p&gt;
&lt;p&gt;To help you understand the difference between &lt;code&gt;@&lt;/code&gt; and &lt;code&gt;*&lt;/code&gt;, here is a sample of each:&lt;/p&gt;
&lt;table&gt;
&lt;tbody&gt;
@@ -917,8 +979,8 @@ done&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="argument-array-syntax"&gt;Argument array syntax&lt;/h2&gt;
-&lt;p&gt;Accessing the arguments is mostly that simple, but that array doesn't actually have a variable name. It's special. Instead, it is exposed through a series of special variables (normal variables can only start with letters and underscore), that &lt;em&gt;mostly&lt;/em&gt; match up with the normal array syntax.&lt;/p&gt;
-&lt;p&gt;Setting the arguments array, on the other hand, is pretty different. That's fine, because setting the arguments array is less useful anyway.&lt;/p&gt;
+&lt;p&gt;Accessing the arguments is mostly that simple, but that array doesn’t actually have a variable name. It’s special. Instead, it is exposed through a series of special variables (normal variables can only start with letters and underscore), that &lt;em&gt;mostly&lt;/em&gt; match up with the normal array syntax.&lt;/p&gt;
+&lt;p&gt;Setting the arguments array, on the other hand, is pretty different. That’s fine, because setting the arguments array is less useful anyway.&lt;/p&gt;
&lt;table&gt;
&lt;caption&gt;
&lt;h1&gt;Accessing the arguments array&lt;/h1&gt;
@@ -956,7 +1018,7 @@ done&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;array=("${array[0]}" "${array[@]:&lt;var&gt;n+1&lt;/var&gt;}")&lt;/code&gt;&lt;/td&gt;&lt;td&gt;&lt;code&gt;shift &lt;var&gt;n&lt;/var&gt;&lt;/code&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
-&lt;p&gt;Did you notice what was inconsistent? The variables &lt;code&gt;$*&lt;/code&gt;, &lt;code&gt;$@&lt;/code&gt;, and &lt;code&gt;$#&lt;/code&gt; behave like the &lt;var&gt;n&lt;/var&gt;=0 entry doesn't exist.&lt;/p&gt;
+&lt;p&gt;Did you notice what was inconsistent? The variables &lt;code&gt;$*&lt;/code&gt;, &lt;code&gt;$@&lt;/code&gt;, and &lt;code&gt;$#&lt;/code&gt; behave like the &lt;var&gt;n&lt;/var&gt;=0 entry doesn’t exist.&lt;/p&gt;
&lt;table&gt;
&lt;caption&gt;
&lt;h1&gt;Inconsistencies&lt;/h1&gt;
@@ -985,11 +1047,11 @@ done&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
-&lt;p&gt;These make sense because argument 0 is the name of the script—we almost never want that when parsing arguments. You'd spend more code getting the values that it currently gives you.&lt;/p&gt;
+&lt;p&gt;These make sense because argument 0 is the name of the script—we almost never want that when parsing arguments. You’d spend more code getting the values that it currently gives you.&lt;/p&gt;
&lt;p&gt;Now, for an explanation of setting the arguments array. You cannot set argument &lt;var&gt;n&lt;/var&gt;=0. The &lt;code&gt;set&lt;/code&gt; command is used to manipulate the arguments passed to Bash after the fact—similarly, you could use &lt;code&gt;set -x&lt;/code&gt; to make Bash behave like you ran it as &lt;code&gt;bash -x&lt;/code&gt;; like most GNU programs, the &lt;code&gt;--&lt;/code&gt; tells it to not parse any of the options as flags. The &lt;code&gt;shift&lt;/code&gt; command shifts each entry &lt;var&gt;n&lt;/var&gt; spots to the left, using &lt;var&gt;n&lt;/var&gt;=1 if no value is specified; and leaving argument 0 alone.&lt;/p&gt;
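The `set --` and `shift` behavior described above can be sketched as follows (the argument values are hypothetical):

```shell
#!/usr/bin/env bash
# Sketch: manipulating the arguments array with `set --` and `shift`.

set -- alpha "beta gamma" delta   # replaces $1..$3; argument 0 is untouched
count_before=$#                   # 3

shift                             # same as `shift 1`: drops the old $1
first_after_shift=$1              # now "beta gamma"

shift 2                           # drops two more entries
count_after=$#                    # 0

echo "$count_before / $first_after_shift / $count_after"
```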
-&lt;h2 id="but-you-mentioned-gotchas-about-quoting"&gt;But you mentioned &amp;quot;gotchas&amp;quot; about quoting!&lt;/h2&gt;
-&lt;p&gt;But I explained that quoting simply inhibits word splitting, which you pretty much never want when working with arrays. If, for some odd reason, you do what word splitting, then that's when you don't quote. Simple, easy to understand.&lt;/p&gt;
-&lt;p&gt;I think possibly the only case where you do want word splitting with an array is when you didn't want an array, but it's what you get (arguments are, by necessity, an array). For example:&lt;/p&gt;
+&lt;h2 id="but-you-mentioned-gotchas-about-quoting"&gt;But you mentioned “gotchas” about quoting!&lt;/h2&gt;
+&lt;p&gt;But I explained that quoting simply inhibits word splitting, which you pretty much never want when working with arrays. If, for some odd reason, you do want word splitting, then that’s when you don’t quote. Simple, easy to understand.&lt;/p&gt;
+&lt;p&gt;I think possibly the only case where you do want word splitting with an array is when you didn’t want an array, but it’s what you get (arguments are, by necessity, an array). For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Usage: path_ls PATH1 PATH2…
# Description:
# Takes any number of PATH-style values; that is,
@@ -1005,13 +1067,13 @@ path_ls() {
find -L &amp;quot;${dirs[@]}&amp;quot; -maxdepth 1 -type f -executable \
-printf &amp;#39;%f\n&amp;#39; 2&amp;gt;/dev/null | sort -u
}&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Logically, there shouldn't be multiple arguments, just a single &lt;code&gt;$PATH&lt;/code&gt; value; but, we can't enforce that, as the array can have any size. So, we do the robust thing, and just act on the entire array, not really caring about the fact that it is an array. Alas, there is still a field-separation bug in the program, with the output.&lt;/p&gt;
-&lt;h2 id="i-still-dont-think-i-need-arrays-in-my-scripts"&gt;I still don't think I need arrays in my scripts&lt;/h2&gt;
+&lt;p&gt;Logically, there shouldn’t be multiple arguments, just a single &lt;code&gt;$PATH&lt;/code&gt; value; but, we can’t enforce that, as the array can have any size. So, we do the robust thing, and just act on the entire array, not really caring about the fact that it is an array. Alas, there is still a field-separation bug in the program’s output.&lt;/p&gt;
+&lt;h2 id="i-still-dont-think-i-need-arrays-in-my-scripts"&gt;I still don’t think I need arrays in my scripts&lt;/h2&gt;
&lt;p&gt;Consider the common code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ARGS=&amp;#39; -f -q&amp;#39;
command $ARGS # unquoted variables are a bad code-smell anyway&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Here, &lt;code&gt;$ARGS&lt;/code&gt; is field-separated by &lt;code&gt;$IFS&lt;/code&gt;, which we are assuming has the default value. This is fine, as long as &lt;code&gt;$ARGS&lt;/code&gt; is known to never need an embedded space; which you do as long as it isn't based on anything outside of the program. But wait until you want to do this:&lt;/p&gt;
+&lt;p&gt;Here, &lt;code&gt;$ARGS&lt;/code&gt; is field-separated by &lt;code&gt;$IFS&lt;/code&gt;, which we are assuming has the default value. This is fine, as long as &lt;code&gt;$ARGS&lt;/code&gt; is known to never need an embedded space; which you can guarantee as long as it isn’t based on anything outside of the program. But wait until you want to do this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ARGS=&amp;#39; -f -q&amp;#39;
if [[ -f &amp;quot;$filename&amp;quot; ]]; then
@@ -1019,7 +1081,7 @@ if [[ -f &amp;quot;$filename&amp;quot; ]]; then
fi
command $ARGS&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;Now you're hosed if &lt;code&gt;$filename&lt;/code&gt; contains a space! More than just breaking, it could have unwanted side effects, such as when someone figures out how to make &lt;code&gt;filename='foo --dangerous-flag'&lt;/code&gt;.&lt;/p&gt;
+&lt;p&gt;Now you’re hosed if &lt;code&gt;$filename&lt;/code&gt; contains a space! More than just breaking, it could have unwanted side effects, such as when someone figures out how to make &lt;code&gt;filename='foo --dangerous-flag'&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Compare that with the array version:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ARGS=(-f -q)
@@ -1032,7 +1094,7 @@ command &amp;quot;${ARGS[@]}&amp;quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Except for the little stubs that call another program with &lt;code&gt;&amp;quot;$@&amp;quot;&lt;/code&gt; at the end, trying to write for multiple shells (including the ambiguous &lt;code&gt;/bin/sh&lt;/code&gt;) is not a task for mere mortals. If you do try that, your best bet is probably sticking to POSIX. Arrays are not POSIX; except for the arguments array, which is; though getting subset arrays from &lt;code&gt;$@&lt;/code&gt; and &lt;code&gt;$*&lt;/code&gt; is not (tip: use &lt;code&gt;set --&lt;/code&gt; to re-purpose the arguments array).&lt;/p&gt;
&lt;p&gt;Writing for various versions of Bash, though, is pretty do-able. Everything here works all the way back in bash-2.0 (December 1996), with the following exceptions:&lt;/p&gt;
&lt;ul&gt;
-&lt;li&gt;&lt;p&gt;The &lt;code&gt;+=&lt;/code&gt; operator wasn't added until Bash 3.1.&lt;/p&gt;
+&lt;li&gt;&lt;p&gt;The &lt;code&gt;+=&lt;/code&gt; operator wasn’t added until Bash 3.1.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;As a work-around, use &lt;code&gt;array[${#array[*]}]=&lt;var&gt;word&lt;/var&gt;&lt;/code&gt; to append a single element.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
@@ -1043,7 +1105,7 @@ command &amp;quot;${ARGS[@]}&amp;quot;&lt;/code&gt;&lt;/pre&gt;
&lt;li&gt;In Bash 4.1 and higher, it works in the way described in the main part of this document.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
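The pre-3.1 append work-around from the list above, sketched out (illustrative only; it assumes the array has no gaps in its indices):

```shell
#!/usr/bin/env bash
# Sketch: appending to an array without `+=` (for bash older than 3.1).

array=(a b)
array[${#array[*]}]=c   # ${#array[*]} is the length (2), so this sets index 2

# On bash >= 3.1 the same thing is just:  array+=(c)

echo "length=${#array[@]} last=${array[2]}"
```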
-&lt;p&gt;Now, Bash 1.x doesn't have arrays at all. &lt;code&gt;$@&lt;/code&gt; and &lt;code&gt;$*&lt;/code&gt; work, but using &lt;code&gt;:&lt;/code&gt; to select a range of elements from them doesn't. Good thing most boxes have been updated since 1996!&lt;/p&gt;
+&lt;p&gt;Now, Bash 1.x doesn’t have arrays at all. &lt;code&gt;$@&lt;/code&gt; and &lt;code&gt;$*&lt;/code&gt; work, but using &lt;code&gt;:&lt;/code&gt; to select a range of elements from them doesn’t. Good thing most boxes have been updated since 1996!&lt;/p&gt;
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2013 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -1054,14 +1116,14 @@ command &amp;quot;${ARGS[@]}&amp;quot;&lt;/code&gt;&lt;/pre&gt;
<link rel="alternate" type="text/html" href="./git-go-pre-commit.html"/>
<link rel="alternate" type="text/markdown" href="./git-go-pre-commit.md"/>
<id>https://lukeshu.com/blog/git-go-pre-commit.html</id>
- <updated>2014-01-26T17:00:58-05:00</updated>
+ <updated>2013-10-12T00:00:00+00:00</updated>
<published>2013-10-12T00:00:00+00:00</published>
<title>A git pre-commit hook for automatically formatting Go code</title>
<content type="html">&lt;h1 id="a-git-pre-commit-hook-for-automatically-formatting-go-code"&gt;A git pre-commit hook for automatically formatting Go code&lt;/h1&gt;
&lt;p&gt;One of the (many) wonderful things about the Go programming language is the &lt;code&gt;gofmt&lt;/code&gt; tool, which formats your source in a canonical way. I thought it would be nice to integrate this in my &lt;code&gt;git&lt;/code&gt; workflow by adding it in a pre-commit hook to automatically format my source code when I committed it.&lt;/p&gt;
-&lt;p&gt;The Go distribution contains a git pre-commit hook that checks whether the source code is formatted, and aborts the commit if it isn't. I don't remember if I was aware of this at the time (or if it even existed at the time, or if it is new), but I wanted it to go ahead and format the code for me.&lt;/p&gt;
-&lt;p&gt;I found a few solutions online, but they were all missing something—support for partial commits. I frequently use &lt;code&gt;git add -p&lt;/code&gt;/&lt;code&gt;git gui&lt;/code&gt; to commit a subset of the changes I've made to a file, the existing solutions would end up adding the entire set of changes to my commit.&lt;/p&gt;
-&lt;p&gt;I ended up writing a solution that only formats the version of the that is staged for commit; here's my &lt;code&gt;.git/hooks/pre-commit&lt;/code&gt;:&lt;/p&gt;
+&lt;p&gt;The Go distribution contains a git pre-commit hook that checks whether the source code is formatted, and aborts the commit if it isn’t. I don’t remember if I was aware of this at the time (or if it even existed at the time, or if it is new), but I wanted it to go ahead and format the code for me.&lt;/p&gt;
+&lt;p&gt;I found a few solutions online, but they were all missing something—support for partial commits. I frequently use &lt;code&gt;git add -p&lt;/code&gt;/&lt;code&gt;git gui&lt;/code&gt; to commit a subset of the changes I’ve made to a file; the existing solutions would end up adding the entire set of changes to my commit.&lt;/p&gt;
+&lt;p&gt;I ended up writing a solution that only formats the version of the file that is staged for commit; here’s my &lt;code&gt;.git/hooks/pre-commit&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/bash
# This would only loop over files that are already staged for commit.
@@ -1079,8 +1141,8 @@ for file in **/*.go; do
git add &amp;quot;$file&amp;quot;
mv &amp;quot;$tmp&amp;quot; &amp;quot;$file&amp;quot;
done&lt;/code&gt;&lt;/pre&gt;
-&lt;p&gt;It's still not perfect. It will try to operate on every &lt;code&gt;*.go&lt;/code&gt; file—which might do weird things if you have a file that hasn't been checked in at all. This also has the effect of formatting files that were checked in without being formatted, but weren't modified in this commit.&lt;/p&gt;
-&lt;p&gt;I don't remember why I did that—as you can see from the comment, I knew how to only select files that were staged for commit. I haven't worked on any projects in Go in a while—if I return to one of them, and remember why I did that, I will update this page.&lt;/p&gt;
+&lt;p&gt;It’s still not perfect. It will try to operate on every &lt;code&gt;*.go&lt;/code&gt; file—which might do weird things if you have a file that hasn’t been checked in at all. This also has the effect of formatting files that were checked in without being formatted, but weren’t modified in this commit.&lt;/p&gt;
+&lt;p&gt;I don’t remember why I did that—as you can see from the comment, I knew how to only select files that were staged for commit. I haven’t worked on any projects in Go in a while—if I return to one of them, and remember why I did that, I will update this page.&lt;/p&gt;
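For reference, the staged copy of a file can be read with `git show :path` (the index version, not the worktree version), which is the ingredient a partial-commit-safe hook needs. This is my own sketch demonstrating that, not the author's hook; it builds a throwaway repository, so it only assumes `git` is installed:

```shell
#!/usr/bin/env bash
# Sketch: `git show :path` reads the staged (index) copy of a file,
# which is what a partial-commit-safe formatting hook must operate on.
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email hook@example.com
git config user.name hook

printf 'staged content\n' > f.txt
git add f.txt
printf 'unstaged edit\n' > f.txt     # worktree now differs from the index

staged=$(git show :f.txt)            # what `git commit` would record
worktree=$(cat f.txt)

echo "index: $staged | worktree: $worktree"
```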
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2013 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -1091,7 +1153,7 @@ done&lt;/code&gt;&lt;/pre&gt;
<link rel="alternate" type="text/html" href="./fd_printf.html"/>
<link rel="alternate" type="text/markdown" href="./fd_printf.md"/>
<id>https://lukeshu.com/blog/fd_printf.html</id>
- <updated>2016-02-28T07:12:18-05:00</updated>
+ <updated>2013-10-12T00:00:00+00:00</updated>
<published>2013-10-12T00:00:00+00:00</published>
<title>`dprintf`: print formatted text directly to a file descriptor</title>
<content type="html">&lt;h1 id="dprintf-print-formatted-text-directly-to-a-file-descriptor"&gt;&lt;code&gt;dprintf&lt;/code&gt;: print formatted text directly to a file descriptor&lt;/h1&gt;
@@ -1133,16 +1195,16 @@ fd_printf(int fd, const char *format, ...)
<link rel="alternate" type="text/html" href="./emacs-as-an-os.html"/>
<link rel="alternate" type="text/markdown" href="./emacs-as-an-os.md"/>
<id>https://lukeshu.com/blog/emacs-as-an-os.html</id>
- <updated>2014-01-26T17:00:58-05:00</updated>
+ <updated>2013-08-29T00:00:00+00:00</updated>
<published>2013-08-29T00:00:00+00:00</published>
<title>Emacs as an operating system</title>
<content type="html">&lt;h1 id="emacs-as-an-operating-system"&gt;Emacs as an operating system&lt;/h1&gt;
&lt;p&gt;This was originally published on &lt;a href="https://news.ycombinator.com/item?id=6292742"&gt;Hacker News&lt;/a&gt; on 2013-08-29.&lt;/p&gt;
-&lt;p&gt;Calling Emacs an OS is dubious, it certainly isn't a general purpose OS, and won't run on real hardware. But, let me make the case that Emacs is an OS.&lt;/p&gt;
+&lt;p&gt;Calling Emacs an OS is dubious; it certainly isn’t a general-purpose OS, and won’t run on real hardware. But, let me make the case that Emacs is an OS.&lt;/p&gt;
&lt;p&gt;Emacs has two parts, the C part, and the Emacs Lisp part.&lt;/p&gt;
-&lt;p&gt;The C part isn't just a Lisp interpreter, it is a Lisp Machine emulator. It doesn't particularly resemble any of the real Lisp machines. The TCP, Keyboard/Mouse, display support, and filesystem are done at the hardware level (the operations to work with these things are among the primitive operations provided by the hardware). Of these, the display being handled by the hardware isn't particularly uncommon, historically; the filesystem is a little stranger.&lt;/p&gt;
-&lt;p&gt;The Lisp part of Emacs is the operating system that runs on that emulated hardware. It's not a particularly powerful OS, it not a multitasking system. It has many packages available for it (though not until recently was there a official package manager). It has reasonably powerful IPC mechanisms. It has shells, mail clients (MUAs and MSAs), web browsers, web servers and more, all written entirely in Emacs Lisp.&lt;/p&gt;
-&lt;p&gt;You might say, &amp;quot;but a lot of that is being done by the host operating system!&amp;quot; Sure, some of it is, but all of it is sufficiently low level. If you wanted to share the filesystem with another OS running in a VM, you might do it by sharing it as a network filesystem; this is necessary when the VM OS is not designed around running in a VM. However, because Emacs OS will always be running in the Emacs VM, we can optimize it by having the Emacs VM include processor features mapping the native OS, and have the Emacs OS be aware of them. It would be slower and more code to do that all over the network.&lt;/p&gt;
+&lt;p&gt;The C part isn’t just a Lisp interpreter, it is a Lisp Machine emulator. It doesn’t particularly resemble any of the real Lisp machines. The TCP, Keyboard/Mouse, display support, and filesystem are done at the hardware level (the operations to work with these things are among the primitive operations provided by the hardware). Of these, the display being handled by the hardware isn’t particularly uncommon, historically; the filesystem is a little stranger.&lt;/p&gt;
+&lt;p&gt;The Lisp part of Emacs is the operating system that runs on that emulated hardware. It’s not a particularly powerful OS; it’s not a multitasking system. It has many packages available for it (though not until recently was there an official package manager). It has reasonably powerful IPC mechanisms. It has shells, mail clients (MUAs and MSAs), web browsers, web servers and more, all written entirely in Emacs Lisp.&lt;/p&gt;
+&lt;p&gt;You might say, “but a lot of that is being done by the host operating system!” Sure, some of it is, but all of it is sufficiently low level. If you wanted to share the filesystem with another OS running in a VM, you might do it by sharing it as a network filesystem; this is necessary when the VM OS is not designed around running in a VM. However, because Emacs OS will always be running in the Emacs VM, we can optimize it by having the Emacs VM include processor features mapping the native OS, and have the Emacs OS be aware of them. It would be slower and more code to do that all over the network.&lt;/p&gt;
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2013 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -1153,22 +1215,26 @@ fd_printf(int fd, const char *format, ...)
<link rel="alternate" type="text/html" href="./emacs-shells.html"/>
<link rel="alternate" type="text/markdown" href="./emacs-shells.md"/>
<id>https://lukeshu.com/blog/emacs-shells.html</id>
- <updated>2016-02-28T07:12:18-05:00</updated>
+ <updated>2013-04-09T00:00:00+00:00</updated>
<published>2013-04-09T00:00:00+00:00</published>
<title>A summary of Emacs' bundled shell and terminal modes</title>
- <content type="html">&lt;h1 id="a-summary-of-emacs-bundled-shell-and-terminal-modes"&gt;A summary of Emacs' bundled shell and terminal modes&lt;/h1&gt;
+ <content type="html">&lt;h1 id="a-summary-of-emacs-bundled-shell-and-terminal-modes"&gt;A summary of Emacs’ bundled shell and terminal modes&lt;/h1&gt;
&lt;p&gt;This is based on a post on &lt;a href="http://www.reddit.com/r/emacs/comments/1bzl8b/how_can_i_get_a_dumbersimpler_shell_in_emacs/c9blzyb"&gt;reddit&lt;/a&gt;, published on 2013-04-09.&lt;/p&gt;
-&lt;p&gt;Emacs comes bundled with a few different shell and terminal modes. It can be hard to keep them straight. What's the difference between &lt;code&gt;M-x term&lt;/code&gt; and &lt;code&gt;M-x ansi-term&lt;/code&gt;?&lt;/p&gt;
-&lt;p&gt;Here's a good breakdown of the different bundled shells and terminals for Emacs, from dumbest to most Emacs-y.&lt;/p&gt;
+&lt;p&gt;Emacs comes bundled with a few different shell and terminal modes. It can be hard to keep them straight. What’s the difference between &lt;code&gt;M-x term&lt;/code&gt; and &lt;code&gt;M-x ansi-term&lt;/code&gt;?&lt;/p&gt;
+&lt;p&gt;Here’s a good breakdown of the different bundled shells and terminals for Emacs, from dumbest to most Emacs-y.&lt;/p&gt;
&lt;h2 id="term-mode"&gt;term-mode&lt;/h2&gt;
&lt;p&gt;Your VT100-esque terminal emulator; it does what most terminal programs do. Ncurses-things work OK, but dumping large amounts of text can be slow. By default it asks you which shell to run, defaulting to the environmental variable &lt;code&gt;$SHELL&lt;/code&gt; (&lt;code&gt;/bin/bash&lt;/code&gt; for me). There are two modes of operation:&lt;/p&gt;
&lt;ul&gt;
-&lt;li&gt;char mode: Keys are sent immediately to the shell (including keys that are normally Emacs keystrokes), with the following exceptions:&lt;/li&gt;
+&lt;li&gt;char mode: Keys are sent immediately to the shell (including keys that are normally Emacs keystrokes), with the following exceptions:
+&lt;ul&gt;
&lt;li&gt;&lt;code&gt;(term-escape-char) (term-escape-char)&lt;/code&gt; sends &lt;code&gt;(term-escape-char)&lt;/code&gt; to the shell (see above for what the default value is).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;(term-escape-char) &amp;lt;anything-else&amp;gt;&lt;/code&gt; equates to &lt;code&gt;C-x &amp;lt;anything-else&amp;gt;&lt;/code&gt; in normal Emacs.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;(term-escape-char) C-j&lt;/code&gt; switches to line mode.&lt;/li&gt;
-&lt;li&gt;line mode: Editing is done like in a normal Emacs buffer, &lt;code&gt;&amp;lt;enter&amp;gt;&lt;/code&gt; sends the current line to the shell. This is useful for working with a program's output.&lt;/li&gt;
+&lt;/ul&gt;&lt;/li&gt;
+&lt;li&gt;line mode: Editing is done like in a normal Emacs buffer, &lt;code&gt;&amp;lt;enter&amp;gt;&lt;/code&gt; sends the current line to the shell. This is useful for working with a program’s output.
+&lt;ul&gt;
&lt;li&gt;&lt;code&gt;C-c C-k&lt;/code&gt; switches to char mode.&lt;/li&gt;
+&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This mode is activated with&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;; Creates or switches to an existing &amp;quot;*terminal*&amp;quot; buffer.
@@ -1179,10 +1245,10 @@ M-x term&lt;/code&gt;&lt;/pre&gt;
; The default &amp;#39;term-escape-char&amp;#39; is &amp;quot;C-c&amp;quot; and &amp;quot;C-x&amp;quot;
M-x ansi-term&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="shell-mode"&gt;shell-mode&lt;/h2&gt;
-&lt;p&gt;The name is a misnomer; shell-mode is a terminal emulator, not a shell; it's called that because it is used for running a shell (bash, zsh, …). The idea of this mode is to use an external shell, but make it Emacs-y. History is not handled by the shell, but by Emacs; &lt;code&gt;M-p&lt;/code&gt; and &lt;code&gt;M-n&lt;/code&gt; access the history, while arrows/&lt;code&gt;C-p&lt;/code&gt;/&lt;code&gt;C-n&lt;/code&gt; move the point (which is is consistent with other Emacs REPL-type interfaces). It ignores VT100-type terminal colors, and colorizes things itself (it inspects words to see if they are directories, in the case of &lt;code&gt;ls&lt;/code&gt;). This has the benefit that it does syntax highlighting on the currently being typed command. Ncurses programs will of course not work. This mode is activated with:&lt;/p&gt;
+&lt;p&gt;The name is a misnomer; shell-mode is a terminal emulator, not a shell; it’s called that because it is used for running a shell (bash, zsh, …). The idea of this mode is to use an external shell, but make it Emacs-y. History is not handled by the shell, but by Emacs; &lt;code&gt;M-p&lt;/code&gt; and &lt;code&gt;M-n&lt;/code&gt; access the history, while arrows/&lt;code&gt;C-p&lt;/code&gt;/&lt;code&gt;C-n&lt;/code&gt; move the point (which is consistent with other Emacs REPL-type interfaces). It ignores VT100-type terminal colors, and colorizes things itself (it inspects words to see if they are directories, in the case of &lt;code&gt;ls&lt;/code&gt;). This has the benefit that it does syntax highlighting on the command currently being typed. Ncurses programs will of course not work. This mode is activated with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;M-x shell&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="eshell-mode"&gt;eshell-mode&lt;/h2&gt;
-&lt;p&gt;This is a shell+terminal, entirely written in Emacs lisp. (Interestingly, it doesn't set &lt;code&gt;$SHELL&lt;/code&gt;, so that will be whatever it was when you launched Emacs). This won't even be running zsh or bash, it will be running &amp;quot;esh&amp;quot;, part of Emacs.&lt;/p&gt;
+&lt;p&gt;This is a shell+terminal, entirely written in Emacs lisp. (Interestingly, it doesn’t set &lt;code&gt;$SHELL&lt;/code&gt;, so that will be whatever it was when you launched Emacs). This won’t even be running zsh or bash; it will be running “esh”, part of Emacs.&lt;/p&gt;
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2013 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -1193,7 +1259,7 @@ M-x ansi-term&lt;/code&gt;&lt;/pre&gt;
<link rel="alternate" type="text/html" href="./term-colors.html"/>
<link rel="alternate" type="text/markdown" href="./term-colors.md"/>
<id>https://lukeshu.com/blog/term-colors.html</id>
- <updated>2014-01-26T17:00:58-05:00</updated>
+ <updated>2013-03-21T00:00:00+00:00</updated>
<published>2013-03-21T00:00:00+00:00</published>
<title>An explanation of common terminal emulator color codes</title>
<content type="html">&lt;h1 id="an-explanation-of-common-terminal-emulator-color-codes"&gt;An explanation of common terminal emulator color codes&lt;/h1&gt;
@@ -1202,11 +1268,11 @@ M-x ansi-term&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So all terminals support the same 256 colors? What about 88 color mode: is that a subset?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;TL;DR: yes&lt;/p&gt;
-&lt;p&gt;Terminal compatibility is crazy complex, because nobody actually reads the spec, they just write something that is compatible for their tests. Then things have to be compatible with that terminal's quirks.&lt;/p&gt;
-&lt;p&gt;But, here's how 8-color, 16-color, and 256 color work. IIRC, 88 color is a subset of the 256 color scheme, but I'm not sure.&lt;/p&gt;
-&lt;p&gt;&lt;strong&gt;8 colors: (actually 9)&lt;/strong&gt; First we had 8 colors (9 with &amp;quot;default&amp;quot;, which doesn't have to be one of the 8). These are always roughly the same color: black, red, green, yellow/orange, blue, purple, cyan, and white, which are colors 0-7 respectively. Color 9 is default.&lt;/p&gt;
-&lt;p&gt;&lt;strong&gt;16 colors: (actually 18)&lt;/strong&gt; Later, someone wanted to add more colors, so they added a &amp;quot;bright&amp;quot; attribute. So when bright is on, you get &amp;quot;bright red&amp;quot; instead of &amp;quot;red&amp;quot;. Hence 8*2=16 (plus two more for &amp;quot;default&amp;quot; and &amp;quot;bright default&amp;quot;).&lt;/p&gt;
-&lt;p&gt;&lt;strong&gt;256 colors: (actually 274)&lt;/strong&gt; You may have noticed, colors 0-7 and 9 are used, but 8 isn't. So, someone decided that color 8 should put the terminal into 256 color mode. In this mode, it reads another byte, which is an 8-bit RGB value (2 bits for red, 2 for green, 2 for blue). The bright property has no effect on these colors. However, a terminal can display 256-color-mode colors and 16-color-mode colors at the same time, so you actually get 256+18 colors.&lt;/p&gt;
+&lt;p&gt;Terminal compatibility is crazy complex, because nobody actually reads the spec, they just write something that is compatible for their tests. Then things have to be compatible with that terminal’s quirks.&lt;/p&gt;
+&lt;p&gt;But, here’s how 8-color, 16-color, and 256 color work. IIRC, 88 color is a subset of the 256 color scheme, but I’m not sure.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;8 colors: (actually 9)&lt;/strong&gt; First we had 8 colors (9 with “default”, which doesn’t have to be one of the 8). These are always roughly the same color: black, red, green, yellow/orange, blue, purple, cyan, and white, which are colors 0–7 respectively. Color 9 is default.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;16 colors: (actually 18)&lt;/strong&gt; Later, someone wanted to add more colors, so they added a “bright” attribute. So when bright is on, you get “bright red” instead of “red”. Hence 8*2=16 (plus two more for “default” and “bright default”).&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;256 colors: (actually 274)&lt;/strong&gt; You may have noticed, colors 0–7 and 9 are used, but 8 isn’t. So, someone decided that color 8 should put the terminal into 256 color mode. In this mode, it reads another byte, an 8-bit value selecting one of 256 colors. The bright property has no effect on these colors. However, a terminal can display 256-color-mode colors and 16-color-mode colors at the same time, so you actually get 256+18 colors.&lt;/p&gt;
</content>
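The escape-sequence mechanics behind the three color modes described in that entry can be sketched in a few lines of shell. This is a hedged illustration, not part of the original post; actual rendering depends on the terminal emulator.

```shell
# Hedged sketch of the three color modes discussed above.
# SGR parameters: 30-37 pick the 8 basic foreground colors,
# attribute 1 ("bright") doubles that to 16, and 38;5;<n>
# selects one of the 256 indexed colors.
esc=$(printf '\033')
printf '%s[31mred%s[0m\n'          "$esc" "$esc"  # 8-color mode
printf '%s[1;31mbright red%s[0m\n' "$esc" "$esc"  # 16-color ("bright" attribute)
printf '%s[38;5;208morange%s[0m\n' "$esc" "$esc"  # 256-color indexed mode
```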
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2013 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -1217,28 +1283,28 @@ M-x ansi-term&lt;/code&gt;&lt;/pre&gt;
<link rel="alternate" type="text/html" href="./fs-licensing-explanation.html"/>
<link rel="alternate" type="text/markdown" href="./fs-licensing-explanation.md"/>
<id>https://lukeshu.com/blog/fs-licensing-explanation.html</id>
- <updated>2016-02-28T07:12:18-05:00</updated>
+ <updated>2013-02-21T00:00:00+00:00</updated>
<published>2013-02-21T00:00:00+00:00</published>
<title>An explanation of how "copyleft" licensing works</title>
- <content type="html">&lt;h1 id="an-explanation-of-how-copyleft-licensing-works"&gt;An explanation of how &amp;quot;copyleft&amp;quot; licensing works&lt;/h1&gt;
+ <content type="html">&lt;h1 id="an-explanation-of-how-copyleft-licensing-works"&gt;An explanation of how “copyleft” licensing works&lt;/h1&gt;
&lt;p&gt;This is based on a post on &lt;a href="http://www.reddit.com/r/freesoftware/comments/18xplw/can_software_be_free_gnu_and_still_be_owned_by_an/c8ixwq2"&gt;reddit&lt;/a&gt;, published on 2013-02-21.&lt;/p&gt;
&lt;blockquote&gt;
-&lt;p&gt;While reading the man page for readline I noticed the copyright section said &amp;quot;Readline is Copyright (C) 1989-2011 Free Software Foundation Inc&amp;quot;. How can software be both licensed under GNU and copyrighted to a single group? It was my understanding that once code became free it didn't belong to any particular group or individual.&lt;/p&gt;
-&lt;p&gt;[LiveCode is GPLv3, but also sells non-free licenses] Can you really have the same code under two conflicting licenses? Once licensed under GPL3 wouldn't they too be required to adhere to its rules?&lt;/p&gt;
+&lt;p&gt;While reading the man page for readline I noticed the copyright section said “Readline is Copyright (C) 1989-2011 Free Software Foundation Inc”. How can software be both licensed under GNU and copyrighted to a single group? It was my understanding that once code became free it didn’t belong to any particular group or individual.&lt;/p&gt;
+&lt;p&gt;[LiveCode is GPLv3, but also sells non-free licenses] Can you really have the same code under two conflicting licenses? Once licensed under GPL3 wouldn’t they too be required to adhere to its rules?&lt;/p&gt;
&lt;/blockquote&gt;
-&lt;p&gt;I believe that GNU/the FSF has an FAQ that addresses this, but I can't find it, so here we go.&lt;/p&gt;
+&lt;p&gt;I believe that GNU/the FSF has an FAQ that addresses this, but I can’t find it, so here we go.&lt;/p&gt;
&lt;h3 id="glossary"&gt;Glossary:&lt;/h3&gt;
&lt;ul&gt;
-&lt;li&gt;&amp;quot;&lt;em&gt;Copyright&lt;/em&gt;&amp;quot; is the right to control how copies are made of something.&lt;/li&gt;
-&lt;li&gt;Something for which no one holds the copyright is in the &amp;quot;&lt;em&gt;public domain&lt;/em&gt;&amp;quot;, because anyone (&amp;quot;the public&amp;quot;) is allowed to do &lt;em&gt;anything&lt;/em&gt; with it.&lt;/li&gt;
-&lt;li&gt;A &amp;quot;&lt;em&gt;license&lt;/em&gt;&amp;quot; is basically a legal document that says &amp;quot;I promise not to sue you if make copies in these specific ways.&amp;quot;&lt;/li&gt;
-&lt;li&gt;A &amp;quot;&lt;em&gt;non-free&lt;/em&gt;&amp;quot; license basically says &amp;quot;There are no conditions under which you can make copies that I won't sue you.&amp;quot;&lt;/li&gt;
-&lt;li&gt;A &amp;quot;&lt;em&gt;permissive&lt;/em&gt;&amp;quot; (type of free) license basically says &amp;quot;You can do whatever you want, BUT have to give me credit&amp;quot;, and is very similar to the public domain. If the copyright holder didn't have the copyright, they couldn't sue you to make sure that you gave them credit, and nobody would have to give them credit.&lt;/li&gt;
-&lt;li&gt;A &amp;quot;&lt;em&gt;copyleft&lt;/em&gt;&amp;quot; (type of free) license basically says, &amp;quot;You can do whatever you want, BUT anyone who gets a copy from you has to be able to do whatever they want too.&amp;quot; If the copyright holder didn't have the copyright, they couldn't sue you to make sure that you gave the source to people go got it from you, and non-free versions of these programs would start to exist.&lt;/li&gt;
+&lt;li&gt;“&lt;em&gt;Copyright&lt;/em&gt;” is the right to control how copies are made of something.&lt;/li&gt;
+&lt;li&gt;Something for which no one holds the copyright is in the “&lt;em&gt;public domain&lt;/em&gt;”, because anyone (“the public”) is allowed to do &lt;em&gt;anything&lt;/em&gt; with it.&lt;/li&gt;
+&lt;li&gt;A “&lt;em&gt;license&lt;/em&gt;” is basically a legal document that says “I promise not to sue you if you make copies in these specific ways.”&lt;/li&gt;
+&lt;li&gt;A “&lt;em&gt;non-free&lt;/em&gt;” license basically says “There are no conditions under which you can make copies that I won’t sue you over.”&lt;/li&gt;
+&lt;li&gt;A “&lt;em&gt;permissive&lt;/em&gt;” (type of free) license basically says “You can do whatever you want, BUT you have to give me credit”, and is very similar to the public domain. If the copyright holder didn’t have the copyright, they couldn’t sue you to make sure that you gave them credit, and nobody would have to give them credit.&lt;/li&gt;
+&lt;li&gt;A “&lt;em&gt;copyleft&lt;/em&gt;” (type of free) license basically says, “You can do whatever you want, BUT anyone who gets a copy from you has to be able to do whatever they want too.” If the copyright holder didn’t have the copyright, they couldn’t sue you to make sure that you gave the source to people who got it from you, and non-free versions of these programs would start to exist.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="specific-questions"&gt;Specific questions:&lt;/h3&gt;
-&lt;p&gt;Readline: The GNU GPL is a copyleft license. If you make a modified version of Readline, and give it to others without letting them have the source code, the FSF will sue you. They can do this because they have the copyright on Readline, and in the GNU GPL (the license they used) it only says that they won't sue you if you distribute the source with the modified version. If they didn't have the copyright, they couldn't sue you, and the GNU GPL would be worthless.&lt;/p&gt;
-&lt;p&gt;LiveCode: The copyright holder for something is not required to obey the license—the license is only a promise not to sue you; of course they won't sue themselves. They can also offer different terms to different people. They can tell most people &amp;quot;I won't sue you as long as you share the source,&amp;quot; but if someone gave them a little money, they might say, &amp;quot;I also promise not sue sue this guy, even if he doesn't give out the source.&amp;quot;&lt;/p&gt;
+&lt;p&gt;Readline: The GNU GPL is a copyleft license. If you make a modified version of Readline, and give it to others without letting them have the source code, the FSF will sue you. They can do this because they have the copyright on Readline, and in the GNU GPL (the license they used) it only says that they won’t sue you if you distribute the source with the modified version. If they didn’t have the copyright, they couldn’t sue you, and the GNU GPL would be worthless.&lt;/p&gt;
+&lt;p&gt;LiveCode: The copyright holder for something is not required to obey the license—the license is only a promise not to sue you; of course they won’t sue themselves. They can also offer different terms to different people. They can tell most people “I won’t sue you as long as you share the source,” but if someone gave them a little money, they might say, “I also promise not to sue this guy, even if he doesn’t give out the source.”&lt;/p&gt;
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2013 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -1249,27 +1315,27 @@ M-x ansi-term&lt;/code&gt;&lt;/pre&gt;
<link rel="alternate" type="text/html" href="./pacman-overview.html"/>
<link rel="alternate" type="text/markdown" href="./pacman-overview.md"/>
<id>https://lukeshu.com/blog/pacman-overview.html</id>
- <updated>2016-02-28T07:12:18-05:00</updated>
+ <updated>2013-01-23T00:00:00+00:00</updated>
<published>2013-01-23T00:00:00+00:00</published>
<title>A quick overview of usage of the Pacman package manager</title>
<content type="html">&lt;h1 id="a-quick-overview-of-usage-of-the-pacman-package-manager"&gt;A quick overview of usage of the Pacman package manager&lt;/h1&gt;
&lt;p&gt;This was originally published on &lt;a href="https://news.ycombinator.com/item?id=5101416"&gt;Hacker News&lt;/a&gt; on 2013-01-23.&lt;/p&gt;
-&lt;p&gt;Note: I've over-done quotation marks to make it clear when precise wording matters.&lt;/p&gt;
+&lt;p&gt;Note: I’ve over-done quotation marks to make it clear when precise wording matters.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;pacman&lt;/code&gt; is a little awkward, but I prefer it to apt/dpkg, which have sub-commands, each with their own flags, some of which are undocumented. pacman, on the other hand, has ALL options documented in one fairly short man page.&lt;/p&gt;
-&lt;p&gt;The trick to understanding pacman is to understand how it maintains databases of packages, and what it means to &amp;quot;sync&amp;quot;.&lt;/p&gt;
-&lt;p&gt;There are several &amp;quot;databases&amp;quot; that pacman deals with:&lt;/p&gt;
+&lt;p&gt;The trick to understanding pacman is to understand how it maintains databases of packages, and what it means to “sync”.&lt;/p&gt;
+&lt;p&gt;There are several “databases” that pacman deals with:&lt;/p&gt;
&lt;ul&gt;
-&lt;li&gt;&amp;quot;the database&amp;quot;, (&lt;code&gt;/var/lib/pacman/local/&lt;/code&gt;)&lt;br&gt; The database of currently installed packages&lt;/li&gt;
-&lt;li&gt;&amp;quot;package databases&amp;quot;, (&lt;code&gt;/var/lib/pacman/sync/${repo}.db&lt;/code&gt;)&lt;br&gt; There is one of these for each repository. It is a file that is fetched over plain http(s) from the server; it is not modified locally, only updated.&lt;/li&gt;
+&lt;li&gt;“the database”, (&lt;code&gt;/var/lib/pacman/local/&lt;/code&gt;)&lt;br&gt; The database of currently installed packages&lt;/li&gt;
+&lt;li&gt;“package databases”, (&lt;code&gt;/var/lib/pacman/sync/${repo}.db&lt;/code&gt;)&lt;br&gt; There is one of these for each repository. It is a file that is fetched over plain http(s) from the server; it is not modified locally, only updated.&lt;/li&gt;
&lt;/ul&gt;
-&lt;p&gt;The &amp;quot;operation&amp;quot; of pacman is set with a capital flag, one of &amp;quot;DQRSTU&amp;quot; (plus &lt;code&gt;-V&lt;/code&gt; and &lt;code&gt;-h&lt;/code&gt; for version and help). Of these, &amp;quot;DTU&amp;quot; are &amp;quot;low-level&amp;quot; (analogous to dpkg) and &amp;quot;QRS&amp;quot; are &amp;quot;high-level&amp;quot; (analogous to apt).&lt;/p&gt;
-&lt;p&gt;To give a brief explanation of cover the &amp;quot;high-level&amp;quot; operations, and which databases they deal with:&lt;/p&gt;
+&lt;p&gt;The “operation” of pacman is set with a capital flag, one of “DQRSTU” (plus &lt;code&gt;-V&lt;/code&gt; and &lt;code&gt;-h&lt;/code&gt; for version and help). Of these, “DTU” are “low-level” (analogous to dpkg) and “QRS” are “high-level” (analogous to apt).&lt;/p&gt;
+&lt;p&gt;To give a brief explanation of the “high-level” operations, and which databases they deal with:&lt;/p&gt;
&lt;ul&gt;
-&lt;li&gt;&amp;quot;Q&amp;quot; Queries &amp;quot;the database&amp;quot; of locally installed packages.&lt;/li&gt;
-&lt;li&gt;&amp;quot;S&amp;quot; deals with &amp;quot;package databases&amp;quot;, and Syncing &amp;quot;the database&amp;quot; with them; meaning it installs/updates packages that are in package databases, but not installed on the local system.&lt;/li&gt;
-&lt;li&gt;&amp;quot;R&amp;quot; Removes packages &amp;quot;the database&amp;quot;; removing them from the local system.&lt;/li&gt;
+&lt;li&gt;“Q” Queries “the database” of locally installed packages.&lt;/li&gt;
+&lt;li&gt;“S” deals with “package databases”, and Syncing “the database” with them; meaning it installs/updates packages that are in package databases, but not installed on the local system.&lt;/li&gt;
+&lt;li&gt;“R” Removes packages from “the database”; removing them from the local system.&lt;/li&gt;
&lt;/ul&gt;
-&lt;p&gt;The biggest &amp;quot;gotcha&amp;quot; is that &amp;quot;S&amp;quot; deals with all operations with &amp;quot;package databases&amp;quot;, not just syncing &amp;quot;the database&amp;quot; with them.&lt;/p&gt;
+&lt;p&gt;The biggest “gotcha” is that “S” deals with all operations with “package databases”, not just syncing “the database” with them.&lt;/p&gt;
</content>
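The database layout and operation letters described in that entry can be summarized as a read-only shell cheat-sheet. This is a hedged sketch added for illustration; the pacman invocations are guarded so they only run on a system that actually has pacman installed.

```shell
# Hedged cheat-sheet for the pacman concepts above; all commands are read-only.
local_db=/var/lib/pacman/local   # "the database": currently installed packages
sync_dbs=/var/lib/pacman/sync    # "package databases": one .db file per repo

if command -v pacman >/dev/null 2>&1; then
    pacman -Q  | head -n 3       # Q: query "the database" of installed packages
    pacman -Sl core | head -n 3  # S: operate on the "package databases"
    pacman -Qi pacman            # Q with -i: detailed info on one installed package
fi
echo "local=$local_db sync=$sync_dbs"
```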
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2013 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -1280,15 +1346,15 @@ M-x ansi-term&lt;/code&gt;&lt;/pre&gt;
<link rel="alternate" type="text/html" href="./poor-system-documentation.html"/>
<link rel="alternate" type="text/markdown" href="./poor-system-documentation.md"/>
<id>https://lukeshu.com/blog/poor-system-documentation.html</id>
- <updated>2014-01-26T17:00:58-05:00</updated>
+ <updated>2012-09-12T00:00:00+00:00</updated>
<published>2012-09-12T00:00:00+00:00</published>
<title>Why documentation on GNU/Linux sucks</title>
<content type="html">&lt;h1 id="why-documentation-on-gnulinux-sucks"&gt;Why documentation on GNU/Linux sucks&lt;/h1&gt;
&lt;p&gt;This is based on a post on &lt;a href="http://www.reddit.com/r/archlinux/comments/zoffo/systemd_we_will_keep_making_it_the_distro_we_like/c66uu57"&gt;reddit&lt;/a&gt;, published on 2012-09-12.&lt;/p&gt;
-&lt;p&gt;The documentation situation on GNU/Linux based operating systems is right now a mess. In the world of documentation, there are basically 3 camps, the &amp;quot;UNIX&amp;quot; camp, the &amp;quot;GNU&amp;quot; camp, and the &amp;quot;GNU/Linux&amp;quot; camp.&lt;/p&gt;
-&lt;p&gt;The UNIX camp is the &lt;code&gt;man&lt;/code&gt; page camp, they have quality, terse but informative man pages, on &lt;em&gt;everything&lt;/em&gt;, including the system's design and all system files. If it was up to the UNIX camp, &lt;code&gt;man grub.cfg&lt;/code&gt;, &lt;code&gt;man grub.d&lt;/code&gt;, and &lt;code&gt;man grub-mkconfig_lib&lt;/code&gt; would exist and actually be helpful. The man page would either include inline examples, or point you to a directory. If I were to print off all of the man pages, it would actually be a useful manual for the system.&lt;/p&gt;
+&lt;p&gt;The documentation situation on GNU/Linux based operating systems is right now a mess. In the world of documentation, there are basically 3 camps, the “UNIX” camp, the “GNU” camp, and the “GNU/Linux” camp.&lt;/p&gt;
+&lt;p&gt;The UNIX camp is the &lt;code&gt;man&lt;/code&gt; page camp, they have quality, terse but informative man pages, on &lt;em&gt;everything&lt;/em&gt;, including the system’s design and all system files. If it was up to the UNIX camp, &lt;code&gt;man grub.cfg&lt;/code&gt;, &lt;code&gt;man grub.d&lt;/code&gt;, and &lt;code&gt;man grub-mkconfig_lib&lt;/code&gt; would exist and actually be helpful. The man page would either include inline examples, or point you to a directory. If I were to print off all of the man pages, it would actually be a useful manual for the system.&lt;/p&gt;
&lt;p&gt;Then the GNU camp is the &lt;code&gt;info&lt;/code&gt; camp. They basically thought that each piece of software was more complex than a man page could handle. They essentially think that some individual pieces of software warrant a book. So, they developed the &lt;code&gt;info&lt;/code&gt; system. The info pages are usually quite high quality, but are very long, and a pain if you just want a quick look. The &lt;code&gt;info&lt;/code&gt; system can generate good HTML (and PDF, etc.) documentation. But the standard &lt;code&gt;info&lt;/code&gt; is awkward as hell to use for non-Emacs users.&lt;/p&gt;
-&lt;p&gt;Then we have the &amp;quot;GNU/Linux&amp;quot; camp, they use GNU software, but want to use &lt;code&gt;man&lt;/code&gt; pages. This means that we get low-quality man pages for GNU software, and then we don't have a good baseline for documentation, developers each try to create their own. The documentation that gets written is frequently either low-quality, or non-standard. A lot of man pages are auto-generated from &lt;code&gt;--help&lt;/code&gt; output or info pages, meaning they are either not helpful, or overly verbose with low information density. This camp gets the worst of both worlds, and a few problems of its own.&lt;/p&gt;
+&lt;p&gt;Then we have the “GNU/Linux” camp, they use GNU software, but want to use &lt;code&gt;man&lt;/code&gt; pages. This means that we get low-quality man pages for GNU software, and then we don’t have a good baseline for documentation, so developers each try to create their own. The documentation that gets written is frequently either low-quality, or non-standard. A lot of man pages are auto-generated from &lt;code&gt;--help&lt;/code&gt; output or info pages, meaning they are either not helpful, or overly verbose with low information density. This camp gets the worst of both worlds, and a few problems of its own.&lt;/p&gt;
</content>
<author><name>Luke Shumaker</name><uri>https://lukeshu.com/</uri><email>lukeshu@sbcglobal.net</email></author>
<rights type="html">&lt;p&gt;The content of this page is Copyright © 2012 &lt;a href="mailto:lukeshu@sbcglobal.net"&gt;Luke Shumaker&lt;/a&gt;.&lt;/p&gt;
@@ -1299,22 +1365,22 @@ M-x ansi-term&lt;/code&gt;&lt;/pre&gt;
<link rel="alternate" type="text/html" href="./arch-systemd.html"/>
<link rel="alternate" type="text/markdown" href="./arch-systemd.md"/>
<id>https://lukeshu.com/blog/arch-systemd.html</id>
- <updated>2016-02-28T07:12:18-05:00</updated>
+ <updated>2012-09-11T00:00:00+00:00</updated>
<published>2012-09-11T00:00:00+00:00</published>
<title>What Arch Linux's switch to systemd means for users</title>
- <content type="html">&lt;h1 id="what-arch-linuxs-switch-to-systemd-means-for-users"&gt;What Arch Linux's switch to systemd means for users&lt;/h1&gt;
+ <content type="html">&lt;h1 id="what-arch-linuxs-switch-to-systemd-means-for-users"&gt;What Arch Linux’s switch to systemd means for users&lt;/h1&gt;
&lt;p&gt;This is based on a post on &lt;a href="http://www.reddit.com/r/archlinux/comments/zoffo/systemd_we_will_keep_making_it_the_distro_we_like/c66nrcb"&gt;reddit&lt;/a&gt;, published on 2012-09-11.&lt;/p&gt;
&lt;p&gt;systemd is a replacement for UNIX System V-style init; instead of having &lt;code&gt;/etc/init.d/*&lt;/code&gt; or &lt;code&gt;/etc/rc.d/*&lt;/code&gt; scripts, systemd runs in the background to manage them.&lt;/p&gt;
-&lt;p&gt;This has the &lt;strong&gt;advantages&lt;/strong&gt; that there is proper dependency tracking, easing the life of the administrator and allowing for things to be run in parallel safely. It also uses &amp;quot;targets&amp;quot; instead of &amp;quot;init levels&amp;quot;, which just makes more sense. It also means that a target can be started or stopped on the fly, such as mounting or unmounting a drive, which has in the past only been done at boot up and shut down.&lt;/p&gt;
-&lt;p&gt;The &lt;strong&gt;downside&lt;/strong&gt; is that it is (allegedly) big, bloated&lt;a href="#fn1" class="footnoteRef" id="fnref1"&gt;&lt;sup&gt;1&lt;/sup&gt;&lt;/a&gt;, and does (arguably) more than it should. Why is there a dedicated systemd-fsck? Why does systemd encapsulate the functionality of syslog? That, and it means somebody is standing on my lawn.&lt;/p&gt;
-&lt;p&gt;The &lt;strong&gt;changes&lt;/strong&gt; an Arch user needs to worry about is that everything is being moved out of &lt;code&gt;/etc/rc.conf&lt;/code&gt;. Arch users will still have the choice between systemd and SysV-init, but rc.conf is becoming the SysV-init configuration file, rather than the general system configuration file. If you will still be using SysV-init, basically the only thing in rc.conf will be &lt;code&gt;DAEMONS&lt;/code&gt;.&lt;a href="#fn2" class="footnoteRef" id="fnref2"&gt;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; For now there is compatibility for the variables that used to be there, but that is going away.&lt;/p&gt;
+&lt;p&gt;This has the &lt;strong&gt;advantages&lt;/strong&gt; that there is proper dependency tracking, easing the life of the administrator and allowing for things to be run in parallel safely. It also uses “targets” instead of “init levels”, which just makes more sense. It also means that a target can be started or stopped on the fly, such as mounting or unmounting a drive, which has in the past only been done at boot up and shut down.&lt;/p&gt;
+&lt;p&gt;The &lt;strong&gt;downside&lt;/strong&gt; is that it is (allegedly) big, bloated&lt;a href="#fn1" class="footnote-ref" id="fnref1"&gt;&lt;sup&gt;1&lt;/sup&gt;&lt;/a&gt;, and does (arguably) more than it should. Why is there a dedicated systemd-fsck? Why does systemd encapsulate the functionality of syslog? That, and it means somebody is standing on my lawn.&lt;/p&gt;
+&lt;p&gt;The &lt;strong&gt;changes&lt;/strong&gt; an Arch user needs to worry about are that everything is being moved out of &lt;code&gt;/etc/rc.conf&lt;/code&gt;. Arch users will still have the choice between systemd and SysV-init, but rc.conf is becoming the SysV-init configuration file, rather than the general system configuration file. If you will still be using SysV-init, basically the only thing in rc.conf will be &lt;code&gt;DAEMONS&lt;/code&gt;.&lt;a href="#fn2" class="footnote-ref" id="fnref2"&gt;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; For now there is compatibility for the variables that used to be there, but that is going away.&lt;/p&gt;
&lt;section class="footnotes"&gt;
&lt;hr /&gt;
&lt;ol&gt;
-&lt;li id="fn1"&gt;&lt;p&gt;&lt;em&gt;I&lt;/em&gt; don't think it's bloated, but that is the criticism. Basically, I discount any argument that uses &amp;quot;bloated&amp;quot; without backing it up. I was trying to say that it takes a lot of heat for being bloated, and that there is be some truth to that (the systemd-fsck and syslog comments), but that these claims are largely unsubstantiated, and more along the lines of &amp;quot;I would have done it differently&amp;quot;. Maybe your ideas are better, but you haven't written the code.&lt;/p&gt;
-&lt;p&gt;I personally don't have an opinion either way about SysV-init vs systemd. I recently migrated my boxes to systemd, but that was because the SysV init scripts for NFSv4 in Arch are problematic. I suppose this is another &lt;strong&gt;advantage&lt;/strong&gt; I missed: &lt;em&gt;people generally consider systemd &amp;quot;units&amp;quot; to be more robust and easier to write than SysV &amp;quot;scripts&amp;quot;.&lt;/em&gt;&lt;/p&gt;
-&lt;p&gt;I'm actually not a fan of either. If I had more time on my hands, I'd be running a &lt;code&gt;make&lt;/code&gt;-based init system based on a research project IBM did a while ago. So I consider myself fairly objective; my horse isn't in this race.&lt;a href="#fnref1"&gt;↩&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
-&lt;li id="fn2"&gt;&lt;p&gt;You can still have &lt;code&gt;USEDMRAID&lt;/code&gt;, &lt;code&gt;USELVM&lt;/code&gt;, &lt;code&gt;interface&lt;/code&gt;, &lt;code&gt;address&lt;/code&gt;, &lt;code&gt;netmask&lt;/code&gt;, and &lt;code&gt;gateway&lt;/code&gt;. But those are minor.&lt;a href="#fnref2"&gt;↩&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
+&lt;li id="fn1"&gt;&lt;p&gt;&lt;em&gt;I&lt;/em&gt; don’t think it’s bloated, but that is the criticism. Basically, I discount any argument that uses “bloated” without backing it up. I was trying to say that it takes a lot of heat for being bloated, and that there is be some truth to that (the systemd-fsck and syslog comments), but that these claims are largely unsubstantiated, and more along the lines of “I would have done it differently”. Maybe your ideas are better, but you haven’t written the code.&lt;/p&gt;
+&lt;p&gt;I personally don’t have an opinion either way about SysV-init vs systemd. I recently migrated my boxes to systemd, but that was because the SysV init scripts for NFSv4 in Arch are problematic. I suppose this is another &lt;strong&gt;advantage&lt;/strong&gt; I missed: &lt;em&gt;people generally consider systemd “units” to be more robust and easier to write than SysV “scripts”.&lt;/em&gt;&lt;/p&gt;
+&lt;p&gt;I’m actually not a fan of either. If I had more time on my hands, I’d be running a &lt;code&gt;make&lt;/code&gt;-based init system based on a research project IBM did a while ago. So I consider myself fairly objective; my horse isn’t in this race.&lt;a href="#fnref1" class="footnote-back"&gt;↩&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
+&lt;li id="fn2"&gt;&lt;p&gt;You can still have &lt;code&gt;USEDMRAID&lt;/code&gt;, &lt;code&gt;USELVM&lt;/code&gt;, &lt;code&gt;interface&lt;/code&gt;, &lt;code&gt;address&lt;/code&gt;, &lt;code&gt;netmask&lt;/code&gt;, and &lt;code&gt;gateway&lt;/code&gt;. But those are minor.&lt;a href="#fnref2" class="footnote-back"&gt;↩&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/section&gt;
</content>
diff --git a/public/index.html b/public/index.html
index 605932c..87e9985 100644
--- a/public/index.html
+++ b/public/index.html
@@ -20,6 +20,7 @@ time {
}
</style>
<ul>
+<li><time>2016-09-30</time> - <a href="./http-notes.html">Notes on subtleties of HTTP implementation</a></li>
<li><time>2016-02-28</time> - <a href="./x11-systemd.html">My X11 setup with systemd</a></li>
<li><time>2016-02-28</time> - <a href="./java-segfault-redux.html">My favorite bug: segfaults in Java (redux)</a></li>
<li><time>2015-05-19</time> - <a href="./nginx-mediawiki.html">An Nginx configuration for MediaWiki</a></li>
@@ -27,8 +28,8 @@ time {
<li><time>2015-03-18</time> - <a href="./build-bash-1.html">Building Bash 1.14.7 on a modern system</a></li>
<li><time>2015-02-06</time> - <a href="./purdue-cs-login.html">Customizing your login on Purdue CS computers (WIP, but updated)</a></li>
<li><time>2014-11-20</time> - <a href="./make-memoize.html">A memoization routine for GNU Make functions</a></li>
-<li><time>2014-09-12</time> - <a href="./ryf-routers.html">I'm excited about the new RYF-certified routers from ThinkPenguin</a></li>
-<li><time>2014-09-11</time> - <a href="./what-im-working-on-fall-2014.html">What I'm working on (Fall 2014)</a></li>
+<li><time>2014-09-12</time> - <a href="./ryf-routers.html">I’m excited about the new RYF-certified routers from ThinkPenguin</a></li>
+<li><time>2014-09-11</time> - <a href="./what-im-working-on-fall-2014.html">What I’m working on (Fall 2014)</a></li>
<li><time>2014-05-08</time> - <a href="./rails-improvements.html">Miscellaneous ways to improve your Rails experience</a></li>
<li><time>2014-02-13</time> - <a href="./bash-redirection.html">Bash redirection</a></li>
<li><time>2014-01-13</time> - <a href="./java-segfault.html">My favorite bug: segfaults in Java</a></li>
@@ -36,12 +37,12 @@ time {
<li><time>2013-10-12</time> - <a href="./git-go-pre-commit.html">A git pre-commit hook for automatically formatting Go code</a></li>
<li><time>2013-10-12</time> - <a href="./fd_printf.html"><code>dprintf</code>: print formatted text directly to a file descriptor</a></li>
<li><time>2013-08-29</time> - <a href="./emacs-as-an-os.html">Emacs as an operating system</a></li>
-<li><time>2013-04-09</time> - <a href="./emacs-shells.html">A summary of Emacs' bundled shell and terminal modes</a></li>
+<li><time>2013-04-09</time> - <a href="./emacs-shells.html">A summary of Emacs’ bundled shell and terminal modes</a></li>
<li><time>2013-03-21</time> - <a href="./term-colors.html">An explanation of common terminal emulator color codes</a></li>
-<li><time>2013-02-21</time> - <a href="./fs-licensing-explanation.html">An explanation of how &quot;copyleft&quot; licensing works</a></li>
+<li><time>2013-02-21</time> - <a href="./fs-licensing-explanation.html">An explanation of how “copyleft” licensing works</a></li>
<li><time>2013-01-23</time> - <a href="./pacman-overview.html">A quick overview of usage of the Pacman package manager</a></li>
<li><time>2012-09-12</time> - <a href="./poor-system-documentation.html">Why documentation on GNU/Linux sucks</a></li>
-<li><time>2012-09-11</time> - <a href="./arch-systemd.html">What Arch Linux's switch to systemd means for users</a></li>
+<li><time>2012-09-11</time> - <a href="./arch-systemd.html">What Arch Linux’s switch to systemd means for users</a></li>
</ul>
</article>
diff --git a/public/index.md b/public/index.md
index 2cde070..a479d58 100644
--- a/public/index.md
+++ b/public/index.md
@@ -10,6 +10,7 @@ time {
}
</style>
+ * <time>2016-09-30</time> - [Notes on subtleties of HTTP implementation](./http-notes.html)
* <time>2016-02-28</time> - [My X11 setup with systemd](./x11-systemd.html)
* <time>2016-02-28</time> - [My favorite bug: segfaults in Java (redux)](./java-segfault-redux.html)
* <time>2015-05-19</time> - [An Nginx configuration for MediaWiki](./nginx-mediawiki.html)
diff --git a/public/java-segfault-redux.html b/public/java-segfault-redux.html
index 81f0960..acf0161 100644
--- a/public/java-segfault-redux.html
+++ b/public/java-segfault-redux.html
@@ -10,30 +10,30 @@
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » java-segfault-redux</header>
<article>
<h1 id="my-favorite-bug-segfaults-in-java-redux">My favorite bug: segfaults in Java (redux)</h1>
-<p>Two years ago, I <a href="./java-segfault.html">wrote</a> about one of my favorite bugs that I'd squashed two years before that. About a year after that, someone posted it <a href="https://news.ycombinator.com/item?id=9283571">on Hacker News</a>.</p>
-<p>There was some fun discussion about it, but also some confusion. After finishing a season of mentoring team 4272, I've decided that it would be fun to re-visit the article, and dig up the old actual code, instead of pseudo-code, hopefully improving the clarity (and providing a light introduction for anyone wanting to get into modifying the current SmartDashbaord).</p>
+<p>Two years ago, I <a href="./java-segfault.html">wrote</a> about one of my favorite bugs that I’d squashed two years before that. About a year after that, someone posted it <a href="https://news.ycombinator.com/item?id=9283571">on Hacker News</a>.</p>
+<p>There was some fun discussion about it, but also some confusion. After finishing a season of mentoring team 4272, I’ve decided that it would be fun to re-visit the article, and dig up the old actual code, instead of pseudo-code, hopefully improving the clarity (and providing a light introduction for anyone wanting to get into modifying the current SmartDashboard).</p>
<h2 id="the-context">The context</h2>
-<p>In 2012, I was a high school senior, and lead programmer programmer on the FIRST Robotics Competition team 1024. For the unfamiliar, the relevant part of the setup is that there are 2 minute and 15 second matches in which you have a 120 pound robot that sometimes runs autonomously, and sometimes is controlled over WiFi from a person at a laptop running stock &quot;driver station&quot; software and modifiable &quot;dashboard&quot; software.</p>
+<p>In 2012, I was a high school senior, and lead programmer on the FIRST Robotics Competition team 1024. For the unfamiliar, the relevant part of the setup is that there are 2 minute and 15 second matches in which you have a 120 pound robot that sometimes runs autonomously, and sometimes is controlled over WiFi from a person at a laptop running stock “driver station” software and modifiable “dashboard” software.</p>
<p>That year, we mostly used the dashboard software to allow the human driver and operator to monitor sensors on the robot, one of them being a video feed from a web-cam mounted on it. This was really easy because the new standard dashboard program had a click-and drag interface to add stock widgets; you just had to make sure the code on the robot was actually sending the data.</p>
-<p>That's great, until when debugging things, the dashboard would suddenly vanish. If it was run manually from a terminal (instead of letting the driver station software launch it), you would see a core dump indicating a segmentation fault.</p>
-<p>This wasn't just us either; I spoke with people on other teams, everyone who was streaming video had this issue. But, because it only happened every couple of minutes, and a match is only 2:15, it didn't need to run very long, they just crossed their fingers and hoped it didn't happen during a match.</p>
-<p>The dashboard was written in Java, and the source was available (under a 3-clause BSD license) via read-only SVN at <code>http://firstforge.wpi.edu/svn/repos/smart_dashboard/trunk</code> (which is unfortunately no longer online, fortunately I'd posted some snapshots on the web). So I dove in, hunting for the bug.</p>
+<p>That’s great, until when debugging things, the dashboard would suddenly vanish. If it was run manually from a terminal (instead of letting the driver station software launch it), you would see a core dump indicating a segmentation fault.</p>
+<p>This wasn’t just us either; I spoke with people on other teams, everyone who was streaming video had this issue. But, because it only happened every couple of minutes, and a match is only 2:15, it didn’t need to run very long, they just crossed their fingers and hoped it didn’t happen during a match.</p>
+<p>The dashboard was written in Java, and the source was available (under a 3-clause BSD license) via read-only SVN at <code>http://firstforge.wpi.edu/svn/repos/smart_dashboard/trunk</code> (which is unfortunately no longer online, fortunately I’d posted some snapshots on the web). So I dove in, hunting for the bug.</p>
<p>The repository was divided into several NetBeans projects (not exhaustively listed):</p>
<ul>
<li><a href="https://gitorious.org/absfrc/sources.git/?p=absfrc:sources.git;a=blob_plain;f=smartdashboard-client-2012-1-any.src.tar.xz;hb=HEAD"><code>client/smartdashboard</code></a>: The main dashboard program, has a plugin architecture.</li>
<li><a href="https://gitorious.org/absfrc/sources.git/?p=absfrc:sources.git;a=blob_plain;f=wpijavacv-208-1-any.src.tar.xz;hb=HEAD"><code>WPIJavaCV</code></a>: A higher-level wrapper around JavaCV, itself a Java Native Interface (JNI) wrapper to talk to OpenCV (C and C++).</li>
<li><a href="https://gitorious.org/absfrc/sources.git/?p=absfrc:sources.git;a=blob_plain;f=smartdashboard-extension-wpicameraextension-210-1-any.src.tar.xz;hb=HEAD"><code>extensions/camera/WPICameraExtension</code></a>: The standard camera feed plugin, processes the video through WPIJavaCV.</li>
</ul>
-<p>I figured that the bug must be somewhere in the C or C++ code that was being called by JavaCV, because that's the language where segfaults happen. It was especially a pain to track down the pointers that were causing the issue, because it was hard with native debuggers to see through all of the JVM stuff to the OpenCV code, and the OpenCV stuff is opaque to Java debuggers.</p>
-<p>Eventually the issue lead me back into the WPICameraExtension, then into WPIJavaCV--there was a native pointer being stored in a Java variable; Java code called the native routine to <code>free()</code> the structure, but then tried to feed it to another routine later. This lead to difficulty again--tracking objects with Java debuggers was hard because they don't expect the program to suddenly segfault; it's Java code, Java doesn't segfault, it throws exceptions!</p>
-<p>With the help of <code>println()</code> I was eventually able to see that some code was executing in an order that straight didn't make sense.</p>
+<p>I figured that the bug must be somewhere in the C or C++ code that was being called by JavaCV, because that’s the language where segfaults happen. It was especially a pain to track down the pointers that were causing the issue, because it was hard with native debuggers to see through all of the JVM stuff to the OpenCV code, and the OpenCV stuff is opaque to Java debuggers.</p>
+<p>Eventually the issue led me back into the WPICameraExtension, then into WPIJavaCV—there was a native pointer being stored in a Java variable; Java code called the native routine to <code>free()</code> the structure, but then tried to feed it to another routine later. This led to difficulty again—tracking objects with Java debuggers was hard because they don’t expect the program to suddenly segfault; it’s Java code, Java doesn’t segfault, it throws exceptions!</p>
+<p>With the help of <code>println()</code> I was eventually able to see that some code was executing in an order that straight didn’t make sense.</p>
<h2 id="the-bug">The bug</h2>
-<p>The basic flow of WPIJavaCV is you have a <code>WPICamera</code>, and you call <code>.getNewImage()</code> on it, which gives you a <code>WPIImage</code>, which you could do all kinds of fancy OpenCV things on, but then ultimately call <code>.getBufferedImage()</code>, which gives you a <code>java.awt.image.BufferedImage</code> that you can pass to Swing to draw on the screen. You do this every for frame. Which is exactly what <code>WPICameraExtension.java</code> did, except that &quot;all kinds of fancy OpenCV things&quot; consisted only of:</p>
+<p>The basic flow of WPIJavaCV is you have a <code>WPICamera</code>, and you call <code>.getNewImage()</code> on it, which gives you a <code>WPIImage</code>, which you could do all kinds of fancy OpenCV things on, but then ultimately call <code>.getBufferedImage()</code>, which gives you a <code>java.awt.image.BufferedImage</code> that you can pass to Swing to draw on the screen. You do this for every frame. Which is exactly what <code>WPICameraExtension.java</code> did, except that “all kinds of fancy OpenCV things” consisted only of:</p>
<pre><code>public WPIImage processImage(WPIColorImage rawImage) {
return rawImage;
}</code></pre>
<p>The idea was that you would extend the class, overriding that one method, if you wanted to do anything fancy.</p>
-<p>One of the neat things about WPIJavaCV was that every OpenCV object class extended had a <code>finalize()</code> method (via inheriting from the abstract class <code>WPIDisposable</code>) that freed the underlying C/C++ memory, so you didn't have to worry about memory leaks like in plain JavaCV. To inherit from <code>WPIDisposable</code>, you had to write a <code>disposed()</code> method that actually freed the memory. This was better than writing <code>finalize()</code> directly, because it did some safety with NULL pointers and idempotency if you wanted to manually free something early.</p>
+<p>One of the neat things about WPIJavaCV was that every OpenCV object class had a <code>finalize()</code> method (via inheriting from the abstract class <code>WPIDisposable</code>) that freed the underlying C/C++ memory, so you didn’t have to worry about memory leaks like in plain JavaCV. To inherit from <code>WPIDisposable</code>, you had to write a <code>disposed()</code> method that actually freed the memory. This was better than writing <code>finalize()</code> directly, because it handled NULL-pointer safety and idempotency for you if you wanted to manually free something early.</p>
<p>Now, <code>edu.wpi.first.WPIImage.disposed()</code> called <code><a href="https://github.com/bytedeco/javacv/blob/svn/src/com/googlecode/javacv/cpp/opencv_core.java#L398">com.googlecode.javacv.cpp.opencv_core.IplImage</a>.release()</code>, which called (via JNI) <code>IplImage::release()</code>, which called libc <code>free()</code>:</p>
<pre><code>@Override
protected void disposed() {
@@ -50,12 +50,12 @@ public BufferedImage getBufferedImage() {
return image.getBufferedImage();
}</code></pre>
-<p>The <code>println()</code> output I saw that didn't make sense was that <code>someFrame.finalize()</code> was running before <code>someFrame.getBuffereImage()</code> had returned!</p>
-<p>You see, if it is waiting for the return value of a method <code>m()</code> of object <code>a</code>, and code in <code>m()</code> that is yet to be executed doesn't access any other methods or properties of <code>a</code>, then it will go ahead and consider <code>a</code> eligible for garbage collection before <code>m()</code> has finished running.</p>
-<p>Put another way, <code>this</code> is passed to a method just like any other argument. If a method is done accessing <code>this</code>, then it's &quot;safe&quot; for the JVM to go ahead and garbage collect it.</p>
-<p>That is normally a safe &quot;optimization&quot; to make… except for when a destructor method (<code>finalize()</code>) is defined for the object; the destructor can have side effects, and Java has no way to know whether it is safe for them to happen before <code>m()</code> has finished running.</p>
-<p>I'm not entirely sure if this is a &quot;bug&quot; in the compiler or the language specification, but I do believe that it's broken behavior.</p>
-<p>Anyway, in this case it's unsafe with WPI's code.</p>
+<p>The <code>println()</code> output I saw that didn’t make sense was that <code>someFrame.finalize()</code> was running before <code>someFrame.getBufferedImage()</code> had returned!</p>
+<p>You see, if the JVM is waiting for the return value of a method <code>m()</code> of object <code>a</code>, and the code in <code>m()</code> that is yet to be executed doesn’t access any other methods or properties of <code>a</code>, then it will go ahead and consider <code>a</code> eligible for garbage collection before <code>m()</code> has finished running.</p>
+<p>Put another way, <code>this</code> is passed to a method just like any other argument. If a method is done accessing <code>this</code>, then it’s “safe” for the JVM to go ahead and garbage collect it.</p>
+<p>That is normally a safe “optimization” to make… except for when a destructor method (<code>finalize()</code>) is defined for the object; the destructor can have side effects, and Java has no way to know whether it is safe for them to happen before <code>m()</code> has finished running.</p>
+<p>I’m not entirely sure if this is a “bug” in the compiler or the language specification, but I do believe that it’s broken behavior.</p>
+<p>Anyway, in this case it’s unsafe with WPI’s code.</p>
<h2 id="my-work-around">My work-around</h2>
<p>My work-around was to change this function in <code>WPIImage</code>:</p>
<pre><code>public BufferedImage getBufferedImage() {
@@ -63,7 +63,7 @@ public BufferedImage getBufferedImage() {
return image.getBufferedImage(); // `this` may get garbage collected before it returns!
}</code></pre>
-<p>In the above code, <code>this</code> is a <code>WPIImage</code>, and it may get garbage collected between the time that <code>image.getBufferedImage()</code> is dispatched, and the time that <code>image.getBufferedImage()</code> accesses native memory. When it is garbage collected, it calls <code>image.release()</code>, which <code>free()</code>s that native memory. That seems pretty unlikely to happen; that's a very small gap of time. However, running 30 times a second, eventually bad luck with the garbage collector happens, and the program crashes.</p>
+<p>In the above code, <code>this</code> is a <code>WPIImage</code>, and it may get garbage collected between the time that <code>image.getBufferedImage()</code> is dispatched, and the time that <code>image.getBufferedImage()</code> accesses native memory. When it is garbage collected, it calls <code>image.release()</code>, which <code>free()</code>s that native memory. That seems pretty unlikely to happen; that’s a very small gap of time. However, running 30 times a second, eventually bad luck with the garbage collector happens, and the program crashes.</p>
<p>The work-around was to insert a bogus call to keep <code>this</code> around until after we were also done with <code>image</code>:</p>
<p>to this:</p>
<pre><code>public BufferedImage getBufferedImage() {
@@ -72,10 +72,10 @@ public BufferedImage getBufferedImage() {
getWidth(); // bogus call to keep `this` around
return ret;
}</code></pre>
-<p>Yeah. After spending weeks wading through though thousands of lines of Java, C, and C++, a bogus call to a method I didn't care about was the fix.</p>
-<p>TheLoneWolfling on Hacker News noted that they'd be worried about the JVM optimizing out the call to <code>getWidth()</code>. I'm not, because <code>WPIImage.getWidth()</code> calls <code>IplImage.width()</code>, which is declared as <code>native</code>; the JVM must run it because it might have side effects. On the other hand, looking back, I think I just shrunk the window for things to go wrong: it may be possible for the garbage collection to trigger in the time between <code>getWidth()</code> being dispatched and <code>width()</code> running. Perhaps there was something in the C/C++ code that made it safe, I don't recall, and don't care quite enough to dig into OpenCV internals again. Or perhaps I'm mis-remembering the fix (which I don't actually have a file of), and I called some other method that <em>could</em> get optimized out (though I <em>do</em> believe that it was either <code>getWidth()</code> or <code>getHeight()</code>).</p>
-<h2 id="wpis-fix">WPI's fix</h2>
-<p>Four years later, the SmartDashboard is still being used! But it no longer has this bug, and it's not using my workaround. So, how did the WPILib developers fix it?</p>
+<p>Yeah. After spending weeks wading through thousands of lines of Java, C, and C++, a bogus call to a method I didn’t care about was the fix.</p>
+<p>TheLoneWolfling on Hacker News noted that they’d be worried about the JVM optimizing out the call to <code>getWidth()</code>. I’m not, because <code>WPIImage.getWidth()</code> calls <code>IplImage.width()</code>, which is declared as <code>native</code>; the JVM must run it because it might have side effects. On the other hand, looking back, I think I just shrunk the window for things to go wrong: it may be possible for garbage collection to trigger in the time between <code>getWidth()</code> being dispatched and <code>width()</code> running. Perhaps there was something in the C/C++ code that made it safe, I don’t recall, and don’t care quite enough to dig into OpenCV internals again. Or perhaps I’m mis-remembering the fix (which I don’t actually have a copy of), and I called some other method that <em>could</em> get optimized out (though I <em>do</em> believe that it was either <code>getWidth()</code> or <code>getHeight()</code>).</p>
+<h2 id="wpis-fix">WPI’s fix</h2>
+<p>Four years later, the SmartDashboard is still being used! But it no longer has this bug, and it’s not using my workaround. So, how did the WPILib developers fix it?</p>
<p>Well, the code now lives <a href="https://usfirst.collab.net/gerrit/#/admin/projects/">in git at collab.net</a>, so I decided to take a look.</p>
<p>They stripped out WPIJavaCV from the main video feed widget, and now use a purely Java implementation of MPJPEG streaming.</p>
<p>However, the old video feed widget is still available as an extension (so that you can still do cool things with <code>processImage</code>), and it also no longer has this bug. Their fix was to put a mutex around all accesses to <code>image</code>, which should have been the obvious solution to me.</p>
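WPI’s lock-based fix, as described above, can be sketched in plain Java. The class and member names below are hypothetical stand-ins, not WPILib’s actual code; the point is only that every accessor and the finalizer contend on the same lock, so the finalizer can never free the native memory in the middle of a call:

```java
// Hypothetical sketch of a lock-guarded native-image wrapper.
// All names here are illustrative; the real WPILib classes differ.
class GuardedImage {
    private final Object lock = new Object();
    private long nativePtr = 0xdeadbeefL; // stand-in for a real C pointer
    private boolean released = false;

    // Every accessor takes the lock before touching the native pointer.
    public long width() {
        synchronized (lock) {
            if (released) {
                throw new IllegalStateException("use after free");
            }
            return nativePtr; // real code would call through JNI here
        }
    }

    // The finalizer takes the same lock, so it cannot release the
    // memory while width() (or any other accessor) is using it; at
    // worst a late caller sees the released flag and gets an exception
    // instead of a segfault.
    @Override
    protected void finalize() {
        synchronized (lock) {
            if (!released) {
                released = true; // real code would free() the C memory here
                nativePtr = 0;
            }
        }
    }
}
```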
diff --git a/public/java-segfault-redux.md b/public/java-segfault-redux.md
index e91dcd3..6959498 100644
--- a/public/java-segfault-redux.md
+++ b/public/java-segfault-redux.md
@@ -70,10 +70,10 @@ through all of the JVM stuff to the OpenCV code, and the OpenCV stuff
is opaque to Java debuggers.
Eventually the issue led me back into the WPICameraExtension, then
-into WPIJavaCV--there was a native pointer being stored in a Java
+into WPIJavaCV---there was a native pointer being stored in a Java
variable; Java code called the native routine to `free()` the
structure, but then tried to feed it to another routine later. This
-lead to difficulty again--tracking objects with Java debuggers was
+led to difficulty again---tracking objects with Java debuggers was
hard because they don't expect the program to suddenly segfault; it's
Java code, Java doesn't segfault, it throws exceptions!
diff --git a/public/java-segfault.html b/public/java-segfault.html
index c79add5..e16294b 100644
--- a/public/java-segfault.html
+++ b/public/java-segfault.html
@@ -13,18 +13,18 @@
<blockquote>
<p>Update: Two years later, I wrote a more detailed version of this article: <a href="./java-segfault-redux.html">My favorite bug: segfaults in Java (redux)</a>.</p>
</blockquote>
-<p>I've told this story orally a number of times, but realized that I have never written it down. This is my favorite bug story; it might not be my hardest bug, but it is the one I most like to tell.</p>
+<p>I’ve told this story orally a number of times, but realized that I have never written it down. This is my favorite bug story; it might not be my hardest bug, but it is the one I most like to tell.</p>
<h2 id="the-context">The context</h2>
-<p>In 2012, I was a Senior programmer on the FIRST Robotics Competition team 1024. For the unfamiliar, the relevant part of the setup is that there are 2 minute and 15 second matches in which you have a 120 pound robot that sometimes runs autonomously, and sometimes is controlled over WiFi from a person at a laptop running stock &quot;driver station&quot; software and modifiable &quot;dashboard&quot; software.</p>
+<p>In 2012, I was a Senior programmer on the FIRST Robotics Competition team 1024. For the unfamiliar, the relevant part of the setup is that there are 2 minute and 15 second matches in which you have a 120 pound robot that sometimes runs autonomously, and sometimes is controlled over WiFi from a person at a laptop running stock “driver station” software and modifiable “dashboard” software.</p>
<p>That year, we mostly used the dashboard software to allow the human driver and operator to monitor sensors on the robot, one of them being a video feed from a web-cam mounted on it. This was really easy because the new standard dashboard program had a click-and drag interface to add stock widgets; you just had to make sure the code on the robot was actually sending the data.</p>
-<p>That's great, until when debugging things, the dashboard would suddenly vanish. If it was run manually from a terminal (instead of letting the driver station software launch it), you would see a core dump indicating a segmentation fault.</p>
-<p>This wasn't just us either; I spoke with people on other teams, everyone who was streaming video had this issue. But, because it only happened every couple of minutes, and a match is only 2:15, it didn't need to run very long, they just crossed their fingers and hoped it didn't happen during a match.</p>
+<p>That’s great, until when debugging things, the dashboard would suddenly vanish. If it was run manually from a terminal (instead of letting the driver station software launch it), you would see a core dump indicating a segmentation fault.</p>
+<p>This wasn’t just us either; I spoke with people on other teams, everyone who was streaming video had this issue. But, because it only happened every couple of minutes, and a match is only 2:15, it didn’t need to run very long, they just crossed their fingers and hoped it didn’t happen during a match.</p>
<p>The dashboard was written in Java, and the source was available (under a 3-clause BSD license), so I dove in, hunting for the bug. Now, the program did use Java Native Interface to talk to OpenCV, which the video ran through; so I figured that it must be a bug in the C/C++ code that was being called. It was especially a pain to track down the pointers that were causing the issue, because it was hard with native debuggers to see through all of the JVM stuff to the OpenCV code, and the OpenCV stuff is opaque to Java debuggers.</p>
-<p>Eventually the issue lead me back into the Java code--there was a native pointer being stored in a Java variable; Java code called the native routine to <code>free()</code> the structure, but then tried to feed it to another routine later. This lead to difficulty again--tracking objects with Java debuggers was hard because they don't expect the program to suddenly segfault; it's Java code, Java doesn't segfault, it throws exceptions!</p>
-<p>With the help of <code>println()</code> I was eventually able to see that some code was executing in an order that straight didn't make sense.</p>
+<p>Eventually the issue led me back into the Java code—there was a native pointer being stored in a Java variable; Java code called the native routine to <code>free()</code> the structure, but then tried to feed it to another routine later. This led to difficulty again—tracking objects with Java debuggers was hard because they don’t expect the program to suddenly segfault; it’s Java code, Java doesn’t segfault, it throws exceptions!</p>
+<p>With the help of <code>println()</code> I was eventually able to see that some code was executing in an order that straight didn’t make sense.</p>
<h2 id="the-bug">The bug</h2>
<p>The issue was that Java was making an unsafe optimization (I never bothered to figure out if it is the compiler or the JVM making the mistake, I was satisfied once I had a work-around).</p>
-<p>Java was doing something similar to tail-call optimization with regard to garbage collection. You see, if it is waiting for the return value of a method <code>m()</code> of object <code>o</code>, and code in <code>m()</code> that is yet to be executed doesn't access any other methods or properties of <code>o</code>, then it will go ahead and consider <code>o</code> eligible for garbage collection before <code>m()</code> has finished running.</p>
+<p>Java was doing something similar to tail-call optimization with regard to garbage collection. You see, if the JVM is waiting for the return value of a method <code>m()</code> of object <code>o</code>, and the code in <code>m()</code> that is yet to be executed doesn’t access any other methods or properties of <code>o</code>, then it will go ahead and consider <code>o</code> eligible for garbage collection before <code>m()</code> has finished running.</p>
<p>That is normally a safe optimization to make… except for when a destructor method (<code>finalize()</code>) is defined for the object; the destructor can have side effects, and Java has no way to know whether it is safe for them to happen before <code>m()</code> has finished running.</p>
<h2 id="the-work-around">The work-around</h2>
<p>The routine that the segmentation fault was occurring in was something like:</p>
@@ -34,7 +34,7 @@
// `this` may now be garbage collected
return child.somethingElse(var); // segfault comes here
}</code></pre>
-<p>Where the destructor method of <code>this</code> calls a method that will <code>free()</code> native memory that is also accessed by <code>child</code>; if <code>this</code> is garbage collected before <code>child.somethingElse()</code> runs, the backing native code will try to access memory that has been <code>free()</code>ed, and receive a segmentation fault. That usually didn't happen, as the routines were pretty fast. However, running 30 times a second, eventually bad luck with the garbage collector happens, and the program crashes.</p>
+<p>Where the destructor method of <code>this</code> calls a method that will <code>free()</code> native memory that is also accessed by <code>child</code>; if <code>this</code> is garbage collected before <code>child.somethingElse()</code> runs, the backing native code will try to access memory that has been <code>free()</code>ed, and receive a segmentation fault. That usually didn’t happen, as the routines were pretty fast. However, running 30 times a second, eventually bad luck with the garbage collector happens, and the program crashes.</p>
<p>The work-around was to insert a bogus call to keep <code>this</code> around until after we were also done with <code>child</code>:</p>
<pre><code>public type1 getFrame() {
type2 child = this.getChild();
@@ -43,7 +43,7 @@
this.getSize(); // bogus call to keep `this` around
return ret;
}</code></pre>
-<p>Yeah. After spending weeks wading through though thousands of lines of Java, C, and C++, a bogus call to a method I didn't care about was the fix.</p>
+<p>Yeah. After spending weeks wading through thousands of lines of Java, C, and C++, a bogus call to a method I didn’t care about was the fix.</p>
</article>
<footer>
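Since Java 9, the bogus-call trick described in this article has a standard-library replacement: <code>java.lang.ref.Reference.reachabilityFence(this)</code> keeps the receiver strongly reachable up to that statement, with no risk of the JIT optimizing the call away. A minimal sketch, using a hypothetical class rather than the article’s actual code:

```java
import java.lang.ref.Reference;

// Hypothetical holder of a native-backed resource; in a class like
// WPIImage, whose finalizer frees native memory, the fence would stop
// the finalizer from running before the method finishes.
class FrameHolder {
    private final int[] nativeBacking = {1, 2, 3}; // stand-in for C memory

    public int firstPixel() {
        try {
            return nativeBacking[0];
        } finally {
            // Keeps `this` strongly reachable until this point (Java 9+),
            // so the GC cannot collect (and finalize) it mid-call.
            Reference.reachabilityFence(this);
        }
    }
}
```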
diff --git a/public/java-segfault.md b/public/java-segfault.md
index 7a2d4c3..fbffb52 100644
--- a/public/java-segfault.md
+++ b/public/java-segfault.md
@@ -49,10 +49,10 @@ the pointers that were causing the issue, because it was hard with
native debuggers to see through all of the JVM stuff to the OpenCV
code, and the OpenCV stuff is opaque to Java debuggers.
-Eventually the issue lead me back into the Java code--there was a
+Eventually the issue led me back into the Java code---there was a
native pointer being stored in a Java variable; Java code called the
native routine to `free()` the structure, but then tried to feed it to
-another routine later. This lead to difficulty again--tracking
+another routine later. This led to difficulty again---tracking
objects with Java debuggers was hard because they don't expect the
program to suddenly segfault; it's Java code, Java doesn't segfault,
it throws exceptions!
diff --git a/public/lp2015-videos.html b/public/lp2015-videos.html
index ff88e2d..ef776c6 100644
--- a/public/lp2015-videos.html
+++ b/public/lp2015-videos.html
@@ -10,10 +10,10 @@
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » lp2015-videos</header>
<article>
<h1 id="i-took-some-videos-at-libreplanet">I took some videos at LibrePlanet</h1>
-<p>I'm at <a href="https://libreplanet.org/2015/">LibrePlanet</a>, and have been loving the talks. For most of yesterday, there was a series of short &quot;lightning&quot; talks in room 144. I decided to hang out in that room for the later part of the day, because while most of the talks were live streamed and recorded, there were no cameras in room 144; so I couldn't watch them later.</p>
+<p>I’m at <a href="https://libreplanet.org/2015/">LibrePlanet</a>, and have been loving the talks. For most of yesterday, there was a series of short “lightning” talks in room 144. I decided to hang out in that room for the later part of the day, because while most of the talks were live streamed and recorded, there were no cameras in room 144; so I couldn’t watch them later.</p>
<p>Way too late in the day, I remembered that I have the capability to record videos, so I caught the last two talks in 144.</p>
<p>I apologize for the changing orientation.</p>
-<p><a href="https://lukeshu.com/dump/lp-2015-last-2-short-talks.ogg">Here's the video I took</a>.</p>
+<p><a href="https://lukeshu.com/dump/lp-2015-last-2-short-talks.ogg">Here’s the video I took</a>.</p>
</article>
<footer>
diff --git a/public/make-memoize.html b/public/make-memoize.html
index 8505bef..2edb5a0 100644
--- a/public/make-memoize.html
+++ b/public/make-memoize.html
@@ -10,10 +10,10 @@
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » make-memoize</header>
<article>
<h1 id="a-memoization-routine-for-gnu-make-functions">A memoization routine for GNU Make functions</h1>
-<p>I'm a big fan of <a href="https://www.gnu.org/software/make/">GNU Make</a>. I'm pretty knowledgeable about it, and was pretty active on the help-make mailing list for a while. Something that many experienced make-ers know of is John Graham-Cumming's &quot;GNU Make Standard Library&quot;, or <a href="http://gmsl.sourceforge.net/">GMSL</a>.</p>
-<p>I don't like to use it, as I'm capable of defining macros myself as I need them instead of pulling in a 3rd party dependency (and generally like to stay away from the kind of Makefile that would lean heavily on something like GMSL).</p>
+<p>I’m a big fan of <a href="https://www.gnu.org/software/make/">GNU Make</a>. I’m pretty knowledgeable about it, and was pretty active on the help-make mailing list for a while. Something that many experienced make-ers know of is John Graham-Cumming’s “GNU Make Standard Library”, or <a href="http://gmsl.sourceforge.net/">GMSL</a>.</p>
+<p>I don’t like to use it, as I’m capable of defining macros myself as I need them instead of pulling in a 3rd party dependency (and generally like to stay away from the kind of Makefile that would lean heavily on something like GMSL).</p>
<p>However, one really neat thing that GMSL offers is a way to memoize expensive functions (such as those that shell out). I was considering pulling in GMSL for one of my projects, almost just for the <code>memoize</code> function.</p>
-<p>However, John's <code>memoize</code> has a couple short-comings that made it unsuitable for my needs.</p>
+<p>However, John’s <code>memoize</code> has a couple short-comings that made it unsuitable for my needs.</p>
<ul>
<li>Only allows functions that take one argument.</li>
<li>Considers empty values to be unset; for my needs, an empty string is a valid value that should be cached.</li>
@@ -52,7 +52,7 @@ _main = $(_$0_main)
_hash = __memoized_$(_$0_hash)
memoized = $(if $($(_hash)),,$(eval $(_hash) := _ $(_main)))$(call rest,$($(_hash)))</pre>
<p></code></p>
-<p>Now, I'm pretty sure that should work, but I have only actually tested the first version.</p>
+<p>Now, I’m pretty sure that should work, but I have only actually tested the first version.</p>
<h2 id="tldr">TL;DR</h2>
<p>Avoid doing things in Make that would make you lean on complex solutions like an external memoize function.</p>
<p>However, if you do end up needing a more flexible memoize routine, I wrote one that you can use.</p>
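The post's actual routine is written in GNU Make and isn't reproduced in full in this hunk; as a rough illustration of the same idea (a sketch, not the author's code), here is the equivalent caching pattern in bash: results live in an associative array, and a `${var+set}` expansion distinguishes "unset" from "empty", so an empty string still counts as a cached value, which is exactly the GMSL shortcoming the post calls out.

```shell
#!/usr/bin/env bash
# Sketch of the memoize idea in bash (NOT the post's Make code): cache each
# result in an associative array, and use the ${var+set} expansion so that an
# empty string is still treated as a valid, cached value.
declare -A __memo
__misses=0

expensive() {
    # Stand-in for a slow function that shells out.
    printf 'result-for-%s' "$1"
}

memoized() {
    local key=$1
    if [ -z "${__memo[$key]+set}" ]; then   # unset, not merely empty
        __misses=$((__misses + 1))          # count real computations
        __memo[$key]=$(expensive "$key")
    fi
    printf '%s\n' "${__memo[$key]}"
}

memoized foo    # computes
memoized foo    # served from the cache
memoized bar    # computes
echo "misses=$__misses"
```

Run directly, this prints the `foo` result twice while computing it only once, so the final miss count is 2 (one each for `foo` and `bar`).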
diff --git a/public/nginx-mediawiki.html b/public/nginx-mediawiki.html
index e3009a9..f18ff22 100644
--- a/public/nginx-mediawiki.html
+++ b/public/nginx-mediawiki.html
@@ -10,9 +10,9 @@
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » nginx-mediawiki</header>
<article>
<h1 id="an-nginx-configuration-for-mediawiki">An Nginx configuration for MediaWiki</h1>
-<p>There are <a href="http://wiki.nginx.org/MediaWiki">several</a> <a href="https://wiki.archlinux.org/index.php/MediaWiki#Nginx">example</a> <a href="https://www.mediawiki.org/wiki/Manual:Short_URL/wiki/Page_title_--_nginx_rewrite--root_access">Nginx</a> <a href="https://www.mediawiki.org/wiki/Manual:Short_URL/Page_title_-_nginx,_Root_Access,_PHP_as_a_CGI_module">configurations</a> <a href="http://wiki.nginx.org/RHEL_5.4_%2B_Nginx_%2B_Mediawiki">for</a> <a href="http://stackoverflow.com/questions/11080666/mediawiki-on-nginx">MediaWiki</a> floating around the web. Many of them don't block the user from accessing things like <code>/serialized/</code>. Many of them also <a href="https://labs.parabola.nu/issues/725">don't correctly handle</a> a wiki page named <code>FAQ</code>, since that is a name of a file in the MediaWiki root! In fact, the configuration used on the official Nginx Wiki has both of those issues!</p>
+<p>There are <a href="http://wiki.nginx.org/MediaWiki">several</a> <a href="https://wiki.archlinux.org/index.php/MediaWiki#Nginx">example</a> <a href="https://www.mediawiki.org/wiki/Manual:Short_URL/wiki/Page_title_--_nginx_rewrite--root_access">Nginx</a> <a href="https://www.mediawiki.org/wiki/Manual:Short_URL/Page_title_-_nginx,_Root_Access,_PHP_as_a_CGI_module">configurations</a> <a href="http://wiki.nginx.org/RHEL_5.4_%2B_Nginx_%2B_Mediawiki">for</a> <a href="http://stackoverflow.com/questions/11080666/mediawiki-on-nginx">MediaWiki</a> floating around the web. Many of them don’t block the user from accessing things like <code>/serialized/</code>. Many of them also <a href="https://labs.parabola.nu/issues/725">don’t correctly handle</a> a wiki page named <code>FAQ</code>, since that is a name of a file in the MediaWiki root! In fact, the configuration used on the official Nginx Wiki has both of those issues!</p>
<p>This is because most of the configurations floating around basically try to pass all requests through, and blacklist certain requests, either denying them, or passing them through to <code>index.php</code>.</p>
-<p>It's my view that blacklisting is inferior to whitelisting in situations like this. So, I developed the following configuration that instead works by whitelisting certain paths.</p>
+<p>It’s my view that blacklisting is inferior to whitelisting in situations like this. So, I developed the following configuration that instead works by whitelisting certain paths.</p>
<pre><code>root /path/to/your/mediawiki; # obviously, change this line
index index.php;
@@ -46,7 +46,7 @@ location @php {
fastcgi_pass unix:/run/php-fpm/wiki.sock;
}</code></pre>
<p>We are now using this configuration on <a href="https://wiki.parabola.nu/">ParabolaWiki</a>, but with an alias for <code>location = /favicon.ico</code> to the correct file in the skin, and with FastCGI caching for PHP.</p>
-<p>The only thing I don't like about this is the <code>try_files /var/emtpy</code> bits--surely there is a better way to have it go to one of the <code>@</code> location blocks, but I couldn't figure it out.</p>
+<p>The only thing I don’t like about this is the <code>try_files /var/emtpy</code> bits—surely there is a better way to have it go to one of the <code>@</code> location blocks, but I couldn’t figure it out.</p>
</article>
<footer>
diff --git a/public/nginx-mediawiki.md b/public/nginx-mediawiki.md
index 92d2d39..c450a3c 100644
--- a/public/nginx-mediawiki.md
+++ b/public/nginx-mediawiki.md
@@ -67,5 +67,5 @@ We are now using this configuration on
FastCGI caching for PHP.
The only thing I don't like about this is the `try_files /var/emtpy`
-bits--surely there is a better way to have it go to one of the `@`
+bits---surely there is a better way to have it go to one of the `@`
location blocks, but I couldn't figure it out.
diff --git a/public/pacman-overview.html b/public/pacman-overview.html
index b9385d8..2e9cae7 100644
--- a/public/pacman-overview.html
+++ b/public/pacman-overview.html
@@ -11,22 +11,22 @@
<article>
<h1 id="a-quick-overview-of-usage-of-the-pacman-package-manager">A quick overview of usage of the Pacman package manager</h1>
<p>This was originally published on <a href="https://news.ycombinator.com/item?id=5101416">Hacker News</a> on 2013-01-23.</p>
-<p>Note: I've over-done quotation marks to make it clear when precise wording matters.</p>
+<p>Note: I’ve over-done quotation marks to make it clear when precise wording matters.</p>
<p><code>pacman</code> is a little awkward, but I prefer it to apt/dpkg, which have sub-commands, each with their own flags, some of which are undocumented. pacman, on the other hand, has ALL options documented in one fairly short man page.</p>
-<p>The trick to understanding pacman is to understand how it maintains databases of packages, and what it means to &quot;sync&quot;.</p>
-<p>There are several &quot;databases&quot; that pacman deals with:</p>
+<p>The trick to understanding pacman is to understand how it maintains databases of packages, and what it means to “sync”.</p>
+<p>There are several “databases” that pacman deals with:</p>
<ul>
-<li>&quot;the database&quot;, (<code>/var/lib/pacman/local/</code>)<br> The database of currently installed packages</li>
-<li>&quot;package databases&quot;, (<code>/var/lib/pacman/sync/${repo}.db</code>)<br> There is one of these for each repository. It is a file that is fetched over plain http(s) from the server; it is not modified locally, only updated.</li>
+<li>“the database”, (<code>/var/lib/pacman/local/</code>)<br> The database of currently installed packages</li>
+<li>“package databases”, (<code>/var/lib/pacman/sync/${repo}.db</code>)<br> There is one of these for each repository. It is a file that is fetched over plain http(s) from the server; it is not modified locally, only updated.</li>
</ul>
-<p>The &quot;operation&quot; of pacman is set with a capital flag, one of &quot;DQRSTU&quot; (plus <code>-V</code> and <code>-h</code> for version and help). Of these, &quot;DTU&quot; are &quot;low-level&quot; (analogous to dpkg) and &quot;QRS&quot; are &quot;high-level&quot; (analogous to apt).</p>
-<p>To give a brief explanation of cover the &quot;high-level&quot; operations, and which databases they deal with:</p>
+<p>The “operation” of pacman is set with a capital flag, one of “DQRSTU” (plus <code>-V</code> and <code>-h</code> for version and help). Of these, “DTU” are “low-level” (analogous to dpkg) and “QRS” are “high-level” (analogous to apt).</p>
+<p>To give a brief explanation of the “high-level” operations, and which databases they deal with:</p>
<ul>
-<li>&quot;Q&quot; Queries &quot;the database&quot; of locally installed packages.</li>
-<li>&quot;S&quot; deals with &quot;package databases&quot;, and Syncing &quot;the database&quot; with them; meaning it installs/updates packages that are in package databases, but not installed on the local system.</li>
-<li>&quot;R&quot; Removes packages &quot;the database&quot;; removing them from the local system.</li>
+<li>“Q” Queries “the database” of locally installed packages.</li>
+<li>“S” deals with “package databases”, and Syncing “the database” with them; meaning it installs/updates packages that are in package databases, but not installed on the local system.</li>
+<li>“R” Removes packages from “the database”; removing them from the local system.</li>
</ul>
-<p>The biggest &quot;gotcha&quot; is that &quot;S&quot; deals with all operations with &quot;package databases&quot;, not just syncing &quot;the database&quot; with them.</p>
+<p>The biggest “gotcha” is that “S” deals with all operations with “package databases”, not just syncing “the database” with them.</p>
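To make the flag-to-database mapping concrete, here are a few standard invocations (illustrative only; the package names are placeholders):

```shell
# "Q": query "the database" of locally installed packages
pacman -Q            # list everything installed
pacman -Qi bash      # details for one installed package

# "S": all operations involving the "package databases"
pacman -Sy           # refresh /var/lib/pacman/sync/*.db from the mirrors
pacman -Su           # sync "the database" with them (upgrade)
pacman -Syu          # the usual combination of the two
pacman -Ss editor    # search the package databases (an "S" op, per the gotcha)

# "R": remove packages from "the database" (and the local system)
pacman -Rns foo      # remove "foo" plus config files and unneeded deps
```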
</article>
<footer>
diff --git a/public/poor-system-documentation.html b/public/poor-system-documentation.html
index 1d2965e..452a4f8 100644
--- a/public/poor-system-documentation.html
+++ b/public/poor-system-documentation.html
@@ -11,10 +11,10 @@
<article>
<h1 id="why-documentation-on-gnulinux-sucks">Why documentation on GNU/Linux sucks</h1>
<p>This is based on a post on <a href="http://www.reddit.com/r/archlinux/comments/zoffo/systemd_we_will_keep_making_it_the_distro_we_like/c66uu57">reddit</a>, published on 2012-09-12.</p>
-<p>The documentation situation on GNU/Linux based operating systems is right now a mess. In the world of documentation, there are basically 3 camps, the &quot;UNIX&quot; camp, the &quot;GNU&quot; camp, and the &quot;GNU/Linux&quot; camp.</p>
-<p>The UNIX camp is the <code>man</code> page camp, they have quality, terse but informative man pages, on <em>everything</em>, including the system's design and all system files. If it was up to the UNIX camp, <code>man grub.cfg</code>, <code>man grub.d</code>, and <code>man grub-mkconfig_lib</code> would exist and actually be helpful. The man page would either include inline examples, or point you to a directory. If I were to print off all of the man pages, it would actually be a useful manual for the system.</p>
+<p>The documentation situation on GNU/Linux based operating systems is right now a mess. In the world of documentation, there are basically 3 camps, the “UNIX” camp, the “GNU” camp, and the “GNU/Linux” camp.</p>
+<p>The UNIX camp is the <code>man</code> page camp, they have quality, terse but informative man pages, on <em>everything</em>, including the system’s design and all system files. If it was up to the UNIX camp, <code>man grub.cfg</code>, <code>man grub.d</code>, and <code>man grub-mkconfig_lib</code> would exist and actually be helpful. The man page would either include inline examples, or point you to a directory. If I were to print off all of the man pages, it would actually be a useful manual for the system.</p>
<p>The GNU camp is the <code>info</code> camp. They basically thought that each piece of software was more complex than a man page could handle. They essentially think that some individual pieces of software warrant a book. So, they developed the <code>info</code> system. The info pages are usually quite high quality, but are very long, and a pain if you just want a quick look. The <code>info</code> system can generate good HTML (and PDF, etc.) documentation. But the standard <code>info</code> is awkward as hell to use for non-Emacs users.</p>
-<p>Then we have the &quot;GNU/Linux&quot; camp, they use GNU software, but want to use <code>man</code> pages. This means that we get low-quality man pages for GNU software, and then we don't have a good baseline for documentation, developers each try to create their own. The documentation that gets written is frequently either low-quality, or non-standard. A lot of man pages are auto-generated from <code>--help</code> output or info pages, meaning they are either not helpful, or overly verbose with low information density. This camp gets the worst of both worlds, and a few problems of its own.</p>
+<p>Then we have the “GNU/Linux” camp, they use GNU software, but want to use <code>man</code> pages. This means that we get low-quality man pages for GNU software, and then we don’t have a good baseline for documentation, developers each try to create their own. The documentation that gets written is frequently either low-quality, or non-standard. A lot of man pages are auto-generated from <code>--help</code> output or info pages, meaning they are either not helpful, or overly verbose with low information density. This camp gets the worst of both worlds, and a few problems of its own.</p>
</article>
<footer>
diff --git a/public/purdue-cs-login.html b/public/purdue-cs-login.html
index d514e23..fd9e402 100644
--- a/public/purdue-cs-login.html
+++ b/public/purdue-cs-login.html
@@ -11,9 +11,9 @@
<article>
<h1 id="customizing-your-login-on-purdue-cs-computers-wip-but-updated">Customizing your login on Purdue CS computers (WIP, but updated)</h1>
<blockquote>
-<p>This article is currently a Work-In-Progress. Other than the one place where I say &quot;I'm not sure&quot;, the GDM section is complete. The network shares section is a mess, but has some good information.</p>
+<p>This article is currently a Work-In-Progress. Other than the one place where I say “I’m not sure”, the GDM section is complete. The network shares section is a mess, but has some good information.</p>
</blockquote>
-<p>Most CS students at Purdue spend a lot of time on the lab boxes, but don't know a lot about them. This document tries to fix that.</p>
+<p>Most CS students at Purdue spend a lot of time on the lab boxes, but don’t know a lot about them. This document tries to fix that.</p>
<p>The lab boxes all run Gentoo.</p>
<h2 id="gdm-the-gnome-display-manager">GDM, the Gnome Display Manager</h2>
<p>The boxes run <code>gdm</code> (Gnome Display Manager) 2.20.11 for the login screen. This is an old version, and has a couple behaviors that are slightly different than new versions, but here are the important bits:</p>
@@ -24,14 +24,14 @@
</ul>
<p>User configuration:</p>
<ul>
-<li><code>~/.dmrc</code> (more recent versions use <code>~/.desktop</code>, but Purdue boxes aren't running more recent versions)</li>
+<li><code>~/.dmrc</code> (more recent versions use <code>~/.desktop</code>, but Purdue boxes aren’t running more recent versions)</li>
</ul>
-<h3 id="purdues-gdm-configuration">Purdue's GDM configuration</h3>
+<h3 id="purdues-gdm-configuration">Purdue’s GDM configuration</h3>
<p>Now, <code>custom.conf</code> sets</p>
<pre><code>BaseXsession=/usr/local/share/xsessions/Xsession
SessionDesktopDir=/usr/local/share/xsessions/</code></pre>
-<p>This is important, because there are <em>multiple</em> locations that look like these files; I take it that they were used at sometime in the past. Don't get tricked into thinking that it looks at <code>/etc/X11/gdm/Xsession</code> (which exists, and is where it would look by default).</p>
-<p>If you look at the GDM login screen, it has a &quot;Sessions&quot; button that opens a prompt where you can select any of several sessions:</p>
+<p>This is important, because there are <em>multiple</em> locations that look like these files; I take it that they were used at sometime in the past. Don’t get tricked into thinking that it looks at <code>/etc/X11/gdm/Xsession</code> (which exists, and is where it would look by default).</p>
+<p>If you look at the GDM login screen, it has a “Sessions” button that opens a prompt where you can select any of several sessions:</p>
<ul>
<li>Last session</li>
<li>1. MATE (<code>mate.desktop</code>; <code>Exec=mate-session</code>)</li>
@@ -44,8 +44,8 @@ SessionDesktopDir=/usr/local/share/xsessions/</code></pre>
<li>Failsafe Terminal (<code>ShowXtermFailsafeSession=true</code>)</li>
</ul>
<p>The main 6 are configured by the <code>.desktop</code> files in <code>SessionDesktopDir=/usr/local/share/xsessions</code>; the last 2 are auto-generated. The reason <code>ShowGnomeFailsafeSession</code> correctly generates a Mate session instead of a Gnome session is because of the patch <code>/p/portage/*/overlay/gnome-base/gdm/files/gdm-2.20.11-mate.patch</code>.</p>
-<p>I'm not sure why Gnome shows up as <code>gnome.desktop</code> instead of <code>GNOME</code> as specified by <code>gnome.desktop:Name</code>. I imagine it might be something related to the aforementioned patch, but I can't find anything in the patch that looks like it would screw that up; at least not without a better understanding of GDM's code.</p>
-<p>Which of the main 6 is used by default (&quot;Last Session&quot;) is configured with <code>~/.dmrc:Session</code>, which contains the basename of the associated <code>.desktop</code> file (that is, without any directory part or file extension).</p>
+<p>I’m not sure why Gnome shows up as <code>gnome.desktop</code> instead of <code>GNOME</code> as specified by <code>gnome.desktop:Name</code>. I imagine it might be something related to the aforementioned patch, but I can’t find anything in the patch that looks like it would screw that up; at least not without a better understanding of GDM’s code.</p>
+<p>Which of the main 6 is used by default (“Last Session”) is configured with <code>~/.dmrc:Session</code>, which contains the basename of the associated <code>.desktop</code> file (that is, without any directory part or file extension).</p>
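Concretely (a hypothetical example, assuming a user who wants MATE as their default), <code>~/.dmrc</code> holds a <code>[Desktop]</code> section whose <code>Session</code> key is the basename of the chosen <code>.desktop</code> file:

```ini
[Desktop]
Session=mate
```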
<p>Every one of the <code>.desktop</code> files sets <code>Type=XSession</code>, which means that instead of running the argument in <code>Exec=</code> directly, it passes it as arguments to the <code>Xsession</code> program (in the location configured by <code>BaseXsession</code>).</p>
<h4 id="xsession">Xsession</h4>
<p>So, now we get to read <code>/usr/local/share/xsessions/Xsession</code>.</p>
@@ -57,7 +57,7 @@ SessionDesktopDir=/usr/local/share/xsessions/</code></pre>
<li><code>xsetroot -default</code></li>
<li>Fiddles with the maximum number of processes.</li>
</ol>
-<p>After that, it handles these 3 &quot;special&quot; arguments that were given to it by various <code>.desktop</code> <code>Exec=</code> lines:</p>
+<p>After that, it handles these 3 “special” arguments that were given to it by various <code>.desktop</code> <code>Exec=</code> lines:</p>
<ul>
<li><code>failsafe</code>: Runs a single xterm window. NB: This is NOT run by either of the failsafe options. It is likely a vestige from a prior configuration.</li>
<li><code>startkde</code>: Displays a message saying KDE is no longer available.</li>
@@ -73,16 +73,16 @@ SessionDesktopDir=/usr/local/share/xsessions/</code></pre>
<ul>
<li><code>custom</code>: Executes <code>~/.xsession</code>.</li>
<li><code>default</code>: Executes <code>~/.Xrc.cs</code>.</li>
-<li><code>mate-session</code>: It has this whole script to start DBus, run the <code>mate-session</code> command, then cleanup when it's done.</li>
+<li><code>mate-session</code>: It has this whole script to start DBus, run the <code>mate-session</code> command, then cleanup when it’s done.</li>
<li><code>*</code> (<code>fvwm2</code>): Runs <code>eval exec &quot;$@&quot;</code>, which results in it executing the <code>fvwm2</code> command.</li>
</ul>
<h2 id="network-shares">Network Shares</h2>
<p>Your data is on various hosts. I believe most undergrads have their data on <code>data.cs.purdue.edu</code> (or just <a href="https://en.wikipedia.org/wiki/Data_%28Star_Trek%29"><code>data</code></a>). Others have theirs on <a href="http://swfanon.wikia.com/wiki/Antor"><code>antor</code></a> or <a href="https://en.wikipedia.org/wiki/Tux"><code>tux</code></a> (that I know of).</p>
-<p>Most of the boxes with tons of storage have many network cards; each with a different IP; a single host's IPs are mostly the same, but with varying 3rd octets. For example, <code>data</code> is 128.10.X.13. If you need a particular value of X, but don't want to remember the other octets; they are individually addressed with <code>BASENAME-NUMBER.cs.purdue.edu</code>. For example, <code>data-25.cs.purdu.edu</code> is 128.10.25.13.</p>
+<p>Most of the boxes with tons of storage have many network cards; each with a different IP; a single host’s IPs are mostly the same, but with varying 3rd octets. For example, <code>data</code> is 128.10.X.13. If you need a particular value of X, but don’t want to remember the other octets; they are individually addressed with <code>BASENAME-NUMBER.cs.purdue.edu</code>. For example, <code>data-25.cs.purdue.edu</code> is 128.10.25.13.</p>
<p>They use <a href="https://www.kernel.org/pub/linux/daemons/autofs/">AutoFS</a> quite extensively. The maps are generated dynamically by <code>/etc/autofs/*.map</code>, which are all symlinks to <code>/usr/libexec/amd2autofs</code>. As far as I can tell, <code>amd2autofs</code> is custom to Purdue. Its source lives in <code>/p/portage/*/overlay/net-fs/autofs/files/amd2autofs.c</code>. The name appears to be a misnomer; it seems to claim to dynamically translate from the configuration of <a href="http://www.am-utils.org/">Auto Mounter Daemon (AMD)</a> to AutoFS, but it actually talks to NIS. It does so using the <code>yp</code> interface, which is in Glibc for compatibility, but is undocumented. For documentation for that interface, look at one of the BSDs, or Mac OS X. From the comments in the file, it appears that it once did look at the AMD configuration, but has since been changed.</p>
<p>There are 3 mountpoints using AutoFS: <code>/homes</code>, <code>/p</code>, and <code>/u</code>. <code>/homes</code> creates symlinks on-demand from <code>/homes/USERNAME</code> to <code>/u/BUCKET/USERNAME</code>. <code>/u</code> mounts NFS shares to <code>/u/SERVERNAME</code> on-demand, and creates symlinks from <code>/u/BUCKET</code> to <code>/u/SERVERNAME/BUCKET</code> on-demand. <code>/p</code> mounts on-demand various NFS shares that are organized by topic; the Xinu/MIPS tools are in <code>/p/xinu</code>, the Portage tree is in <code>/p/portage</code>.</p>
-<p>I'm not sure how <code>scratch</code> works; it seems to be heterogenous between different servers and families of lab boxes. Sometimes it's in <code>/u</code>, sometimes it isn't.</p>
-<p>This 3rd-party documentation was very helpful to me: <a href="http://www.linux-consulting.com/Amd_AutoFS/" class="uri">http://www.linux-consulting.com/Amd_AutoFS/</a> It's where Gentoo points for the AutoFS homepage, as it doesn't have a real homepage. Arch just points to FreshMeat. Debian points to kernel.org.</p>
+<p>I’m not sure how <code>scratch</code> works; it seems to be heterogeneous between different servers and families of lab boxes. Sometimes it’s in <code>/u</code>, sometimes it isn’t.</p>
+<p>This 3rd-party documentation was very helpful to me: <a href="http://www.linux-consulting.com/Amd_AutoFS/" class="uri">http://www.linux-consulting.com/Amd_AutoFS/</a> It’s where Gentoo points for the AutoFS homepage, as it doesn’t have a real homepage. Arch just points to FreshMeat. Debian points to kernel.org.</p>
<h3 id="lore">Lore</h3>
<p><a href="https://en.wikipedia.org/wiki/List_of_Star_Trek:_The_Next_Generation_characters#Lore"><code>lore</code></a></p>
<p>Lore is a SunOS 5.10 box running on Sun-Fire V445 (sun4u) hardware. SunOS is NOT GNU/Linux, and sun4u is NOT x86.</p>
diff --git a/public/rails-improvements.html b/public/rails-improvements.html
index fa0a7c0..2b7f527 100644
--- a/public/rails-improvements.html
+++ b/public/rails-improvements.html
@@ -10,9 +10,9 @@
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » rails-improvements</header>
<article>
<h1 id="miscellaneous-ways-to-improve-your-rails-experience">Miscellaneous ways to improve your Rails experience</h1>
-<p>Recently, I've been working on <a href="https://github.com/LukeShu/leaguer">a Rails web application</a>, that's really the baby of a friend of mine. Anyway, through its development, I've come up with a couple things that should make your interactions with Rails more pleasant.</p>
+<p>Recently, I’ve been working on <a href="https://github.com/LukeShu/leaguer">a Rails web application</a>, that’s really the baby of a friend of mine. Anyway, through its development, I’ve come up with a couple things that should make your interactions with Rails more pleasant.</p>
<h2 id="auto-reload-classes-from-other-directories-than-app">Auto-(re)load classes from other directories than <code>app/</code></h2>
-<p>The development server automatically loads and reloads files from the <code>app/</code> directory, which is extremely nice. However, most web applications are going to involve modules that aren't in that directory; and editing those files requires re-starting the server for the changes to take effect.</p>
+<p>The development server automatically loads and reloads files from the <code>app/</code> directory, which is extremely nice. However, most web applications are going to involve modules that aren’t in that directory; and editing those files requires re-starting the server for the changes to take effect.</p>
<p>Adding the following lines to your <a href="https://github.com/LukeShu/leaguer/blob/c846cd71411ec3373a5229cacafe0df6b3673543/config/application.rb#L15"><code>config/application.rb</code></a> will allow it to automatically load and reload files from the <code>lib/</code> directory. You can of course change this to whichever directory/ies you like.</p>
<pre><code>module YourApp
class Application &lt; Rails::Application
@@ -56,7 +56,7 @@ module ActionView
end
end
end</code></pre>
-<p>I'll probably update this page as I tweak other things I don't like.</p>
+<p>I’ll probably update this page as I tweak other things I don’t like.</p>
</article>
<footer>
diff --git a/public/ryf-routers.html b/public/ryf-routers.html
index e0e89f8..a8f2a58 100644
--- a/public/ryf-routers.html
+++ b/public/ryf-routers.html
@@ -9,15 +9,15 @@
<body>
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » ryf-routers</header>
<article>
-<h1 id="im-excited-about-the-new-ryf-certified-routers-from-thinkpenguin">I'm excited about the new RYF-certified routers from ThinkPenguin</h1>
+<h1 id="im-excited-about-the-new-ryf-certified-routers-from-thinkpenguin">I’m excited about the new RYF-certified routers from ThinkPenguin</h1>
<p>I just learned that on Wednesday, the FSF <a href="https://www.fsf.org/resources/hw/endorsement/thinkpenguin">awarded</a> the <abbr title="Respects Your Freedom">RYF</abbr> certification to the <a href="https://www.thinkpenguin.com/TPE-NWIFIROUTER">Think Penguin TPE-NWIFIROUTER</a> wireless router.</p>
-<p>I didn't find this information directly published up front, but simply: It is a re-branded <strong>TP-Link TL-841ND</strong> modded to be running <a href="http://librecmc.com/">libreCMC</a>.</p>
-<p>I've been a fan of the TL-841/740 line of routers for several years now. They are dirt cheap (if you go to Newegg and sort by &quot;cheapest,&quot; it's frequently the TL-740N), are extremely reliable, and run OpenWRT like a champ. They are my go-to routers.</p>
+<p>I didn’t find this information directly published up front, but simply: It is a re-branded <strong>TP-Link TL-841ND</strong> modded to be running <a href="http://librecmc.com/">libreCMC</a>.</p>
+<p>I’ve been a fan of the TL-841/740 line of routers for several years now. They are dirt cheap (if you go to Newegg and sort by “cheapest,” it’s frequently the TL-740N), are extremely reliable, and run OpenWRT like a champ. They are my go-to routers.</p>
<p>(And they sure beat the snot out of the Arris TG862 that it seems like everyone has in their homes now. I hate that thing, it even has buggy packet scheduling.)</p>
<p>So this announcement is <del>doubly</del>triply exciting for me:</p>
<ul>
-<li>I have a solid recommendation for a router that doesn't require me or them to manually install an after-market firmware (buy it from ThinkPenguin).</li>
-<li>If it's for me, or someone technical, I can cut costs by getting a stock TP-Link from Newegg, installing libreCMC ourselves.</li>
+<li>I have a solid recommendation for a router that doesn’t require me or them to manually install an after-market firmware (buy it from ThinkPenguin).</li>
+<li>If it’s for me, or someone technical, I can cut costs by getting a stock TP-Link from Newegg, installing libreCMC ourselves.</li>
<li>I can install a 100% libre distribution on my existing routers (until recently, they were not supported by any of the libre distributions, not for technical reasons, but lack of manpower).</li>
</ul>
<p>I hope to get libreCMC installed on my boxes this weekend!</p>
diff --git a/public/term-colors.html b/public/term-colors.html
index 01978eb..d4219d2 100644
--- a/public/term-colors.html
+++ b/public/term-colors.html
@@ -15,11 +15,11 @@
<p>So all terminals support the same 256 colors? What about 88 color mode: is that a subset?</p>
</blockquote>
<p>TL;DR: yes</p>
-<p>Terminal compatibility is crazy complex, because nobody actually reads the spec, they just write something that is compatible for their tests. Then things have to be compatible with that terminal's quirks.</p>
-<p>But, here's how 8-color, 16-color, and 256 color work. IIRC, 88 color is a subset of the 256 color scheme, but I'm not sure.</p>
-<p><strong>8 colors: (actually 9)</strong> First we had 8 colors (9 with &quot;default&quot;, which doesn't have to be one of the 8). These are always roughly the same color: black, red, green, yellow/orange, blue, purple, cyan, and white, which are colors 0-7 respectively. Color 9 is default.</p>
-<p><strong>16 colors: (actually 18)</strong> Later, someone wanted to add more colors, so they added a &quot;bright&quot; attribute. So when bright is on, you get &quot;bright red&quot; instead of &quot;red&quot;. Hence 8*2=16 (plus two more for &quot;default&quot; and &quot;bright default&quot;).</p>
-<p><strong>256 colors: (actually 274)</strong> You may have noticed, colors 0-7 and 9 are used, but 8 isn't. So, someone decided that color 8 should put the terminal into 256 color mode. In this mode, it reads another byte, which is an 8-bit RGB value (2 bits for red, 2 for green, 2 for blue). The bright property has no effect on these colors. However, a terminal can display 256-color-mode colors and 16-color-mode colors at the same time, so you actually get 256+18 colors.</p>
+<p>Terminal compatibility is crazy complex, because nobody actually reads the spec, they just write something that is compatible for their tests. Then things have to be compatible with that terminal’s quirks.</p>
+<p>But, here’s how 8-color, 16-color, and 256 color work. IIRC, 88 color is a subset of the 256 color scheme, but I’m not sure.</p>
+<p><strong>8 colors: (actually 9)</strong> First we had 8 colors (9 with “default”, which doesn’t have to be one of the 8). These are always roughly the same color: black, red, green, yellow/orange, blue, purple, cyan, and white, which are colors 0–7 respectively. Color 9 is default.</p>
+<p><strong>16 colors: (actually 18)</strong> Later, someone wanted to add more colors, so they added a “bright” attribute. So when bright is on, you get “bright red” instead of “red”. Hence 8*2=16 (plus two more for “default” and “bright default”).</p>
+<p><strong>256 colors: (actually 274)</strong> You may have noticed, colors 0–7 and 9 are used, but 8 isn’t. So, someone decided that color 8 should put the terminal into 256 color mode. In this mode, it reads another byte, which indexes a 256-entry palette (the 16 basic colors, a 6×6×6 RGB cube, and 24 shades of gray). The bright property has no effect on these colors. However, a terminal can display 256-color-mode colors and 16-color-mode colors at the same time, so you actually get 256+18 colors.</p>
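For reference, here is a sketch (mine, not from the original post) of the standard SGR escape sequences behind the three modes described above, as you would emit them from a shell:

```shell
# SGR escape sequences for the three color modes described above.
esc=$(printf '\033')                                # the ESC byte
printf '%s[31mred%s[0m\n'           "$esc" "$esc"   # 8-color mode: color 1
printf '%s[1;31mbright red%s[0m\n'  "$esc" "$esc"   # 16-color: bright attribute
printf '%s[38;5;208morange%s[0m\n'  "$esc" "$esc"   # 256-color mode, entry 208
```

The `[0m` suffix resets attributes so later output is unaffected.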
</article>
<footer>
diff --git a/public/term-colors.md b/public/term-colors.md
index 47c583a..1682484 100644
--- a/public/term-colors.md
+++ b/public/term-colors.md
@@ -23,7 +23,7 @@ is a subset of the 256 color scheme, but I'm not sure.
**8 colors: (actually 9)**
First we had 8 colors (9 with "default", which doesn't have to be one
of the 8). These are always roughly the same color: black, red, green,
-yellow/orange, blue, purple, cyan, and white, which are colors 0-7
+yellow/orange, blue, purple, cyan, and white, which are colors 0--7
respectively. Color 9 is default.
**16 colors: (actually 18)**
@@ -33,7 +33,7 @@ attribute. So when bright is on, you get "bright red" instead of
default").
**256 colors: (actually 274)**
-You may have noticed, colors 0-7 and 9 are used, but 8 isn't. So,
+You may have noticed, colors 0--7 and 9 are used, but 8 isn't. So,
someone decided that color 8 should put the terminal into 256 color
mode. In this mode, it reads another byte, which is an 8-bit RGB value
(2 bits for red, 2 for green, 2 for blue). The bright property has no
diff --git a/public/what-im-working-on-fall-2014.html b/public/what-im-working-on-fall-2014.html
index 7594b71..cad5c65 100644
--- a/public/what-im-working-on-fall-2014.html
+++ b/public/what-im-working-on-fall-2014.html
@@ -9,37 +9,37 @@
<body>
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » what-im-working-on-fall-2014</header>
<article>
-<h1 id="what-im-working-on-fall-2014">What I'm working on (Fall 2014)</h1>
-<p>I realized today that I haven't updated my log in a while, and I don't have any &quot;finished&quot; stuff to show off right now, but I should just talk about all the cool stuff I'm working on right now.</p>
+<h1 id="what-im-working-on-fall-2014">What I’m working on (Fall 2014)</h1>
+<p>I realized today that I haven’t updated my log in a while, and I don’t have any “finished” stuff to show off right now, but I should just talk about all the cool stuff I’m working on right now.</p>
<h2 id="static-parsing-of-subshells">Static parsing of subshells</h2>
<p>Last year I wrote a shell (for my Systems Programming class); however, I went above-and-beyond and added some really novel features. In my opinion, the most significant is that it parses arbitrarily deep subshells in one pass, instead of deferring them until execution. No shell that I know of does this.</p>
-<p>At first this sounds like a really difficult, but minor feature. Until you think about scripting, and maintenance of those scripts. Being able to do a full syntax check of a script is <em>crucial</em> for long-term maintenance, yet it's something that is missing from every major shell. I'd love to get this code merged into bash. It would be incredibly useful for <a href="/git/mirror/parabola/packages/libretools.git">some software I maintain</a>.</p>
-<p>Anyway, I'm trying to publish this code, but because of a recent kerfuffle with a student publishing all of his projects on the web (and other students trying to pass it off as their own), I'm being cautious with this and making sure Purdue is alright with what I'm putting online.</p>
+<p>At first this sounds like a really difficult, but minor feature. Until you think about scripting, and maintenance of those scripts. Being able to do a full syntax check of a script is <em>crucial</em> for long-term maintenance, yet it’s something that is missing from every major shell. I’d love to get this code merged into bash. It would be incredibly useful for <a href="/git/mirror/parabola/packages/libretools.git">some software I maintain</a>.</p>
+<p>Anyway, I’m trying to publish this code, but because of a recent kerfuffle with a student publishing all of his projects on the web (and other students trying to pass it off as their own), I’m being cautious with this and making sure Purdue is alright with what I’m putting online.</p>
<h2 id="stateless-user-configuration-for-pamnss"><a href="https://lukeshu.com/git/mirror/parabola/hackers.git/log/?h=lukeshu/restructure">Stateless user configuration for PAM/NSS</a></h2>
-<p>Parabola GNU/Linux-libre users know that over this summer, we had a <em>mess</em> with server outages. One of the servers is still out (due to things out of our control), and we don't have some of the data on it (because volunteer developers are terrible about back-ups, apparently).</p>
+<p>Parabola GNU/Linux-libre users know that over this summer, we had a <em>mess</em> with server outages. One of the servers is still out (due to things out of our control), and we don’t have some of the data on it (because volunteer developers are terrible about back-ups, apparently).</p>
<p>This has caused us to look at how we manage our servers, back-ups, and several other things.</p>
-<p>One thing that I've taken on as my pet project is making sure that if a server goes down, or we need to migrate (for example, Jon is telling us that he wants us to hurry up and switch to the new 64 bit hardware so he can turn off the 32 bit box), we can spin up a new server from scratch pretty easily. Part of that is making configurations stateless, and dynamic based on external data; having data be located in one place instead of duplicated across 12 config files and 3 databases... on the same box.</p>
-<p>Right now, that's looking like some custom software interfacing with OpenLDAP and OpenSSH via sockets (OpenLDAP being a middle-man between us and PAM (Linux) and NSS (libc)). However, the OpenLDAP documentation is... inconsistent and frustrating. I might end up hacking up the LDAP modules for NSS and PAM to talk to our system directly, and cut OpenLDAP out of the picture. We'll see!</p>
+<p>One thing that I’ve taken on as my pet project is making sure that if a server goes down, or we need to migrate (for example, Jon is telling us that he wants us to hurry up and switch to the new 64 bit hardware so he can turn off the 32 bit box), we can spin up a new server from scratch pretty easily. Part of that is making configurations stateless, and dynamic based on external data; having data be located in one place instead of duplicated across 12 config files and 3 databases… on the same box.</p>
+<p>Right now, that’s looking like some custom software interfacing with OpenLDAP and OpenSSH via sockets (OpenLDAP being a middle-man between us and PAM (Linux) and NSS (libc)). However, the OpenLDAP documentation is… inconsistent and frustrating. I might end up hacking up the LDAP modules for NSS and PAM to talk to our system directly, and cut OpenLDAP out of the picture. We’ll see!</p>
<p>PS: Pablo says that tomorrow we should be getting out-of-band access to the drive of the server that is down, so that we can finally restore those services on a different server.</p>
<h2 id="project-leaguer"><a href="https://lukeshu.com/git/mirror/leaguer.git/">Project Leaguer</a></h2>
-<p>Last year, some friends and I began writing some &quot;eSports tournament management software&quot;, primarily targeting League of Legends (though it has a module system that will allow it to support tons of different data sources). We mostly got it done last semester, but it had some rough spots and sharp edges we need to work out. Because we were all out of communication for the summer, we didn't work on it very much (but we did a little!). It's weird that I care about this, because I'm not a gamer. Huh, I guess coding with friends is just fun.</p>
-<p>Anyway, this year, <a href="https://github.com/AndrewMurrell">Andrew</a>, <a href="https://github.com/DavisLWebb">Davis</a>, and I are planning to get it to a polished state by the end of the semester. We could probably do it faster, but we'd all also like to focus on classes and other projects a little more.</p>
+<p>Last year, some friends and I began writing some “eSports tournament management software”, primarily targeting League of Legends (though it has a module system that will allow it to support tons of different data sources). We mostly got it done last semester, but it had some rough spots and sharp edges we need to work out. Because we were all out of communication for the summer, we didn’t work on it very much (but we did a little!). It’s weird that I care about this, because I’m not a gamer. Huh, I guess coding with friends is just fun.</p>
+<p>Anyway, this year, <a href="https://github.com/AndrewMurrell">Andrew</a>, <a href="https://github.com/DavisLWebb">Davis</a>, and I are planning to get it to a polished state by the end of the semester. We could probably do it faster, but we’d all also like to focus on classes and other projects a little more.</p>
<h2 id="c1">C+=1</h2>
-<p>People tend to lump C and C++ together, which upsets me, because I love C, but have a dislike for C++. That's not to say that C++ is entirely bad; it has some good features. My &quot;favorite&quot; code is actually code that is basically C, but takes advantage of a couple C++ features, while still being idiomatic C, not C++.</p>
-<p>Anyway, with the perspective of history (what worked and what didn't), and a slightly opinionated view on language design (I'm pretty much a Rob Pike fan-boy), I thought I'd try to tackle &quot;object-oriented C&quot; with roughly the same design criteria as Stroustrup had when designing C++. I'm calling mine C+=1, for obvious reasons.</p>
-<p>I haven't published anything yet, because calling it &quot;working&quot; would be stretching the truth. But I am using it for my assignments in CS 334 (Intro to Graphics), so it should move along fairly quickly, as my grade depends on it.</p>
-<p>I'm not taking it too seriously; I don't expect it to be much more than a toy language, but it is an excuse to dive into the GCC internals.</p>
-<h2 id="projects-that-ive-put-on-the-back-burner">Projects that I've put on the back-burner</h2>
-<p>I've got several other projects that I'm putting on hold for a while.</p>
+<p>People tend to lump C and C++ together, which upsets me, because I love C, but have a dislike for C++. That’s not to say that C++ is entirely bad; it has some good features. My “favorite” code is actually code that is basically C, but takes advantage of a couple C++ features, while still being idiomatic C, not C++.</p>
+<p>Anyway, with the perspective of history (what worked and what didn’t), and a slightly opinionated view on language design (I’m pretty much a Rob Pike fan-boy), I thought I’d try to tackle “object-oriented C” with roughly the same design criteria as Stroustrup had when designing C++. I’m calling mine C+=1, for obvious reasons.</p>
+<p>I haven’t published anything yet, because calling it “working” would be stretching the truth. But I am using it for my assignments in CS 334 (Intro to Graphics), so it should move along fairly quickly, as my grade depends on it.</p>
+<p>I’m not taking it too seriously; I don’t expect it to be much more than a toy language, but it is an excuse to dive into the GCC internals.</p>
+<h2 id="projects-that-ive-put-on-the-back-burner">Projects that I’ve put on the back-burner</h2>
+<p>I’ve got several other projects that I’m putting on hold for a while.</p>
<ul>
-<li><code>maven-dist</code> (was hosted with Parabola, apparently I haven't pushed it anywhere except the server that is down): A tool to build Apache Maven from source. That sounds easy, it's open source, right? Well, except that Maven is the build system from hell. It doesn't support cyclic dependencies, yet uses them internally to build itself. It <em>loves</em> to just get binaries from Maven Central to &quot;optimize&quot; the build process. It depends on code that depends on compiler bugs that no longer exist (which I guess means that <em>no one</em> has tried to build it from source after it was originally published). I've been working on-and-off on this for more than a year. My favorite part of it was writing a <a href="/dump/jflex2jlex.sed.txt">sed script</a> that translates a JFlex grammar specification into a JLex grammar, which is used to bootstrap JFlex; its both gross and delightful at the same time.</li>
-<li>Integration between <code>dbscripts</code> and <code>abslibre</code>. If you search IRC logs, mailing lists, and ParabolaWiki, you can find numerous rants by me against <a href="/git/mirror/parabola/dbscripts.git/tree/db-sync"><code>dbscripts:db-sync</code></a>. I just hate the data-flow, it is almost designed to make things get out of sync, and broken. I mean, does <a href="/dump/parabola-data-flow.svg">this</a> look like a simple diagram? For contrast, <a href="/dump/parabola-data-flow-xbs.svg">here's</a> a rough (slightly incomplete) diagram of what I want to replace it with.</li>
-<li>Git backend for MediaWiki (or, pulling out the rendering module of MediaWiki). I've made decent progress on that front, but there is <em>crazy</em> de-normalization going on in the MediaWiki schema that makes this very difficult. I'm sure some of it is for historical reasons, and some of it for performance, but either way it is a mess for someone trying to neatly gut that part of the codebase.</li>
+<li><code>maven-dist</code> (was hosted with Parabola, apparently I haven’t pushed it anywhere except the server that is down): A tool to build Apache Maven from source. That sounds easy, it’s open source, right? Well, except that Maven is the build system from hell. It doesn’t support cyclic dependencies, yet uses them internally to build itself. It <em>loves</em> to just get binaries from Maven Central to “optimize” the build process. It depends on code that depends on compiler bugs that no longer exist (which I guess means that <em>no one</em> has tried to build it from source after it was originally published). I’ve been working on-and-off on this for more than a year. My favorite part of it was writing a <a href="/dump/jflex2jlex.sed.txt">sed script</a> that translates a JFlex grammar specification into a JLex grammar, which is used to bootstrap JFlex; it’s both gross and delightful at the same time.</li>
+<li>Integration between <code>dbscripts</code> and <code>abslibre</code>. If you search IRC logs, mailing lists, and ParabolaWiki, you can find numerous rants by me against <a href="/git/mirror/parabola/dbscripts.git/tree/db-sync"><code>dbscripts:db-sync</code></a>. I just hate the data-flow, it is almost designed to make things get out of sync, and broken. I mean, does <a href="/dump/parabola-data-flow.svg">this</a> look like a simple diagram? For contrast, <a href="/dump/parabola-data-flow-xbs.svg">here’s</a> a rough (slightly incomplete) diagram of what I want to replace it with.</li>
+<li>Git backend for MediaWiki (or, pulling out the rendering module of MediaWiki). I’ve made decent progress on that front, but there is <em>crazy</em> de-normalization going on in the MediaWiki schema that makes this very difficult. I’m sure some of it is for historical reasons, and some of it for performance, but either way it is a mess for someone trying to neatly gut that part of the codebase.</li>
</ul>
<h2 id="other">Other</h2>
-<p>I should consider doing a write-up of deterministic-<code>tar</code> behavior (something that I've been implementing in Parabola for a while, meanwhile the Debian people have also been working on it).</p>
-<p>I should also consider doing a &quot;post-mortem&quot; of <a href="https://lukeshu.com/git/mirror/parabola/packages/pbs-tools.git/">PBS</a>, which never actually got used, but launched XBS (part of the <code>dbscripts</code>/<code>abslibre</code> integration mentioned above), as well as serving as a good test-bed for features that did get implemented.</p>
-<p>I over-use the word &quot;anyway.&quot;</p>
+<p>I should consider doing a write-up of deterministic-<code>tar</code> behavior (something that I’ve been implementing in Parabola for a while, meanwhile the Debian people have also been working on it).</p>
+<p>I should also consider doing a “post-mortem” of <a href="https://lukeshu.com/git/mirror/parabola/packages/pbs-tools.git/">PBS</a>, which never actually got used, but launched XBS (part of the <code>dbscripts</code>/<code>abslibre</code> integration mentioned above), as well as serving as a good test-bed for features that did get implemented.</p>
+<p>I over-use the word “anyway.”</p>
</article>
<footer>
diff --git a/public/x11-systemd.html b/public/x11-systemd.html
index 0f751ca..ef9c901 100644
--- a/public/x11-systemd.html
+++ b/public/x11-systemd.html
@@ -10,8 +10,8 @@
<header><a href="/">Luke Shumaker</a> » <a href=/blog>blog</a> » x11-systemd</header>
<article>
<h1 id="my-x11-setup-with-systemd">My X11 setup with systemd</h1>
-<p>Somewhere along the way, I decided to use systemd user sessions to manage the various parts of my X11 environment would be a good idea. If that was a good idea or not... we'll see.</p>
-<p>I've sort-of been running this setup as my daily-driver for <a href="https://lukeshu.com/git/dotfiles.git/commit/?id=a9935b7a12a522937d91cb44a0e138132b555e16">a bit over a year</a>, continually tweaking it though.</p>
+<p>Somewhere along the way, I decided that using systemd user sessions to manage the various parts of my X11 environment would be a good idea. Whether it was a good idea or not… we’ll see.</p>
+<p>I’ve sort-of been running this setup as my daily-driver for <a href="https://lukeshu.com/git/dotfiles.git/commit/?id=a9935b7a12a522937d91cb44a0e138132b555e16">a bit over a year</a>, continually tweaking it though.</p>
<p>My setup is substantially different than the one on <a href="https://wiki.archlinux.org/index.php/Systemd/User">ArchWiki</a>, because the ArchWiki solution assumes that there is only ever one X server for a user; I like the ability to run <code>Xorg</code> on my real monitor, and also have <code>Xvnc</code> running headless, or start my desktop environment on a remote X server. Though, I would like to figure out how to use systemd socket activation for the X server, as the ArchWiki solution does.</p>
<p>This means that all of my graphical units take <code>DISPLAY</code> as an <code>@</code> argument. To get this to all work out, this goes in each <code>.service</code> file, unless otherwise noted:</p>
<pre><code>[Unit]
@@ -19,8 +19,8 @@ After=X11@%i.target
Requisite=X11@%i.target
[Service]
Environment=DISPLAY=%I</code></pre>
-<p>We'll get to <code>X11@.target</code> later, what it says is &quot;I should only be running if X11 is running&quot;.</p>
-<p>I eschew complex XDMs or <code>startx</code> wrapper scripts, opting for the more simple <code>xinit</code>, which I either run on login for some boxes (my media station), or type <code>xinit</code> when I want X11 on others (most everything else). Essentially, what <code>xinit</code> does is run <code>~/.xserverrc</code> (or <code>/etc/X11/xinit/xserverrc</code>) to start the server, then once the server is started (which it takes a substantial amount of magic to detect) it runs run <code>~/.xinitrc</code> (or <code>/etc/X11/xinit/xinitrc</code>) to start the clients. Once <code>.xinitrc</code> finishes running, it stops the X server and exits. Now, when I say &quot;run&quot;, I don't mean execute, it passes each file to the system shell (<code>/bin/sh</code>) as input.</p>
+<p>We’ll get to <code>X11@.target</code> later; what it says is “I should only be running if X11 is running”.</p>
+<p>I eschew complex XDMs or <code>startx</code> wrapper scripts, opting for the simpler <code>xinit</code>, which I either run on login for some boxes (my media station), or type <code>xinit</code> when I want X11 on others (most everything else). Essentially, what <code>xinit</code> does is run <code>~/.xserverrc</code> (or <code>/etc/X11/xinit/xserverrc</code>) to start the server, then once the server is started (which takes a substantial amount of magic to detect) it runs <code>~/.xinitrc</code> (or <code>/etc/X11/xinit/xinitrc</code>) to start the clients. Once <code>.xinitrc</code> finishes running, it stops the X server and exits. Now, when I say “run”, I don’t mean execute; it passes each file to the system shell (<code>/bin/sh</code>) as input.</p>
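That control flow can be modeled with a few lines of shell (a toy sketch; real xinit's server-ready detection is the "magic" mentioned above, faked here with a sleep):

```shell
#!/bin/sh
# Toy model of xinit's control flow. Note the redirections: the rc
# files are fed to /bin/sh on stdin, not executed directly.
sh < "${HOME}/.xserverrc" &      # start the X server
serverpid=$!
sleep 2                          # stand-in for xinit's server-ready detection
sh < "${HOME}/.xinitrc"          # start the clients; blocks until they finish
kill "$serverpid"                # .xinitrc finished, so stop the server
```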
<p>Xorg requires a TTY to run on; if we log in to a TTY with <code>logind</code>, it will give us the <code>XDG_VTNR</code> variable to tell us which one we have, so I pass this to <code>X</code> in <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/serverrc">my <code>.xserverrc</code></a>:</p>
<pre><code>#!/hint/sh
if [ -z &quot;$XDG_VTNR&quot; ]; then
@@ -28,8 +28,8 @@ if [ -z &quot;$XDG_VTNR&quot; ]; then
else
exec /usr/bin/X -nolisten tcp &quot;$@&quot; vt$XDG_VTNR
fi</code></pre>
-<p>This was the default for <a href="https://projects.archlinux.org/svntogit/packages.git/commit/trunk/xserverrc?h=packages/xorg-xinit&amp;id=f9f5de58df03aae6c8a8c8231a83327d19b943a1">a while</a> in Arch, to support <code>logind</code>, but was <a href="https://projects.archlinux.org/svntogit/packages.git/commit/trunk/xserverrc?h=packages/xorg-xinit&amp;id=5a163ddd5dae300e7da4b027e28c37ad3b535804">later removed</a> in part because <code>startx</code> (which calls <code>xinit</code>) started adding it as an argument as well, so <code>vt$XDG_VTNR</code> was being listed as an argument twice, which is an error. IMO, that was a problem in <code>startx</code>, and they shouldn't have removed it from the default system <code>xserverrc</code>, but that's just me. So I copy/pasted it into my user <code>xserverrc</code>.</p>
-<p>That's the boring part, though. Where the magic starts happening is in <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/clientrc">my <code>.xinitrc</code></a>:</p>
+<p>This was the default for <a href="https://projects.archlinux.org/svntogit/packages.git/commit/trunk/xserverrc?h=packages/xorg-xinit&amp;id=f9f5de58df03aae6c8a8c8231a83327d19b943a1">a while</a> in Arch, to support <code>logind</code>, but was <a href="https://projects.archlinux.org/svntogit/packages.git/commit/trunk/xserverrc?h=packages/xorg-xinit&amp;id=5a163ddd5dae300e7da4b027e28c37ad3b535804">later removed</a> in part because <code>startx</code> (which calls <code>xinit</code>) started adding it as an argument as well, so <code>vt$XDG_VTNR</code> was being listed as an argument twice, which is an error. IMO, that was a problem in <code>startx</code>, and they shouldn’t have removed it from the default system <code>xserverrc</code>, but that’s just me. So I copy/pasted it into my user <code>xserverrc</code>.</p>
+<p>That’s the boring part, though. Where the magic starts happening is in <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/clientrc">my <code>.xinitrc</code></a>:</p>
<pre><code>#!/hint/sh
if [ -z &quot;$XDG_RUNTIME_DIR&quot; ]; then
@@ -45,15 +45,15 @@ cat &lt; &quot;${XDG_RUNTIME_DIR}/x11-wm@${_DISPLAY}&quot; &amp;
systemctl --user start &quot;X11@${_DISPLAY}.target&quot; &amp;
wait
systemctl --user stop &quot;X11@${_DISPLAY}.target&quot;</code></pre>
-<p>There are two contracts/interfaces here: the <code>X11@DISPLAY.target</code> systemd target, and the <code>${XDG_RUNTIME_DIR}/x11-wm@DISPLAY</code> named pipe. The systemd <code>.target</code> should be pretty self explanatory; the most important part is that it starts the window manager. The named pipe is just a hacky way of blocking until the window manager exits (&quot;traditional&quot; <code>.xinitrc</code> files end with the line <code>exec your-window-manager</code>, so this mimics that behavior). It works by assuming that the window manager will open the pipe at startup, and keep it open (without necessarily writing anything to it); when the window manager exits, the pipe will get closed, sending EOF to the <code>wait</code>ed-for <code>cat</code>, allowing it to exit, letting the script resume. The window manager (WMII) is made to have the pipe opened by executing it this way in <a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/wmii@.service">its <code>.service</code> file</a>:</p>
+<p>There are two contracts/interfaces here: the <code>X11@DISPLAY.target</code> systemd target, and the <code>${XDG_RUNTIME_DIR}/x11-wm@DISPLAY</code> named pipe. The systemd <code>.target</code> should be pretty self explanatory; the most important part is that it starts the window manager. The named pipe is just a hacky way of blocking until the window manager exits (“traditional” <code>.xinitrc</code> files end with the line <code>exec your-window-manager</code>, so this mimics that behavior). It works by assuming that the window manager will open the pipe at startup, and keep it open (without necessarily writing anything to it); when the window manager exits, the pipe will get closed, sending EOF to the <code>wait</code>ed-for <code>cat</code>, allowing it to exit, letting the script resume. The window manager (WMII) is made to have the pipe opened by executing it this way in <a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/wmii@.service">its <code>.service</code> file</a>:</p>
<pre><code>ExecStart=/usr/bin/env bash -c &#39;exec 8&gt;${XDG_RUNTIME_DIR}/x11-wm@%I; exec wmii&#39;</code></pre>
-<p>which just opens the file on file descriptor 8, then launches the window manager normally. The only further logic required by the window manager with regard to the pipe is that in the window manager <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/config.sh">configuration</a>, I should close that file descriptor after forking any process that isn't &quot;part of&quot; the window manager:</p>
+<p>which just opens the file on file descriptor 8, then launches the window manager normally. The only further logic required by the window manager with regard to the pipe is that in the window manager <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/config.sh">configuration</a>, I should close that file descriptor after forking any process that isn’t “part of” the window manager:</p>
<pre><code>runcmd() (
...
exec 8&gt;&amp;- # xinit/systemd handshake
...
)</code></pre>
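That pipe-EOF handshake can be reproduced in isolation with plain shell (a toy stand-in for the window manager; the FIFO path here is hypothetical):

```shell
#!/bin/sh
# A reader on a FIFO blocks until every writer has closed it, which is
# the same trick the .xinitrc uses to wait for the window manager.
fifo=$(mktemp -u)                 # hypothetical stand-in for x11-wm@DISPLAY
mkfifo "$fifo"
cat < "$fifo" &                   # like the .xinitrc: waits for EOF
catpid=$!
( exec 8>"$fifo"; sleep 1 ) &     # toy "window manager" holding fd 8 open
wait "$catpid"                    # returns once fd 8 is closed on exit
rm -f "$fifo"
```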
-<p>So, back to the <code>X11@DISPLAY.target</code>; I configure what it &quot;does&quot; with symlinks in the <code>.requires</code> and <code>.wants</code> directories:</p>
+<p>So, back to the <code>X11@DISPLAY.target</code>; I configure what it “does” with symlinks in the <code>.requires</code> and <code>.wants</code> directories:</p>
<ul class="tree">
<li>
<p><a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user">.config/systemd/user/</a></p>
@@ -73,13 +73,13 @@ systemctl --user stop &quot;X11@${_DISPLAY}.target&quot;</code></pre>
</li>
</ul>
<p>The <code>.requires</code> directory is how I configure which window manager it starts. This would allow me to configure different window managers on different displays, by creating a <code>.requires</code> directory with the <code>DISPLAY</code> included, e.g. <code>X11@:2.requires</code>.</p>
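For example, wiring the window manager onto display :2 would look something like this (a sketch; the symlink layout is assumed from the description above):

```shell
#!/bin/sh
# Per-display WM selection: the symlink *name* in the .requires
# directory is what creates the Requires= dependency on that unit.
cd ~/.config/systemd/user
mkdir -p 'X11@:2.requires'
ln -sf ../wmii@.service 'X11@:2.requires/wmii@:2.service'
systemctl --user daemon-reload    # make systemd pick up the new symlink
```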
-<p>The <code>.wants</code> directory is for general X display setup; it's analogous to <code>/etc/X11/xinit/xinitrc.d/</code>. All of the files in it are simple <code>Type=oneshot</code> service files. The <a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xmodmap@.service">xmodmap</a> and <a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xresources@.service">xresources</a> files are pretty boring, they're just systemd versions of the couple lines that just about every traditional <code>.xinitrc</code> contains, the biggest difference being that they look at <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/modmap"><code>~/.config/X11/modmap</code></a> and <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/resources"><code>~/.config/X11/resources</code></a> instead of the traditional locations <code>~/.xmodmap</code> and <code>~/.Xresources</code>.</p>
-<p>What's possibly of note is <a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xresources-dpi@.service"><code>xresources-dpi@.service</code></a>. In X11, there are two sources of DPI information, the X display resolution, and the XRDB <code>Xft.dpi</code> setting. It isn't defined which takes precedence (to my knowledge), and even if it were (is), application authors wouldn't be arsed to actually do the right thing. For years, Firefox (well, Iceweasel) happily listened to the X display resolution, but recently it decided to only look at <code>Xft.dpi</code>, which objectively seems a little silly, since the X display resolution is always present, but <code>Xft.dpi</code> isn't. Anyway, Mozilla's change drove me to to create a <a href="https://lukeshu.com/git/dotfiles/tree/.local/bin/xrdb-set-dpi">script</a> to make the <code>Xft.dpi</code> setting match the X display resolution. Disclaimer: I have no idea if it works if the X server has multiple displays (with possibly varying resolution).</p>
+<p>The <code>.wants</code> directory is for general X display setup; it’s analogous to <code>/etc/X11/xinit/xinitrc.d/</code>. All of the files in it are simple <code>Type=oneshot</code> service files. The <a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xmodmap@.service">xmodmap</a> and <a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xresources@.service">xresources</a> files are pretty boring, they’re just systemd versions of the couple lines that just about every traditional <code>.xinitrc</code> contains, the biggest difference being that they look at <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/modmap"><code>~/.config/X11/modmap</code></a> and <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/X11/resources"><code>~/.config/X11/resources</code></a> instead of the traditional locations <code>~/.xmodmap</code> and <code>~/.Xresources</code>.</p>
+<p>What’s possibly of note is <a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/xresources-dpi@.service"><code>xresources-dpi@.service</code></a>. In X11, there are two sources of DPI information: the X display resolution and the XRDB <code>Xft.dpi</code> setting. It isn’t defined which takes precedence (to my knowledge), and even if it were (is), application authors wouldn’t be arsed to actually do the right thing. For years, Firefox (well, Iceweasel) happily listened to the X display resolution, but recently it decided to only look at <code>Xft.dpi</code>, which objectively seems a little silly, since the X display resolution is always present, but <code>Xft.dpi</code> isn’t. Anyway, Mozilla’s change drove me to create a <a href="https://lukeshu.com/git/dotfiles/tree/.local/bin/xrdb-set-dpi">script</a> to make the <code>Xft.dpi</code> setting match the X display resolution. Disclaimer: I have no idea if it works if the X server has multiple displays (with possibly varying resolution).</p>
<pre><code>#!/usr/bin/env bash
dpi=$(LC_ALL=C xdpyinfo|sed -rn &#39;s/^\s*resolution:\s*(.*) dots per inch$/\1/p&#39;)
xrdb -merge &lt;&lt;&lt;&quot;Xft.dpi: ${dpi}&quot;</code></pre>
-<p>Since we want XRDB to be set up before any other programs launch, we give both of the <code>xresources</code> units <code>Before=X11@%i.target</code> (instead of <code>After=</code> like everything else). Also, two programs writing to <code>xrdb</code> at the same time has the same problem as two programs writing to the same file; one might trash the other's changes. So, I stuck <code>Conflicts=xresources@:i.service</code> into <code>xresources-dpi.service</code>.</p>
-<p>And that's the &quot;core&quot; of my X11 systemd setup. But, you generally want more things running than just the window manager, like a desktop notification daemon, a system panel, and an X composition manager (unless your window manager is bloated and has a composition manager built in). Since these things are probably window-manager specific, I've stuck them in a directory <code>wmii@.service.wants</code>:</p>
+<p>Since we want XRDB to be set up before any other programs launch, we give both of the <code>xresources</code> units <code>Before=X11@%i.target</code> (instead of <code>After=</code> like everything else). Also, two programs writing to <code>xrdb</code> at the same time have the same problem as two programs writing to the same file; one might trash the other’s changes. So, I stuck <code>Conflicts=xresources@%i.service</code> into <code>xresources-dpi@.service</code>.</p>
+<p>And that’s the “core” of my X11 systemd setup. But, you generally want more things running than just the window manager, like a desktop notification daemon, a system panel, and an X composition manager (unless your window manager is bloated and has a composition manager built in). Since these things are probably window-manager specific, I’ve stuck them in a directory <code>wmii@.service.wants</code>:</p>
<ul class="tree">
<li>
<p><a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user">.config/systemd/user/</a></p>
@@ -97,13 +97,13 @@ xrdb -merge &lt;&lt;&lt;&quot;Xft.dpi: ${dpi}&quot;</code></pre>
</ul>
<p>For the window manager <code>.service</code>, I <em>could</em> just say <code>Type=simple</code> and call it a day (and I did for a while). But, I like to have <code>lxpanel</code> show up on all of my WMII tags (desktops), so I have <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/config.sh">my WMII configuration</a> stick this in the WMII <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/rules"><code>/rules</code></a>:</p>
<pre><code>/panel/ tags=/.*/ floating=always</code></pre>
-<p>Unfortunately, for this to work, <code>lxpanel</code> must be started <em>after</em> that gets inserted into WMII's rules. That wasn't a problem pre-systemd, because <code>lxpanel</code> was started by my WMII configuration, so ordering was simple. For systemd to get this right, I must have a way of notifying systemd that WMII's fully started, and it's safe to start <code>lxpanel</code>. So, I stuck this in <a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/wmii@.service">my WMII <code>.service</code> file</a>:</p>
+<p>Unfortunately, for this to work, <code>lxpanel</code> must be started <em>after</em> that gets inserted into WMII’s rules. That wasn’t a problem pre-systemd, because <code>lxpanel</code> was started by my WMII configuration, so ordering was simple. For systemd to get this right, I must have a way of notifying systemd that WMII’s fully started, and it’s safe to start <code>lxpanel</code>. So, I stuck this in <a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/wmii@.service">my WMII <code>.service</code> file</a>:</p>
<pre><code># This assumes that you write READY=1 to $NOTIFY_SOCKET in wmiirc
Type=notify
NotifyAccess=all</code></pre>
<p>and this in <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/wmiirc">my WMII configuration</a>:</p>
<pre><code>systemd-notify --ready || true</code></pre>
-<p>Now, this setup means that <code>NOTIFY_SOCKET</code> is set for all the children of <code>wmii</code>; I'd rather not have it leak into the applications that I start from the window manager, so I also stuck <code>unset NOTIFY_SOCKET</code> after forking a process that isn't part of the window manager:</p>
+<p>Now, this setup means that <code>NOTIFY_SOCKET</code> is set for all the children of <code>wmii</code>; I’d rather not have it leak into the applications that I start from the window manager, so I also stuck <code>unset NOTIFY_SOCKET</code> after forking a process that isn’t part of the window manager:</p>
<pre><code>runcmd() (
...
unset NOTIFY_SOCKET # systemd
@@ -111,14 +111,14 @@ NotifyAccess=all</code></pre>
exec 8&gt;&amp;- # xinit/systemd handshake
...
)</code></pre>
-<p>Unfortunately, because of a couple of <a href="https://github.com/systemd/systemd/issues/2739">bugs</a> and <a href="https://github.com/systemd/systemd/issues/2737">race conditions</a> in systemd, <code>systemd-notify</code> isn't reliable. If systemd can't receive the <code>READY=1</code> signal from my WMII configuration, there are two consequences:</p>
+<p>Unfortunately, because of a couple of <a href="https://github.com/systemd/systemd/issues/2739">bugs</a> and <a href="https://github.com/systemd/systemd/issues/2737">race conditions</a> in systemd, <code>systemd-notify</code> isn’t reliable. If systemd can’t receive the <code>READY=1</code> signal from my WMII configuration, there are two consequences:</p>
<ol type="1">
<li><code>lxpanel</code> will never start, because it will always be waiting for <code>wmii</code> to be ready, which will never happen.</li>
-<li>After a couple of minutes, systemd will consider <code>wmii</code> to be timed out, which is a failure, so then it will kill <code>wmii</code>, and exit my X11 session. That's no good!</li>
+<li>After a couple of minutes, systemd will consider <code>wmii</code> to be timed out, which is a failure, so then it will kill <code>wmii</code>, and exit my X11 session. That’s no good!</li>
</ol>
-<p>Using <code>socat</code> to send the message to systemd instead of <code>systemd-notify</code> &quot;should&quot; always work, because it tries to read from both ends of the bi-directional stream, and I can't imagine that getting EOF from the <code>UNIX-SENDTO</code> end will ever be faster than the systemd manager from handling the datagram that got sent. Which is to say, &quot;we work around the race condition by being slow and shitty.&quot;</p>
+<p>Using <code>socat</code> to send the message to systemd instead of <code>systemd-notify</code> “should” always work, because it tries to read from both ends of the bi-directional stream, and I can’t imagine that getting EOF from the <code>UNIX-SENDTO</code> end will ever come back faster than the systemd manager can handle the datagram that got sent. Which is to say, “we work around the race condition by being slow and shitty.”</p>
<pre><code>socat STDIO UNIX-SENDTO:&quot;$NOTIFY_SOCKET&quot; &lt;&lt;&lt;READY=1 || true</code></pre>
-<p>But, I don't like that. I'd rather write my WMII configuration to the world as I wish it existed, and have workarounds encapsulated elsewhere; <a href="http://blog.robertelder.org/interfaces-most-important-software-engineering-concept/">&quot;If you have to cut corners in your project, do it inside the implementation, and wrap a very good interface around it.&quot;</a>. So, I wrote a <code>systemd-notify</code> compatible <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/workarounds.sh">function</a> that ultimately calls <code>socat</code>:</p>
+<p>But, I don’t like that. I’d rather write my WMII configuration to the world as I wish it existed, and have workarounds encapsulated elsewhere; <a href="http://blog.robertelder.org/interfaces-most-important-software-engineering-concept/">“If you have to cut corners in your project, do it inside the implementation, and wrap a very good interface around it.”</a> So, I wrote a <code>systemd-notify</code> compatible <a href="https://lukeshu.com/git/dotfiles.git/tree/.config/wmii-hg/workarounds.sh">function</a> that ultimately calls <code>socat</code>:</p>
<pre><code>##
# Just like systemd-notify(1), but slower, which is a shitty
# workaround for a race condition in systemd.
@@ -159,7 +159,7 @@ systemd-notify() {
printf -v n &#39;%s\n&#39; &quot;${our_env[@]}&quot;
socat STDIO UNIX-SENDTO:&quot;$NOTIFY_SOCKET&quot; &lt;&lt;&lt;&quot;$n&quot;
}</code></pre>
-<p>So, one day when the systemd bugs have been fixed (and presumably the Linux kernel supports passing the cgroup of a process as part of its credentials), I can remove that from <code>workarounds.sh</code>, and not have to touch anything else in my WMII configuration (I do use <code>systemd-notify</code> in a couple of other, non-essential, places too; this wasn't to avoid having to change just 1 line).</p>
+<p>So, one day when the systemd bugs have been fixed (and presumably the Linux kernel supports passing the cgroup of a process as part of its credentials), I can remove that from <code>workarounds.sh</code>, and not have to touch anything else in my WMII configuration (I do use <code>systemd-notify</code> in a couple of other, non-essential, places too; this wasn’t to avoid having to change just 1 line).</p>
<p>So, now that <code>wmii@.service</code> properly has <code>Type=notify</code>, I can just stick <code>After=wmii@.service</code> into my <code>lxpanel@.service</code>, right? Wrong! Well, I <em>could</em>, but my <code>lxpanel</code> service has nothing to do with WMII; why should I couple them? Instead, I create <a href="https://lukeshu.com/git/dotfiles/tree/.config/systemd/user/wm-running@.target"><code>wm-running@.target</code></a> that can be used as a synchronization point:</p>
<pre><code># wmii@.service
Before=wm-running@%i.target
@@ -167,7 +167,7 @@ Before=wm-running@%i.target
# lxpanel@.service
After=X11@%i.target wm-running@%i.target
Requires=wm-running@%i.target</code></pre>
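The target being synchronized on can itself be nearly empty; something along these lines (a sketch, the real `wm-running@.target` is in the dotfiles repo):

```ini
# wm-running@.target (sketch)
[Unit]
Description=A window manager is running on display %i
# This unit carries no behavior of its own: wmii@.service orders
# itself Before= it, and lxpanel@.service uses After= plus Requires=
# on it, so it exists purely as a synchronization point.
```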
-<p>Finally, I have my desktop started and running. Now, I'd like for programs that aren't part of the window manager to not dump their stdout and stderr into WMII's part of the journal, like to have a record of which graphical programs crashed, and like to have a prettier cgroup/process graph. So, I use <code>systemd-run</code> to run external programs from the window manager:</p>
+<p>Finally, I have my desktop started and running. Now, I’d like programs that aren’t part of the window manager not to dump their stdout and stderr into WMII’s part of the journal; I’d like a record of which graphical programs crashed; and I’d like a prettier cgroup/process graph. So, I use <code>systemd-run</code> to run external programs from the window manager:</p>
<pre><code>runcmd() (
...
unset NOTIFY_SOCKET # systemd
@@ -175,9 +175,9 @@ Requires=wm-running@%i.target</code></pre>
exec 8&gt;&amp;- # xinit/systemd handshake
exec systemd-run --user --scope -- sh -c &quot;$*&quot;
)</code></pre>
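As an aside, the `sh -c "$*"` at the end relies on how `"$*"` behaves: it joins every argument with single spaces into one string, which the inner shell then re-parses as a complete command line. A runnable illustration (plain `sh`, with `systemd-run` dropped so the snippet works anywhere):

```shell
#!/bin/sh
# "$*" joins all of the function's arguments into ONE space-separated
# string; sh -c then re-parses that string as a shell command line.
runjoin() {
	# The real runcmd does: exec systemd-run --user --scope -- sh -c "$*"
	sh -c "$*"
}

runjoin echo hello world   # the inner sh runs: echo hello world
```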
-<p>I run them as a scope instead of a service so that they inherit environment variables, and don't have to mess with getting <code>DISPLAY</code> or <code>XAUTHORITY</code> into their units (as I <em>don't</em> want to make them global variables in my systemd user session).</p>
-<p>I'd like to get <code>lxpanel</code> to also use <code>systemd-run</code> when launching programs, but it's a low priority because I don't really actually use <code>lxpanel</code> to launch programs, I just have the menu there to make sure that I didn't break the icons for programs that I package (I did that once back when I was Parabola's packager for Iceweasel and IceCat).</p>
-<p>And that's how I use systemd with X11.</p>
+<p>I run them as a scope instead of a service so that they inherit environment variables, and don’t have to mess with getting <code>DISPLAY</code> or <code>XAUTHORITY</code> into their units (as I <em>don’t</em> want to make them global variables in my systemd user session).</p>
+<p>I’d like to get <code>lxpanel</code> to also use <code>systemd-run</code> when launching programs, but it’s a low priority because I don’t actually use <code>lxpanel</code> to launch programs; I just have the menu there to make sure that I didn’t break the icons for programs that I package (I did that once back when I was Parabola’s packager for Iceweasel and IceCat).</p>
+<p>And that’s how I use systemd with X11.</p>
</article>
<footer>