<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Posts on Matthew Garrett&#39;s Blog</title>
        <link>https://codon.org.uk/~mjg59/blog/post/</link>
        <description>Recent content in Posts on Matthew Garrett&#39;s Blog</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en-gb</language>
        <lastBuildDate>Tue, 31 Mar 2026 19:35:43 -0700</lastBuildDate><atom:link href="https://codon.org.uk/~mjg59/blog/post/index.xml" rel="self" type="application/rss+xml" /><item>
        <title>Self hosting as much of my online presence as practical</title>
        <link>https://codon.org.uk/~mjg59/blog/p/self-hosting-as-much-of-my-online-presence-as-practical/</link>
        <pubDate>Tue, 31 Mar 2026 19:35:43 -0700</pubDate>
        
        <guid>https://codon.org.uk/~mjg59/blog/p/self-hosting-as-much-of-my-online-presence-as-practical/</guid>
        <description>&lt;p&gt;Because I am bad at giving up on things, I&amp;rsquo;ve been running my own email
server for over 20 years. Some of that time it&amp;rsquo;s been a PC at the end of a
DSL line, some of that time it&amp;rsquo;s been a Mac Mini in a data centre, and some
of that time it&amp;rsquo;s been a hosted VM. Last year I decided to bring it in
house, and since then I&amp;rsquo;ve been gradually consolidating as much of the rest
of my online presence as possible on it. I mentioned this &lt;a class=&#34;link&#34; href=&#34;https://nondeterministic.computer/@mjg59/116321518908968091&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;on
Mastodon&lt;/a&gt; and a
couple of people asked for more details, so here we are.&lt;/p&gt;
&lt;p&gt;First: &lt;a class=&#34;link&#34; href=&#34;https://www.monkeybrains.net/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;my ISP&lt;/a&gt; doesn&amp;rsquo;t guarantee a static
IPv4 unless I&amp;rsquo;m on a business plan and that seems like it&amp;rsquo;d cost a bunch
more, so I&amp;rsquo;m doing what I &lt;a class=&#34;link&#34; href=&#34;https://mjg59.dreamwidth.org/72095.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;described
here&lt;/a&gt;: running a Wireguard link
between a box that sits in a cupboard in my living room and the smallest
&lt;a class=&#34;link&#34; href=&#34;https://us.ovhcloud.com/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;OVH&lt;/a&gt; instance I can, with an additional IP
address allocated to the VM and NATted over the VPN link. The practical
outcome of this is that my home IP address is irrelevant and can change as
much as it wants - my DNS points at the OVH IP, and traffic to that all ends
up hitting my server.&lt;/p&gt;
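&lt;p&gt;As a rough sketch, the OVH end of a setup like this looks something like the
following - the keys, addresses, and firewall rules here are placeholders, and
the exact forwarding rules will depend on your distribution:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/wireguard/wg0.conf on the OVH VM
[Interface]
Address = 10.0.0.1/24
PrivateKey = &amp;lt;vm-private-key&amp;gt;
ListenPort = 51820

[Peer]
# the box in the cupboard - no Endpoint, since its IP can change;
# it dials out to the VM and sets PersistentKeepalive on its side
PublicKey = &amp;lt;home-box-public-key&amp;gt;
AllowedIPs = 10.0.0.2/32

# plus something like this (with net.ipv4.ip_forward enabled) to NAT
# the additional IP (203.0.113.5 here) down the tunnel:
iptables -t nat -A PREROUTING -d 203.0.113.5 -j DNAT --to-destination 10.0.0.2
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
&lt;/code&gt;&lt;/pre&gt;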
&lt;p&gt;The server itself is pretty uninteresting. It&amp;rsquo;s a refurbished HP EliteDesk
which idles at 10W or so, along with 2TB of NVMe and 32GB of RAM that I found
under a pile of laptops in my office. We&amp;rsquo;re not talking rackmount Xeon
levels of performance, but it&amp;rsquo;s entirely adequate for everything I&amp;rsquo;m doing
here.&lt;/p&gt;
&lt;p&gt;So. Let&amp;rsquo;s talk about the services I&amp;rsquo;m hosting.&lt;/p&gt;
&lt;h2 id=&#34;web&#34;&gt;Web
&lt;/h2&gt;&lt;p&gt;This one&amp;rsquo;s trivial. I&amp;rsquo;m not really hosting much of a website right now, but
what there is is served via Apache with a Let&amp;rsquo;s Encrypt certificate. Nothing
interesting at all here, other than the proxying that&amp;rsquo;s going to be relevant
later.&lt;/p&gt;
&lt;h2 id=&#34;email&#34;&gt;Email
&lt;/h2&gt;&lt;p&gt;Inbound email is easy enough. I&amp;rsquo;m running Postfix with a pretty stock
configuration, and my MX records point at me. The same Let&amp;rsquo;s Encrypt
certificate is there for TLS delivery. I&amp;rsquo;m using Dovecot as an IMAP server
(again with the same cert). You can find plenty of guides on setting this
up.&lt;/p&gt;
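&lt;p&gt;For reference, the TLS side of that is only a few lines of Postfix
configuration - the hostname and paths here are illustrative and assume a
standard Let&amp;rsquo;s Encrypt layout:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/postfix/main.cf
myhostname = mail.example.com
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.example.com/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/mail.example.com/privkey.pem
# offer TLS for inbound delivery without requiring it
smtpd_tls_security_level = may
&lt;/code&gt;&lt;/pre&gt;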
&lt;p&gt;Outbound email? That&amp;rsquo;s harder. I&amp;rsquo;m on a residential IP address, so if I send
email directly nobody&amp;rsquo;s going to deliver it. Going via my OVH address isn&amp;rsquo;t
going to be a lot better. I have a Google Workspace, so in the end I just
made use of &lt;a class=&#34;link&#34; href=&#34;https://knowledge.workspace.google.com/admin/gmail/advanced/route-outgoing-smtp-relay-messages-through-google&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Google&amp;rsquo;s SMTP relay
service&lt;/a&gt;. There are
various commercial alternatives available; I just chose this one because it
didn&amp;rsquo;t cost me anything more than I&amp;rsquo;m already paying.&lt;/p&gt;
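&lt;p&gt;If Postfix is doing the sending, routing outbound mail through Google&amp;rsquo;s
relay comes down to a couple of lines - the relay also needs to be configured
on the Workspace side to accept mail from your domain or IP:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/postfix/main.cf
relayhost = [smtp-relay.gmail.com]:587
# require TLS when talking to the relay
smtp_tls_security_level = encrypt
&lt;/code&gt;&lt;/pre&gt;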
&lt;h2 id=&#34;blog&#34;&gt;Blog
&lt;/h2&gt;&lt;p&gt;My blog is largely static content generated by
&lt;a class=&#34;link&#34; href=&#34;https://gohugo.io/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Hugo&lt;/a&gt;. Comments are &lt;a class=&#34;link&#34; href=&#34;https://remark42.com/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Remark42&lt;/a&gt;
running in a Docker container. If you don&amp;rsquo;t want to handle even that level
of dynamic content you can use a third party comment provider like
&lt;a class=&#34;link&#34; href=&#34;https://disqus.com&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Disqus&lt;/a&gt;.&lt;/p&gt;
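&lt;p&gt;A minimal Remark42 container definition looks something like this - the
site ID, secret, and port mapping are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# docker-compose.yml
services:
  remark42:
    image: umputun/remark42:latest
    restart: always
    environment:
      - REMARK_URL=https://example.com/remark42
      - SITE=example.com
      - SECRET=some-long-random-string
    ports:
      - &#34;127.0.0.1:8081:8080&#34;
    volumes:
      - ./var:/srv/var
&lt;/code&gt;&lt;/pre&gt;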
&lt;h2 id=&#34;mastodon&#34;&gt;Mastodon
&lt;/h2&gt;&lt;p&gt;I&amp;rsquo;m deploying Mastodon pretty much along the lines of the &lt;a class=&#34;link&#34; href=&#34;https://github.com/mastodon/mastodon/blob/main/docker-compose.yml&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;upstream compose
file&lt;/a&gt;. Apache
is proxying /api/v1/streaming to the websocket provided by the streaming
container and / to the actual Mastodon service. The only thing I tripped
over for a while was the need to set the &amp;ldquo;X-Forwarded-Proto&amp;rdquo; header since
otherwise you get stuck in a redirect loop of Mastodon receiving a request
over http (because TLS termination is being done by the Apache proxy) and
redirecting to https, except that&amp;rsquo;s where we just came from.&lt;/p&gt;
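&lt;p&gt;The relevant Apache configuration is a sketch along these lines, assuming
the compose file&amp;rsquo;s default ports (3000 for the web service, 4000 for
streaming) - the hostname is illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# needs mod_proxy, mod_proxy_http, mod_proxy_wstunnel and mod_headers
&amp;lt;VirtualHost *:443&amp;gt;
    ServerName mastodon.example.com
    # TLS termination happens here, so tell Mastodon the original
    # request was https - this is what avoids the redirect loop
    RequestHeader set X-Forwarded-Proto &#34;https&#34;
    ProxyPreserveHost On
    ProxyPass /api/v1/streaming ws://127.0.0.1:4000/api/v1/streaming
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
&amp;lt;/VirtualHost&amp;gt;
&lt;/code&gt;&lt;/pre&gt;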
&lt;p&gt;Mastodon is easily the heaviest part of all of this, using around 5GB of RAM
and 60GB of disk for an instance with 3 users. This is more a point of
principle than an especially good idea.&lt;/p&gt;
&lt;h2 id=&#34;bluesky&#34;&gt;Bluesky
&lt;/h2&gt;&lt;p&gt;I&amp;rsquo;m arguably cheating here. Bluesky&amp;rsquo;s federation model is quite different
to Mastodon&amp;rsquo;s - while running a Mastodon service implies running the webview and
other infrastructure associated with it, Bluesky has split that into
&lt;a class=&#34;link&#34; href=&#34;https://docs.bsky.app/docs/advanced-guides/federation-architecture&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;multiple
parts&lt;/a&gt;. User
data is stored on Personal Data Servers, then aggregated from those by
Relays, and then displayed on Appviews. Third parties can run any of these,
but a user&amp;rsquo;s actual posts are stored on a PDS. There are various reasons to
run the others, for instance to implement alternative moderation policies,
but if all you want is to ensure that you have control over your data,
running a PDS is sufficient. I followed &lt;a class=&#34;link&#34; href=&#34;https://cprimozic.net/notes/posts/notes-on-self-hosting-bluesky-pds-alongside-other-services/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;these
instructions&lt;/a&gt;,
other than using Apache as the frontend proxy rather than nginx, and it&amp;rsquo;s
all been working fine since then. In terms of ensuring that my data remains
under my control, it&amp;rsquo;s sufficient.&lt;/p&gt;
&lt;h2 id=&#34;backups&#34;&gt;Backups
&lt;/h2&gt;&lt;p&gt;I&amp;rsquo;m using &lt;a class=&#34;link&#34; href=&#34;https://torsion.org/borgmatic/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;borgmatic&lt;/a&gt;, backing up to a local
Synology NAS and also to my parents&amp;rsquo; home (where I have another HP EliteDesk
set up with an equivalent OVH IPv4 fronting setup). At some point I&amp;rsquo;ll check
that I&amp;rsquo;m actually able to restore them.&lt;/p&gt;
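&lt;p&gt;A borgmatic configuration for this sort of two-destination setup is fairly
short - the repository URLs and retention policy here are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/borgmatic/config.yaml
source_directories:
    - /etc
    - /home
    - /var
repositories:
    - path: ssh://backup@nas.local/./server.borg
      label: nas
    - path: ssh://backup@offsite.example.com/./server.borg
      label: offsite
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;(And &lt;code&gt;borgmatic extract&lt;/code&gt; is there for that restore test.)&lt;/p&gt;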
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion
&lt;/h2&gt;&lt;p&gt;Most of what I post is now stored on a system that&amp;rsquo;s happily living under a
TV, but is available to the rest of the world just as visibly as if I used a
hosted provider. Is this necessary? No. Does it improve my life? In no
practical way. Does it generate additional complexity? Absolutely. Should
you do it? Oh good heavens no. But you can, and once it&amp;rsquo;s working it largely
just keeps working, and there&amp;rsquo;s a certain sense of comfort in knowing that
my online presence is carefully contained in a small box making a gentle
whirring noise.&lt;/p&gt;
</description>
        </item>
        <item>
        <title>SSH certificates and git signing</title>
        <link>https://codon.org.uk/~mjg59/blog/p/ssh-certificates-and-git-signing/</link>
        <pubDate>Sat, 21 Mar 2026 12:38:07 -0700</pubDate>
        
        <guid>https://codon.org.uk/~mjg59/blog/p/ssh-certificates-and-git-signing/</guid>
        <description>&lt;p&gt;When you&amp;rsquo;re looking at source code it can be helpful to have some evidence
indicating who wrote it. Author tags give a surface level indication, &lt;a class=&#34;link&#34; href=&#34;https://github.com/torvalds/linux/commit/ac632c504d0b881d7cfb44e3fdde3ec30eb548d9&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;but
it turns out you can just
lie&lt;/a&gt;
and if someone isn&amp;rsquo;t paying attention when merging stuff there&amp;rsquo;s certainly a
risk that a commit could be merged with an author field that doesn&amp;rsquo;t
represent reality. Account compromise can make this even worse - a PR being
opened by a compromised user is going to be hard to distinguish from the
authentic user. In a world where supply chain security is an increasing
concern, it&amp;rsquo;s easy to understand why people would want more evidence that
code was actually written by the person it&amp;rsquo;s attributed to.&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://git-scm.com/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;git&lt;/a&gt; has support for cryptographically signing
commits and tags. Because git is about choice even if Linux isn&amp;rsquo;t, you can
do this signing with OpenPGP keys, X.509 certificates, or SSH keys. You&amp;rsquo;re
probably going to be unsurprised about my feelings around OpenPGP and the
web of trust, and X.509 certificates are an absolute nightmare. That leaves
SSH keys, but bare cryptographic keys aren&amp;rsquo;t terribly helpful in isolation -
you need some way to make a determination about which keys you trust. If
you&amp;rsquo;re using something like &lt;a class=&#34;link&#34; href=&#34;https://github.com&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;GitHub&lt;/a&gt; you can extract that
information from the set of keys associated with a user account&lt;sup id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;, but
that means that a compromised GitHub account is now also a way to alter the
set of trusted keys. And when was the last time you audited your keys? How
certain are you that every trusted key there is still 100% under your
control? Surely there&amp;rsquo;s a better way.&lt;/p&gt;
&lt;h2 id=&#34;ssh-certificates&#34;&gt;SSH Certificates
&lt;/h2&gt;&lt;p&gt;And, thankfully, there is. &lt;a class=&#34;link&#34; href=&#34;https://openssh.com&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;OpenSSH&lt;/a&gt; supports
certificates: an SSH public key that&amp;rsquo;s been signed by some trusted party,
letting you assert that it&amp;rsquo;s trustworthy in some form. SSH certificates
also contain metadata in the form of Principals, a list of identities that
the trusted party included in the certificate. These might simply be
usernames, but they might also provide information about group
membership. There&amp;rsquo;s also, unsurprisingly, native support in SSH for
forwarding them (using the agent forwarding protocol), so you can keep your
keys on your local system, ssh into your actual dev system, and have access
to them without any additional complexity.&lt;/p&gt;
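&lt;p&gt;For concreteness, issuing one of these certificates is a single
&lt;code&gt;ssh-keygen&lt;/code&gt; invocation - the identity, principal, and validity
period here are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# sign user_key.pub with the CA key, embedding a principal and a
# 52-week validity window; writes user_key-cert.pub
ssh-keygen -s ca_key -I alice@example.com -n alice -V +52w user_key.pub

# inspect the resulting certificate, including its principals
ssh-keygen -L -f user_key-cert.pub
&lt;/code&gt;&lt;/pre&gt;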
&lt;p&gt;And, wonderfully, you can use them in git! Let&amp;rsquo;s find out how.&lt;/p&gt;
&lt;h2 id=&#34;local-config&#34;&gt;Local config
&lt;/h2&gt;&lt;p&gt;There are two main parameters you need to set. First,&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-fallback&#34; data-lang=&#34;fallback&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;git config set gpg.format ssh
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;because unfortunately for historical reasons all the git signing config is
under the &lt;code&gt;gpg&lt;/code&gt; namespace even if you&amp;rsquo;re not using OpenPGP. Yes, this makes
me sad. But you&amp;rsquo;re also going to need something else. Either
&lt;code&gt;user.signingkey&lt;/code&gt; needs to be set to the path of your certificate, or you
need to set &lt;code&gt;gpg.ssh.defaultKeyCommand&lt;/code&gt; to a command that will talk to an
SSH agent and find the certificate for you (this can be helpful if it&amp;rsquo;s
stored on a smartcard or something rather than on disk). Thankfully for you,
I&amp;rsquo;ve &lt;a class=&#34;link&#34; href=&#34;https://gitlab.com/mjg59/find-ssh-certificate&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;written one&lt;/a&gt;. It will
talk to an SSH agent (either whatever&amp;rsquo;s pointed at by the &lt;code&gt;SSH_AUTH_SOCK&lt;/code&gt;
environment variable or with the &lt;code&gt;-agent&lt;/code&gt; argument), find a certificate
signed with the key provided with the &lt;code&gt;-ca&lt;/code&gt; argument, and then pass that
back to git. Now you can simply pass &lt;code&gt;-S&lt;/code&gt; to &lt;code&gt;git commit&lt;/code&gt; and various other
commands, and you&amp;rsquo;ll have a signature.&lt;/p&gt;
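&lt;p&gt;Putting the local config together - the paths and the exact helper
invocation here are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;git config set gpg.format ssh

# either point git directly at the certificate...
git config set user.signingkey ~/.ssh/id_ed25519-cert.pub

# ...or have a helper locate it via the agent
git config set gpg.ssh.defaultKeyCommand &#34;find-ssh-certificate -ca ~/.ssh/ca.pub&#34;

# and then sign away
git commit -S -m &#34;signed commit&#34;
&lt;/code&gt;&lt;/pre&gt;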
&lt;h2 id=&#34;validating-signatures&#34;&gt;Validating signatures
&lt;/h2&gt;&lt;p&gt;This is a bit more annoying. Using native git tooling ends up calling out to
&lt;code&gt;ssh-keygen&lt;/code&gt;&lt;sup id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;2&lt;/a&gt;&lt;/sup&gt;, which validates signatures against a file in a format
that looks somewhat like &lt;code&gt;authorized_keys&lt;/code&gt;. This lets you add something like:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-fallback&#34; data-lang=&#34;fallback&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;* cert-authority ssh-rsa AAAA…
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;which will match all principals (the wildcard) and succeed if the signature
is made with a certificate that&amp;rsquo;s signed by the key following
cert-authority. I recommend you don&amp;rsquo;t read the &lt;a class=&#34;link&#34; href=&#34;https://github.com/git/git/blob/ca1db8a0f7dc0dbea892e99f5b37c5fe5861be71/gpg-interface.c#L461&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;code that does this in
git&lt;/a&gt;
because I made that mistake myself, but it does work. Unfortunately it
doesn&amp;rsquo;t provide a lot of granularity around things like &amp;ldquo;Does the
certificate need to be valid at this specific time&amp;rdquo; and &amp;ldquo;Should the user
only be able to modify specific files&amp;rdquo; and that kind of thing, but also if
you&amp;rsquo;re using GitHub or GitLab you wouldn&amp;rsquo;t need to do this at all because
they&amp;rsquo;ll just do this magically and put a &amp;ldquo;verified&amp;rdquo; tag against anything
with a valid signature, right?&lt;/p&gt;
&lt;p&gt;Haha. No.&lt;/p&gt;
&lt;p&gt;Unfortunately while both GitHub and GitLab support using SSH certificates
for authentication (so a user can&amp;rsquo;t push to a repo unless they have a
certificate signed by the configured CA), there&amp;rsquo;s currently no way to say
&amp;ldquo;Trust all commits with an SSH certificate signed by this CA&amp;rdquo;. I am unclear
on why. So, I &lt;a class=&#34;link&#34; href=&#34;https://gitlab.com/mjg59/validate-git-signatures&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;wrote my
own&lt;/a&gt;. It takes a range of
commits, and verifies that each one is signed with either a certificate
signed by the key in &lt;code&gt;CA_PUB_KEY&lt;/code&gt; or (optionally) an OpenPGP key provided in
&lt;code&gt;ALLOWED_PGP_KEYS&lt;/code&gt;. Why OpenPGP? Because even if you sign all of your own
commits with an SSH certificate, anyone using the API or web interface will
end up with their commits signed by an OpenPGP key, and if you want to have
those commits validate you&amp;rsquo;ll need to handle that.&lt;/p&gt;
&lt;p&gt;In any case, this should be easy enough to integrate into whatever CI
pipeline you have. This is currently very much a proof of concept and I
wouldn&amp;rsquo;t recommend deploying it anywhere, but I am interested in merging
support for additional policy around things like expiry dates or group
membership.&lt;/p&gt;
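&lt;p&gt;As an illustration, a GitLab CI job wrapping this might look like the
following - the job name, variable values, and range expression are
hypothetical, so adjust them to however the tool ends up invoked in your
pipeline:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# .gitlab-ci.yml
verify-signatures:
  variables:
    CA_PUB_KEY: &#34;ssh-ed25519 AAAA…&#34;
  script:
    - validate-git-signatures $CI_MERGE_REQUEST_DIFF_BASE_SHA..$CI_COMMIT_SHA
&lt;/code&gt;&lt;/pre&gt;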
&lt;h2 id=&#34;doing-it-in-hardware&#34;&gt;Doing it in hardware
&lt;/h2&gt;&lt;p&gt;Of course, certificates don&amp;rsquo;t buy you any additional security if an attacker
is able to steal your private key material - they can steal the certificate
at the same time. This can be avoided on almost all modern hardware by
storing the private key in a separate cryptographic coprocessor - a &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Trusted_Platform_Module&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Trusted
Platform Module&lt;/a&gt; on
PCs, or the &lt;a class=&#34;link&#34; href=&#34;https://support.apple.com/guide/security/the-secure-enclave-sec59b0b31ff/web&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Secure
Enclave&lt;/a&gt;
on Macs. If you&amp;rsquo;re on a Mac then &lt;a class=&#34;link&#34; href=&#34;https://secretive.dev/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Secretive&lt;/a&gt; has
been around for some time, but things are a little harder on Windows and
Linux - there&amp;rsquo;s various things you can do with
&lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/PKCS_11&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;PKCS#11&lt;/a&gt; but you&amp;rsquo;ll hate yourself
even more than you&amp;rsquo;ll hate me for suggesting it in the first place, and
there&amp;rsquo;s &lt;a class=&#34;link&#34; href=&#34;https://github.com/Foxboron/ssh-tpm-agent&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;ssh-tpm-agent&lt;/a&gt; except
it&amp;rsquo;s very much Linux-only.&lt;/p&gt;
&lt;p&gt;So, obviously, I wrote &lt;a class=&#34;link&#34; href=&#34;https://gitlab.com/mjg59/attestation-tpm-agent&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;my
own&lt;/a&gt;. This makes use of the
&lt;a class=&#34;link&#34; href=&#34;https://github.com/google/go-attestation&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;go-attestation&lt;/a&gt; library my team
at Google wrote, and is able to generate TPM-backed keys and export them
over the SSH agent protocol. It&amp;rsquo;s also able to proxy requests back to an
existing agent, so you can just have it take care of your TPM-backed keys
and continue using your existing agent for everything else. In theory it
should also work on Windows&lt;sup id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;3&lt;/a&gt;&lt;/sup&gt; but this is all in preparation for a
&lt;a class=&#34;link&#34; href=&#34;https://sched.co/2E1g3&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;talk&lt;/a&gt; I only found out I was giving about two weeks
beforehand, so I haven&amp;rsquo;t actually had time to test anything other than that
it builds.&lt;/p&gt;
&lt;p&gt;And, delightfully, because the agent protocol doesn&amp;rsquo;t care about where the
keys are actually stored, this still works just fine with forwarding - you
can ssh into a remote system and sign something using a private key that&amp;rsquo;s
stored in your local TPM or Secure Enclave. Remote use can be as transparent
as local use.&lt;/p&gt;
&lt;h2 id=&#34;wait-attestation&#34;&gt;Wait, attestation?
&lt;/h2&gt;&lt;p&gt;Ah yes, you may be wondering why I&amp;rsquo;m using go-attestation and why the term
&amp;ldquo;attestation&amp;rdquo; is in my agent&amp;rsquo;s name. It&amp;rsquo;s because when I&amp;rsquo;m generating the
key I&amp;rsquo;m also generating all the artifacts required to prove that the key was
generated on a particular TPM. I haven&amp;rsquo;t actually implemented the other end
of that yet, but if implemented this would allow you to verify that a key
was generated in hardware before you issue it with an SSH certificate - and
in an age of agentic bots accidentally exfiltrating whatever they find on
disk, that gives you a lot more confidence that a commit was signed on
hardware you own.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion
&lt;/h2&gt;&lt;p&gt;Using SSH certificates for git commit signing is great - the tooling is a
bit rough but otherwise they&amp;rsquo;re basically better than every other
alternative, and also if you already have infrastructure for issuing SSH
certificates then you can just reuse it&lt;sup id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;4&lt;/a&gt;&lt;/sup&gt; and everyone wins.&lt;/p&gt;
&lt;div class=&#34;footnotes&#34; role=&#34;doc-endnotes&#34;&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;
&lt;p&gt;Did you know you can just download people&amp;rsquo;s SSH pubkeys from GitHub at &lt;code&gt;https://github.com/&amp;lt;username&amp;gt;.keys&lt;/code&gt;? Now you do.&amp;#160;&lt;a href=&#34;#fnref:1&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;
&lt;p&gt;Yes it is somewhat confusing that the &lt;code&gt;keygen&lt;/code&gt; command does things
other than generate keys&amp;#160;&lt;a href=&#34;#fnref:2&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;
&lt;p&gt;This is &lt;a class=&#34;link&#34; href=&#34;https://mjg59.dreamwidth.org/67402.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;more difficult than it sounds&lt;/a&gt;&amp;#160;&lt;a href=&#34;#fnref:3&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;
&lt;p&gt;And if you don&amp;rsquo;t, by implementing this you now have infrastructure
for issuing SSH certificates and can use that for SSH authentication as
well.&amp;#160;&lt;a href=&#34;#fnref:4&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
        </item>
        <item>
        <title>To update blobs or not to update blobs</title>
        <link>https://codon.org.uk/~mjg59/blog/p/to-update-blobs-or-not-to-update-blobs/</link>
        <pubDate>Mon, 02 Mar 2026 19:09:48 -0800</pubDate>
        
        <guid>https://codon.org.uk/~mjg59/blog/p/to-update-blobs-or-not-to-update-blobs/</guid>
        <description>&lt;p&gt;A lot of hardware runs non-free software. Sometimes that non-free software is in ROM. Sometimes it&amp;rsquo;s in flash. Sometimes it&amp;rsquo;s not stored on the device at all, it&amp;rsquo;s pushed into it at runtime by another piece of hardware or by the operating system. We typically refer to this software as &amp;ldquo;firmware&amp;rdquo; to differentiate it from the software run on the CPU after the OS has started&lt;sup id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;, but a lot of it (and, these days, probably most of it) is software written in C or some other systems programming language and targeting Arm or RISC-V or maybe MIPS and even sometimes x86&lt;sup id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;2&lt;/a&gt;&lt;/sup&gt;. There&amp;rsquo;s no real distinction between it and any other bit of software you run, except it&amp;rsquo;s generally not run within the context of the OS&lt;sup id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;. Anyway. It&amp;rsquo;s code. I&amp;rsquo;m going to simplify things here and stop using the words &amp;ldquo;software&amp;rdquo; or &amp;ldquo;firmware&amp;rdquo; and just say &amp;ldquo;code&amp;rdquo; instead, because that way we don&amp;rsquo;t need to worry about semantics.&lt;/p&gt;
&lt;p&gt;A fundamental problem for free software enthusiasts is that almost all of the code we&amp;rsquo;re talking about here is non-free. In some cases, it&amp;rsquo;s cryptographically signed in a way that makes it difficult or impossible to replace it with free code. In some cases it&amp;rsquo;s even encrypted, such that even examining the code is impossible. But because it&amp;rsquo;s code, sometimes the vendor responsible for it will provide updates, and now you get to choose whether or not to apply those updates.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m now going to present some things to consider. These are not in any particular order and are not intended to form any sort of argument in themselves, but are representative of the opinions you will get from various people and I would like you to read these, think about them, and come to your own set of opinions before I tell you what my opinion is.&lt;/p&gt;
&lt;p&gt;THINGS TO CONSIDER&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Does this blob do what it claims to do? Does it suddenly introduce functionality you don&amp;rsquo;t want? Does it introduce security flaws? Does it introduce deliberate backdoors? Does it make your life better or worse?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You&amp;rsquo;re almost certainly being provided with a blob of compiled code, with no source code available. You can&amp;rsquo;t just diff the source files, satisfy yourself that they&amp;rsquo;re fine, and then install them. To be fair, even though you (as someone reading this) are probably more capable of doing that than the average human, you&amp;rsquo;re likely not doing that even if you &lt;strong&gt;are&lt;/strong&gt; capable because you&amp;rsquo;re also likely installing kernel upgrades that contain vast quantities of code beyond your ability to understand&lt;sup id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;4&lt;/a&gt;&lt;/sup&gt;. We don&amp;rsquo;t rely on our personal ability, we rely on the ability of those around us to do that validation, and we rely on an existing (possibly transitive) trust relationship with those involved. You don&amp;rsquo;t know the people who created this blob, you likely don&amp;rsquo;t know people who do know the people who created this blob, these people probably don&amp;rsquo;t have an online presence that gives you more insight. Why should you trust them?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If it&amp;rsquo;s in ROM and it turns out to be hostile then nobody can fix it ever&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The people creating these blobs largely work for the same company that built the hardware in the first place. When they built that hardware they could have backdoored it in any number of ways. And if the hardware has a built-in copy of the code it runs, why do you trust that that copy isn&amp;rsquo;t backdoored? Maybe it isn&amp;rsquo;t and updates &lt;em&gt;would&lt;/em&gt; introduce a backdoor, but in that case if you buy new hardware that runs new code aren&amp;rsquo;t you putting yourself at the same risk?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Designing hardware where you&amp;rsquo;re able to provide updated code and nobody else can is just a dick move&lt;sup id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;5&lt;/a&gt;&lt;/sup&gt;. We shouldn&amp;rsquo;t encourage vendors who do that.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Humans are bad at writing code, and code running on ancillary hardware is no exception. It contains bugs. These bugs are sometimes very bad. &lt;a class=&#34;link&#34; href=&#34;https://cs.ru.nl/~cmeijer/publications/Self_Encrypting_Deception_Weaknesses_in_the_Encryption_of_Solid_State_Drives.pdf&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;This paper&lt;/a&gt; describes a set of vulnerabilities identified in code running on SSDs that made it possible to bypass the drives&amp;rsquo; hardware encryption. The SSD vendors released updates that fixed these issues. If the code couldn&amp;rsquo;t be replaced, anyone relying on those security features would need to replace the hardware.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Even if blobs are signed and can&amp;rsquo;t easily be replaced, the ones that aren&amp;rsquo;t encrypted can still be examined. The SSD vulnerabilities above were identifiable because researchers were able to reverse engineer the updates. It can be more annoying to audit binary code than source code, but it&amp;rsquo;s still possible.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Vulnerabilities in code running on other hardware &lt;a class=&#34;link&#34; href=&#34;https://i.blackhat.com/us-18/Thu-August-9/us-18-Grassi-Exploitation-of-a-Modern-Smartphone-Baseband-wp.pdf&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;can still compromise the OS&lt;/a&gt;. If someone can compromise the code running on your wifi card then if you don&amp;rsquo;t have a strong &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Input%E2%80%93output_memory_management_unit&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;IOMMU&lt;/a&gt; setup they&amp;rsquo;re going to be able to overwrite your running OS.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Replacing one non-free blob with another non-free blob increases the total number of non-free blobs involved in the whole system, but doesn&amp;rsquo;t increase the number that are actually executing at any point in time.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Ok we&amp;rsquo;re done with the things to consider. Please spend a few seconds thinking about what the tradeoffs are here and what your feelings are. Proceed when ready.&lt;/p&gt;
&lt;p&gt;I trust my CPU vendor. I don&amp;rsquo;t trust my CPU vendor because I want to, I trust my CPU vendor because I have no choice. I don&amp;rsquo;t think it&amp;rsquo;s likely that my CPU vendor has designed a CPU that identifies when I&amp;rsquo;m generating cryptographic keys and biases the RNG output so my keys are significantly weaker than they look, but it&amp;rsquo;s not literally impossible. I generate keys on it anyway, because what choice do I have? At some point I will buy a new laptop because Electron will no longer fit in 32GB of RAM and I will have to make the same affirmation of trust, because the alternative is that I just don&amp;rsquo;t have a computer. And in any case, I will be communicating with other people who generated their keys on CPUs I have no control over, and I will also be relying on them to be trustworthy. If I refuse to trust my CPU then I don&amp;rsquo;t get to computer, and if I don&amp;rsquo;t get to computer then I will be sad. I suspect I&amp;rsquo;m not alone here.&lt;/p&gt;
&lt;p&gt;Why would I install a code update on my CPU when my CPU&amp;rsquo;s job is to run my code in the first place? Because it turns out that CPUs are complicated and messy and they have their own bugs, and those bugs may be functional (for example, some performance counter functionality was broken on Sandy Bridge at release, and was then &lt;a class=&#34;link&#34; href=&#34;https://groups.google.com/g/linux.kernel/c/Bk3lNiC0Ys0&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;fixed&lt;/a&gt; with a microcode blob update) and if you update it your hardware works better. Or it might be that you&amp;rsquo;re running a CPU with &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Transient_execution_CPU_vulnerability&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;speculative execution bugs&lt;/a&gt; and there&amp;rsquo;s a microcode update that provides a mitigation for that even if your CPU is slower when you enable it, but at least now you can run virtual machines without code in those virtual machines being able to reach outside the hypervisor boundary and extract secrets from other contexts. When it&amp;rsquo;s put that way, why would I &lt;em&gt;not&lt;/em&gt; install the update?&lt;/p&gt;
&lt;p&gt;And the straightforward answer is that theoretically it could include new code that doesn&amp;rsquo;t act in my interests, either deliberately or not. And, yes, this is theoretically possible. Of course, if you don&amp;rsquo;t trust your CPU vendor, why are you buying CPUs from them, but well maybe they&amp;rsquo;ve been corrupted (in which case don&amp;rsquo;t buy any new CPUs from them either) or maybe they&amp;rsquo;ve just introduced a new vulnerability by accident, and also you&amp;rsquo;re in a position to determine whether the alleged security improvements matter to you at all. Do you care about speculative execution attacks if all software running on your system is trustworthy? Probably not! Do you need to update a blob that fixes something you don&amp;rsquo;t care about and which might introduce some sort of vulnerability? Seems like no!&lt;/p&gt;
&lt;p&gt;But there&amp;rsquo;s a difference between a recommendation for a fully informed device owner who has a full understanding of threats, and a recommendation for an average user who just wants their computer to work and to not be ransomwared. A code update on a wifi card may introduce a backdoor, or it may fix a flaw that lets someone compromise your machine with a hostile access point. Most people are just not going to be in a position to figure out which is more likely, and there&amp;rsquo;s no single answer that&amp;rsquo;s correct for everyone. What we &lt;em&gt;do&lt;/em&gt; know is that where vulnerabilities in this sort of code have been discovered, updates have tended to fix them - but nobody has flagged such an update as a real-world vector for system compromise.&lt;/p&gt;
&lt;p&gt;My personal opinion? You should make your own mind up, but also you shouldn&amp;rsquo;t impose that choice on others, because your threat model is not necessarily their threat model. Code updates are a reasonable default, but they shouldn&amp;rsquo;t be unilaterally imposed, and nor should they be blocked outright. And the best way to shift the balance of power away from vendors who insist on distributing non-free blobs is to demonstrate the benefits gained from them being free - a vendor who ships free code on their system enables their customers to improve their code and enable new functionality and make their hardware more attractive.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s impossible to say with absolute certainty that your security will be improved by installing code blobs. It&amp;rsquo;s also impossible to say with absolute certainty that it won&amp;rsquo;t. So far evidence tends to support the idea that most updates that claim to fix security issues do, and there&amp;rsquo;s not a lot of evidence to support the idea that updates add new backdoors. Overall I&amp;rsquo;d say that providing the updates is likely the right default for most users - and that that should never be strongly enforced, because people should be allowed to define their own security model, and whatever set of threats I&amp;rsquo;m worried about, someone else may have a good reason to focus on different ones.&lt;/p&gt;
&lt;div class=&#34;footnotes&#34; role=&#34;doc-endnotes&#34;&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;
&lt;p&gt;Code that runs on the CPU &lt;em&gt;before&lt;/em&gt; the OS is still usually described as firmware - UEFI is firmware even though it&amp;rsquo;s executing on the CPU, which should give a strong indication that the difference between &amp;ldquo;firmware&amp;rdquo; and &amp;ldquo;software&amp;rdquo; is largely arbitrary.&amp;#160;&lt;a href=&#34;#fnref:1&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;
&lt;p&gt;And, obviously &lt;a class=&#34;link&#34; href=&#34;https://www.google.com/search?q=foone&amp;#43;8051&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;8051&lt;/a&gt;&amp;#160;&lt;a href=&#34;#fnref:2&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;
&lt;p&gt;Because UEFI makes everything more complicated, UEFI makes this more complicated. Triggering a UEFI runtime service involves your OS jumping into firmware code at runtime, in the same context as the OS kernel. Sometimes this will trigger a jump into System Management Mode, but other times it won&amp;rsquo;t, and it&amp;rsquo;s just your kernel executing code that got dumped into RAM when your system booted.&amp;#160;&lt;a href=&#34;#fnref:3&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;
&lt;p&gt;&lt;em&gt;I&lt;/em&gt; don&amp;rsquo;t understand most of the diff between one kernel version and the next, and I don&amp;rsquo;t have time to read all of it either.&amp;#160;&lt;a href=&#34;#fnref:4&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;
&lt;p&gt;There&amp;rsquo;s a bunch of reasons to do this, the most reasonable of which is probably not wanting customers to replace the code and break their hardware and deal with the support overhead of that, but not being able to replace code running on hardware I own is always going to be an affront to me.&amp;#160;&lt;a href=&#34;#fnref:5&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
        </item>
        <item>
        <title>What is a PC compatible?</title>
        <link>https://codon.org.uk/~mjg59/blog/p/what-is-a-pc-compatible/</link>
        <pubDate>Sat, 03 Jan 2026 19:11:36 -0800</pubDate>
        
        <guid>https://codon.org.uk/~mjg59/blog/p/what-is-a-pc-compatible/</guid>
        <description>&lt;p&gt;Wikipedia says &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/IBM_PC_compatible&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;&amp;ldquo;An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models&amp;rdquo;&lt;/a&gt;. But what does this &lt;em&gt;actually&lt;/em&gt; mean? The obvious literal interpretation is that, for a device to be PC compatible, all software originally written for the IBM 5150 must run on it. Is this a reasonable definition? Is it one that any modern hardware can meet?&lt;/p&gt;
&lt;p&gt;Before we dig into that, let&amp;rsquo;s go back to the early days of the x86 industry. IBM had launched the PC built almost entirely around off-the-shelf Intel components, and shipped full schematics in the &lt;a class=&#34;link&#34; href=&#34;https://bitsavers.org/pdf/ibm/pc/pc/6025008_PC_Technical_Reference_Aug81.pdf&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;IBM PC Technical Reference Manual&lt;/a&gt;. Anyone could buy the same parts from Intel and build a compatible board. They&amp;rsquo;d still need an operating system, but Microsoft was happy to sell MS-DOS to anyone who&amp;rsquo;d turn up with money. The only thing stopping people from cloning the entire board was the BIOS, the component that sat between the raw hardware and much of the software running on it. The concept of a BIOS originated in &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/CP/M&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;CP/M&lt;/a&gt;, an operating system originally written in the 70s for systems based on the Intel 8080. At that point in time there was no meaningful standardisation - systems might use the same CPU but otherwise have entirely different hardware, and any software that made assumptions about the underlying hardware wouldn&amp;rsquo;t run elsewhere. CP/M&amp;rsquo;s BIOS was effectively an abstraction layer, a set of code that could be modified to suit the specific underlying hardware without needing to modify the rest of the OS. As long as applications only called BIOS functions, they didn&amp;rsquo;t need to care about the underlying hardware and would run on all systems that had a working CP/M port.&lt;/p&gt;
&lt;p&gt;By 1979, boards based on the 8086, Intel&amp;rsquo;s successor to the 8080, were hitting the market. The 8086 wasn&amp;rsquo;t machine code compatible with the 8080, but 8080 assembly code could be assembled to 8086 instructions to simplify porting old code. Despite this, the 8086 version of CP/M was taking some time to appear, and a company called &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Seattle_Computer_Products&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Seattle Computer Products&lt;/a&gt; started producing a new OS closely modelled on CP/M and using the same BIOS abstraction layer concept. When IBM started looking for an OS for their upcoming 8088 (an 8086 with an 8-bit data bus rather than a 16-bit one) based PC, a complicated chain of events resulted in Microsoft paying a one-off fee to Seattle Computer Products, porting their OS to IBM&amp;rsquo;s hardware, and the rest is history.&lt;/p&gt;
&lt;p&gt;But one key part of this was that even though what was now MS-DOS existed only to support IBM&amp;rsquo;s hardware, the BIOS abstraction remained, and the BIOS was owned by the hardware vendor - in this case, IBM. One significant difference, though, was that while CP/M systems typically included the BIOS on boot media, IBM integrated it into ROM. This meant that MS-DOS floppies didn&amp;rsquo;t include all the code needed to run on a PC - you needed IBM&amp;rsquo;s BIOS. To begin with, this wasn&amp;rsquo;t obviously a problem in the US market since, in a way that seems &lt;em&gt;extremely&lt;/em&gt; odd from where we are now in history, it wasn&amp;rsquo;t clear that machine code was actually copyrightable. In 1982 &lt;a class=&#34;link&#34; href=&#34;https://law.justia.com/cases/federal/appellate-courts/F2/685/870/301267/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Williams v. Artic&lt;/a&gt; determined that it could be, even if fixed in ROM - this ended up having broader industry impact in &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Apple_Computer,_Inc._v._Franklin_Computer_Corp.&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Apple v. Franklin&lt;/a&gt; and it became clear that clone machines making use of the original vendor&amp;rsquo;s ROM code wasn&amp;rsquo;t going to fly. Anyone wanting to make hardware compatible with the PC was going to have to find another way.&lt;/p&gt;
&lt;p&gt;And here&amp;rsquo;s where things diverge somewhat. Compaq famously performed clean-room reverse engineering of the IBM BIOS to produce a functionally equivalent implementation without violating copyright. Other vendors, well, were less fastidious - they came up with BIOS implementations that either implemented a subset of IBM&amp;rsquo;s functionality, or didn&amp;rsquo;t implement all the same behavioural quirks, and compatibility was restricted. In this era several vendors shipped customised versions of MS-DOS that supported different hardware (which you&amp;rsquo;d think wouldn&amp;rsquo;t be necessary given that&amp;rsquo;s what the BIOS was for, but still), and the set of PC software that would run on their hardware varied wildly. This was the era where vendors even shipped systems based on the &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Intel_80186&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Intel 80186&lt;/a&gt;, an improved 8086 that was both faster than the 8086 at the same clock speed and was also available at higher clock speeds. Clone vendors saw an opportunity to ship hardware that outperformed the PC, and some of them went for it.&lt;/p&gt;
&lt;p&gt;You&amp;rsquo;d think that IBM would have immediately jumped on this as well, but no - the 80186 integrated many components that were separate chips on 8086 (and 8088) based platforms, but crucially didn&amp;rsquo;t maintain compatibility. As long as everything went via the BIOS this shouldn&amp;rsquo;t have mattered, but there were &lt;em&gt;many&lt;/em&gt; cases where going via the BIOS introduced performance overhead or simply didn&amp;rsquo;t offer the functionality that people wanted, and since this was the era of single-user operating systems with no memory protection, there was nothing stopping developers from just hitting the hardware directly to get what they wanted. Changing the underlying hardware would break them.&lt;/p&gt;
&lt;p&gt;And that&amp;rsquo;s what happened. IBM was the biggest player, so people targeted IBM&amp;rsquo;s platform. When BIOS interfaces weren&amp;rsquo;t sufficient they hit the hardware directly - and even if they weren&amp;rsquo;t doing that, they&amp;rsquo;d end up depending on behavioural quirks of IBM&amp;rsquo;s BIOS implementation. The market for DOS-compatible but not PC-compatible mostly vanished, although there were notable exceptions - in Japan the &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/PC-98&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;PC-98&lt;/a&gt; platform achieved significant success, largely as a result of the Japanese market being pretty distinct from the rest of the world at that point in time, but also because it actually handled Japanese at a point where the PC platform was basically restricted to ASCII or minor variants thereof.&lt;/p&gt;
&lt;p&gt;So, things remained fairly stable for some time. Underlying hardware changed - the 80286 introduced the ability to access more than a megabyte of address space and would promptly have broken a bunch of things except IBM came up with an utterly terrifying hack that bit me &lt;a class=&#34;link&#34; href=&#34;https://mjg59.livejournal.com/118098.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;back in 2009&lt;/a&gt;, and which ended up sufficiently codified into Intel design that it was one mechanism for &lt;a class=&#34;link&#34; href=&#34;https://connortumbleson.com/2021/07/19/the-xbox-and-a20-line/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;breaking the original Xbox security&lt;/a&gt;. The first 286 PC even introduced a new keyboard controller that supported better keyboards but which remained backwards compatible with the original PC to avoid breaking software. Even when IBM launched the PS/2, the first significant rearchitecture of the PC platform with a brand new expansion bus and associated patents to prevent people cloning it without paying off IBM, they made sure that all the hardware was backwards compatible.&lt;/p&gt;
&lt;p&gt;For decades, PC compatibility meant not only supporting the official interfaces, it meant supporting the underlying hardware. This is what made it possible to ship install media that was expected to work on any PC, even if you&amp;rsquo;d need some additional media for hardware-specific drivers. It&amp;rsquo;s something that still distinguishes the PC market from the ARM desktop market. But it&amp;rsquo;s not as true as it used to be, and it&amp;rsquo;s interesting to think about whether it ever was as true as people thought.&lt;/p&gt;
&lt;p&gt;Let&amp;rsquo;s take an extreme case. If I buy a modern laptop, can I run 1981-era DOS on it? The answer is clearly no. First, modern systems largely don&amp;rsquo;t implement the legacy BIOS. The entire abstraction layer that DOS relies on isn&amp;rsquo;t there, having been replaced with UEFI. When UEFI first appeared it generally shipped with a Compatibility Support Module, a layer that would translate BIOS interrupts into UEFI calls, allowing vendors to ship hardware with more modern firmware and drivers without having to duplicate them to support older operating systems&lt;sup id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;. Is this system PC compatible? By the strictest of definitions, no.&lt;/p&gt;
&lt;p&gt;Ok. But the hardware is broadly the same, right? There&amp;rsquo;s projects like &lt;a class=&#34;link&#34; href=&#34;https://github.com/FlyGoat/CSMWrap&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;CSMWrap&lt;/a&gt; that allow a CSM to be implemented on top of stock UEFI, so everything that hits BIOS should work just fine. And well yes, assuming they implement the BIOS interfaces fully, anything using the BIOS interfaces will be happy. But what about stuff that doesn&amp;rsquo;t? Old software is going to expect that my &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Sound_Blaster&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Sound Blaster&lt;/a&gt; is going to be on a limited set of IRQs and is going to assume that it&amp;rsquo;s going to be able to install its own interrupt handler and ACK those on the interrupt controller itself and that&amp;rsquo;s really not going to work when you have a PCI card that&amp;rsquo;s been mapped onto some &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Advanced_Programmable_Interrupt_Controller&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;APIC&lt;/a&gt; vector, and also if your keyboard is attached via USB or SPI then reading it via the CSM will work (because it&amp;rsquo;s calling into UEFI to get the actual data) but trying to read the keyboard controller directly won&amp;rsquo;t&lt;sup id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;2&lt;/a&gt;&lt;/sup&gt;, so you&amp;rsquo;re still actually relying on the firmware to do the right thing but it&amp;rsquo;s not, because the average person who wants to run DOS on a modern computer owns three fursuits and some knee length socks and while you are important and vital and I love you all you&amp;rsquo;re not enough to actually convince a transglobal megacorp to flip the bit in the chipset that makes all this old stuff work.&lt;/p&gt;
&lt;p&gt;But imagine you are, or imagine you&amp;rsquo;re the sort of person who (like me) thinks writing their own firmware for their weird Chinese Thinkpad knockoff motherboard is a good and sensible use of their time - can you make this work fully? Haha no of course not. Yes, you can probably make sure that the PCI Sound Blaster that&amp;rsquo;s plugged into a Thunderbolt dock has interrupt routing to something that is absolutely no longer an &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Intel_8259&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;8259&lt;/a&gt; but is pretending to be so you can just handle IRQ 5 yourself, and you can probably still even write some SMM code that will make your keyboard work, but what about the corner cases? What if you&amp;rsquo;re trying to run something &lt;a class=&#34;link&#34; href=&#34;https://www.os2museum.com/wp/the-a20-gate-it-wasnt-wordstar/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;built with IBM Pascal 1.0&lt;/a&gt;? There&amp;rsquo;s a risk that it&amp;rsquo;ll assume that trying to access an address just over 1MB will give it the data stored just above 0, and now it&amp;rsquo;ll break. It&amp;rsquo;d work fine on an actual PC, and it won&amp;rsquo;t work here, so are we PC compatible?&lt;/p&gt;
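&lt;p&gt;The wraparound behaviour being relied on here is simple arithmetic: real-mode x86 forms a physical address as (segment * 16) + offset, which for segments near the top of the 1MB range produces a 21-bit result - and on an 8088, which had no address line 20, the result silently wrapped back to the bottom of memory. A minimal sketch of the two behaviours (illustrative, not taken from any real toolchain):&lt;/p&gt;

```python
# Real-mode physical address calculation, with and without the 8088-style
# 20-bit wraparound that the A20 gate hack later emulated on the 286.

def physical_address(segment, offset, a20_enabled):
    addr = (segment * 16) + offset
    if not a20_enabled:
        # The 8088 had no address line 20, so addresses wrap at 1MB.
        addr = addr % 0x100000
    return addr

# 0xFFFF:0x0010 addresses the byte one past the 1MB boundary...
print(hex(physical_address(0xFFFF, 0x0010, True)))   # 0x100000
# ...but on the original PC it wrapped around to address 0.
print(hex(physical_address(0xFFFF, 0x0010, False)))  # 0x0
```

&lt;p&gt;Code that assumed the second behaviour worked fine on a 5150, and breaks the moment the full address bus becomes visible.&lt;/p&gt;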
&lt;p&gt;That&amp;rsquo;s a very interesting abstract question and I&amp;rsquo;m going to entirely ignore it. Let&amp;rsquo;s talk about PC graphics&lt;sup id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;. The original PC shipped with two different optional graphics cards - the &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/IBM_Monochrome_Display_Adapter&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Monochrome Display Adapter&lt;/a&gt; and the &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Color_Graphics_Adapter&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Color Graphics Adapter&lt;/a&gt;. If you wanted to run games you were doing it on CGA, because MDA had no mechanism to address individual pixels so you could only render full characters. So, even on the original PC, there was software that would run on some hardware but not on other hardware.&lt;/p&gt;
&lt;p&gt;Things got worse from there. CGA was, to put it mildly, shit. Even IBM knew this - in 1984 they launched the &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/IBM_PCjr&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;PCjr&lt;/a&gt;, intended to make the PC platform more attractive to home users. As well as maybe the worst keyboard ever to be associated with the IBM brand, IBM added some new video modes that allowed displaying more than 4 colours on screen at once&lt;sup id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;4&lt;/a&gt;&lt;/sup&gt;, and software that depended on that wouldn&amp;rsquo;t display correctly on an original PC. Of course, because the PCjr was a complete commercial failure, it wouldn&amp;rsquo;t display correctly on any future PCs either. This is going to become a theme.&lt;/p&gt;
&lt;p&gt;There&amp;rsquo;s never been a properly specified PC graphics platform. BIOS support for advanced graphics modes&lt;sup id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;5&lt;/a&gt;&lt;/sup&gt; ended up specified by &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/VESA_BIOS_Extensions&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;VESA&lt;/a&gt; rather than IBM, and even then getting good performance involved hitting hardware directly. It wasn&amp;rsquo;t until Microsoft specced &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/DirectX&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;DirectX&lt;/a&gt; that anything was broadly usable even if you limited yourself to Microsoft platforms, and this was an OS-level API rather than a hardware one. If you stick to BIOS interfaces then CGA-era code will work fine on graphics hardware produced up until the 20-teens, but if you were trying to hit CGA hardware registers directly then you&amp;rsquo;re going to have a bad time. This isn&amp;rsquo;t even a new thing - even if we restrict ourselves to the authentic IBM PC range (and ignore the PCjr), by the time we get to the &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Enhanced_Graphics_Adapter&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Enhanced Graphics Adapter&lt;/a&gt; we&amp;rsquo;re &lt;a class=&#34;link&#34; href=&#34;https://www.vogons.org/viewtopic.php?t=44444&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;not entirely CGA compatible&lt;/a&gt;. Is an IBM PC/AT with EGA PC compatible? You&amp;rsquo;d likely say &amp;ldquo;yes&amp;rdquo;, but there&amp;rsquo;s software written for the original PC that won&amp;rsquo;t work there.&lt;/p&gt;
&lt;p&gt;And, well, let&amp;rsquo;s go even more basic. The original PC had a well defined CPU frequency and a well defined CPU that would take a well defined number of cycles to execute any given instruction. People could write software that depended on that. When CPUs got faster, some software broke. This resulted in systems with a &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/Turbo_button&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Turbo Button&lt;/a&gt; - a button that would drop the clock rate to something approximating the original PC so stuff would stop breaking. It&amp;rsquo;s fine, we&amp;rsquo;d later end up with &lt;a class=&#34;link&#34; href=&#34;https://www.os2museum.com/wp/those-win9x-crashes-on-fast-machines/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Windows crashing on fast machines&lt;/a&gt; because hardware details will absolutely bleed through.&lt;/p&gt;
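&lt;p&gt;The Turbo button existed because of how delay loops were typically calibrated: the programmer hard-coded how many loop iterations took a given time on a 4.77MHz 8088, and that constant was baked into the binary. The arithmetic of why this breaks is trivial (figures below are illustrative, not measured from real hardware):&lt;/p&gt;

```python
# A busy-wait loop executes a fixed number of CPU cycles, so the wall-clock
# delay it produces scales inversely with the actual clock frequency.

def delay_achieved_ms(intended_ms, calibrated_mhz, actual_mhz):
    return intended_ms * (calibrated_mhz / actual_mhz)

print(delay_achieved_ms(100, 4.77, 4.77))  # 100.0 - correct on the original PC
print(delay_achieved_ms(100, 4.77, 33.0))  # ~14.5 - far too short on a 33MHz 386
```

&lt;p&gt;Dropping the clock back to something approximating 4.77MHz made the hard-coded constant roughly right again, which is all the Turbo button did.&lt;/p&gt;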
&lt;p&gt;So, what&amp;rsquo;s a PC compatible? No modern PC will run the DOS that the original PC ran. If you try hard enough you can get it into a state where it&amp;rsquo;ll run most old software, as long as it doesn&amp;rsquo;t make assumptions about memory segmentation or your CPU, or want to talk to your GPU directly. And even then it&amp;rsquo;ll potentially be unusable or crash because time is hard.&lt;/p&gt;
&lt;p&gt;The truth is that there&amp;rsquo;s no way we can technically describe a PC Compatible now - or, honestly, ever. If you sent a modern PC back to 1981 the media would be amazed and also point out that it didn&amp;rsquo;t run Flight Simulator. &amp;ldquo;PC Compatible&amp;rdquo; is a socially defined construct, just like &amp;ldquo;Woman&amp;rdquo;. We can get hung up on the details or we can just chill.&lt;/p&gt;
&lt;div class=&#34;footnotes&#34; role=&#34;doc-endnotes&#34;&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;
&lt;p&gt;Windows 7 is entirely happy to boot on UEFI systems except that it relies on being able to use a BIOS call to set the video mode during boot, which has resulted in things like &lt;a class=&#34;link&#34; href=&#34;https://github.com/manatails/uefiseven&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;UEFISeven&lt;/a&gt; to make that work on modern systems that don&amp;rsquo;t provide BIOS compatibility&amp;#160;&lt;a href=&#34;#fnref:1&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;
&lt;p&gt;Back in the 90s and early 2000s operating systems didn&amp;rsquo;t necessarily have native drivers for USB input devices, so there was hardware support for trapping OS accesses to the keyboard controller and redirecting that into &lt;a class=&#34;link&#34; href=&#34;https://en.wikipedia.org/wiki/System_Management_Mode&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;System Management Mode&lt;/a&gt; where some software that was invisible to the OS would speak to the USB controller and then fake a response - anyway, that&amp;rsquo;s how I made a laptop that could &lt;a class=&#34;link&#34; href=&#34;https://mjg59.dreamwidth.org/52149.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;boot unmodified MacOS X&lt;/a&gt;&amp;#160;&lt;a href=&#34;#fnref:2&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;
&lt;p&gt;(my name will not be &lt;a class=&#34;link&#34; href=&#34;https://www.bonequest.com/150&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Wolfwings Shadowflight&lt;/a&gt;)&amp;#160;&lt;a href=&#34;#fnref:3&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;
&lt;p&gt;Yes yes ok &lt;a class=&#34;link&#34; href=&#34;https://trixter.oldskool.org/2015/04/07/8088-mph-we-break-all-your-emulators/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;8088 MPH&lt;/a&gt; demonstrates that if you &lt;em&gt;really&lt;/em&gt; want to you can do better than that on CGA&amp;#160;&lt;a href=&#34;#fnref:4&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;
&lt;p&gt;and by advanced we&amp;rsquo;re still talking about the 90s, don&amp;rsquo;t get excited&amp;#160;&lt;a href=&#34;#fnref:5&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
        </item>
        
    </channel>
</rss>
