Improving Google Fonts Performance

Mikes Notes

I use the Noto Sans font at Ajabbi. Maybe this approach will improve loading performance.

Resources

Improving Google Fonts Performance

By: Cosima Mielke (cm)

Smashing Magazine: #475, 25/09/2024

How can we make Google Fonts load faster? Scott Jehl set up a repository for testing Google’s default font embed code against more optimal approaches. The setup loads the SUSE Google Font from Google’s font server in a variety of ways to find out which approach is most efficient.



As Scott’s tests show, loading fonts with @font-face is faster than the render-blocking default embed that Google provides. The greatest improvement comes when @font-face is combined with font-display: swap: text rendering happens a full second sooner than with the default Google embed. And even when the page doesn’t use font-display: swap, @font-face still renders the page 300ms earlier than Google’s standard embed code. (cm)

Website migration disrupts service

Mikes Notes

GreenGeeks hosts the ajabbi.com website at its Singapore data centre. The website is being moved to a new web server, and service will be disrupted for up to 12 hours while the new DNS record propagates.

Localisation around the world

Mikes Notes

These cartoons say it all.

Resources

Bookworms around the world

"Funny how no matter where you're from, too much reading will always earn an affectionate jibe from your fellows. Though, "bookworm" doesn't have the same zest as "letter-smitten."

For more witty, observational comics about language and culture, check out Itchyfeet's website." - Lokalise Blog

UX Design KPI

Mikes Notes

This list of KPIs comes from Vitaly Friedman's free Smashing Magazine workshop on Inclusive Design Patterns, which was held this morning on Zoom.

Resources

UX Design KPI

Improve

  • Accuracy of data ≈ 100%.
  • Time to complete < 35s.
  • Time to relevance < 30s.
  • Frequency of errors < 3/v.
  • Error recovery speed < 7s.
  • Top tasks success > 80%.
  • System Usability Scale > 70.
  • WCAG AA coverage ≈ 100%.
  • Core Web Vitals ≈ 100%.

Measure

  • Sales/marketing costs < $15K/w.

Reduce

  • Flesch reading ease score > 60.
  • “Turn-around” score < 1 week.
  • Service desk inquiries < 35/w.
  • Search query iterations < 3/query.
  • Time to release/update < 14 days.
  • Non-content on a page < 25%.
  • Environmental impact < 0.3g/p.
  • Onboarding time < 15 sec.

Normalize.css

Mikes Notes

Another modern alternative to CSS resets is Normalize.css. It normalizes styles for various elements, corrects bugs and browser inconsistencies, improves usability with subtle modifications, and uses detailed comments to explain what code does. - Smashing Magazine

It can also be used with HTML5 Boilerplate.

…as used by Twitter, TweetDeck, GitHub, Soundcloud, Guardian, Medium, GOV.UK, Bootstrap, HTML5 Boilerplate, and many others.

Normalize.css makes browsers render all elements more consistently and in line with modern standards. It precisely targets only the styles that need normalizing.

Version

The version downloaded was v8.0.1, and the code is listed below.

What does it do?

  • Preserves useful defaults, unlike many CSS resets.
  • Normalizes styles for a wide range of elements.
  • Corrects bugs and common browser inconsistencies.
  • Improves usability with subtle modifications.
  • Explains what code does using detailed comments.

Browser support

  • Chrome
  • Edge
  • Firefox ESR+
  • Internet Explorer 10+
  • Safari 8+
  • Opera

Resources

About normalize.css

By: Nicolas Gallagher

Normalize.css is a small CSS file that provides better cross-browser consistency in the default styling of HTML elements. It’s a modern, HTML5-ready, alternative to the traditional CSS reset.

  • Normalize.css project site
  • Normalize.css source on GitHub

At the time of writing, normalize.css is used in some form by Twitter Bootstrap, HTML5 Boilerplate, GOV.UK, Rdio, CSS Tricks, and many other frameworks, toolkits, and sites.

Overview

Normalize.css is an alternative to CSS resets. The project is the product of hundreds of hours of extensive research on the differences between default browser styles.

The aims of normalize.css are as follows:

  • Preserve useful browser defaults rather than erasing them.
  • Normalize styles for a wide range of HTML elements.
  • Correct bugs and common browser inconsistencies.
  • Improve usability with subtle modifications.
  • Explain the code using comments and detailed documentation.

It supports a wide range of browsers (including mobile browsers) and includes CSS that normalizes HTML5 elements, typography, lists, embedded content, forms, and tables.

Despite the project being based on the principle of normalization, it uses pragmatic defaults where they are preferable.

Normalize vs Reset

It’s worth understanding in greater detail how normalize.css differs from traditional CSS resets.

Normalize.css preserves useful defaults

Resets impose a homogeneous visual style by flattening the default styles for almost all elements. In contrast, normalize.css retains many useful default browser styles. This means that you don’t have to redeclare styles for all the common typographic elements.

When an element has different default styles in different browsers, normalize.css aims to make those styles consistent and in line with modern standards when possible.

Normalize.css corrects common bugs

It fixes common desktop and mobile browser bugs that are out of scope for resets. This includes display settings for HTML5 elements, correcting font-size for preformatted text, SVG overflow in IE9, and many form-related bugs across browsers and operating systems.

For example, normalize.css makes the HTML5 search input type cross-browser consistent and stylable (see the [type="search"] rules in the full source listed below).

Resets often fail to bring browsers to a level starting point with regard to how an element is rendered. This is particularly true of forms – an area where normalize.css can provide some significant assistance.

Normalize.css doesn’t clutter your debugging tools

A common irritation when using resets is the large inheritance chain that is displayed in browser CSS debugging tools.

This is not such an issue with normalize.css because of the targeted styles and the conservative use of multiple selectors in rulesets.

Normalize.css is modular

The project is broken down into relatively independent sections, making it easy for you to see exactly which elements need specific styles. Furthermore, it gives you the potential to remove sections (e.g., the form normalizations) if you know they will never be needed by your website.

Normalize.css has extensive documentation

The normalize.css code is based on detailed cross-browser research and methodical testing. The file is heavily documented inline and further expanded upon in the GitHub Wiki. This means that you can find out what each line of code is doing, why it was included, what the differences are between browsers, and more easily run your own tests.

The project aims to help educate people on how browsers render elements by default, and make it easier for them to be involved in submitting improvements.

How to use normalize.css

First, install or download normalize.css from GitHub. There are then two main ways to make use of it:

  • Approach 1: use normalize.css as a starting point for your own project’s base CSS, customising the values to match the design’s requirements.
  • Approach 2: include normalize.css untouched and build upon it, overriding the defaults later in your CSS if necessary.

Closing comments

Normalize.css is significantly different in scope and execution to CSS resets. It’s worth trying it out to see if it fits with your development approach and preferences.

The project is developed in the open on GitHub. Anyone can report issues and submit patches. The full history of the project is available for anyone to see, and the context and reasoning for all changes can be found in the commit messages and the issue threads.

Related reading

Detailed information on default UA styles: WHATWG suggestions for rendering HTML documents, Internet Explorer User Agent Style Sheets, and CSS2.1 User Agent Style Sheet Defaults.

Normalize.css code

/*! normalize.css v8.0.1 | MIT License | github.com/necolas/normalize.css */

/* Document
   ========================================================================== */

/**
 * 1. Correct the line height in all browsers.
 * 2. Prevent adjustments of font size after orientation changes in iOS.
 */

html {
  line-height: 1.15; /* 1 */
  -webkit-text-size-adjust: 100%; /* 2 */
}

/* Sections
   ========================================================================== */

/**
 * Remove the margin in all browsers.
 */

body {
  margin: 0;
}

/**
 * Render the `main` element consistently in IE.
 */

main {
  display: block;
}

/**
 * Correct the font size and margin on `h1` elements within `section` and
 * `article` contexts in Chrome, Firefox, and Safari.
 */

h1 {
  font-size: 2em;
  margin: 0.67em 0;
}

/* Grouping content
   ========================================================================== */

/**
 * 1. Add the correct box sizing in Firefox.
 * 2. Show the overflow in Edge and IE.
 */

hr {
  box-sizing: content-box; /* 1 */
  height: 0; /* 1 */
  overflow: visible; /* 2 */
}

/**
 * 1. Correct the inheritance and scaling of font size in all browsers.
 * 2. Correct the odd `em` font sizing in all browsers.
 */

pre {
  font-family: monospace, monospace; /* 1 */
  font-size: 1em; /* 2 */
}

/* Text-level semantics
   ========================================================================== */

/**
 * Remove the gray background on active links in IE 10.
 */

a {
  background-color: transparent;
}

/**
 * 1. Remove the bottom border in Chrome 57-
 * 2. Add the correct text decoration in Chrome, Edge, IE, Opera, and Safari.
 */

abbr[title] {
  border-bottom: none; /* 1 */
  text-decoration: underline; /* 2 */
  text-decoration: underline dotted; /* 2 */
}

/**
 * Add the correct font weight in Chrome, Edge, and Safari.
 */

b,
strong {
  font-weight: bolder;
}

/**
 * 1. Correct the inheritance and scaling of font size in all browsers.
 * 2. Correct the odd `em` font sizing in all browsers.
 */

code,
kbd,
samp {
  font-family: monospace, monospace; /* 1 */
  font-size: 1em; /* 2 */
}

/**
 * Add the correct font size in all browsers.
 */

small {
  font-size: 80%;
}

/**
 * Prevent `sub` and `sup` elements from affecting the line height in
 * all browsers.
 */

sub,
sup {
  font-size: 75%;
  line-height: 0;
  position: relative;
  vertical-align: baseline;
}

sub {
  bottom: -0.25em;
}

sup {
  top: -0.5em;
}

/* Embedded content
   ========================================================================== */

/**
 * Remove the border on images inside links in IE 10.
 */

img {
  border-style: none;
}

/* Forms
   ========================================================================== */

/**
 * 1. Change the font styles in all browsers.
 * 2. Remove the margin in Firefox and Safari.
 */

button,
input,
optgroup,
select,
textarea {
  font-family: inherit; /* 1 */
  font-size: 100%; /* 1 */
  line-height: 1.15; /* 1 */
  margin: 0; /* 2 */
}

/**
 * Show the overflow in IE.
 * 1. Show the overflow in Edge.
 */

button,
input { /* 1 */
  overflow: visible;
}

/**
 * Remove the inheritance of text transform in Edge, Firefox, and IE.
 * 1. Remove the inheritance of text transform in Firefox.
 */

button,
select { /* 1 */
  text-transform: none;
}

/**
 * Correct the inability to style clickable types in iOS and Safari.
 */

button,
[type="button"],
[type="reset"],
[type="submit"] {
  -webkit-appearance: button;
}

/**
 * Remove the inner border and padding in Firefox.
 */

button::-moz-focus-inner,
[type="button"]::-moz-focus-inner,
[type="reset"]::-moz-focus-inner,
[type="submit"]::-moz-focus-inner {
  border-style: none;
  padding: 0;
}

/**
 * Restore the focus styles unset by the previous rule.
 */

button:-moz-focusring,
[type="button"]:-moz-focusring,
[type="reset"]:-moz-focusring,
[type="submit"]:-moz-focusring {
  outline: 1px dotted ButtonText;
}

/**
 * Correct the padding in Firefox.
 */

fieldset {
  padding: 0.35em 0.75em 0.625em;
}

/**
 * 1. Correct the text wrapping in Edge and IE.
 * 2. Correct the color inheritance from `fieldset` elements in IE.
 * 3. Remove the padding so developers are not caught out when they zero out
 *    `fieldset` elements in all browsers.
 */

legend {
  box-sizing: border-box; /* 1 */
  color: inherit; /* 2 */
  display: table; /* 1 */
  max-width: 100%; /* 1 */
  padding: 0; /* 3 */
  white-space: normal; /* 1 */
}

/**
 * Add the correct vertical alignment in Chrome, Firefox, and Opera.
 */

progress {
  vertical-align: baseline;
}

/**
 * Remove the default vertical scrollbar in IE 10+.
 */

textarea {
  overflow: auto;
}

/**
 * 1. Add the correct box sizing in IE 10.
 * 2. Remove the padding in IE 10.
 */

[type="checkbox"],
[type="radio"] {
  box-sizing: border-box; /* 1 */
  padding: 0; /* 2 */
}

/**
 * Correct the cursor style of increment and decrement buttons in Chrome.
 */

[type="number"]::-webkit-inner-spin-button,
[type="number"]::-webkit-outer-spin-button {
  height: auto;
}

/**
 * 1. Correct the odd appearance in Chrome and Safari.
 * 2. Correct the outline style in Safari.
 */

[type="search"] {
  -webkit-appearance: textfield; /* 1 */
  outline-offset: -2px; /* 2 */
}

/**
 * Remove the inner padding in Chrome and Safari on macOS.
 */

[type="search"]::-webkit-search-decoration {
  -webkit-appearance: none;
}

/**
 * 1. Correct the inability to style clickable types in iOS and Safari.
 * 2. Change font properties to `inherit` in Safari.
 */

::-webkit-file-upload-button {
  -webkit-appearance: button; /* 1 */
  font: inherit; /* 2 */
}

/* Interactive
   ========================================================================== */

/*
 * Add the correct display in Edge, IE 10+, and Firefox.
 */

details {
  display: block;
}

/*
 * Add the correct display in all browsers.
 */

summary {
  display: list-item;
}

/* Misc
   ========================================================================== */

/**
 * Add the correct display in IE 10+.
 */

template {
  display: none;
}

/**
 * Add the correct display in IE 10.
 */

[hidden] {
  display: none;
}

Evolutionary Algorithms: Nature as Inspiration for Problem Solving

Mikes Notes

How do Evolutionary Algorithms work?

The basic concept behind evolutionary algorithms is relatively simple. A population of potential solutions to a problem is generated randomly. These solutions are then evaluated based on their fitness - their ability to solve the given problem. The individuals with the highest fitness are selected to reproduce and produce offspring. These offspring inherit some of the characteristics of their parents, and this process of selection, recombination, and mutation is repeated over many generations.

Evolutionary algorithms can converge towards an optimal or near-optimal solution to the given problem by continuously iterating and refining the solutions in the population.
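
The basic loop described above can be sketched in a few dozen lines of Python. This is a minimal illustration, not production code; the bitstring encoding, fitness function, population size, and mutation rate are arbitrary choices for demonstration.

import random

TARGET_LENGTH = 20          # each candidate solution is a bitstring of this length
POPULATION_SIZE = 30
MUTATION_RATE = 0.02
GENERATIONS = 100

def fitness(candidate):
    # Toy objective: maximise the number of 1-bits in the string.
    return sum(candidate)

def crossover(parent_a, parent_b):
    # Single-point recombination: the offspring inherits a prefix from one
    # parent and the remainder from the other.
    point = random.randint(1, TARGET_LENGTH - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(candidate):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in candidate]

def evolve():
    # Start from a randomly generated population of potential solutions.
    population = [[random.randint(0, 1) for _ in range(TARGET_LENGTH)]
                  for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:POPULATION_SIZE // 2]
        # Reproduction: offspring inherit characteristics from two parents,
        # with occasional mutation.
        offspring = [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POPULATION_SIZE - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best candidate:", best, "fitness:", fitness(best))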

Applications of Evolutionary Algorithms

Evolutionary algorithms have found applications in a wide range of fields and disciplines. Some typical applications include:

  • Engineering: Evolutionary algorithms are often used to design and optimise complex systems and structures.
  • Finance: These algorithms have been applied in stock market analysis, portfolio optimization, and risk management.
  • Biology: Evolutionary algorithms are utilized in phylogenetics, protein structure prediction, and genome assembly.
  • Computer Science: They are commonly used in data mining, image processing, and pattern recognition.

Conclusion

Evolutionary algorithms offer a powerful and versatile approach to problem-solving that continues to find new applications and opportunities across various domains. Drawing on the principles of evolution in nature, these algorithms have proven effective in finding innovative solutions to complex optimization problems.

So, next time you are faced with a challenging problem, consider looking to nature for inspiration and exploring the fascinating world of evolutionary algorithms.

The Architecture of Open Source Applications

Mikes Notes

The latest Quastor engineering newsletter included a link to these free books, which teach software architecture using practical open-source examples.

Resources

Article

Architects look at thousands of buildings during their training, and study critiques of those buildings written by masters. In contrast, most software developers only ever get to know a handful of large programs well—usually programs they wrote themselves—and never study the great programs of history. As a result, they repeat one another's mistakes rather than building on one another's successes.

Our goal is to change that. In these two books, the authors of four dozen open source applications explain how their software is structured, and why. What are each program's major components? How do they interact? And what did their builders learn during their development? In answering these questions, the contributors to these books provide unique insights into how they think.

If you are a junior developer, and want to learn how your more experienced colleagues think, these books are the place to start. If you are an intermediate or senior developer, and want to see how your peers have solved hard design problems, these books can help you too. - AOSA

AOSA Volume 1

AOSA Volume 2

The Performance of Open Source Applications

500 Lines or Less

Introduction Michael DiBernardo
1 Blockcode: A visual programming toolkit Dethe Elza
2 A Continuous Integration System Malini Das
3 Clustering by Consensus Dustin J. Mitchell
4 Contingent: A Fully Dynamic Build System Brandon Rhodes and Daniel Rocco
5 A Web Crawler With asyncio Coroutines A. Jesse Jiryu Davis and Guido van Rossum
6 Dagoba: an in-memory graph database Dann Toliver
7 DBDB: Dog Bed Database Taavi Burns
8 An Event-Driven Web Framework Leo Zovic
9 A Flow Shop Scheduler Dr. Christian Muise
10 An Archaeology-Inspired Database Yoav Rubin
11 Making Your Own Image Filters Cate Huston
12 A Python Interpreter Written in Python Allison Kaptur
13 A 3D Modeller Erick Dransch
14 A Simple Object Model Carl Friedrich Bolz
15 Optical Character Recognition (OCR) Marina Samuel
16 A Pedometer in the Real World Dessy Daskalov
17 The Same-Origin Policy Eunsuk Kang, Santiago Perez De Rosso, and Daniel Jackson
18 A Rejection Sampler Jessica B. Hamrick
19 Web Spreadsheet Audrey Tang
20 Static Analysis Leah Hanson
21 A Template Engine Ned Batchelder
22 A Simple Web Server Greg Wilson

License and Royalties

This work is made available under the Creative Commons Attribution 3.0 Unported license. Please see the full description of the license for details. All royalties from sales of these books will be donated to Amnesty International.

Contributing

Dozens of volunteers worked hard to create this book, but there is still lots to do. You can help by reporting errors, by helping to translate the content into other languages and formats, or by describing the architecture of other open source projects. Please contact the coordinators for the various translations listed below, or mail us directly at gvwilson@third-bit.com if you would like to start a new translation or write a chapter yourself.

Digital signatures and how to avoid them

Mikes Notes

An interesting article on security.

Resources

Digital signatures and how to avoid them

By: Neil Madden

neilmadden.blog: 18 September 2024

Wikipedia’s definition of a digital signature is:

A digital signature is a mathematical scheme for verifying the authenticity of digital messages or documents. A valid digital signature on a message gives a recipient confidence that the message came from a sender known to the recipient. - Wikipedia

They also have a handy diagram of the process by which digital signatures are created and verified:

Source: https://commons.m.wikimedia.org/wiki/File:Private_key_signing.svg#mw-jump-to-license (CC-BY-SA)

Alice signs a message using her private key and Bob can then verify that the message came from Alice, and hasn’t been tampered with, using her public key. This all seems straightforward and uncomplicated and is probably most developers’ view of what signatures are for and how they should be used. This has led to the widespread use of signatures for all kinds of things: validating software updates, authenticating SSL connections, and so on.

But cryptographers have a different way of looking at digital signatures that has some surprising aspects. This more advanced way of thinking about digital signatures can tell us a lot about what are appropriate, and inappropriate, use-cases.

Identification protocols

There are several ways to build secure signature schemes. Although you might immediately think of RSA, the scheme perhaps most beloved by cryptographers is Schnorr signatures. These form the basis of modern EdDSA signatures, and also (in heavily altered form) DSA/ECDSA.

The story of Schnorr signatures starts not with a signature scheme, but instead with an interactive identification protocol. An identification protocol is a way to prove who you are (the “prover”) to some verification service (the “verifier”). Think logging into a website. But note that the protocol is only concerned with proving who you are, not in establishing a secure session or anything like that.

There are a whole load of different ways to do this, like sending a username and password or something like WebAuthn/passkeys (an ironic mention that we’ll come back to later). One particularly elegant protocol is known as Schnorr’s protocol. It’s elegant because it is simple and only relies on basic security conjectures that are widely accepted, and it also has some nice properties that we’ll mention shortly.

The basic structure of the protocol involves three phases: Commit-Challenge-Response. If you are familiar with challenge-response authentication protocols this just adds an additional commitment message at the start.

Alice (for it is she!) wants to prove to Bob who she is. Alice already has a long-term private key, a, and Bob already has the corresponding public key, A. These keys are in a Diffie-Hellman-like finite field or elliptic curve group, so we can say A = g^a mod p where g is a generator and p is the prime modulus of the group. The protocol then works like this:

  1. Alice generates a random ephemeral key r and the corresponding public key R = g^r mod p. She sends R to Bob as the commitment.
  2. Bob stores R, generates a random challenge c, and sends that to Alice.
  3. Alice computes s = ac + r and sends that back to Bob as the response.
  4. Finally, Bob checks whether g^s = A^c * R (mod p). If it holds, Alice has successfully authenticated; otherwise it’s an imposter. The reason this works is that g^s = g^(ac + r) and A^c * R = (g^a)^c * g^r = g^(ac + r) too. Why it’s secure is another topic for another day. (A toy sketch of this exchange follows below.)
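
To make the four steps concrete, here is a toy Python sketch. The group parameters (p = 23, q = 11, g = 4) are deliberately tiny and offer no security whatsoever; a real implementation would use a standardised elliptic curve or a large prime-order group.

import secrets

# Toy group: p is a safe prime, q = (p - 1) // 2, and g generates the
# subgroup of order q. Illustrative values only - not secure.
p, q, g = 23, 11, 4

# Alice's long-term keys: private a, public A = g^a mod p.
a = secrets.randbelow(q - 1) + 1
A = pow(g, a, p)

# 1. Commit: Alice picks an ephemeral r and sends R = g^r mod p to Bob.
r = secrets.randbelow(q - 1) + 1
R = pow(g, r, p)

# 2. Challenge: Bob picks a random c and sends it to Alice.
c = secrets.randbelow(q)

# 3. Response: Alice computes s = a*c + r (mod q) and sends it back.
s = (a * c + r) % q

# 4. Verify: Bob checks that g^s == A^c * R (mod p).
assert pow(g, s, p) == (pow(A, c, p) * R) % p
print("identification accepted")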

Don’t worry if you don’t understand all this. I’ll probably do a blog post about Schnorr identification at some point, but there are plenty of explainers online if you want to understand it. For now, just accept that this is indeed a secure identification scheme. It has some nice properties too.

One is that it is a (honest-verifier) zero knowledge proof of knowledge (of the private key). That means that an observer watching Alice authenticate, and the verifier themselves, learn nothing at all about Alice’s private key from watching those runs, but the verifier is nonetheless convinced that Alice knows it.

This is because it is easy to create valid runs of the protocol for any private key by simply working backwards rather than forwards, starting with a response and calculating the challenge and commitment that fit that response. Anyone can do this without needing to know anything about the private key. That is, for any given challenge you can find a commitment for which it is easy to compute the correct response. (What they cannot do is correctly answer a random challenge after they’ve already sent a commitment). So they learn no information from observing a genuine interaction.

Fiat-Shamir

So what does this identification protocol have to do with digital signatures? The answer is that there is a process known as the Fiat-Shamir heuristic by which you can automatically transform certain interactive identification protocols into a non-interactive signature scheme. You can’t do this for every protocol, only ones that have a certain structure, but Schnorr identification meets the criteria. The resulting signature scheme is known, amazingly, as the Schnorr signature scheme.

You may be relieved to hear that the Fiat-Shamir transformation is incredibly simple. We basically just replace the challenge part of the protocol with a cryptographic hash function, computed over the message we want to sign and the commitment public key: c = H(R, m).

That’s it. The signature is then just the pair (R, s).

Note that Bob is now not needed in the process at all and Alice can compute this all herself. To validate the signature, Bob (or anyone else) recomputes c by hashing the message and R and then performs the verification step just as in the identification protocol.

Schnorr signatures built this way are secure (so long as you add some critical security checks!) and efficient. The EdDSA signature scheme is essentially just a modern incarnation of Schnorr with a few tweaks.
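
Continuing with the toy parameters from the identification sketch, the sketch below applies the Fiat-Shamir transformation: the challenge becomes a hash of the commitment and the message, so no interactive verifier is needed. Again, the tiny group is for illustration only.

import hashlib
import secrets

p, q, g = 23, 11, 4                      # same toy group as before - not secure

def keygen():
    a = secrets.randbelow(q - 1) + 1     # private key
    return a, pow(g, a, p)               # (private, public)

def challenge(R, message):
    # Fiat-Shamir: the "challenge" is a hash of the commitment and message.
    digest = hashlib.sha256(R.to_bytes(4, "big") + message).digest()
    return int.from_bytes(digest, "big") % q

def sign(a, message):
    r = secrets.randbelow(q - 1) + 1     # ephemeral nonce (never reuse!)
    R = pow(g, r, p)                     # commitment
    c = challenge(R, message)
    s = (r + a * c) % q                  # response
    return R, s                          # the signature is just the pair (R, s)

def verify(A, message, signature):
    R, s = signature
    c = challenge(R, message)            # anyone can recompute the challenge
    return pow(g, s, p) == (R * pow(A, c, p)) % p

a, A = keygen()
sig = sign(a, b"hello world")
print("signature valid:", verify(A, b"hello world", sig))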

What does this tell us about appropriate uses of signatures?

The way I’ve just presented Schnorr signatures and Fiat-Shamir is the way they are usually presented in cryptography textbooks. We start with an identification protocol, perform a simple transformation, and end up with a secure signature scheme. Happy days! These textbooks then usually move on to all the ways you can use signatures and never mention identification protocols again. But the transformation isn’t an entirely positive process: a lot was lost in translation!

There are many useful aspects of interactive identification protocols that are lost by signature schemes:

  • A protocol run is only meaningful for the two parties involved in the interaction (Alice and Bob). By contrast a signature is equally valid for everyone.
  • A protocol run is specific to a given point in time. Alice’s response is to a specific challenge issued by Bob just prior. A signature can be verified at any time.

These points may sound like bonuses for signature schemes, but they are actually drawbacks in many cases. Signatures are often used for authentication, where we actually want things to be tied to a specific interaction. This lack of context in signatures is why standards like JWT have to add lots of explicit statements such as audience and issuer checks to ensure the JWT came from the expected source and arrived at the intended destination, and expiry information or unique identifiers (that have to be remembered) to prevent replay attacks. A significant proportion of JWT vulnerabilities in the wild are caused by developers forgetting to perform these checks.
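
As an illustration of what those explicit checks look like in practice, here is a sketch using the PyJWT library (my choice of library; the key, audience, and issuer values are placeholders). Forgetting any one of these parameters is exactly the class of bug described above.

import jwt  # PyJWT

def verify_token(token, shared_key):
    # The signature alone proves nothing about who the token was for or
    # when it is valid, so those checks must be requested explicitly.
    return jwt.decode(
        token,
        shared_key,
        algorithms=["HS256"],                        # pin the algorithm
        audience="https://api.example.com",          # who the token is for
        issuer="https://issuer.example.com",         # who minted it
        options={"require": ["exp", "aud", "iss"]},  # refuse tokens missing these claims
    )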

WebAuthn is another example of this phenomenon. On paper it is a textbook case of an identification protocol. But because it is built on top of digital signatures it requires adding a whole load of “contextual bindings” for similar reasons to JWTs. Ironically, the most widely used WebAuthn signature algorithm, ECDSA, is itself a Schnorr-ish scheme.

TLS also uses signatures for what is essentially an identification protocol, and similarly has had a range of bugs due to insufficient context binding information being included in the signed data. (SSL also uses signatures for verifying certificates, which is IMO a perfectly good use of the technology. Certificates are exactly a case of where you want to convert an interactive protocol into a non-interactive one. But then again we also do an interactive protocol (DNS) in that case anyway :shrug:).

In short, many uses of digital signatures are actually identification schemes of one form or another and would be better off using an actual identification scheme. But that doesn’t mean using something like Schnorr’s protocol! There are actually better alternatives that I’ll come back to at the end.

Special Soundness: fragility by design

Before I look at alternatives, I want to point out that pretty much all in-use signature schemes are extremely fragile in practice. The zero-knowledge security of Schnorr identification is based on it having a property called special soundness. Special soundness essentially says that if Alice accidentally reuses the same commitment (R) for two runs of the protocol, then any observer can recover her private key.

This sounds like an incredibly fragile notion to build into your security protocol! If I accidentally reuse this random value then I leak my entire private key??! And in fact it is: such nonce-reuse bugs are extremely common in deployed signature systems and have led to the compromise of many private keys (e.g., the PlayStation 3, various Bitcoin wallets, etc.).
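
To see why this is fatal, suppose an observer captures two signatures that share the same commitment R, and hence the same nonce r, under different challenges. Subtracting the two responses eliminates r and exposes the private key, as this short sketch shows (using the same toy modulus q = 11 as above):

# Given two responses that reuse the same nonce r:
#   s1 = r + a*c1 (mod q)
#   s2 = r + a*c2 (mod q)
# subtracting eliminates r, and dividing by (c1 - c2) reveals a.

def recover_private_key(s1, c1, s2, c2, q):
    assert c1 != c2, "distinct challenges are needed"
    return ((s1 - s2) * pow(c1 - c2, -1, q)) % q

# Demo with the toy group (q = 11):
q = 11
a = 7                        # the "secret" private key
r = 5                        # the reused nonce
c1, c2 = 3, 9
s1, s2 = (r + a * c1) % q, (r + a * c2) % q
assert recover_private_key(s1, c1, s2, c2, q) == a
print("recovered private key:", recover_private_key(s1, c1, s2, c2, q))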

But despite its fragility, this notion of special soundness is crucial to the security of many signature systems. They are truly a cursed technology!

To solve this problem, some implementations and newer standards like EdDSA use deterministic commitments, which are based on a hash of the private key and the message. This ensures that the commitment will only ever be the same if the message is identical: preventing the private key from being recovered. Unfortunately, such schemes turned out to be more susceptible to fault injection attacks (a much less scalable or general attack vector), and so now there are “hedged” schemes that inject a bit of randomness back into the hash. It’s cursed turtles all the way down.

If your answer to this is to go back to good old RSA signatures, don’t be fooled. There are plenty of ways to blow your foot off using old faithful, but that’s for another post.

Did you want non-repudiation with that?

Another way that signatures cause issues is that they are too powerful for the job they are used for. You just wanted to authenticate that an email came from a legitimate server, but now you are providing irrefutable proof of the provenance of leaked private communications. Oops!

Signatures are very much the hammer of cryptographic primitives. As well as authenticating a message, they also provide third-party verifiability and (part of) non-repudiation.

You don’t need to explicitly want anonymity or deniability to understand that these strong security properties can have damaging and unforeseen side-effects. Non-repudiation should never be the default in open systems.

I could go on. From the fact that there are basically zero acceptable post-quantum signature schemes (all way too large or too risky), to issues with non-canonical signatures and cofactors and on and on. The problems of signature schemes never seem to end.

What to use instead?

Ok, so if signatures are so bad, what can I use instead?

Firstly, if you can get away with using a simple shared secret scheme like HMAC, then do so. In contrast to public key crypto, HMAC is possibly the most robust crypto primitive ever invented. You’d have to go really far out of your way to screw up HMAC. (I mean, there are timing attacks and that time that Bouncy Castle confused bits and bytes and used 16-bit HMAC keys, so still do pay attention a little bit…)
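
For completeness, here is roughly what the shared-secret approach looks like with Python’s standard library hmac module; the hard-coded key and message are placeholders, and in practice the key would come from a secure channel or a KDF.

import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)            # shared secret (illustrative)
message = b"deploy build 1234"

# Sender: compute the authentication tag.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver: recompute and compare in constant time (avoids timing attacks).
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
print("authentic:", ok)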

If you need public key crypto, then… still use HMAC. Use an authenticated KEM with X25519 to generate a shared secret and use that with HMAC to authenticate your message. This is essentially public key authenticated encryption without the actual encryption. (Some people mistakenly refer to such schemes as designated verifier signatures, but they are not).
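
And here is a rough sketch of that authenticated-KEM idea using the pyca/cryptography package: an ephemeral-static plus static-static X25519 exchange (loosely in the spirit of HPKE’s authenticated DHKEM mode) is fed through HKDF, and the derived key authenticates the message with HMAC. Treat it as a shape-of-the-scheme illustration under my own assumptions, not a vetted protocol.

from cryptography.hazmat.primitives import hashes, hmac
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Long-term key pairs for the sender (Alice) and recipient (Bob).
alice_static = X25519PrivateKey.generate()
bob_static = X25519PrivateKey.generate()

def derive_key(dh_ephemeral, dh_static):
    # Combine both Diffie-Hellman outputs so the key is bound to Alice's
    # identity (authentication) and to this message (freshness).
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"auth-kem-demo").derive(dh_ephemeral + dh_static)

# --- Sender side ---
message = b"meet at noon"
alice_ephemeral = X25519PrivateKey.generate()
encapsulated = alice_ephemeral.public_key()          # sent alongside the message
key = derive_key(alice_ephemeral.exchange(bob_static.public_key()),
                 alice_static.exchange(bob_static.public_key()))
mac = hmac.HMAC(key, hashes.SHA256())
mac.update(message)
tag = mac.finalize()

# --- Recipient side ---
# (In a real system Bob would already hold Alice's static public key.)
key = derive_key(bob_static.exchange(encapsulated),
                 bob_static.exchange(alice_static.public_key()))
check = hmac.HMAC(key, hashes.SHA256())
check.update(message)
check.verify(tag)                                    # raises InvalidSignature if tampered
print("message authenticated as coming from Alice")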

Signatures are good for software/firmware updates and pretty terrible for everything else.