Moving In: What makes a computer feel like home

Last week was new computer day at work. As I was looking between the new machine and the old one, I was thinking about what makes a computer feel like mine. It’s the settings, the little utilities, and the way I arrange things that make it feel like home.

I’ve been shockingly consistent over the years. Here’s a screenshot from 2005.

Screenshot of my Titanium PowerBook in 2005 using largely the same configuration I use today

And here’s my personal machine today (with the Dock visible for the photo op).

Screenshot of my 13" 2015 MacBook Pro

(I promise I’ve changed my desktop background a few times, but Hurricane Isabel from the ISS is in my regular rotation.)

  1. Make things small by cranking up the scaled resolution. On a laptop that means the smallest Apple offers — or smaller. On my 13” personal machine I used a hack to enable 1920 × 1200 HiDPI. I don’t go full-native on my 27” external 4K display, but I do use the second-from-highest, 3360 × 1890.
  2. Colors: I set the system accent color to gray (aka Graphite) but keep the highlight color blue.
  3. Clock settings: Day of week, no date, 24-hour clock, show the seconds.
  4. Take basically everything out of the Dock (all I have there permanently is an editor to drag files to), turn off showing recent apps, and turn on auto-hiding. I also make it smaller, using the second-from-smallest tick when resizing while holding . But yes, I keep my Dock at the bottom.
  5. Non-default gestures and hot corners:
    • Exposé/Mission Control: 4 fingers up
    • App windows: 4 fingers down and top left corner
    • Move between spaces/full-screen apps: 4 fingers side-to-side
    • Paging: 3 finger swipe
    • Desktop: top right corner
    • Never sleep: bottom right corner
    • Display sleep: bottom left corner
  6. Moom with SizeUp-inspired keyboard shortcuts.
  7. Set up a keyboard shortcut () for Notification Center. (I didn’t have a Magic Trackpad for a while, so wanted a quick way to access it. Now it’s habit.)
  8. Revert a couple of recent design changes via accessibility settings: turn on persistent proxy icons and Reduce Transparency.
  9. Finder settings:
    • Turn on the status and path bars
    • Have new windows default to my documents folder (inside ~/Documents/)
    • Set search scope to the current window
    • Show all file extensions
    • Put the path item in the toolbar (though I usually end up -clicking the titlebar)
    • Windows default to list view (though I’m constantly switching between ⌘2 list and ⌘3 columns)
  10. Turn off the new window animation
  11. The menu bar: after the clock, I start out right to left with battery, wifi, volume, MenuMeters[1] average CPU graph, MenuMeters network activity graph, and Day One. Everything else is hidden by Bartender (with a handful of show-for-updates exceptions).
  12. Install Alfred[2] and set it to the “macOS” theme. The muscle memory for the launcher and J for clipboard history are deeply ingrained.
  13. Keyboard layout to Dvorak. (What can I say, I switched 20 years ago.)
  14. And rounding out (pun intended) the I Hate Change category is Displaperture, which I use to round the menu bar on non-notched displays.

  1. I also have iStat Menus, but I’ve been using MenuMeters since ~2004 and honestly I think it just feels more Mac-like and at home in the menu bar. ↩︎

  2. I’d much rather support an established indie Mac developer than an upstart awash in Silicon Valley money and culture. ↩︎

🌊 Waves! Part 1: scroll phaser

This website has a pretty boring design, with one small exception: waves. The bottom of the header gets wAvEy when you scroll the page, and the main navigation links at the top “pluck” when hovered.

It seemed like a delightful but subtle way to combine my interests in UX, audio signal processing, data visualization, and web development. That’s the why, but let’s dig into the details: what and how.

In this post, I focus on the header; in a future post I’ll get into the details of the navigation links.

What

This is a sine wave. It oscillates between -1 and 1 forever in both directions.

f_1(x) = \sin(x)

Let’s play with that sine wave. Recall from middle school algebra that if you add or subtract something from x, the graph essentially slides left or right.

f_2(x) = \sin(x + \phi)

A sine wave repeats every 2\pi in the x direction, so we can’t really say where it is in an absolute sense; an offset of \pi is the same as an offset of 3\pi. For that reason, the offset is described as a phase between 0 and 2\pi rather than as an absolute value outside that range.

When you add multiple waves together they interfere with each other. Such patterns can get quite complex, but when there are just two waves of the same amplitude and frequency, the interference pattern is rather simple. When the two waves are in phase (i.e., lined up), you get a wave that is twice the amplitude of the individual waves, and when they’re completely out of phase (offset by \pi), they sum to zero.

f_1(x) = \sin(x)
f_2(x) = \sin(x + \phi)
f_3(x) = f_1(x) + f_2(x)

So as the green wave changes phase, the orange wave (the sum of the blue and green waves) also changes. That’s what the bottom of the page header does: it traces the curve formed by summing two sine waves. As you scroll, one of the sine waves’ phases shifts relative to the other, creating a curve just like the orange line.

How

Note: The code samples here are out of context and are just intended to be explanatory. To see the whole thing in action, it’s on CodePen.

The gist of making the bottom of the header wavy is creating a CSS clip-path with that wavy shape. clip-path is a CSS property that lets you specify a shape: inside the shape, the element is visible; outside it, the element is hidden. So by making a shape to clip the page header, you can make it look wavy. The shape can be specified using an SVG path string, which is what I did.

To start, let’s not worry about changing the phase or linking it to scrolling. Let’s just take a sine wave, add it to a phase-shifted sine wave, and get a path string.

Computers don’t do well with continuous functions. To get a sense of the shape of a sine wave, you have to sample it, measuring its value at a number of points along the way. The fewer points you can sample, the less computationally intensive the whole thing is, but at the cost of accuracy.

But it turns out you can get a pretty nice-looking sine wave by only sampling at its extrema and zero-crossings (so, every \frac{\pi}{2}) — assuming you use the right interpolation[1]. So for each point x between 0 and an arbitrary number of cycles (I chose 2), I calculated \sin(x) + \sin(x + \phi) at \frac{\pi}{2} intervals. \phi will ultimately be the phase offset determined by the scroll position, but for now we can pick an arbitrary value, like 1.

const cycles = 2;
const xValues = d3.range(
  0,
  cycles * 2 * Math.PI + Math.PI / 2,
  Math.PI / 2
);
let phaseOffset = 1;
let values = xValues
  .map(d => ({ x: d, y: Math.sin(d) + Math.sin(d + phaseOffset) }));

That gives the values, but we still need a shape to set as the CSS clip-path. For that, there’s the always-useful d3-shape line generator. Configure it with an x scale that maps the x values above to 0 through the width of the page, a y scale that maps −2 through 2 to the desired amplitude (shifted to the bottom of the header), and an interpolation curve (d3.curveNatural). Then put in the x and y values we just calculated, and out pops an SVG path string.

const xScale = d3.scaleLinear()
  .domain([0, cycles * 2 * Math.PI])
  .range([0, widthPx]);
const yScale = d3.scaleLinear()
  .domain([-2, 2])
  .range([heightPx - 2 * amplitudePx, heightPx]);
const pathGen = d3.line()
  .x(d => xScale(d.x))
  .y(d => yScale(d.y))
  .curve(d3.curveNatural);

let pathString = pathGen(values);

Now, that’s just the bottom border of the header, but we want it to be a closed area around the entire header, so we need to tack V0 H0 Z onto the end of it[2].

Diagram showing a schematic of the header with the wavy path starting in the lower left, a line showing it being drawn from left to right, then showing how V0 draws a vertical line to y = 0, H0 draws a line to x = 0, and Z closes the path.

A little detail is that I didn’t want the waviness to change the overall height of the header or affect page flow, but the waves necessarily extend below the bottom edge. So, I had to make the whole header taller by the amplitude, then subtract the amplitude from the bottom margin.
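In code, that compensation is just a couple of style tweaks. Here’s a minimal sketch (the pixel values are made-up placeholders, and header is the same d3 selection used below):

// Grow the header by the wave amplitude so the waves have room to dip,
// then pull the following content back up by the same amount so the
// page flow doesn't change.
const amplitudePx = 12;   // hypothetical wave amplitude
const baseHeightPx = 120; // hypothetical header height without waves
const baseMarginPx = 24;  // hypothetical original bottom margin

header
  .style("height", `${baseHeightPx + amplitudePx}px`)
  .style("margin-bottom", `${baseMarginPx - amplitudePx}px`);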

The final operation is to actually set the CSS:

header.style("clip-path", `path("${pathString}")`);

Now all that’s left is to hook it up to scrolling. I used the scroll offset given by window.scrollY and mapped it to a phase for one of the two sine waves (the green one above). To make the header flat both when scrolled to the top and when scrolled to the height of the header, the phase offset needs to be an odd multiple of \pi radians at both ends. So I created a linear scale that maps a scrollY domain of 0 to the header height to a range of \pi to 3\pi.

const phaseScale = d3.scaleLinear()
  .domain([0, heightPx])
  .range([Math.PI, 3 * Math.PI]);

let phaseOffset = phaseScale(scrollY);

The naive way to listen for scroll events is to add a handler to the document’s scroll event that directly recalculates the values, regenerates the path string, and sets it as the clip-path on the header. But you don’t want to do that, because scroll events can fire faster than the frame rate at which the browser repaints. Instead, I used this approach to debounce the updates: the scroll handler only tracks scrollY, and a requestAnimationFrame callback does the more expensive work.
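Here’s a minimal sketch of that pattern, building on the snippets above (onScroll and updateWave are just names I’m using here, not the actual function names in the CodePen):

let latestScrollY = 0;
let ticking = false;

function onScroll() {
  // Keep the handler cheap: just record where we are.
  latestScrollY = window.scrollY;
  if (!ticking) {
    ticking = true;
    // Defer the expensive work to the next repaint.
    requestAnimationFrame(() => {
      updateWave(latestScrollY);
      ticking = false;
    });
  }
}

function updateWave(scrollY) {
  const phaseOffset = phaseScale(scrollY);
  const values = xValues.map(d => ({
    x: d,
    y: Math.sin(d) + Math.sin(d + phaseOffset)
  }));
  header.style("clip-path", `path("${pathGen(values)} V0 H0 Z")`);
}

window.addEventListener("scroll", onScroll, { passive: true });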

There are a couple of other details, like respecting someone’s prefers-reduced-motion setting and using an intersection observer to only do all of this when the header is visible, but that’s about it! Now I have a header with a bottom border that, when scrolled, is a phase-shifting sum of sine waves.
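Those two details look roughly like this (again just a sketch, reusing the onScroll handler from the snippet above in place of the unconditional addEventListener call):

// Skip the animation entirely for anyone who prefers reduced motion.
const prefersReducedMotion =
  window.matchMedia("(prefers-reduced-motion: reduce)").matches;

// Only listen for scrolling while the header is actually on screen.
const headerEl = document.querySelector("header");
const observer = new IntersectionObserver(entries => {
  if (!prefersReducedMotion && entries[0].isIntersecting) {
    window.addEventListener("scroll", onScroll, { passive: true });
  } else {
    window.removeEventListener("scroll", onScroll);
  }
});
observer.observe(headerEl);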

Be sure to check out the whole thing on CodePen.

Part 2, about the plucky nav underlines, is now up.


  1. This sampling rate is 4 times the frequency, which is twice what the Shannon–Nyquist theorem says you need, but that assumes a very specific interpolation involving sinc functions. ↩︎

  2. In retrospect I could have done this with d3.area(), but I had originally intended to use two clip-paths, one of which was a rectangular gradient that covered everything outside the wavy area. That didn’t work because of a Safari bug I found with a similar approach, and my head was already thinking in terms of line. ↩︎

There is no vaccine for climate change

As I sat waiting the requisite 15 minutes to make sure I didn’t go into anaphylactic shock, I looked out over the arena and reflected on the historic nature of that moment. A building designed for basketball, concerts, and large-scale events sat empty for nearly a year because gatherings were a threat to public health. It had then been repurposed into a makeshift medical facility where vaccines were being administered on a mass scale. There’s something dark — and decidedly not normal — about a space designed for fun being used as a medical facility.

UIC arena sits empty as people get vaccines on the upper level

I felt grateful for the incredible work of the many brilliant and hard-working people both before and during the pandemic who allowed us to reach that moment. But fear of what this medical marvel might symbolize was also on my mind.

We had all been waiting for medicine to end the pandemic, and too many people had been ignoring epidemiology’s “inconvenient” non-medical interventions like social distancing and mask wearing. We had been passively waiting for science to save us with vaccines, and this time we got lucky: science delivered.

This technological solutionism, waiting for a technological savior instead of making sacrifices, is at play in climate change, too. I am absolutely thrilled that mRNA vaccine technology was practically ready and waiting to be applied to SARS-CoV-2 in record time, but it scares me that it reinforces a solutionist attitude: “See? Science saved us from the pandemic, so it’ll also save us from climate change!”

There is no vaccine for climate change.

We’ll need science to get us out of this, yes, but also political will. Political will to rein in corporations. Political will to fund science that can get us even partway there. Political will to do things that hurt in the short term before the status quo does even more damage to more people.

Had we heeded epidemiologists’ advice on COVID, millions of lives around the world could have been saved. Let’s not make the same mistake with climate change, squandering the remaining time we have while waiting for a scientific miracle.

Encode Mighty Things

On February 18, 2021, NASA landed the Perseverance rover on Mars. Encoded in the parachute used for landing was the JPL motto and Teddy Roosevelt quote, “Dare mighty things”. The message was a secret before landing, but the internet quickly decoded it.

Perseverance parachute

I thought that was pretty cool, so I decided to make a little site that lets you make your own parachute using the same encoding. Other people have explained the encoding better than I could, so without further ado, here is Encode Mighty Things.

Encode Mighty Things

Telephone colophon: Or, how I overengineered my call audio

2023 update using Audio Hijack instead of Reaper

Toward the beginning of the pandemic, a friend asked me how she could use an external vocal mic and a guitar with a pickup on Zoom calls. Sounds easy, right?

But to have the amount of control a musician really wants, it turned out to be a bit more involved. Plus, when working from home for a microphone company, it’s pretty common to use a decent mic in meetings.

This post explains the setup I’ve been using for my calls.

desk with mic, headphones, and speakers (Don’t let the speakers fool you. Use headphones or it’ll feed back when echo cancellation is turned off!)

Ingredients:

  • A decent mic (and any other inputs you want, like a guitar with a pickup)
  • A USB audio interface
  • Headphones
  • A DAW (I use Reaper)
  • BlackHole
  • Zoom

The goals are:

  1. Mix the mic or other inputs going into the USB interface in the DAW
  2. Be able to hear/monitor the mix
  3. Route the output of the DAW to a Zoom call
  4. Be able to hear the far end of the call through the same headphones as monitoring the mix

Get BlackHole

The key ingredient here is BlackHole, a virtual audio driver that acts as a passthrough from each input to the corresponding output[1]. This actually needs two instances of BlackHole because Zoom can only send and receive from the first two channels of any audio interface. Fortunately, they offer direct downloads (email required) of each (and have nice instructions for building from source). I have one called BlackHole 16ch and one called BlackHole 2ch, which — surprise — have 16 channels and 2 channels, respectively.

The 16-channel BlackHole device will function as the Zoom speaker; the 2-channel BlackHole will be the Zoom “microphone”.

Set up an aggregate audio device

Reaper will handle all of the audio routing, but since it doesn’t support having different input and output devices, the first thing to do is create an aggregate device in Audio MIDI Setup. This allows the system to treat multiple devices as a single device with all of the channels from the individual devices. It doesn’t really matter what order you add them to the aggregate device, but it should include both BlackHole devices and the audio interface. I have the USB interface set to be the clock source, with the two BlackHole instances set for drift correction.

audio midi setup screenshot

Route and mix in the DAW

Once I got the audio devices set up, I had to route everything in Reaper. The general approach is:

  • Create a track for each input you want in the mix
  • Send the master out both to the USB interface (so you can hear it) and to the 2-channel BlackHole (so Zoom can)
  • Bring the far end of the call back in on its own track from the 16-channel BlackHole

But before doing that, make sure Reaper is set to use the aggregate device in the device preferences.

Reaper device preferences

For every input I want to mix, I created a track. Selecting the input for that track feels almost like just using the regular USB audio interface, but with a whole bunch of other channels thanks to being aggregated with BlackHole.

Reaper input selection

By default, Reaper sends each track to the master out, but in order to hear live input, you have to arm the track and turn on record monitoring.

A fun bonus of routing through a DAW is that you can use plugins! I use a simple NR plugin to deal with HVAC noise, and some compression.

The master out needs to go two places:

  • The USB audio interface, so I can hear the mix in my headphones
  • The 2-channel BlackHole, which is what Zoom will use as its microphone

So, from the master track’s routing window, add outputs to the USB interface and the two channels of the 2-channel BlackHole interface. The fader/mute button for the USB interface on the output routing of the master is how I adjust whether/how much of myself I want to monitor in my ears.

master output routing

That’s it for everything I want to send to Zoom, but I still want to be able to hear the far end of the call. I could just tell Zoom to send out to the hardware interface, but I want it in the DAW, too. This is useful for recording a tape sync, and it means you don’t have to mess with your monitoring volume to change the volume of the far end.

For that, I created a special track and set its input to the 16-channel BlackHole instance. When you set up Zoom to use a particular output device, it sends the audio to the first two channels, so I had to use channels 1 and 2. Here’s where the track becomes special: you have to make sure it doesn’t send to the master out (it’ll feed back if it does). Instead, send it directly to the USB interface’s output.

zoom monitor routing

And that’s it for the DAW.

reaper's mixer

Set up Zoom

The basics of setting up Zoom are simple: set its microphone to BlackHole 2ch so it receives the master output of the DAW, and set its speaker to BlackHole 16ch so the far end’s audio goes to the DAW on channels 1 and 2. Since you can control the output level from the DAW, I maxed out Zoom’s output and input faders and turned off the automatic gain control.

zoom audio settings

That’s really all you need for the basics, but Zoom has a bunch of cool advanced audio settings. Under “Music and Professional Audio” you can tell Zoom to let you turn off all of its audio processing, sending “original sound”. This is great, because what’s the point of having a decent mic if Zoom is going to band-limit and compress it to death? You can also turn on stereo, but I only use that if I really need to, which is rare. (Keep in mind that in order to actually activate these settings, you have to press “Turn on original sound” in the upper left of a call.)

zoom advanced audio settings

Bonus! Sharing system sound

Zoom can share system sound, but when using a setup like this, I don’t recommend it. Turning it on activates some sort of additional virtual audio device on the system, which can mess with things. Remember that Zoom can only send audio out on the first two channels of a device. Thankfully, the system isn’t so limited. To share system sound, I went back to Audio MIDI Setup and under Configure Speakers told it that for stereo out, BlackHole 16ch uses channels 3 and 4.

Audio MIDI Setup speaker config

Now I can set my system output device to BlackHole 16ch, make a new track in the DAW, set its input to BlackHole 16ch, and system sound comes in there. So the far side of a Zoom call comes in on channels 1 and 2, and system sound on 3 and 4.

And that’s it. Happy calling!


  1. I used BlackHole because it’s free and did what I needed. You can achieve the same thing with a nice UI using Loopback from the excellent Rogue Amoeba. ↩︎

Entropic Timer: The Information Entropy of Crossing the Street

You know those countdown timers at crosswalks? Sometimes when crossing the street, I like to try to guess what number it’s on even when I can’t see the whole thing (like when approaching the intersection at an oblique angle).

Crosswalk signal countdown timer

This got me (over)thinking: if I want to know how much time is left, is it better to see the right side of the countdown timer (approaching from the left), or the left side (approaching from the right)? In other words, does the left or right side of the display carry more information?

These timers use seven-segment displays. Even if you didn’t know they were called seven-segment displays, you see them all over the place. They use seven separate segments, labeled A–G, to create each of the 10 digits from 0–9.

Seven-segment display with its segments labeled A–G

To form each of the ten digits, the seven segments are turned on (1) or off (0) in different combinations. Here are the standard representations of 0–9.

Digit  ABCDEFG
0      1111110
1      0110000
2      1101101
3      1111001
4      0110011
5      1011011
6      1011111
7      1110000
8      1111111
9      1111011

The seven segments aren’t all turned on an equal number of times over the course of the ten digits. That means seeing some segments turned on is more probable than seeing others.

Segment  On for how many digits?
A        8/10
B        8/10
C        9/10
D        7/10
E        4/10
F        6/10
G        7/10

So how can we tell which of these seven segments communicates the most information?

Information entropy

The segments that are on or off for close to half the digits contain more information than those that are either on or off for most digits.

This is intuitive for the same reason a fair coin toss contains more information than tossing a coin with heads on both sides: you’re less certain what you’re going to get, so you learn more by observing the outcome.

Claude Shannon’s[1] concept of entropy from information theory is a good way to quantify this problem. Entropy, H, is defined as

H(X) = -\sum_{i = 1}^{n} P(x_i)\log_b P(x_i)

Oh no.

Here’s what that means in the case of a seven-segment display. X is a random variable representing whether a segment is on or off. Since a segment can only have two states, the random variable X's actual values are either on or off. P is the probability operator, so P(x_i) really means the probability that a segment is on or off. (b is the base of the logarithm. We’re going to use 2 because we like bits.)

Let’s take segment A as an example. It’s on for 8 out of 10 digits, and off for 2 out of 10. That means the probability of seeing it on is 0.8, and the probability of seeing it off is 0.2. In other words (well, symbols), P(x_{\mathrm{on}}) = 0.8 and P(x_{\mathrm{off}}) = 0.2.

Plugging that in,

H(A) = -0.8 \log_2 0.8 - 0.2 \log_2 0.2 = 0.722

In Shannon’s terms, there are 0.722 bits of information communicated by segment A of a seven-segment display.
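If you’d rather see that as code, here’s a quick sketch of the same calculation (just the idea, not the code that generated the tables below):

// Entropy, in bits, of a single segment that is on for `onCount`
// of the ten digits.
function segmentEntropy(onCount) {
  const pOn = onCount / 10;
  const pOff = 1 - pOn;
  // By convention, 0 * log2(0) contributes nothing.
  const term = p => (p === 0 ? 0 : -p * Math.log2(p));
  return term(pOn) + term(pOff);
}

segmentEntropy(8); // segment A → ~0.722 bits
segmentEntropy(4); // segment E → ~0.971 bits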

Doing this for all seven segments, we get these entropy values:

Segment  Shannon entropy (bits)
A        0.721928
B        0.721928
C        0.468996
D        0.881291
E        0.970951
F        0.970951
G        0.881291

It sure looks like segments E and F carry the most information. That makes sense because they’re the closest to being on/off 50% of the time. Guess it’s better to approach an intersection from the right in order to see the left-hand segments.

But wait.

When approaching an intersection, you can see both right segments (B and C), or both left segments (E and F). A pair of segments from a single display are anything but independent because they’re both showing part of the same digit, so we can’t just add up their entropies.

Instead, treat each pair as if it holds a single value. Taken together, two segments can take on any of four values (off–off, off–on, on–off, on–on), which is binary for 0–3.

Digit  Segments B & C  Binary  Decimal
0      On – On         11      3
1      On – On         11      3
2      On – Off        10      2
3      On – On         11      3
4      On – On         11      3
5      Off – On        01      1
6      Off – On        01      1
7      On – On         11      3
8      On – On         11      3
9      On – On         11      3

Digit  Segments E & F  Binary  Decimal
0      On – On         11      3
1      Off – Off       00      0
2      On – Off        10      2
3      Off – Off       00      0
4      Off – On        01      1
5      Off – On        01      1
6      On – On         11      3
7      Off – Off       00      0
8      On – On         11      3
9      Off – On        01      1

In this case, our random variable X can take on four possible values rather than just two. Taking segments E and F as an example, the joint value is 0 for 3/10 digits, 1 for 3/10 digits, 2 for 1/10 digits, and 3 for 3/10 digits. Going back to the initial definition of entropy, we get

H(EF) = -\tfrac{3}{10}\log_2 \tfrac{3}{10} - \tfrac{3}{10}\log_2 \tfrac{3}{10} - \tfrac{1}{10}\log_2 \tfrac{1}{10} - \tfrac{3}{10}\log_2 \tfrac{3}{10} = 1.90

So we get 1.16 bits of information in joint segments B–C, and 1.90 bits in joint segments E–F. So there you have it: it’s still better to approach an intersection from the right.
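The joint version is the same idea with four probabilities instead of two. A sketch, using the counts from the tables above:

// Entropy, in bits, of a distribution given raw counts: how many of
// the ten digits produce each joint value 0–3.
function entropyFromCounts(counts) {
  const total = counts.reduce((sum, c) => sum + c, 0);
  return counts.reduce((h, c) => {
    if (c === 0) return h;
    const p = c / total;
    return h - p * Math.log2(p);
  }, 0);
}

entropyFromCounts([0, 2, 1, 7]); // segments B–C → ~1.16 bits
entropyFromCounts([3, 3, 1, 3]); // segments E–F → ~1.90 bits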

But wait!

When was the last time you walked up to an intersection and only saw the timer on one number? If you look for at least half a second (on average), you’ll see it tick down.

Seven-segment display counting down, with segments labeled A–G

Luckily, Wikipedia says that

For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is:

H(\mathcal{S}) = -\sum_i p_i\sum_j p_i(j)\log p_i(j)

where i is a state (certain preceding characters) and p_i(j) is the probability of j given i as the previous character.

But actually, I don’t like this notation, so I’m going to rewrite it as

H(\mathcal{S}) = -\sum_i P(x_i)\sum_j P(x_j|x_i)\log_b P(x_j|x_i)

Alright, then. The probability of seeing a given state is the same as before. As for the conditional probabilities, let’s go back to the 0–3 binary values and assume 0 loops back to 9[2]. If we see segments B and C in a 1 state (off–on), the next tick it will be in a 1 state half the time, and a 3 state half the time. Going through the rest of the states and transitions, we get these transition probabilities:

State transition probabilities

So for segments E and F, when i = 0 and j = 2, P(x_i) = \frac{3}{10} as before, and P(x_j|x_i) = \frac{1}{3} because, as those circles show, a 0 transitions to a 2 a third of the time.

Now it’s just a matter of an inelegant nested for loop to determine that the first-order entropy rate of segments B–C is 1.00 bits, and 1.03 bits for segments E–F.
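That nested for loop looks roughly like this. It’s a sketch rather than my exact code; the state probabilities and transition probabilities for segments E–F are read off the joint-value table and the state diagram above (assuming 0 ticks over to 9):

// First-order entropy rate: weight the entropy of each state's outgoing
// transition distribution by the probability of being in that state.
// stateProbs[i] = P(x_i); transitions[i][j] = P(x_j | x_i)
function entropyRate(stateProbs, transitions) {
  let h = 0;
  for (let i = 0; i < stateProbs.length; i++) {
    for (let j = 0; j < transitions[i].length; j++) {
      const p = transitions[i][j];
      if (p > 0) {
        h -= stateProbs[i] * p * Math.log2(p);
      }
    }
  }
  return h;
}

// Segments E–F: joint states 0–3 occur for 3, 3, 1, and 3 of the ten digits.
const efStateProbs = [0.3, 0.3, 0.1, 0.3];
const efTransitions = [
  //  to 0  to 1  to 2  to 3
  [     0,    0,  1/3,  2/3], // from state 0
  [   1/3,  1/3,    0,  1/3], // from state 1
  [     1,    0,    0,    0], // from state 2
  [   1/3,  2/3,    0,    0], // from state 3
];

entropyRate(efStateProbs, efTransitions); // ~1.03 bits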

So, if you can manage to stare at either the left or right segments for a whole second, you’re still better off looking at the left segments, but not by much.

I’ll leave figuring out the entropy rates for looking at it longer as an exercise for the reader, because I’m done overthinking this (for now).


The 7-segment display CSS is on CodePen.


  1. Shannon and I both got undergrad degrees in EE from the University of Michigan, but he went on to create information theory, and I went on to write this stupid blog post. ↩︎

  2. This makes sense for the 1s place for segments B–C, but not for E–F. ↩︎

Is yontef early this year (dot com)

If two points (or posts) make a trend, interactive data visualizations of the Hebrew calendar are a thing I blog about now. In the long and storied tradition of single-use novelty sites, I’ve created isyontefearlythisyear.com (and its evil twin, isyonteflatethisyear.com). Now you can point to real data when the conversation inevitably comes up before every holiday.

screenshot showing an early Chanukah 2018

When is Passover (or Easter)?

A few weeks ago, @iamreddave tweeted a plot of Easter dates since 1600. I thought it was a very cool looking pattern, with very clear cyclicality.

Being Jewish, I immediately thought of the calendrical connection between Easter and Passover. Specifically, since Easter is usually around Passover, does the 19-year cycle of Hebrew leap years play a role in when Easter falls?

Very briefly (and approximately), a solar year is aligned with the seasons (because a year is one orbit of the earth around the sun), but the Hebrew calendar is based on a lunar calendar in which a month is determined by one cycle through the phases of the moon. The solar year is approximately 365 days, while 12 lunar months are approximately 354 days, or 11 days shorter. If the Hebrew calendar were a pure lunar calendar, over time the months would drift around the year. To make up for this shortfall, a 30-day leap month is added to the Hebrew calendar every two to three years, seven times in a 19-year cycle (years 3, 6, 8, 11, 14, 17, and 19). (30 days × 7 leap months ≈ 11 days × 19 years. Hey, I said this explanation is approximate.)

To see the effect of Hebrew leap years on Easter dates, I recreated iamreddave’s graph, but with larger points for leap years and points colored by position in the 19-year cycle.

Interact with these graphs at https://projects.noahliebman.net/pesach-easter/

Easter dates

What jumps out to me is that all of the late Easter dates are Hebrew leap years, which is what you’d expect when an additional month has recently been inserted, but all of the early Easter dates are also Hebrew leap years.

Passover, on the other hand, always occurs late in a leap year, as you’d expect:

Passover dates

Toggling between the two, it looks like it’s years with the latest Passovers that get leap-year–early Easters.

Animating between Easter and Passover

Zoom in a bit and you’ll find that the early Easter dates are always years 8, 11, and 19 of the 19-year cycle:

Easter, zoomed in on about 20 years

I thought maybe this happens because the Christian 19-year cycle is shifted by three years from the Jewish cycle (2014 was the first year of the Christian cycle, while 2017/5777 is the first year of the Jewish cycle), but this isn’t the case. Here’s what seems to be happening:

Easter is (by definition) the first Sunday after the full moon after the vernal (in the northern hemisphere) equinox. Typically, that’s the full moon of Nissan (the Hebrew month which contains Passover), but in those three years the leap month pushes Passover so late that it’s a full month later than the equinox. In other words, in those years the new moon that marks the start of Nissan is at least ~14 days after the equinox, which puts a full moon very shortly after the equinox, which is still in Adar II (the month before Nissan).

Shout out to the Hebcal team for their amazing tools!

Interact with these graphs at https://projects.noahliebman.net/pesach-easter/

On fear, nationalism, and oppression in Shmot

With parshat Shmot coinciding with the inauguration (err, Put-in) of Donald Trump, this image from Yossi Fendel has been making the rounds on social media. It quotes the eighth verse of the parsha (and book):

וַיָּ֥קָם מֶֽלֶךְ־חָדָ֖שׁ עַל־מִצְרָ֑יִם אֲשֶׁ֥ר לֹֽא־יָדַ֖ע [אֶת־יוֹסֵֽף]׃

A new king arose over Egypt who did not know [Joseph].

A new king arose who did not know. Image by Yossi Fendel

It’s an ominous image, and it makes an important point, but it’s the next couple of sentences that have really stuck with me for the last several years:

וַיֹּ֖אמֶר אֶל־עַמּ֑וֹ הִנֵּ֗ה עַ֚ם בְּנֵ֣י יִשְׂרָאֵ֔ל רַ֥ב וְעָצ֖וּם מִמֶּֽנּוּ׃

And he said to his people, “Look, the Israelite people are much too numerous for us.

הָ֥בָה נִֽתְחַכְּמָ֖ה ל֑וֹ פֶּן־יִרְבֶּ֗ה וְהָיָ֞ה כִּֽי־תִקְרֶ֤אנָה מִלְחָמָה֙ וְנוֹסַ֤ף גַּם־הוּא֙ עַל־שֹׂ֣נְאֵ֔ינוּ וְנִלְחַם־בָּ֖נוּ וְעָלָ֥ה מִן־הָאָֽרֶץ׃

Let us deal shrewdly with them, so that they may not increase; otherwise in the event of war they may join our enemies in fighting against us and rise from the ground.”

The liturgy talks a lot about the exodus from Egypt, but focuses far less on why the Israelites became enslaved in the first place. The answer, this parsha makes clear, is fear. Fear of shifting demographics. Fear of an ethnic group that looked different, spoke differently, and had different practices and customs — yet served an important economic function by doing the job no Egyptian was willing to do.

Faced with that fear from shifting demographics, the Pharaoh had at least a couple of courses of action. He could have pushed an agenda of multiculturalism, encouraging the Egyptians and Israelites to get to know one another, thereby mitigating their fear. Instead, he felt that it was more important to maintain what he considered the fundamentally Egyptian character of Egypt.

The United States — at least in theory — was founded not as “a place for a people”, but as a place for all people. Sadly, there are people who believe that America was a white country (back when it was great or something 🙄), and they are now feeling the same fear and oppressive urges the biblical Pharaoh felt.

This is precisely the danger that comes along with ethnic, racial, or religious nationalism. A nation founded as “a place for a people” cannot simultaneously offer full and equal rights/privileges to all, and continue to exist should that people become a minority. And the only ways to maintain the “desired” demographics are exclusion and oppression. Whether it’s in the context of Trump-emboldened white nationalism here in America, or Zionism, its moral equivalent, let’s learn from this week’s well-timed parsha: national ideals that depend on maintaining certain demographics are inherently oppressive.

In a place like America, although changing demographics can bring up a natural fear of the stranger, they also provide us with an opportunity not to be like Pharaoh and to strive for a multicultural ideal. The Torah reminds readers that, because the Israelites were strangers in Egypt, not only is oppressing the stranger forbidden [1, 2], but we’re told how to avoid it: by loving that stranger [3]. But loving the stranger is abstract. Perhaps it’s better to take a cue from the JPS translation and befriend the stranger. Friends are way less scary than strangers.