Last week was new computer day at work. As I was looking between the new machine and the old one, I was thinking about what makes a computer feel like mine. There are settings, little utilities, and how I arrange things that make it feel like home.
I’ve been shockingly consistent over the years. Here’s a screenshot from 2005.
And here’s my personal machine today (with the Dock visible for the photo op).
(I promise I’ve changed my desktop background a few times, but Hurricane Isabel from the ISS is in my regular rotation.)
Make things small by cranking up the scaled resolution. On a laptop that means the smallest Apple offers — or smaller. On my 13” personal machine I used a hack to enable 1920 × 1200 HiDPI. I don’t go full-native on my 27” external 4K display, but I do use the second-from-highest, 3360 × 1890.
Colors: I set the system accent color to gray (aka Graphite) but keep the highlight color blue.
Clock settings: Day of week, no date, 24-hour clock, show the seconds.
Take basically everything out of the Dock (all I have there permanently is an editor to drag files to), turn off showing recent apps, and turn on auto-hiding. I also make it smaller, using the second-from-smallest tick when resizing while holding ⌥. But yes, I keep my Dock at the bottom.
Non-default gestures and hot corners:
Mission Control: 4 fingers up
App windows: 4 fingers down and top left corner
Move between spaces/full-screen apps: 4 fingers side-to-side
Set up a keyboard shortcut (⌃⇧⬅) for Notification Center. (I didn’t have a Magic Trackpad for a while, so wanted a quick way to access it. Now it’s habit.)
Revert a couple of recent design changes via accessibility settings: turn on persistent proxy icons and Reduce Transparency.
Finder settings:
Turn on the status and path bars
Have new windows default to my documents folder (inside ~/Documents/)
Set search scope to the current window
Show all file extensions
Put the path item in the toolbar (though I usually end up ⌘-clicking the titlebar)
Windows default to list view (though I’m constantly switching between ⌘2 list and ⌘3 columns)
The menu bar: after the clock, I start out right to left with battery, wifi, volume, MenuMeters[1] average CPU graph, MenuMeters network activity graph, and Day One. Everything else is hidden by Bartender (with a handful of show-for-updates exceptions).
Install Alfred[2] and set it to the “macOS” theme. The ⇧⌥⎵ muscle memory for the launcher and ⌘⌥J for clipboard history are deeply ingrained.
Keyboard layout to Dvorak. (What can I say, I switched 20 years ago.)
And rounding out (pun intended) the I Hate Change category is Displaperture, which I use to round the menu bar on non-notched displays.
I also have iStat Menus, but I’ve been using MenuMeters since ~2004 and honestly I think it just feels more Mac-like and at home in the menu bar. ↩︎
This website has a pretty boring design, with one small exception: waves. The bottom of the header gets wAvEy when you scroll the page, and the main navigation links at the top “pluck” when hovered.
It seemed like a delightful but subtle way to combine my interests in UX, audio signal processing, data visualization, and web development. That’s the why, but let’s dig into the details: what and how.
In this post, I focus on the header; in a future post I’ll get into the details of the navigation links.
What
This is a sine wave. It oscillates between -1 and 1 forever in both directions.
f₁(x) = sin(x)
Let’s play with that sine wave. Recall from middle school algebra that adding to or subtracting from x shifts the graph left or right.
f₂(x) = sin(x + ϕ)
A sine wave repeats every 2π in the x direction, so we can’t really say where it is in an absolute sense; an offset of π is the same as an offset of 3π. For that reason, the offset is described as a phase between 0 and 2π rather than as an absolute value outside that range.
When you add multiple waves together they interfere with each other. Such patterns can get quite complex, but when there are just two waves of the same amplitude and frequency, the interference pattern is rather simple. When the two waves are in phase (i.e., lined up), you get a wave that is twice the amplitude of the individual waves, and when they’re out of phase they sum to zero.
f₁(x) = sin(x)
f₂(x) = sin(x + ϕ)
f₃(x) = f₁(x) + f₂(x)
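For the two-wave case, the claims about doubling and cancelling can be made precise with the sum-to-product identity (a standard trig fact, added here for reference):

```latex
\sin(x) + \sin(x + \phi) = 2\cos\!\left(\frac{\phi}{2}\right)\sin\!\left(x + \frac{\phi}{2}\right)
```

When ϕ = 0 the cosine factor is 1 and the sum has twice the amplitude; when ϕ = π the factor is cos(π/2) = 0 and the waves cancel everywhere.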
So as the green wave changes phase, the orange wave (the sum of the blue and green waves) also changes. That’s what the bottom of the page header does: it traces the curve formed by summing two sine waves. As you scroll, one of the sine waves’ phases shifts relative to the other, creating a curve just like the orange line.
How
Note: The code samples here are out of context and are just intended to be explanatory. To see the whole thing in action, it’s on CodePen.
The gist of making the bottom of the header wavy is creating a CSS clip-path with that wavy shape. clip-path is a CSS property that lets you specify a shape; the element is visible inside the shape and hidden outside it. So by making a shape to clip the page header, you can make it look wavy. The shape can be specified using an SVG path string, which is what I did.
To start, let’s not worry about changing the phase or linking it to scrolling. Let’s just take a sine wave, add it to a phase-shifted sine wave, and get a path string.
Computers don’t do well with continuous functions. To get a sense of the shape of a sine wave, you have to sample it, measuring its value at a number of points along the way. The fewer points you can sample, the less computationally intensive the whole thing is, but at the cost of accuracy.
But it turns out you can get a pretty nice looking sine wave by only sampling at its maxima and zero-crossings (so, every π/2), assuming you use the right interpolation[1]. So for each point x between 0 and an arbitrary number of cycles (I chose 2), I calculated sin(x) + sin(x + ϕ) at π/2 intervals. ϕ will ultimately be the phase offset determined by the scroll position, but for now we can pick an arbitrary value, like 1.
That gives the values, but we still need a shape to set as the CSS clip-path. For that, there’s the always-useful line generator from d3-shape. Configure it with an x scale that maps the x values above to 0–width of the page, a y scale that maps [−2, 2] to the desired amplitude (shifted to the bottom of the header), and an interpolation curve (d3.curveNatural). Then put in the x and y values we just calculated, and out pops an SVG path string.
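As a rough, d3-free sketch of those two steps (sampling, then emitting a path string), with illustrative numbers and straight `L` segments standing in for the d3.curveNatural smoothing:

```javascript
// Sketch: sample sin(x) + sin(x + phi) every PI/2 and build an SVG path string.
// width/midY/amplitude/phi values are illustrative; the real version feeds these
// points to d3.line() with d3.curveNatural instead of straight "L" segments.
function wavePath({ cycles = 2, phi = 1, width = 800, midY = 80, amplitude = 20 } = {}) {
  const step = Math.PI / 2;                    // sample at maxima and zero-crossings
  const xMax = cycles * 2 * Math.PI;
  const points = [];
  for (let x = 0; x <= xMax + 1e-9; x += step) {
    const y = Math.sin(x) + Math.sin(x + phi); // ranges over [-2, 2]
    points.push([
      (x / xMax) * width,                      // map x to 0..width
      midY + (y / 2) * amplitude,              // map [-2, 2] to the wave band
    ]);
  }
  return points
    .map(([px, py], i) => `${i === 0 ? "M" : "L"}${px.toFixed(1)},${py.toFixed(1)}`)
    .join(" ");
}
```

The string starts with a single `M` move-to followed by line segments; the real implementation gets a smooth curve from the interpolator instead of straight lines.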
Now, that’s just the bottom border of the header, but we want it to be a closed area around the entire header, so we need to tack V0 H0 Z onto the end of it[2].
A little detail is that I didn’t want the waviness to change the overall height of the header or affect page flow, but the waves necessarily extend below the bottom edge. So, I had to make the whole header taller by the amplitude, then subtract the amplitude from the bottom margin.
Now all that’s left is hooking it up to scrolling. I used the scroll offset given by window.scrollY and mapped it to a phase for one of the two sine waves (the green one above). To make the header flat both when scrolled to the top and when scrolled to the height of the header, the phase offset needs to be an odd multiple of π at both ends. I created a linear scale that maps a scrollY domain of 0–header height to a range of π–3π.
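A minimal version of that mapping (a plain function rather than an actual d3 linear scale; `headerHeight` is illustrative):

```javascript
// Map scrollY in [0, headerHeight] to a phase in [PI, 3*PI].
// At both ends the offset is an odd multiple of PI, so the two
// sine waves cancel and the bottom of the header is flat.
function scrollToPhase(scrollY, headerHeight) {
  const t = Math.min(Math.max(scrollY / headerHeight, 0), 1); // clamp to [0, 1]
  return Math.PI + t * 2 * Math.PI;                           // PI .. 3*PI
}
```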
The naive way to listen for scroll events is to add an event listener to the document scroll event that directly recalculates the values, gets the path string, and sets it as the clip-path on the header. But you don’t want to do that because scroll events can fire faster than the frame rate at which the browser repaints, so instead I used this approach to debounce the updates. The scroll handler only tracks scrollY, and uses a requestAnimationFrame callback to do the more expensive operations.
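The shape of that pattern, with the frame scheduler injected so the sketch can be exercised outside a browser (in real code `raf` is `window.requestAnimationFrame`; all names here are illustrative):

```javascript
// rAF-throttled scroll handling: the scroll listener only records the latest
// position; the expensive path recomputation runs at most once per frame.
function makeScrollHandler(update, raf) {
  let latestY = 0;
  let ticking = false;
  return function onScroll(scrollY) {
    latestY = scrollY;   // cheap: just remember where we are
    if (!ticking) {
      ticking = true;
      raf(() => {
        ticking = false;
        update(latestY); // only the most recent position is used
      });
    }
  };
}
```

In the browser this gets wired up roughly as `window.addEventListener("scroll", () => onScroll(window.scrollY), { passive: true })` with `raf = requestAnimationFrame`.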
There are a couple of other details, like respecting someone’s prefers-reduced-motion setting and using an intersection observer to only do all of this when the header is visible, but that’s about it! Now I have a header with a bottom border that, when scrolled, is a phase-shifting sum of sine waves.
Part 2, about the plucky nav underlines, is now up.
This sampling rate is 4 times the frequency, which is twice what the Shannon–Nyquist theorem says you need, but that assumes a very specific interpolation involving sinc functions. ↩︎
In retrospect I could have done this with d3.area(), but I had originally intended to use two clip-paths, one of which was a rectangular gradient that covered everything outside the wavy area. That didn’t work because of a Safari bug I found with a similar approach, and my head was already thinking in terms of line. ↩︎
My grandfather was a rock ’n’ roll radio DJ in the 1950s and 1960s, most notably at KYW/WKYC in Cleveland. In 1967, he and his family moved to Detroit for a job at WXYZ, but shortly thereafter the radio station changed formats and he lost his job. That led him to found the Specs Howard School of Broadcast Arts in 1970 to train others to be in the broadcasting industry. Over the years the school had a number of logos, but this is one of the originals, and — in my opinion — one of the best.
The logo is reminiscent of the Shure (where I work now) Unidyne 55. Being the third generation in my family with an interest in audio (my dad had a recording studio for many years), I have long felt a connection to the school and its history. The school was absorbed by Lawrence Technological University in 2021, and, sadly, my grandpa died in September of this year. My adaptation of the logo is a small tribute to that history.
As I sat waiting the requisite 15 minutes to make sure I didn’t go into anaphylactic shock, I looked out over the arena and reflected on the historic nature of that moment.
A building designed for basketball, concerts, and large-scale events sat empty for nearly a year because gatherings were a threat to public health.
It had then been repurposed into a makeshift medical facility where vaccines were being administered on a mass scale.
There’s something dark — and decidedly not normal — about a space designed for fun being used as a medical facility.
I felt grateful for the incredible work of the many brilliant and hard-working people both before and during the pandemic who allowed us to reach that moment.
But fear of what this medical marvel might symbolize was also on my mind.
We had all been waiting for medicine to end the pandemic, and too many people had been ignoring epidemiology’s “inconvenient” non-medical interventions like social distancing and mask wearing.
We had been passively waiting for science to save us with vaccines, and this time we got lucky: science delivered.
This technological solutionism, waiting for a technological savior instead of making sacrifices, is at play in climate change, too.
I am absolutely thrilled that mRNA vaccine technology was practically ready and waiting to be applied to SARS-CoV-2 in record time, but it scares me that it reinforces a solutionist attitude: “See? Science saved us from the pandemic, so it’ll also save us from climate change!”
There is no vaccine for climate change.
We’ll need science to get us out of this, yes, but also political will.
Political will to rein in corporations.
Political will to fund science that can get us even partway there.
Political will to do things that hurt in the short term before the status quo does even more damage to more people.
Had we heeded epidemiologists’ advice on COVID, millions of lives around the world could have been saved.
Let’s not make the same mistake with climate change, squandering the remaining time we have while waiting for a scientific miracle.
I thought that was pretty cool, so decided to make a little site that lets you make your own parachute using the same encoding.
Other people have explained the encoding better than I could, so without further ado, here is Encode Mighty Things.
Toward the beginning of the pandemic, a friend asked me how she could use an external vocal mic and a guitar with a pickup on Zoom calls.
Sounds easy, right?
But to have the amount of control a musician really wants, it turned out to be a bit more involved.
Plus, when working from home for a microphone company, it’s pretty common to use a decent mic in meetings.
This post explains the setup I’ve been using for my calls.
(Don’t let the speakers fool you. Use headphones or it’ll feed back when echo cancellation is turned off!)
Mix the mic or other inputs going into the USB interface in the DAW
Be able to hear/monitor the mix
Route the output of the DAW to a Zoom call
Be able to hear the far end of the call through the same headphones as monitoring the mix
Get Blackhole
The key ingredient here is BlackHole, a virtual audio driver that acts as a passthrough from each input to the corresponding output[1].
This actually needs two instances of BlackHole because Zoom can only send and receive from the first two channels of any audio interface.
Fortunately, they offer direct downloads (email required) of each (and have nice instructions for building from source).
I have one called BlackHole 16ch and one called BlackHole 2ch, which — surprise — have 16 channels and 2 channels, respectively.
The 16-channel BlackHole device will function as the Zoom speaker; the 2-channel BlackHole will be the Zoom “microphone”.
Set up an aggregate audio device
Reaper will handle all of the audio routing, but since it doesn’t support having different input and output devices, the first thing to do is create an aggregate device in Audio MIDI Setup.
This allows the system to treat multiple devices as a single device with all of the channels from the individual devices.
It doesn’t really matter what order you add them to the aggregate device, but it should include both BlackHole devices and the audio interface.
I have the USB interface set to be the clock source, with the two BlackHole instances set for drift correction.
Route and mix in the DAW
Once I got the audio devices set up, I had to route everything in Reaper.
The general approach is:
The master out is going to Zoom, and my ears for monitoring
In other words, (almost) every track in the DAW is “normal” in the sense that what I hear is what the far end of the call hears
The output of Zoom is not going to the master out, so it doesn’t feed back
But before doing that, make sure Reaper is set to use the aggregate device in the device preferences.
For every input I want to mix, I created a track.
Selecting the input for that track feels almost like just using the regular USB audio interface, but with a whole bunch of other channels thanks to being aggregated with BlackHole.
By default, Reaper sends each track to the master out, but in order to hear live input, you have to arm the track and turn on record monitoring.
A fun bonus of routing through a DAW is that you can use plugins!
I use a simple NR plugin to deal with HVAC noise, and some compression.
The master out needs to go two places:
The USB interface so you can hear in your headphones
BlackHole, to get it into Zoom
So, from the master track’s routing window, add outputs to the USB interface and the two channels of the 2-channel BlackHole interface.
The fader/mute button for the USB interface on the output routing of the master is how I adjust whether/how much of myself I want to monitor in my ears.
That’s it for everything I want to send to Zoom, but I still want to be able to hear the far end of the call.
I could just tell Zoom to send out to the hardware interface, but I want it in the DAW, too.
This is useful for recording a tape sync, and so you don’t have to mess up your monitoring volume to change the volume of the far end.
For that, I created a special track and set its input to the 16-channel BlackHole instance.
When you set up Zoom to use a particular output device, it sends the audio to the first two channels, so I had to use channels 1 and 2.
Here’s where the track becomes special: you have to make sure it doesn’t send to the master out (it’ll feed back if you do).
Instead, send it directly to the USB interface’s out.
And that’s it for the DAW.
Set up Zoom
The basics of setting up Zoom are simple: it receives the master output of the DAW by setting its microphone to BlackHole 2ch, and setting the speaker to BlackHole 16ch sends the far end’s audio to the DAW on channels 1 and 2 of BlackHole 16ch.
Since you can control the output level from the DAW, I maxed out Zoom’s output and input faders and turned off the automatic gain control.
That’s really all you need for the basics, but Zoom has a bunch of cool advanced audio settings.
Under “Music and Professional Audio” you can tell Zoom to let you turn off all of its audio processing, sending “original sound”.
This is great, because what’s the point of having a decent mic if Zoom is going to band-limit and compress it to death?
You can also turn on stereo, but I only use that if I really need to, which is rare.
(Keep in mind that in order to actually activate these settings, you have to press “Turn on original sound” in the upper left of a call.)
Bonus! Sharing system sound
Zoom can share system sound, but when using a setup like this, I don’t recommend it.
Turning it on activates some sort of additional virtual audio device on the system, which can mess with things.
Remember that Zoom can only send audio out on the first two channels of a device.
Thankfully, the system isn’t so limited.
To share system sound, I went back to Audio MIDI Setup and under Configure Speakers told it that for stereo out, BlackHole 16ch uses channels 3 and 4.
Now I can set my system output device to BlackHole 16ch, make a new track in the DAW, set its input to BlackHole 16ch, and system sound comes in there.
So the far side of a Zoom call comes in on channels 1 and 2, and system sound on 3 and 4.
And that’s it.
Happy calling!
I used BlackHole because it’s free and did what I needed. You can achieve the same thing with a nice UI using Loopback from the excellent Rogue Amoeba. ↩︎
You know those countdown timers at crosswalks? Sometimes when crossing the street, I like to try to guess what number it’s on even when I can’t see the whole thing (like when approaching the intersection at an oblique angle).
This got me (over)thinking: if I want to know how much time is left, is it better to see the right side of the countdown timer (approaching from the left), or the left side (approaching from the right)? In other words, does the left or right side of the display carry more information?
These timers use seven-segment displays. Even if you didn’t know they were called seven-segment displays, you see them all over the place. They use seven separate segments, labeled A–G, to create each of the 10 digits from 0–9.
(Diagram: the seven segments of a display, labeled A–G.)
To form each of the ten digits, the seven segments are turned on (1) or off (0) in different combinations. Here are the standard representations of 0–9.
| Digit | A | B | C | D | E | F | G |
|-------|---|---|---|---|---|---|---|
| 0     | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1     | 0 | 1 | 1 | 0 | 0 | 0 | 0 |
| 2     | 1 | 1 | 0 | 1 | 1 | 0 | 1 |
| 3     | 1 | 1 | 1 | 1 | 0 | 0 | 1 |
| 4     | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
| 5     | 1 | 0 | 1 | 1 | 0 | 1 | 1 |
| 6     | 1 | 0 | 1 | 1 | 1 | 1 | 1 |
| 7     | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| 8     | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 9     | 1 | 1 | 1 | 1 | 0 | 1 | 1 |
The seven segments aren’t all turned on an equal number of times over the course of the ten digits. That means seeing some segments turned on is more probable than seeing others.
| Segment | On for how many digits? |
|---------|-------------------------|
| A       | 8/10                    |
| B       | 8/10                    |
| C       | 9/10                    |
| D       | 7/10                    |
| E       | 4/10                    |
| F       | 6/10                    |
| G       | 7/10                    |
So how can we tell which of these seven segments communicates the most information?
Information entropy
The segments that are on or off for close to half the digits contain more information than those that are either on or off for most digits.
This is intuitive for the same reason a fair coin toss contains more information than tossing a coin with heads on both sides: you’re less certain what you’re going to get, so you learn more by observing the value.
Claude Shannon’s[1] concept of entropy from information theory is a good way to quantify this problem. Entropy, H, is defined as

H(X) = −∑_i P(x_i) log_b P(x_i)
Here’s what that means in the case of a seven-segment display. X is a random variable representing whether a segment is on or off. Since a segment can only have two states, the random variable X’s actual values are either on or off. P is the probability operator, so P(x_i) really means the probability that a segment is on or off. (b is the base of the logarithm. We’re going to use 2 because we like bits.)
Let’s take segment A as an example. It’s on for 8 out of 10 digits, and off for 2 out of 10. That means the probability of seeing it on is 0.8, and the probability of seeing it off is 0.2. In other words (well, symbols), P(x_on) = 0.8 and P(x_off) = 0.2.
Plugging that in,
H(A) = −0.8 log₂ 0.8 − 0.2 log₂ 0.2 ≈ 0.722
In Shannon’s terms, there are 0.722 bits of information communicated by segment A of a seven-segment display.
Doing this for all seven segments, we get these entropy values:
| Segment | Shannon entropy |
|---------|-----------------|
| A       | 0.721928        |
| B       | 0.721928        |
| C       | 0.468996        |
| D       | 0.881291        |
| E       | 0.970951        |
| F       | 0.970951        |
| G       | 0.881291        |
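Those per-segment numbers are easy to check with a quick sketch (the on-counts come from the earlier table; the function name is mine):

```javascript
// Binary entropy of each segment, from the number of digits (out of 10)
// for which the segment is lit: H = -P(on) log2 P(on) - P(off) log2 P(off).
function segmentEntropy(onCount, total = 10) {
  const h = (p) => (p === 0 ? 0 : -p * Math.log2(p)); // one term; 0 log 0 = 0
  const p = onCount / total;
  return h(p) + h(1 - p);
}

const onCounts = { A: 8, B: 8, C: 9, D: 7, E: 4, F: 6, G: 7 };
const entropies = Object.fromEntries(
  Object.entries(onCounts).map(([seg, n]) => [seg, segmentEntropy(n)])
);
```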
It sure looks like segments E and F carry the most information. That makes sense because they’re the closest to being on/off 50% of the time. Guess it’s better to approach an intersection from the right in order to see the left-hand segments.
But wait.
When approaching an intersection, you can see both right segments (B and C), or both left segments (E and F). A pair of segments from a single display are anything but independent because they’re both showing part of the same digit, so we can’t just add up their entropies.
Instead, treat each pair as if it holds a single value. Taken together, two segments can take on any of four values (off–off, off–on, on–off, on–on), which is binary for 0–3.
| Digit | Segments B & C | Binary | Decimal |
|-------|----------------|--------|---------|
| 0     | On – On        | 11     | 3       |
| 1     | On – On        | 11     | 3       |
| 2     | On – Off       | 10     | 2       |
| 3     | On – On        | 11     | 3       |
| 4     | On – On        | 11     | 3       |
| 5     | Off – On       | 01     | 1       |
| 6     | Off – On       | 01     | 1       |
| 7     | On – On        | 11     | 3       |
| 8     | On – On        | 11     | 3       |
| 9     | On – On        | 11     | 3       |
| Digit | Segments E & F | Binary | Decimal |
|-------|----------------|--------|---------|
| 0     | On – On        | 11     | 3       |
| 1     | Off – Off      | 00     | 0       |
| 2     | On – Off       | 10     | 2       |
| 3     | Off – Off      | 00     | 0       |
| 4     | Off – On       | 01     | 1       |
| 5     | Off – On       | 01     | 1       |
| 6     | On – On        | 11     | 3       |
| 7     | Off – Off      | 00     | 0       |
| 8     | On – On        | 11     | 3       |
| 9     | Off – On       | 01     | 1       |
In this case, our random variable X can take on four possible values rather than just two. Taking segments E and F as an example, the joint value is 0 for 3/10 digits, 1 for 3/10 digits, 2 for 1/10 digits, and 3 for 3/10 digits. Going back to the initial definition of entropy, we get

H(EF) = −3 × 0.3 log₂ 0.3 − 0.1 log₂ 0.1 ≈ 1.90
So we get 1.16 bits of information in joint segments B–C, and 1.90 bits in joint segments E–F. So there you have it: it’s still better to approach an intersection from the right.
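A sketch of that joint-entropy calculation, deriving the pair values from the standard segment encodings (the digit table from earlier, in A–G order; all names are mine):

```javascript
// Standard seven-segment encodings for digits 0-9, segments in A..G order.
const SEGMENTS = [
  [1,1,1,1,1,1,0], [0,1,1,0,0,0,0], [1,1,0,1,1,0,1], [1,1,1,1,0,0,1],
  [0,1,1,0,0,1,1], [1,0,1,1,0,1,1], [1,0,1,1,1,1,1], [1,1,1,0,0,0,0],
  [1,1,1,1,1,1,1], [1,1,1,1,0,1,1],
];

// Treat a pair of segments as one 2-bit value (0-3) per digit.
function pairValues(i, j) {
  return SEGMENTS.map((seg) => 2 * seg[i] + seg[j]);
}

// Shannon entropy of a list of observed values.
function entropy(values) {
  const counts = {};
  for (const v of values) counts[v] = (counts[v] || 0) + 1;
  return Object.values(counts).reduce((h, c) => {
    const p = c / values.length;
    return h - p * Math.log2(p);
  }, 0);
}

const BC = entropy(pairValues(1, 2)); // segments B & C
const EF = entropy(pairValues(4, 5)); // segments E & F
```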
But wait!
When was the last time you walked up to an intersection and only saw the timer on one number? If you look for at least half a second (on average), you’ll see it tick down.
For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is:
H(S) = −∑_i p_i ∑_j p_i(j) log p_i(j)
where i is a state (certain preceding characters) and pi(j) is the probability of j given i as the previous character.
But actually, I don’t like this notation, so I’m going to rewrite it as
H(S) = −∑_i P(x_i) ∑_j P(x_j | x_i) log_b P(x_j | x_i)
Alright, then. The probability of seeing a given state is the same as before. As for the conditional probabilities, let’s go back to the 0–3 binary values and assume 0 loops back to 9[2]. If we see segments B and C in a 1 state (off–on), the next tick it will be in a 1 state half the time, and a 3 state half the time. Going through the rest of the states and transitions, we get these transition probabilities:
So for segments E and F, when i = 0 and j = 2, P(x_i) = 3/10 as before, and P(x_j | x_i) = 1/3 because, as those circles show, a 0 transitions to a 2 a third of the time.
Now it’s just a matter of an inelegant nested for loop to determine that the first-order entropy rate of segments B–C is 1.00 bits, and 1.03 bits for segments E–F.
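That inelegant nested loop might look something like this sketch (self-contained, so it repeats the standard segment encodings; the countdown ticks 9 → 8 → … → 0 and wraps 0 back to 9 as above):

```javascript
// Digits 0-9, segments in A..G order (standard seven-segment encodings).
const SEGMENTS = [
  [1,1,1,1,1,1,0], [0,1,1,0,0,0,0], [1,1,0,1,1,0,1], [1,1,1,1,0,0,1],
  [0,1,1,0,0,1,1], [1,0,1,1,0,1,1], [1,0,1,1,1,1,1], [1,1,1,0,0,0,0],
  [1,1,1,1,1,1,1], [1,1,1,1,0,1,1],
];

// 2-bit joint value of segments i and j for each digit.
const pairValues = (i, j) => SEGMENTS.map((s) => 2 * s[i] + s[j]);

// First-order entropy rate of a segment pair on a looping countdown.
function entropyRate(values) {
  const stateCount = {}; // how often each joint value appears
  const transCount = {}; // transCount[i][j]: times state i is followed by state j
  for (let d = 0; d <= 9; d++) {
    const from = values[d];
    const to = values[d === 0 ? 9 : d - 1]; // next digit while counting down; 0 wraps to 9
    stateCount[from] = (stateCount[from] || 0) + 1;
    transCount[from] = transCount[from] || {};
    transCount[from][to] = (transCount[from][to] || 0) + 1;
  }
  // H(S) = -sum_i P(x_i) sum_j P(x_j|x_i) log2 P(x_j|x_i)
  let rate = 0;
  for (const [state, n] of Object.entries(stateCount)) {
    for (const count of Object.values(transCount[state])) {
      const pij = count / n;          // P(x_j | x_i)
      rate -= (n / 10) * pij * Math.log2(pij);
    }
  }
  return rate;
}

const rateBC = entropyRate(pairValues(1, 2)); // segments B & C
const rateEF = entropyRate(pairValues(4, 5)); // segments E & F
```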
So, if you can manage to stare at either the left or right segments for a whole second, you’re still better off looking at the left segments, but not by much.
I’ll leave figuring out the entropy rates for looking at it longer as an exercise for the reader, because I’m done overthinking this (for now).
Shannon and I both got undergrad degrees in EE from the University of Michigan, but he went on to create information theory, and I went on to write this stupid blog post. ↩︎
This makes sense for the 1s place for segments B–C, but not for E–F. ↩︎
If two points (or posts) make a trend, interactive data visualizations of the Hebrew calendar are a thing I blog about now. In the long and storied tradition of single-use novelty sites, I’ve created isyontefearlythisyear.com (and its evil twin, isyonteflatethisyear.com). Now you can point to real data when the conversation inevitably comes up before every holiday.
Being Jewish, I immediately thought of the calendrical connection between Easter and Passover. Specifically, since Easter is usually around Passover, does the 19-year cycle of Hebrew leap years play a role in when Easter falls?
Very briefly (and approximately), a solar year is aligned with the seasons (because a year is one orbit of the earth around the sun), but the Hebrew calendar is based on a lunar calendar in which a month is determined by one cycle through the phases of the moon. The solar year is approximately 365 days, while 12 lunar months are approximately 354 days, or 11 days shorter. If the Hebrew calendar were a pure lunar calendar, over time the months would drift around the year. To make up for this shortfall, a 30-day leap month is added to the Hebrew calendar every two to three years, seven times in a 19-year cycle (years 3, 6, 8, 11, 14, 17, and 19). (30 days × 7 years ≈ 11 days × 19 years. Hey, I said this explanation is approximate.)
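That back-of-the-envelope arithmetic (the 19-year Metonic cycle) can be checked directly; the constants below are the usual approximate values for the mean tropical year and synodic month, not from the original post:

```javascript
// How well do 19 solar years line up with 235 lunar months (12*19 + 7 leap months)?
const SOLAR_YEAR = 365.2422;   // mean tropical year, in days (approximate)
const LUNAR_MONTH = 29.53059;  // mean synodic month, in days (approximate)

const months = 12 * 19 + 7;              // 235 months in a 19-year cycle
const solarDays = 19 * SOLAR_YEAR;       // ~6939.60 days
const lunarDays = months * LUNAR_MONTH;  // ~6939.69 days
const driftPerCycle = lunarDays - solarDays; // under a tenth of a day per 19 years
```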
To see the effect of Hebrew leap years on Easter dates, I recreated iamreddave’s graph, but with larger points for leap years and points colored by position in the 19-year cycle.
What jumps out to me is that all of the late Easter dates are Hebrew leap years, which is what you’d expect when an additional month has recently been inserted, but all of the early Easter dates are also Hebrew leap years.
Passover, on the other hand, always occurs late in a leap year, as you’d expect:
Toggling between the two, it looks like it’s years with the latest Passovers that get leap-year–early Easters.
Zoom in a bit and you’ll find that the early Easter dates are always years 8, 11, and 19 of the 19-year cycle:
I thought maybe this happens because the Christian 19-year cycle is shifted by three years from the Jewish cycle (2014 was the first year of the Christian cycle, while 2017/5777 is the first year of the Jewish cycle), but this isn’t the case. Here’s what seems to be happening:
Easter is (by definition) the first Sunday after the full moon after the vernal (in the northern hemisphere) equinox. Typically, that’s the full moon of Nissan (the Hebrew month which contains Passover), but in those three years the leap month pushes Passover so late that it’s a full month later than the equinox. In other words, in those years the new moon that marks the start of Nissan is at least ~14 days after the equinox, which puts a full moon very shortly after the equinox, which is still in Adar II (the month before Nissan).
Shout out to the Hebcal team for their amazing tools!
With parshat Shmot coinciding with the inauguration (err, Put-in) of Donald Trump, this image from Yossi Fendel has been making the rounds on social media. It quotes the eighth verse of the parsha (and book):
Let us deal shrewdly with them, so that they may not increase; otherwise in the event of war they may join our enemies in fighting against us and rise from the ground.”
The liturgy talks a lot about the exodus from Egypt, but focuses far less on why the Israelites became enslaved in the first place. The answer, this parsha makes clear, is fear. Fear of shifting demographics. Fear of an ethnic group that looked different, spoke differently, and had different practices and customs — yet served an important economic function by doing the job no Egyptian was willing to do.
Faced with that fear from shifting demographics, the Pharaoh had at least a couple of courses of action. He could have pushed an agenda of multiculturalism, encouraging the Egyptians and Israelites to get to know one another, thereby mitigating their fear. Instead, he felt that it was more important to maintain what he considered the fundamentally Egyptian character of Egypt.
The United States — at least in theory — was founded not as “a place for a people”, but as a place for all people. Sadly, there are people who believe that America was a white country (back when it was great or something 🙄), and they are now feeling the same fear and oppressive urges the biblical Pharaoh felt.
This is precisely the danger that comes along with ethnic, racial, or religious nationalism. A nation founded as “a place for a people” cannot simultaneously offer full and equal rights/privileges to all, and continue to exist should that people become a minority. And the only ways to maintain the “desired” demographics are exclusion and oppression. Whether it’s in the context of Trump-emboldened white nationalism here in America, or Zionism, its moral equivalent, let’s learn from this week’s well-timed parsha: national ideals that depend on maintaining certain demographics are inherently oppressive.
In a place like America, although changing demographics can bring up a natural fear of the stranger, they also provide us with an opportunity not to be like Pharaoh and to strive for a multicultural ideal. The Torah reminds readers that, because the Israelites were strangers in Egypt, it is forbidden to oppress the stranger [1, 2], and it even explains how: by loving that stranger [3]. But loving the stranger is abstract. Perhaps it’s better to take a cue from the JPS translation and befriend the stranger. Friends are way less scary than strangers.