<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Igor47's Blog</title>
        <link>https://igor.moomers.org/</link>
        <description>The feed for Igor's personal writing</description>
        <lastBuildDate>Thu, 29 Jan 2026 05:24:14 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>NextJs + Feed</generator>
        <language>en</language>
        <image>
            <title>Igor47's Blog</title>
            <url>https://igor.moomers.org/images/myhead.jpg</url>
            <link>https://igor.moomers.org/</link>
        </image>
        <copyright>All rights reserved 2023, Igor Serebryany</copyright>
        <category>Technology</category>
        <category>Philosophy</category>
        <category>Humanity</category>
        <item>
            <title><![CDATA[Making Medical Chocolate]]></title>
            <link>https://igor.moomers.org/posts/medical-chocolate</link>
            <guid>https://igor.moomers.org/posts/medical-chocolate</guid>
            <pubDate>Wed, 28 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Making medical cannabis chocolate for my mom's dementia.
]]></description>
            <content:encoded><![CDATA[
It's basically impossible to get high-CBD chocolate.
I've only really been able to find two brands of chocolate in a high-CBD formulation: [Kiva](https://www.kivaconfections.com/flavor/cbd-51-dark-chocolate), and [Revive Pure Life](https://www.revivepurelife.com/products).
Neither is available from any dispensary in Los Angeles or the Bay Area, at least according to [Weedmaps](https://weedmaps.com/).

This is a problem for my family.
My mother has been in memory care for a few months now, but she continues to be very agitated there.
She's been prescribed two different kinds of atypical antipsychotics to help manage her agitation, but these medications are off-label for dementia patients, and in fact come with [black-box warnings](https://publichealth.jhu.edu/2025/what-is-a-black-box-warning) recommending against their use due to increased risk of death.
Their efficacy is [dubious at best](https://pmc.ncbi.nlm.nih.gov/articles/PMC3516138/).
The FDA has only [recently approved brexpiprazole](https://www.fda.gov/news-events/press-announcements/fda-approves-first-drug-treat-agitation-symptoms-associated-dementia-due-alzheimers-disease) as the first atypical antipsychotic for dementia-related agitation, and that approval is [not without controversy](https://www.madinamerica.com/2023/05/fda-approval-antipsychotic-rexulti/).

Out of a sense of helplessness and desperation, we decided to try cannabis.
Surprisingly, this seemed to help!
However, my mom is a pretty picky eater, and she wanted nothing to do with the typical gummies or weed candies.
She's obviously not going to smoke, and administering a tincture is challenging, given how much subterfuge is already required to get her to take all her other medication.
Pretty soon, her coffee is going to be more medication than actual coffee!

She *does* like chocolate, though, and I was initially able to source the Revive Pure Life at a dispensary not too distant from Mom's memory care facility.
Alas, that appeared to have been the only available batch in Los Angeles.
When we ran out a few weeks later, we were staring down the barrel of a major relapse in her behavior issues, and were totally helpless to source more chocolate.
I attempted to contact the manufacturer directly, but never heard back from them.
I also tried to talk several dispensaries into making a custom order for me -- but no luck there, either!
In desperation, I decided to just make my own chocolate.

## Gathering Supplies

It turns out making chocolate is not too difficult.
I ordered a few [chocolate molds](https://www.amazon.com/dp/B0DFH26GZS) and a [digital thermometer](https://www.amazon.com/dp/B0F5X4FM3Q) on Amazon.
I also picked up [a large bar of chocolate](https://www.traderjoes.com/home/products/pdp/pound-plus-72-cacao-dark-chocolate-048875).

Next, I needed cannabis.
I decided to get tinctures of cannabis in MCT oil, since oil could dissolve in the chocolate without affecting the texture.
I also wanted to match her previous 7:1 dosage from the Revive Pure Life, to avoid having to get a new script from her psychiatrist.
There are no 7:1 tinctures conveniently available, but I was able to get Papa & Barkley's [30:1](https://www.papaandbarkley.com/products/30-1-releaf-tincture) and [1:1](https://www.papaandbarkley.com/products/1-1-releaf-tincture) tinctures from [a nearby dispensary](https://maps.app.goo.gl/w8NPzro4BE1bAPCv5).

## Making Chocolate

I began with a test bar, both to figure out the quantity of chocolate per bar and to get the technique right.
I heated the chocolate in a pot, set into another pot full of water, until it was 120°F.
Then, I allowed it to cool until it was around 90°F before pouring into the mold.
Waiting for it to cool down took a substantial amount of time.
I weighed the pot before and after the pour, and determined that my molds comfortably fit around 65 grams of chocolate per bar.

Now, it was time for some arithmetic.
I wanted to match the Revive Pure Life bars: 7mg CBD and 1mg THC per serving, or 70mg CBD and 10mg THC per 10-square bar.
I needed to figure out how much of my 1:1 and 30:1 tinctures to use to get the correct ratio in the chocolate.
Thankfully the tinctures were *very* well-labeled.

![THC and CBD tinctures](/images/chocolate-tinctures.jpg "CBD and THC tinctures for the chocolate")

To figure out how much of each tincture to use, it is necessary to solve a system of two linear equations, one for the total THC and one for the total CBD.
Rather than walk through the math, here's a calculator:

(Interactive calculator available at https://igor.moomers.org/posts/medical-chocolate)
<!-- CHOCOLATE_CALCULATOR -->
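
If you'd rather run the numbers yourself, here's a minimal sketch of the same calculation; the concentrations below are placeholders, not the real label values -- read those off your own tinctures.

```ts
// Sketch of the two-equation solve: ml of each tincture to hit a target
// CBD/THC amount per bar. The concentrations (mg/ml) are placeholders --
// read the real values off your tincture labels.
function tinctureMix(
  targetCbdMg: number,
  targetThcMg: number,
  a: { cbd: number; thc: number }, // mg/ml of tincture A (e.g. the 30:1)
  b: { cbd: number; thc: number }, // mg/ml of tincture B (e.g. the 1:1)
): { mlA: number; mlB: number } {
  // Solve: mlA*a.cbd + mlB*b.cbd = targetCbdMg
  //        mlA*a.thc + mlB*b.thc = targetThcMg
  const det = a.cbd * b.thc - a.thc * b.cbd
  if (det === 0) throw new Error("tinctures have the same ratio; no unique solution")
  const mlA = (targetCbdMg * b.thc - targetThcMg * b.cbd) / det
  const mlB = (a.cbd * targetThcMg - a.thc * targetCbdMg) / det
  return { mlA, mlB }
}

// Hypothetical concentrations: a 30:1 at 30mg CBD + 1mg THC per ml, a 1:1 at
// 10mg CBD + 10mg THC per ml, targeting 70mg CBD / 10mg THC per bar.
console.log(tinctureMix(70, 10, { cbd: 30, thc: 1 }, { cbd: 10, thc: 10 }))
```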

This works out to 2.44ml of oil added to 65 grams of chocolate.
However -- this is a problem.
I'm using 72% chocolate, which contains about 30% fat (cocoa butter) -- about 20 grams per 65g bar.
Adding 2.44ml of oil would increase the fat content by over 10%, likely resulting in a soft, greasy-feeling bar.
To make sure I got a nicely tempered bar, I halved the oil, making the dose 2 squares instead of 1.
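
As a quick sanity check on the fat math above (the oil density is an assumption):

```ts
const barGrams = 65
const fatFraction = 0.30                 // ~30% cocoa butter in 72% dark chocolate
const fatGrams = barGrams * fatFraction  // ≈ 19.5g of fat already in the bar

const oilMl = 2.44
const oilDensity = 0.94                  // assumed g/ml for MCT oil
const addedFatGrams = oilMl * oilDensity // ≈ 2.3g of added fat

console.log(`${((addedFatGrams / fatGrams) * 100).toFixed(1)}% more fat`) // ≈ 11.8%
```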

To make the medicated chocolate, I heated enough chocolate for 4 bars to around 120°F, and let it cool to 90°F.
Then, I added my cannabis tincture and stirred gently but thoroughly.
Finally, I poured the chocolate into the mold and allowed it to set at room temperature for quite a while.

![The chocolate is poured into molds](/images/chocolate-poured.jpg "Chocolate is poured into molds")

After letting it sit for a while, it solidified nicely and got a very pretty swirly texture.
I wonder if this was the result of the tincture I added?

![The chocolate has set](/images/chocolate-solidified.jpg "Chocolate after it's solidified")

## Upshot

I wrapped the chocolate I made in aluminum foil and proudly delivered it to my mom's memory care community.
The next day, I learned that the staff cannot legally administer this chocolate to my mom.
Apparently, the medicine cart cannot include an obviously home-made product.
They're only allowed to administer cannabis products that are "commercially produced".

So in the end, this project was, technically, a complete waste of time.
Still, I had fun learning about chocolate!
Plus, now I have a bunch of CBD chocolate for home consumption.

When you're on a caregiving journey for a loved one with dementia, you quickly learn to take the small wins and look at the bright side.
Stay tuned for my next post -- producing professional-looking cannabis product wrappers.
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/chocolate-header.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Preventing Political Fundraising Spam]]></title>
            <link>https://igor.moomers.org/posts/political-fundraising-spam</link>
            <guid>https://igor.moomers.org/posts/political-fundraising-spam</guid>
            <pubDate>Fri, 23 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Donating to political campaigns, without getting bombarded by apocalyptic messages from randos.
]]></description>
            <content:encoded><![CDATA[
This is a post about how to preserve your privacy and avoid spam while still supporting Democratic campaigns and movements.
The TL;DR is that I want you to take action now.
My recommended actions:

* Sign [petition 1](https://actionnetwork.org/petitions/pledge-to-stop-donating-to-candidates-who-wont-protect-your-email-address) and [petition 2](https://actionnetwork.org/petitions/tell-the-email-sending-platforms-dont-let-your-clients-spam-me).
* See `Ready to do more?` at [ethicalemail.org](https://ethicalemail.org/) and send opt-out emails to Democratic data brokers
* (if you're in California) Sign up for [DROP](https://consumer.drop.privacy.ca.gov/)
* Donate money to [Movement Voter Project](https://movement.vote/donate/)
* Donate money to candidates via [Oath.vote](https://app.oath.vote)

Read on for more details!

## Background

In my phone's SMS app, the suggested first word for a response is `stop`.
This is because I get an ungodly amount of political spam.
No good deed goes unpunished, and my reward for caring about politics is to be forever blasted by messages with subject lines like `URGENT`, `Live Poll OPEN - For Democrats ONLY`, and `WE ARE BEGGING YOU`.
Though I diligently reply `stop` to end, I keep ending up on lists for newly-formed PACs or the campaigns of Democratic candidates for Comptroller of the City of Muncie, Indiana.

Now, I don't have to tell you that things are pretty dire out there.
An unaccountable paramilitary force rampages around American cities, our "leaders" are engaging in massive corruption, and the global order is being dismantled for no reason.
Meanwhile, our civilization has real problems; to name a few: climate change, the risks of AI, centralized wealth, rising authoritarianism, nuclear proliferation, novel or treatment-resistant pathogens.
I would love to help elect leaders who are, you know, interested in some of these.

On the other hand, the spam...

## Whence the spam

I had assumed that the spam is the fault of [ActBlue](https://secure.actblue.com/directory).
However, it turns out [this is not true](https://matthodges.com/posts/2024-08-25-actblue-isnt-selling-your-data/), a fact I discovered by reading [The Movement Voter Project's FAQ](https://movement.vote/faq/will-my-donor-information-stay-private/).
Instead, when you donate to a candidate/campaign via ActBlue, they share your information with that specific candidate/campaign.
The campaign then later sells your data.
Sometimes, they sell it to other campaigns.
Other times, it goes to [shady consulting firms extracting money from anxious seniors by using dramatic or false claims](https://data4democracy.substack.com/p/the-mothership-vortex-an-investigation).
(Though, actually, that investigation into Mothership Strategies by Adam Bonica [did lead ActBlue](https://data4democracy.substack.com/p/the-mothership-vortex-a-quick-update) to implement some [policy changes](https://www.actblue.com/posts/actblue-takes-action-how-were-protecting-donors-from-deceptive-practices/)).

## Stemming the Tide

From Movement Voter, I learned about [Ethical Email](https://ethicalemail.org/), which is trying to organize resistance to the Big Blue Spam Machine.
The great thing about Ethical Email is they're very action-oriented.
They have two petitions I signed -- one pledging [not to support candidates who sell their donor data](https://actionnetwork.org/petitions/pledge-to-stop-donating-to-candidates-who-wont-protect-your-email-address), and one urging NGPVAN, which runs much of the Democratic Party's campaign tooling, [to do a better job preventing non-consensual spam](https://actionnetwork.org/petitions/tell-the-email-sending-platforms-dont-let-your-clients-spam-me).

Beyond that, they also provide convenient templates to request data deletion from a few big Democratic data brokers (see the `Ready to do more?` section of Ethical Email).
Californians like me have special rights under our [consumer privacy act](https://oag.ca.gov/privacy/ccpa), and I invoked that in the messages I sent to the data brokers.
(BTW, if you're a California resident, California runs a program called [DROP](https://consumer.drop.privacy.ca.gov/) which will automatically delete your data from a bunch of data brokers; sign up right now!)

## Prevention: 16x Better Than Cure

If you're like me, you might want to donate to political candidates without also signing up for a bunch of spam from other candidates you know nothing about.
*I want this too!*
Alas, your candidate is probably too busy raising money and filming TikToks, and has left day-to-day campaign operations to the same consultants who brought us Mothership Strategies.
What to do while the Ethical Email petitions get traction?

Well, you might start by donating money to privacy-conscious organizations.
The main one I recommend is [Movement Voter Project](https://movement.vote/).
I actually prefer organizations like MVP over giving money directly to campaigns (aka [hard money](https://govfacts.org/elections-voting/candidates-campaigns/campaign-finance-rules-disclosure/hard-money-vs-soft-money-how-campaign-finance-rules-shape-american-elections/)).
I want to invest in building movements and organizations which survive past a single campaign, and can go on to build political coalitions.

On the other hand, voices I trust [strongly recommend the hard-money approach](https://www.slowboring.com/i/178237616/some-general-considerations).
Through that blog post, I learned about [oath.vote](https://app.oath.vote/), which not only researches effective candidates but advocates for donor privacy with those candidates.
Their donations are organized around specific causes, like [Protecting Democracy](https://app.oath.vote/donate?p=pd) or [Flip the House](https://app.oath.vote/donate?p=fh).
Those are the two I personally donated to.

## Act Now

Whether you care about spam or nah, this is no time to sit on the sidelines.
If you have the means, you should be donating to anti-MAGA campaigns.
I've suggested some privacy-preserving donation paths, and also some approaches to reclaim your privacy from data brokers.
Matt Yglesias has [other recommendations](https://www.slowboring.com/i/178237616/our-top-recommendations-for-now).
Either way, don't get caught up in analysis paralysis; take action.
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/political-spam.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[My WLED Christmas Lights]]></title>
            <link>https://igor.moomers.org/posts/wled-christmas-lights</link>
            <guid>https://igor.moomers.org/posts/wled-christmas-lights</guid>
            <pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[My setup, how to weather-proof it, flashing WLED on an ESP32C3, and power tips
]]></description>
            <content:encoded><![CDATA[
I've got some Christmas lights on the front of my house.

![Lights on my house](/images/house-lights.jpg)

I actually leave these on all year long, and use different colors/patterns at different times of the year.
These are WS2812 (or compatible) LEDs, which are readily available and cheap.
The [signal protocol](https://www.arrow.com/en/research-and-events/articles/protocol-for-the-ws2812b-programmable-led) for WS2812 lights is pretty ingenious, and it always amazes me just how fast modern electronics are.
To shift out 24 bits of color to each of 100 LEDs takes about 3 milliseconds, since each bit occupies roughly 1.25 *micro*seconds on the wire.
This means you can refresh a 100-LED string at over 300 frames per second!
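
Here's a rough sketch of that timing math (standard WS2812 bit timing; exact figures vary a bit by chip):

```ts
// Each WS2812 bit occupies ~1.25µs on the wire (800 kbps), 24 bits per LED,
// plus a >50µs reset/latch gap between frames.
const BIT_TIME_US = 1.25
const BITS_PER_LED = 24
const RESET_US = 50

function frameTimeUs(ledCount: number): number {
  return ledCount * BITS_PER_LED * BIT_TIME_US + RESET_US
}

const us = frameTimeUs(100)
console.log(`${us}µs per frame, ~${Math.round(1_000_000 / us)} fps`)
// -> 3050µs per frame, ~328 fps for a 100-LED string
```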

I think the lights on my house are [these ones](https://www.amazon.com/dp/B08KS4LXFD) from Amazon.
I cut off the USB controller that comes with the lights and soldered my own 3-wire pigtail onto them.
These lights don't actually implement the standard WS2812 protocol.
Instead of grabbing the first 24 bits and then shifting out the rest of the data to the next LED, the LEDs in these strings know their position in the string.
This means you can't connect two of them in series -- or rather, you can, but the second string will just display whatever the first string displays.
Also, if you cut off the first LED, then the first 24 bits will just go into the void.
Still, it's hard to beat the price and ready availability.

I control these using an [ESP32](https://www.espressif.com/en/products/socs/esp32) chip running [WLED](https://kno.wled.ge/).
This lets me use an app on my phone and pick from a bunch of pre-defined color palettes and patterns.
I can set specific presets to turn on at specific times of day.

## Weatherproofing

One problem I've had has been weatherproofing the ESP32 controllers.
People generally expect that any amount of water combined with any kind of electricity or electronics will result in a catastrophic reaction.
In practice, I've found that most electronics, especially low-voltage devices, are reasonably water-resilient.
For instance, the LED strings I'm using are not explicitly rated for any kind of water resistance, but seem to do okay left out in the rain for multiple seasons.

On the other hand, getting rainwater into my microcontrollers has not worked out well for me.
My initial outdoor controller died after the first rain, losing my hard-programmed schedules and presets.
Here's my v2:

![V2 of my lights controller](/images/house-lights-v2.jpg)

All I had on hand was this metal enclosure.
To avoid the project board shorting out, I covered the bottom with tape and then hot glue.
I also hot-glued the wires to avoid water build-up.
This version lasted through one wet season, but then shorted out soon after I moved to Berkeley.

I decided v3 would be the final version, and got actual waterproof enclosures.
I drilled a hole for the wires, and epoxied over the hole with waterproof epoxy.
I dispensed with the project board, and just soldered the wires directly to the microcontroller.
I also got a much smaller microcontroller, a [Seeed Studio XIAO](https://www.seeedstudio.com/Seeed-XIAO-ESP32C3-p-5431.html), to make sure I had space in the enclosure.
Here's my v3:

![V3 of my lights controller, top view](/images/house-lights-v3-top.jpg)
![V3 of my lights controller, side view](/images/house-lights-v3-side.jpg)

I'm using [this enclosure](https://www.amazon.com/dp/B07H5C8BB6), which ended up being generously large given how small the microcontroller is.
The black sticker on the lid is the WiFi antenna from the XIAO.
Of the three pigtails, one is for power, and the other two are for driving two separate strings of LEDs.

## Flashing

I wanted to install WLED on my new ESP32C3 microcontrollers, but the [official directions](https://kno.wled.ge/basics/install-binary/) on the WLED site are pretty out-of-date.
The recommended approach is to use the [WLED web installer](https://install.wled.me/).
However, this doesn't work in Firefox.
Trying to use Chromium on my Arch Linux laptop also didn't work, giving me the error `Serial port is not ready. Close any other application using it and try again.`
Asking an LLM to help me debug the issue was similarly unproductive.

I eventually resorted to analyzing [the web installer's source code](https://github.com/wled-install/wled-install.github.io) to figure out what the website is doing.
The important file seems to be [`build.py`](https://github.com/wled-install/wled-install.github.io/blob/main/scripts/build.py).
The official [WLED releases](https://github.com/wled/WLED/releases) include a binary build for just WLED itself.
However, for ESP32C3, there are at least 3 additional files necessary:

* bootloader -- initializes the microcontroller. This comes from Espressif, the maker of the ESP32
* partitions -- a map of the flash space, read by the bootloader to understand how to run the main code
* boot_app0 -- used by the OTA update process to understand which version of the app to run, kinda like the slots in an Android filesystem

All of these files are available in the web installer's repo.
My job was to parse through `build.py` and the relevant `_template.json` file for my ESP32C3 microcontroller to figure out the correct files and flash offset locations.
This resulted in the following incantation to get WLED running on the controller:

```console
$ esptool --port /dev/ttyACM0 write_flash 0x0 bootloader_esp32c3.bin
$ esptool --port /dev/ttyACM0 write_flash 0x8000 partitions_v2022.bin
$ esptool --port /dev/ttyACM0 write_flash 0xE000 boot_app0_v2022.bin
$ esptool --port /dev/ttyACM0 write_flash 0x10000 WLED_0.15.3_ESP32-C3.bin
```

This took me the better part of 2 hours to figure out, so I'm writing it down to help you (who might be future me).

## Powering

With the controller flashed and weatherproofed, the final boss is powering the whole setup outdoors.
I've got 200 LEDs at, say, 50mA per LED, equalling 10A or 50W of (peak) power.
I haven't been able to find any weather-resistant power bricks that can source that much current on Amazon.
Something like 3A or 5A is more typical, and even then the weather resistance is questionable, as is voltage sag at higher currents.

I ended up getting a 12V power brick; those are readily available on Amazon at 3A and even 5A, with good reviews and (claimed?) UL listing.
A smaller brick fits pretty well in my outdoor receptacle, which keeps it out of the direct rain.
I then connected it to a [5V voltage regulator](https://www.amazon.com/dp/B0C4L66SZ9), which is potted and actually does seem pretty waterproof.
With 3A at 12V, I'm limited to 36W of power, somewhat below my estimated 50W.
Thankfully, you can set the current limits in WLED, and it will automatically limit LED brightness in software to avoid exceeding your power budget.
In practice, my LEDs seem bright enough.
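
For reference, here's the back-of-the-envelope power budget (the 12V-to-5V regulator efficiency is my assumption):

```ts
// Rough power budget for the setup above.
const ledCount = 200
const ampsPerLed = 0.05                      // ~50mA per WS2812 at full white
const peakWatts = ledCount * ampsPerLed * 5  // 5V strings -> 50W peak

const supplyWatts = 12 * 3                   // 12V, 3A brick -> 36W available
const regulatorEfficiency = 0.9              // assumed for the 12V->5V buck converter
const budgetWatts = supplyWatts * regulatorEfficiency

// WLED's software limiter effectively scales brightness to stay under budget
console.log({ peakWatts, budgetWatts, maxFraction: budgetWatts / peakWatts }) // ~0.65
```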
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/xiao-esp32c3.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Data Boundary Layer for HTMX with Zod]]></title>
            <link>https://igor.moomers.org/posts/zod-schemas-htmx</link>
            <guid>https://igor.moomers.org/posts/zod-schemas-htmx</guid>
            <pubDate>Fri, 14 Nov 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[My experience creating a data boundary layer with Zod in Hono and HTMX.
]]></description>
            <content:encoded><![CDATA[
The last few months, I've been working on [CSheet](https://www.csheet.net), my LLM-assisted DnD-5e character sheet tracker.
You know that old chestnut about how AI is easy and AV is hard?
I've had that same experience, but with incoming data parsing.

I was expecting that a challenging bit would be designing, for example, a good data flow with `htmx`.
Or maybe I would run into problems integrating complicated UI components (e.g. an image cropper) into a vanilla-JS app, given the dominance of front-end frameworks like React.
Also, I had never integrated an LLM into a product, and expected this to also be difficult.

Instead, the piece that I've ended up iterating on the most has to do with how to receive and process incoming data.
I wanted to leave a few breadcrumbs here for anyone who comes after (or, let's face it, their LLM agents).

## React: The Usual Sitch

These days most web apps are written in React, and React has a somewhat-smaller data representation surface area, at least for incoming data.
It's of course impossible to avoid the browser's data representation, which is full of quirks.
For instance, numeric input elements are represented in the browser as strings, so you have to deal with converting between `"5"` and `5`.
Incomplete inputs are `""`.
Checkboxes are `"on"` when checked, unless there's a `value`, and they're totally missing when unchecked.
`multiple` inputs are just multiple key-value pairs.

In React apps, you usually try to parse the browser's `FormData` representation into JSON as soon as possible.
Your APIs then accept JSON, which is sensible and has numbers, booleans, arrays, and objects.
On the server-side, you're usually still validating that the JSON's *shape* conforms to your expected shape.
But you're not dealing with the nitty-gritty of turning a boolean string `"true"`, which might be missing, into the boolean value `true`.
That step happens right next to the data, in the browser.
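
For instance, a submit handler might do something like this (the endpoint and field names are illustrative):

```ts
// Convert the browser's stringly-typed FormData into typed JSON before it
// leaves the client; getAll() handles multi-valued fields like <select multiple>.
async function submitDiet(form: HTMLFormElement) {
  const formData = new FormData(form)
  const payload = {
    fruit: formData.getAll("fruit").map(String), // always a string[], even for one item
    perDay: Number(formData.get("perDay")),      // note: "" becomes 0, so validate server-side too
  }
  await fetch("/api/diet", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  })
}
```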

Note that for outgoing data, the situation is quite reversed.
In REST APIs, you might have a dozen representations of your internal objects for specific API purposes.
GraphQL is an attempt to deal with an exploding number of statically-defined representations, and it solves the problem by creating an unbounded number of dynamic representations.
In server-side rendered apps, you might be passing the exact same internal representation to all your view-rendering logic, although sometimes you still get an explosion for different permissions structures.
But I digress...

## Parsing `FormData`

When you're dealing with browser-submitted forms, you have to deal with `multipart/form-data`-encoded request bodies.
This is a string representation containing a bunch of `key=value` pairs, which you're going to want to turn into some kind of object.
This step can be somewhat arcane; it's underspecified, and usually left to framework conventions.

For instance, let's say you have `<select multiple name="fruit">` in your form, and the user only selects `apple`.
You get `fruit=apple` in your request body.
Ruby's [rack](https://github.com/rack/rack) will turn this into an object like `params.fruit = "apple"`.
What if the user selects both `apple` and `orange`?
Well, in that case you'll get `params.fruit = ["apple", "orange"]`!
You can solve this problem by using `name="fruit[]"` instead, which `rack` uses as a hint to remove the `[]` and always turn `fruit` into an array.
This is pure convention -- the `[]` have no meaning outside the request parsing logic in that stack.
Your favorite stack might also do this, but you don't know unless you look it up.

In Hono, the situation is actually even more annoying.
You would typically use [`parseBody`](https://hono.dev/docs/api/request#parsebody), which returns `Record<string, string | File>`.
If your user selects `apple` and `orange`, you'll get `params.fruit = "orange"`, totally dropping the `apple`.
You should invoke `parseBody({ all: true })` instead, in which case you'll get a `Record<string, string | File | (string | File)[]>`.
You don't need to add `[]` to your input `name`s -- Hono ignores this convention, and since it doesn't strip the `[]` you'll end up with inconvenient field names in your objects.
Then there's also `{ dot: true }`, which treats `.` as special in your field names and turns dot-separated fields into nested objects.
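
For reference, here's roughly what that looks like in a Hono handler (a sketch with an illustrative route, not CSheet's actual code):

```ts
import { Hono } from "hono"

const app = new Hono()

app.post("/diet", async (c) => {
  // without { all: true }, repeated keys collapse to the last value submitted
  const body = await c.req.parseBody({ all: true })

  // body.fruit may be a string, a File, or an array, depending on the submission
  const raw = Array.isArray(body.fruit) ? body.fruit : [body.fruit]
  const fruit = raw.filter((v): v is string => typeof v === "string")

  return c.json({ fruit })
})
```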

## Zod and Service Representation

In your services, you will probably want to use objects with sensible types.
For instance, you might want something like:

```ts
const dietSchema = z.object({
    fruit: z.array(z.enum(["apple", "banana", "orange"])).min(1),
    perDay: z.number().int().positive()
})
```

This will produce a type like:

```ts
type DietSchemaT = {
  fruit: ("apple" | "banana" | "orange")[],
  perDay: number,
}
```

To get from your `parseBody` representation to this typed representation, you have to both parse and validate.
For example, `perDay` might be any of:

* (happy path) a string containing a number, like `"5"`
* an empty string `""`
* some random string `"bob"`
* something else totally unexpected

I took several stabs at this problem in CSheet.
Initially, I split up my validation and submission logic.
I had my validation run on the unparsed body, which is a `Record<string, ...>` type.
This meant doing a bunch of parsing directly in validation.
For example, I might do something like:

```ts
if (body.perDay) {
  const perDay = parseInt(body.perDay, 10)
  if (isNaN(perDay)) {
    errors.perDay = "Invalid number"
  }
}
```

Then in my submission logic, I would attempt to parse using the `zod` schema, turning any parsing errors into form errors.
And my business logic operated on the parsed service schema.
This is obviously a lot of duplication.

In version 2, I unified validation and submission.
I added an `is_check` parameter to all my requests, representing either a request to validate or perform the submission.
Both began with attempting to parse the request body using the `zod` schema.
This meant I could lean on `zod` to perform the parsing:

```ts
const dietSchema = z.object({
  perDay: z.preprocess((val) => {
    if (val === "" || val === null || val === undefined) {
      return null
    }
    if (typeof val === "string") {
      const num = Number(val)
      return isNaN(num) ? val : num
    }
    return val
  }, z.number().int().positive()),
})
```

There are a lot of common patterns here, so I eventually factored these out into what I call [`formSchemas`](https://github.com/igor47/csheet/blob/main/src/lib/formSchemas.ts).

When validating, an empty form field is not an error -- it just means the user hasn't gotten to the form field yet.
To deal with this, my version-2 schema definitions/service logic had two features:

* I actually parsed twice -- for validation, using the [`partial`](https://zod.dev/api#partial) version of the schema
* Since `zod` doesn't handle `z.preprocess(method, schema).optional()` well, I had to embed optionality into my schemas

This eventually led me to V3: just define the schema the way it *should* be.
Arguably, I should have done this from the beginning.
I still wanted to avoid displaying errors on missing fields after validation.
But I realized I could do this when rendering the form.
If the field has an error, I now check if (a) we were validating (vs submitting) the form and (b) the field is blank.
If both of those are true, I [ignore the error](https://github.com/igor47/csheet/blob/06df22002c79c2fad32388c98f3762e6d5871e07/src/lib/formErrors.ts#L67-L85).
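
Here's a minimal sketch of that render-time check (illustrative names, not the actual CSheet helpers):

```ts
// Suppress errors on blank fields when the request was only a validation
// pass (is_check), but show everything on a real submission.
function shouldShowError(
  field: string,
  errors: Record<string, string>,
  raw: Record<string, unknown>,
  isCheck: boolean,
): boolean {
  if (!(field in errors)) return false
  const value = raw[field]
  const blank = value === undefined || value === null || value === ""
  return !(isCheck && blank)
}
```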

## Re-rendering the Form

After validation, we re-render the form for `htmx`, possibly containing new fields or errors.
We want to pass the user's answers back to the component, so that the form can be rendered with the old answers populated.
We have at least 3 choices for which representation to pass to the form component:

1. a `FormData` generated from the string encoding
2. the version we get from `parseBody`
3. the parsed version we get inside the service

I went back and forth several times on which representation to use.
It's tempting to use (3), the typed service representation.
It's strongly typed, and it seems somehow more correct to pass around a `{ perDay: number }` type than a `Record<string, unknown>` type.

However, using the typed representation in form rendering meant converting values back to their `FormData` types.
I would have to do `value={String(values.perDay)}`.
Also, if parsing fails, I don't actually have a typed representation to use!

I eventually settled on using (2) as the least of all evils.
It's an object, so for forms with nested fields or arrays I can at least do sensible iteration to render the form.
It also has the benefit of containing exactly what the user input, avoiding the unexpected UX of your values changing for you.

## LLM Representation

Since I have an LLM assistant with tool calling embedded into the system, my incoming request data might also be generated by the LLM instead of a form.
I use [Vercel AI](https://ai-sdk.dev/) to interface with LLMs, and it accepts `zod` schemas to create descriptions for LLM tool-calling.
Making the schemas strict/non-optional helped tighten up what the LLM generates.
Now, a field is marked optional only if it's actually optional in the tool call -- not because it might be optional during validation.

For `zod`, there's a distinction between input and output schemas.
My output schema is the strongly typed representation the service operates on.
When I give this schema to the LLM, it generates strongly-typed (JSON) tool calls.
However, my *input* schema is implicitly written to consider the output of browser forms.
As a result, since I perform schema parsing *inside* the service, I unfortunately have to convert the LLM's strongly-typed tool call into a "stringly"-typed input for the service.
The strings are then immediately parsed back into the strongly-typed representation.
There might be a good way to handle this, probably by parsing the form data outside the main service function, but I'm not sure if the added complexity is worth it.
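
For reference, a tool definition looks roughly like this (an illustrative sketch, not CSheet's actual tools; depending on the AI SDK version, the schema field is `parameters` or `inputSchema`):

```ts
import { tool } from "ai"
import { z } from "zod"

// Illustrative sketch: the same kind of zod schema that validates form
// input also describes the tool call for the LLM.
const setPerDay = tool({
  description: "Set how many servings of fruit the character eats per day",
  parameters: z.object({ perDay: z.number().int().positive() }),
  execute: async ({ perDay }) => {
    // the LLM hands us a typed number; today it gets stringified again and
    // routed through the same form-handling service a browser form would use
    return { ok: true, perDay }
  },
})
```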

## Takeaways

After three iterations, I landed on a pattern that works well:
- Define schemas as they **should be** for your services (strict, typed, non-optional)
- Use `zod.preprocess` to handle the string→type conversions from `FormData`
- Handle validation-vs-submission differences at render time, not in the schema

The same schemas work for both browser forms and LLM tool calls, which has been a fortunate convergence.
I do find myself missing `pydantic`, which feels more ergonomic for parsing data thanks to its default coercion.
However, `zod`'s `preprocess` works, and is definitely more explicit. 

I've factored the common patterns into [`formSchemas.ts`](https://github.com/igor47/csheet/blob/main/src/lib/formSchemas.ts).
If there's interest in packaging this as a standalone library, ping me on [the issue](https://github.com/igor47/csheet/issues/57).
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/abstract-data-parsing-quilt.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Notes from SF Climate Week 2025]]></title>
            <link>https://igor.moomers.org/posts/climate-week-2025</link>
            <guid>https://igor.moomers.org/posts/climate-week-2025</guid>
            <pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[My takeaways from the (few) events I attended at SF Climate Week 2025.
]]></description>
            <content:encoded><![CDATA[
We're in a critical moment for the climate.
[2024 was the warmest year on record](https://www.nasa.gov/news-release/temperatures-rising-nasa-confirms-2024-warmest-year-on-record/), and we're continuing to hit [devastating](https://www.bbc.com/news/articles/cd9qy4knd8wo) [climate](https://content-drupal.climate.gov/news-features/event-tracker/hurricane-helenes-extreme-rainfall-and-catastrophic-inland-flooding) [disasters](https://www.theguardian.com/environment/2024/sep/27/heat-wave-death-record-southwest).
Meanwhile, we are facing [the most climate-hostile administration](https://www.theguardian.com/environment/2025/may/01/trump-air-climate-pollution-regulation-100-days), even while international corporations [retreat](https://apnews.com/article/bp-oil-green-reset-1b9cfca4c2da138f83ace86b5945e053) from their [climate pledges](https://www.spglobal.com/market-intelligence/en/news-insights/articles/2025/1/with-jpmorgan-gone-all-major-us-banks-have-now-left-global-climate-alliance-85961423).

What's a climate-concerned citizen to do in this time?
To find out, I went to two events at SF Climate week this year, both hosted by [Giving Green](https://www.givinggreen.earth/).
The first was a private panel specifically focused on philanthropy in the age of Trump.
The panelists included representatives from [outlier projects](https://www.outlierprojects.org/), [1.5Climate](https://onepointfiveclimate.org/), and [Skyline Foundation](https://skylinefoundation.org/).
Though the event was ostensibly a panel, it really took the form more of a general conversation, with folks throwing ideas back and forth around the room.

The second was a [public event](https://lu.ma/674vd8e0) more generally focused on "how to do climate philanthropy".
This was more of a panel and a Q&A format, with two parts.
The first, hosted by Giving Green, featured [Steve Newman](https://climateer.substack.com/) and Younger Family Fund advisor [Nathan Aleman](https://www.linkedin.com/in/nathan-aleman-8b17082/) discussing their journeys in climate philanthropy.
The second part was hosted by Breene Murphy of the [Carbon Collective](https://www.carboncollective.co/) and focused on a discussion of green impact investing with Dan Chu, Executive Director of the [Sierra Club Foundation](https://www.sierraclubfoundation.org/).

I learned a lot, and I wanted to synthesize some learnings while they're still fresh in my mind.

## What to do About Trump

Giving Green's 2024 priorities included 8 areas where their research converges on high scale, reasonable feasibility of making a difference, and high need for additional financial contributions:

![Chart listing giving green priorities](/images/giving-green-priorities-2024.png)

Their response to Trump was to take a basically defensive posture focused on advancing next-gen nuclear and advanced geothermal, which both have strong bi-partisan support.
The response in the room, and on the panel, was more varied.
Some ideas I heard:

* There is a lot of opportunity for what my colleague [Aimee](https://www.linkedin.com/in/aimee-gotway-bailey/) calls "revenge policymaking" in the states.
  California recently became the 4th largest economy in the world, eclipsing Japan, and there's certainly a lot of appetite here and in other blue states (NY, Washington) to continue making climate progress.
  This thread aligns well with movements like Abundance ([substack](https://substack.com/@modernpower), [book](https://bookshop.org/p/books/abundance-what-progress-takes-derek-thompson/20165403)), which are also oriented around building effective governance.
  I broadly agree that Dems have a brand problem in state and local governments, a (not-unwarranted) perception of dysfunction that doesn't do us any favors when running for national office.
  It seems like a good idea, in a moment when we're so disempowered nationally, to clean house locally.

* At the same time, it took a lot of effort to get the IRA and IIJA passed federally, and it was not possible without a lot of infrastructure in DC.
  To retreat from federal policy would cause that infrastructure to wither -- that is, to wither **more** given the attrition that's already happening in DC as a result of DOGE and other efforts.
  People were particularly concerned about NOAA and the Loan Programs Office at the DoE.
  Nobody thinks philanthropy can fully back-stop the massive federal spending cuts, but we can at least reduce the damage.

* There is clear and growing need for adaptation on top of mitigation.
  Outlier Projects in particular is focused on getting past the "moral hazard"/"you're giving in" reactions to approaches like [SRM](https://en.wikipedia.org/wiki/Solar_radiation_modification).
  This is also a place where climate tech and philanthropy approach more traditional community building.
  In a time of overshoot and climate disruption, community resilience might be more important than CDR.
  My experience at [Recoolit](https://www.recoolit.com/) has me feeling skeptical that there's anyone out there willing to pay for the latter.

* The US is just one emitter, and there's a whole world out there.
  Another group I'm involved with, the [Founders Pledge Climate Fund](https://www.founderspledge.com/funds/climate-fund/about), has also been very focused on growing emissions in the developing world.
  Opinions in the room mostly leaned towards "We are American philanthropists, familiar with the US space and most interested in making a difference in the US".
  One bogeyman was a rumored upcoming executive order that would prohibit 501(c)(3) investments outside the US, though folks were both prepared for legal action in response, and convinced that there would continue to be *some* way to fund work outside the country.
  Another concern was something along the lines of "should we be telling people in other countries what to do".
  I strongly disagree with this objection; I believe we on the left should have a much greater focus on building and wielding political power.
  There's a large space between wielding power coercively and being a strong advocate for things we believe in.
  Our aversion to power has ceded the field to folks who are much less scrupulous about things.

* Besides being defensive/reactive, a lot of folks in the room discussed going on the offense.
  For instance, folks brought up coming up with our own version of Project 2025, so that we're ready with a comprehensive agenda when the moment comes.
  Besides promoting green policies in more local governments, we can also work on blocking anti-green policies ("owning the cons"?).
  One person in the room proposed going after cryptocurrencies and AI companies, both bases of power for our political opponents.

There were several additional threads in the room that I was unable to follow, given my position as an interested outsider in the space.
For instance, I was not aware of the retreat of [Breakthrough Energy Ventures](https://www.breakthroughenergy.org/) from the climate space, a development which seems to have left a large funding gap in the space.

Overall, this event was both informative and inspiring.
It helped me feel less alone in a turbulent and scary time.
It's good to look into the faces of dedicated, talented people who share my concerns, are thinking through solutions from so many angles, and who are both generous with their own ideas and so open-minded and receptive to others'.
It definitely felt like a community-building event -- I would love to remain connected with many of the folks I met!

## Climate Philanthropy

My second SF Climate Week event began with [Dan Stein](https://www.linkedin.com/in/daniel-stein-8210a639/) hosting [Steve Newman](https://climateer.substack.com/) and [Nathan Aleman](https://www.linkedin.com/in/nathan-aleman-8b17082/).
Both are experienced climate philanthropists, and it was interesting to hear how they approach the problem space.
To some extent, there's a bit of confirmation bias here.
I already generally agree with Giving Green's thesis about the importance of climate policy advocacy, and the speakers they brought in are preaching to the choir.
Newman in particular mentioned several times that, had Giving Green existed at the start of his journey, he would have done much less research and much more of just letting Giving Green manage his climate philanthropy dollars.

It was interesting to hear Nathan Aleman discussing his climate philanthropy given the diverse political viewpoints in his family.
I'm very worried about the continuing polarization/politicization of climate, and always looking for non-partisan climate opportunities.
Aleman mentioned [Deploy/US](https://www.deployus.org/) as the organization they fund, though he did mention that the impact is not yet clear.
[ClearPath](https://clearpath.org/) is another organization that seems both right-leaning and climate-focused.

I was curious to hear how both panelists thought about the effectiveness of their philanthropy.
For metrics and data oriented people, it can be difficult to think through contributions whose effects might not be evident for decades.
Newman mentioned that he generally attempts to establish that his grantees are experts, and then defers to their expertise.
He mentioned a useful heuristic of "do they want to get off the phone with me to get back to work", vs "are they more focused on developing relationships with donors like me".

## Impact Investing

The final event of the day was a discussion between [Breene Murphy](https://www.linkedin.com/in/breene-murphy-climate-friendly-401k/) and [Dan Chu](https://www.linkedin.com/in/dan-chu-scf/).
This was probably the most informative and action-oriented event of the day.

For most of us, our savings and assets are invested for the purposes of maximizing financial return, agnostic of how that's accomplished.
Folks are broadly aware that some of the organizations in our portfolios might not be, like, super cool or whatever.
But investing is simultaneously very complicated and connected with a deep-seated feeling of security.
Leave it up to the experts?
At best, we might shift assets into [ESG](https://en.wikipedia.org/wiki/Environmental,_social,_and_governance) funds.
Those have become a [subject of controversy](https://www.volts.wtf/p/the-depthless-stupidity-of-republicans) in the past few years, while simultaneously having [dubious real-world impact](https://insights.som.yale.edu/insights/green-investing-could-push-polluters-to-emit-more-greenhouse-gases) or perhaps outright [greenwashing](https://fsc.org/en/blog/what-is-greenwashing).

Is there a better way to leverage the power latent in the combined assets of climate-conscious investors?
This is the broad space of [impact investing](https://www.investopedia.com/terms/i/impact-investing.asp).
The Sierra Club's [Shifting Trillions](https://www.sierraclubfoundation.org/shifting-trillions) initiative is aimed at building a movement around this project.
The talk was both a primer in the space, and a set of real-world examples of Sierra Club's impact investing fund from Dan Chu.

There are a couple of approaches to the space:

* **Divestment**, which aims to move money out of polluting companies.
  There are a few organizations primarily focused on this approach.
  For example, [Fossil Free California](https://fossilfreeca.org/) aims to divest California's pension funds from fossil fuel companies.
  Both Breene Murphy and Dan Stein discussed divestment as "the least effective" option in the toolkit, and [other sources agree](https://www.gsb.stanford.edu/insights/why-divestment-doesnt-hurt-dirty-companies).

* **Shareholder Activism**, where investors in polluting companies vote their shares for better outcomes.
  A poster-child in this fight is the Engine1 ETF, which famously [installed climate-aligned members on the board of ExxonMobil](https://engine1.com/transforming/articles/engagement-with-exxon-strengthened-company-value-for-shareholders/) (here, again, Dan Chu alluded to the relative ineffectiveness of such an approach; what's the purpose of installing climate activists on the board of an oil company?).
  Arguably, as an individual investor, devoting any attention to your shareholder votes is [a huge waste of time](https://www.shareholderforum.com/access/Library/20240924_Bloomberg.htm).
  However, by aggregating the votes of many smaller investors, it might be possible to force companies to take bigger action.
  The panelists mentioned [iconik](https://www.iconikapp.com/) as a tool to help do this.
  Apparently, opportunities for [impact via shareholder activism do exist](https://www.morningstar.com/sustainable-investing/4-climate-votes-that-matter-this-years-proxy-voting-season).

* **Reinvesting** into climate solutions, where instead of just dumping specific "bad" companies, you focus on investing in "good" ones.
  Early-stage investments in particular can be a huge help here, but the deal flow for early-stage companies is a limiting factor.
  With respect to shifting trillions, there must be a market for these large flows of money to go into.
  Breene Murphy's [Carbon Collective's CCSO](https://www.carboncollectivefunds.com/ccso/) ETF is one great example of this approach.
  Another organization mentioned by several folks was [Prime Coalition](https://www.primecoalition.org/mission-and-vision).
  Carbon Collective also has a [green bonds fund](https://www.carboncollectivefunds.com/ccsb/), theoretically enabling an entirely green balanced portfolio.

* **Storytelling** as an action which enables all the others.
  For instance, [fossil fuels have underperformed the market](https://ieefa.org/articles/another-bad-year-and-decade-fossil-fuel-stocks) recently, making them a comparatively bad investment.
  Renewable energy, meanwhile, has [outperformed](https://blog.carboncollective.co/top-renewable-energy-stocks-beat-fossil-fuels/).
  Making this story more salient in the minds of investors might help accelerate the shift.

Dan Chu highlighted a few examples of the dramatic impact that the Sierra Club's impact portfolio was able to have on the world.
For instance, they funded [Solar Holler](https://www.solarholler.com/), which helps decarbonization while building critical allies in red states.
Another powerful example was a non-recoverable grant they made to Standing Rock.

Dan had what I perceived as an interesting take on "impact investing".
In his (approximate) words:

> If you're doing market rate of return, you're participating in the existing extractive economy

This is somewhat of a hard sell.
I think the folks at Carbon Collective would like to argue that you can have your cake and eat it, too -- investing in values-aligned organizations while preserving your portfolio for continued activism, or maybe even building wealth.
Other sources agree:

<blockquote>
  <p>Our investment conviction is that sustainability-integrated portfolios can provide better risk-adjusted returns to investors.</p>
  <footer>— <cite><a href="https://ieefa.org/resources/ieefa-update-blackrock-investors-sustainable-portfolios-provide-stronger-risk-adjusted">Larry Fink, BlackRock</a></cite></footer>
</blockquote>

I lean more toward Larry Fink's side than Dan Chu's here.
I think climate investing has the potential to make a difference while providing *better* returns.
I'm down to put some money behind this conviction, and would like to encourage my friends and family to do the same.

## Summing Up

In an unprecedented dark moment for climate change, I found my day at SF Climate week inspiring and full of actionable insights.
An important next step for me is to continue to participate in this community.
This also feels like a good moment to step up my climate philanthropy, and encourage my fellow philanthropists to do the same.
We can't close the gaps left by the retreat of federal funding, but we should strive to preserve what we can.

Additionally, I have a ton of next steps on the impact investing front:

* Look into shareholder activism using iconik
* Look into shifting my assets, perhaps using something like [Values Advisor](https://valuesadvisor.org/)
* Continue doing more early-stage climate investing

If you're looking for angel investors in your climate tech company -- get in touch!
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/green-vs-dystopia.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[A Solar-powered EV at Burning Man 2024]]></title>
            <link>https://igor.moomers.org/posts/solar-ev-at-burning-man-2024</link>
            <guid>https://igor.moomers.org/posts/solar-ev-at-burning-man-2024</guid>
            <pubDate>Sat, 28 Sep 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[How to go to Burning Man in an EV, and charge it with solar while you're there!
]]></description>
            <content:encoded><![CDATA[

For Burning Man 2024, I had two goals which sadly remain kind of unusual:
* Drive my EV there
* Power my camp with solar panels

As an auxiliary goal, remembering the crippling heat of 2022, I was hoping to be able to run the AC in the EV for a few hours a day to provide an air-conditioned hidey hole.
I was able to accomplish all of these goals -- sort of!
Here's some photos of my system.
Read on to learn about the process and all the choices and pieces that went into the project.

![The complete system at Burning Man](/images/bmsolar/system-at-bm.jpg)

## Panel Mounting

The Rivian R1S has a glass roof, and is thus a greenhouse.
As an experiment, I parked in a sunny spot in my driveway and ran the AC on max.
The car remained somewhere north of 85°F, and the AC was clearly struggling.

It made sense to use the solar panels to shade the car, which then further implies just mounting the panels on the roof.
Solar panel mounting is somewhat of a black art, so far as my internet research went.
I eventually discovered unistrut, which is available [at my local Home Depot](https://www.homedepot.com/p/Superstrut-10-ft-12-Gauge-Half-Slotted-Metal-Framing-Strut-Channel-in-Gold-Galvanized-ZA1200HS-10/100125003).
I could mount it to my cargo cross-bars using [t-channel bolts](https://www.rivianforums.com/forum/threads/crossbar-track-bolts-my-findings-what-are-you-using.14193/).
It comes in 10' lengths, and I needed somewhat more, but it was easy to get the necessary hardware to join a 2' length to the 10' length.

All the hardware for this at Home Depot was 3/8", including these [square washers](https://www.homedepot.com/p/Superstrut-3-8-in-Square-Strut-Washer-Silver-Galvanized-5-Pack-ZAB2413-8EG-10/100390468) which I could use to hold the panels to the strut where two panels abutted.
However, I could not use the square washers to hold the end of the panel -- they slipped off when not supported on both sides.
I needed solar-panel specific [end-clamps](https://amzn.to/4dshqy7).

As solar panels are all made in China, most solar hardware is metric, and I could only find those clamps with 8mm bolts.
To make life "simpler" for myself, I wanted to minimize the variety of fasteners.
I "imperialized" the metric hardware by drilled out the holes out for a 3/8" bolt.
I also had to use a dremel to grind away some of the metal around the hole, so I could get a hex-head bolt to rotate in the tight space.

The end result looks like this:
![car with panels mounted, in my driveway](/images/bmsolar/car-with-panels.jpg)

## Solar Panel Selection

Craigslist is a great place to get solar panels.
My original plan was to buy used panels, which tend to be quite cheap in $/watt, but also have a lower efficiency and so are larger for a given system wattage.
Since I was constrained on space on the top of the car, I ended up getting new panels from a CL reseller.
I ended up with some pretty nice modern bifacial panels at 20.9% efficiency at about $0.31/watt.
[Here's the datasheet](/images/bmsolar/solar-datasheet.pdf) for the panels I used.

I wish I could have found panels that were longer and narrower.
This would have enabled me to put more panels on the roof, but also to have the panels shade the side of the car.
Alas, panels seem to come in generally squarish dimensions.
I ended up with a piece of aluminet hanging off the unistrut for side-shade.

I also wish I could have had thinner, lighter, more flexible panels.
Besides trying to save space and weight, I'm also pretty nervous about breaking those big sheets of glass.
However, the glass panels with aluminum mounting borders are most common for industrial installations, and so are cheapest.
The flexible panel market is more for hobbyists, and those panels tend to be higher cost per watt, lower watts per m<sup>2</sup>, and less readily available for purchase.

## Calculating Power Needs

I estimated that, if I wanted to run the AC in the Rivian for a few hours a day, that would be my biggest power draw, far dwarfing any other loads I might need (e.g. lights, charging personal batteries, etc...).
How much power did I actually need?
This is not an easy question to answer.
Rivian studiously avoids any mentions of watts, volts, or amps in the in-car UI.
All battery displays are in %, with the total capacity of the battery (in kWh) nowhere to be found, or in miles -- a meaningless unit only peripherally correlated to actual road miles, given the wildly differing efficiencies across terrain, tires, driving style, and a thousand other factors.

So, how much power would I use keeping the AC on?
I did a bunch of hacky tests, keeping the AC on in my driveway and eyeballing the change in battery % at intervals.
But I was only really able to answer the question when I learned about a [secret RiDE menu](https://www.rivianforums.com/forum/threads/latest-ride-menu-code.21755/).

Using this menu, I was able to learn that when awake, my Rivian R1S uses about 500W of power *baseline*, just sitting there.
With the AC on, the usage goes to about 2500W.
That was more than double the amount of solar I planned for, meaning that for every hour of running the AC, I would need about 2.5 hours of charging to keep the same SoC.
It's actually slightly worse because, when charging at L1/120V, the Rivian seems to put only about 60% of the power into the HV battery.
Presumably, the rest is going into some combo of staying awake (500W!) and keeping a built-in inverter running.
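
Here's the rough arithmetic (the array wattage below is a placeholder, not my exact panel capacity):

```ts
// Rough sketch of the charge-time math. The 60% L1 charging efficiency
// comes from the observation above.
function chargeHoursPerHourOfAC(
  acDrawWatts: number,   // ~2500W with the AC running, per the RiDE menu
  arrayWatts: number,    // solar power going into the L1 charger
  l1Efficiency = 0.6,    // fraction of L1 power that actually reaches the HV pack
): number {
  return acDrawWatts / (arrayWatts * l1Efficiency)
}

// e.g. with a hypothetical ~1kW array:
console.log(chargeHoursPerHourOfAC(2500, 1000)) // ≈ 4.2 hours of charging per hour of AC
```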

## Inverters and Batteries

I spent a LOT of time researching solar inverters.
The first annoying part is that I had to get a battery.
This is frustrating, because my Rivian is already basically a giant battery on wheels.
Why did I need *another* battery?

One reason is that the R1S is a 400V architecture, and I didn't really see any inverters running at those voltages.
Another is that, while [Rivian continues promising V2G and bi-directional charging](https://enteligent.com/products/enteligent%E2%84%A2-tlcev-t1-trusted-charging-presale), this so far remains vaporware.
Without bidirectional charging, there's no way to use the car's stored power when the sun is not shining.
So I would need a battery if we wanted to run any loads at night.
But also, most inverters need either a battery or grid power to function.
There do seem to be *some* off-grid solar EV chargers that don't require an intermediate battery, but these are rare.
I found [this one](https://enteligent.com/products/enteligent%E2%84%A2-tlcev-t1-trusted-charging-presale), but it's in pre-order only and not yet generally available.

In the end, I went for [this solar inverter](https://richsolar.com/collections/inverters/products/nova-3k-3000-watt-48-volt-off-grid-hybrid-inverter), which was temporarily available from [ShopSolarKits](https://shopsolarkits.com/collections/off-grid-solar-inverters/products/3000-watt-48v-all-in-one-inverter) for only $400.
I also snagged [this battery](https://www.amazon.com/gp/product/B0CP7FZC1P) for about $500.
Finally, it seemed like the inverter didn't have a good way of showing power flowing in and out of the battery, so at the last minute I purchased [this little power meter](https://www.amazon.com/dp/B013PKYILS) to install on the battery.
This last purchase was incredibly clutch, allowing me to track power consumption at a glance.

## Wiring and Dust

The inverter I got was not rated for dust, and I didn't want it failing half-way through the burn.
I decided to build an enclosure for it, with air filters on top and bottom and an auxiliary fan so it could shed heat while not getting too dusty.
I began by mounting all the components on the back -- a friend helped with this.

![Initial system being wired](/images/bmsolar/system-with-friend.jpg)

This allowed me to do the full system test.
Nothing exploded!

![First system test](/images/bmsolar/first-system-test.jpg)

I used three breakers -- one as a battery disconnect, one for the HVDC solar input, and another for the inverter's AC output.
There are not a lot of devices that run on 48V, so I added a [12V step-down regulator](https://www.amazon.com/gp/product/B07GPZWG1S) meant for golf carts.
In hindsight, I wish I had gotten a larger one so I could power more USB-C ports in parallel, but this one was okay.

Next, I built a plywood box around the back plate.
I used a pocket hole jig to join all the sides.
The molding you see on the bottom in this photo is where the bottom air filter is meant to rest.

![Initial three-sided enclosure](/images/bmsolar/initial-enclosure.jpg)

On the front of the enclosure, I wanted a door so I could turn switches on/off and see inverter status.
I routed an opening out of the front panel, and added molding so the door would have something to sit against as a dust barrier.
I used velcro to keep the door closed.

![Opening routed out of the front panel](/images/bmsolar/front-panel-opening.jpg)

I also routed out openings for the 120V outlet, and for handles on the sides of the box.
I used bolts through the back panel to bring out battery power, as well as a ground connection which I didn't end up using.
I brought PV in and 12VDC out using [Anderson power pole panel mounts](https://www.amazon.com/dp/B097QG383J).
Here's a final walkthrough of the system I made for the camp:

<div class="d-flex justify-content-center">
<video controls disablepictureinpicture style="max-height: 800px">
  <source src="/videos/solar-system-walkthrough.mp4" type="video/mp4" />
  Download the <a href="/videos/solar-system-walkthrough.mp4">MP4</a>.
</video>
</div>

## So ... How Did It All Work Out?

On the whole, the system worked well.
Mounting the panels on the car was easy, and they felt secure.
Nothing died, and my air filters kept the dust out of the enclosure (although it wasn't a very dusty burn).
I got surprisingly good power production, more than 1kW at peak.

It wasn't a very hot year, and we didn't end up needing the Rivian's AC.
However, a lot of folks in camp had swamp coolers, and we ran a swamp cooler grid off the solar.
It's always convenient when your biggest loads run when the sun is shining!

We also had several mornings when we made "solar waffles" using electric waffle irons powered by the solar system.
I found this incredibly satisfying, though opinions differed on whether this made the waffles taste any better.

On the other hand, some things didn't work well, mostly having to do with the Rivian.
First -- we pulled a pretty heavy trailer, which really affected my range.

![Rivian with the trailer](/images/bmsolar/rivian-with-trailer.png)

As a result, we arrived at camp with about 49% charge -- clearly not enough to get back to a charger in Fernley.
All week long, I kept trying to charge up the Rivian off solar, but all I managed to do was keep it at 47%.
One reason was that, although I set the Rivian to `stay off` mode, I failed to turn off an AC schedule that was set up around my partner's daily commute, and which cooled the car down for her in the mornings and afternoons.
I discovered this on Thursday, when I went into the Rivian to grab something and found it refreshingly cool.
Chalk another one up to the Rivian's counter-intuitive UI!

The Rivian's portable charger, in 120V mode, allowed me to set the charge rate between 8A (960W) and 12A (1440W).
Even at 8A, I would be draining the battery somewhat.
Whenever I plugged the Rivian in, I had to remember to unplug it, which I sometimes forgot to do until too late in the day to fully re-charge the 48V battery.

As a result, on two mornings I found the 48V battery totally dead, in BMS undervolt-protection mode.
This is when I learned that my inverter wouldn't turn on without battery power.
I ended up having to "jump-start" the inverter.
I did this by wiring two 20V DeWalt batteries in series (40V total, enough to clear the inverter's 36V minimum-voltage threshold), connecting them to the battery terminals long enough for the inverter to boot, and then disconnecting the jump pack once the solar kicked in.

My conclusion was that 1kW of solar is enough to either run camp, or to charge my EV, but not both.
On the final day of the burn, I ended up plugging the Rivian into the Silicon Village grid, where some plugs had freed up after a few of their RVs left.
In hindsight, I would have had a better time with the system if I hadn't attempted to charge the Rivian off the solar at all.

## What's Next

I think the basics of the system are solid, but I definitely need MOAR SOLAR.
With another 5 panels, I could have enough power to charge the EV at max 120V speed, and still keep the camp fully powered.
The big question for me, then, is how to mount 8 solar panels, ideally in a way that also makes use of the shade they cast.

I honestly think that building the mounting structure on playa is the biggest obstacle to adoption.
Looking at a few big solar camps, I saw systems that varied wildly, mostly built from steel tubing with some kind of panel-mount clamps.
My experience locating the correct hardware, even just for my unistrut, made me realize just how much work is involved in hardware selection.

I wonder if there's a market for easy plug-and-play systems for camps, including all the components -- panels, structural members, mounting hardware, inverter, and battery.
It would be like [black rock hardware](https://formandreform.com/blackrock-hardware/), but for in-camp solar.
If you're interested, let me know -- maybe I can put this together?

Overall, it was pretty encouraging to see *much* more solar at the burn this year, and commensurately fewer loud, smelly generators.
However, there's still a lot of work to do.
Also, bringing an EV to the burn continues to be fairly challenging.
I'm excited to keep iterating on the problem with my fellow burners!
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/bmsolar/back-of-system-burn.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Now: March 2024]]></title>
            <link>https://igor.moomers.org/posts/now-march-2024</link>
            <guid>https://igor.moomers.org/posts/now-march-2024</guid>
            <pubDate>Sun, 17 Mar 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[Now page for March 2024
]]></description>
            <content:encoded><![CDATA[
**Note**: This is an old NowNowNow post!
The next post after this one [is here](/posts/now-feb-2025).
The most recent post [is here](/now).

---

I'm finally getting around to making a [now page](https://nownownow.com/about).
I've been meaning to do this for a while, ever since I heard about `now` pages from [Raph Lee](https://www.linkedin.com/in/raphaeltlee/).
Thanks as always, Raph!

### Work ###

Direct emissions from just residential buildings are [almost 6% of all CO2 emissions](https://www.iea.org/data-and-statistics/charts/global-co2-emissions-from-buildings-including-embodied-emissions-from-new-construction-2022) -- that's 2x the emissions from aviation.
Indirect emissions -- that is, emissions due to energy use in residential buildings -- are another **11% of all** emissions.
To reduce building emissions, we have to electrify buildings and then/simultaneously de-carbonize the grid.
The [IRA](https://home.treasury.gov/policy-issues/inflation-reduction-act) has a number of provisions to accelerate building electrification.
For instance, sections [25C and 25D](https://assets.ctfassets.net/v4qx5q5o44nj/3FYfJiYMILiXGFghFEUx0D/279f180456183d560d9c68d4de8baa67/factsheet_25C_25D.pdf) provide generous tax credits for moving to heat pump or geothermal home heating.

Besides the federal tax incentives, there are also local incentives at the state, county, city, and utility provider level.
Combined, these can make the cost of projects like replacing fossil-gas furnaces with heat pumps much cheaper.
However, actually getting these incentives is a complex process.
There are barriers at every step of the way:
* Finding out about the incentives
* Understanding the requirements
* Applying for the incentives
* Budgeting and executing the actual project

To help with this process, I've joined [Rock Rabbit](https://rockrabbit.ai), so far as a contract software engineer.
At RR, we've built a database of the available incentives, including their eligibility and application requirements.
We turned this data into a wizard which allows homeowners or contractors to plan a project, understand how much money they'll get back in rebates or credits, and smooth the application process.
In some cases, we can directly submit the application to an incentive provider and track the rebate progress.

I've been playing a hybrid full-stack tech lead / eng manager role.
On the backend, I've implemented CI, cleaned up our infrastructure and deployment process, and added tests to help us be more confident that we're returning the correct set of incentives for a project.
On the front-end, I've built the scaffold for a web app (we've been mobile-only so far).
I'm particularly excited about auto-generating an API client from our FastAPI/OpenAPI spec.
This allows us to keep backend Python types in sync with FE TypeScript types automatically.
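
As a sketch of what the backend half of that loop looks like: FastAPI can emit its OpenAPI spec as JSON, which a TypeScript client generator can then consume in CI. The toy endpoint and file name below are placeholders, not our actual app:

```python
# dump the FastAPI-generated OpenAPI spec so a TypeScript client generator
# can consume it (toy endpoint and file name are placeholders, not the real app)
import json
from fastapi import FastAPI

app = FastAPI()

@app.get("/incentives/{project_id}")
def list_incentives(project_id: int) -> list[str]:
    return []

if __name__ == "__main__":
    with open("openapi.json", "w") as f:
        json.dump(app.openapi(), f, indent=2)
```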

### Projects ###

Besides this blog, my main project has been my self-hosted infra.
In my ideal world, there are no giant cloud service providers who make money by selling my data and my attention.
I generally agree with the likes of [Yuval Noah Harari](https://www.ynharari.com/), [Jaron Lanier](https://www.jaronlanier.com/), and [Cory Doctorow](https://pluralistic.net/) that this business model of the internet is unsustainable, unethical, and harmful to individual and collective well-being.

Instead, I want small groups of friends to collectively run personal infrastructure.
This is connected both with my ideas on electronic liberty, and also with my ideas of group cohesion and bonding.
Traditionally, we've relied on our social groups for our survival.
Today, we all work remotely for different organizations from our own bedrooms.
When we have friends at all, it's merely for entertainment.
I would like to bring back a world in which we depend on each other and collaborate to accomplish shared goals.
Digital infra is a good place to start.

My personal cloud started with an email server back in 2003 or so.
We've been running a shared media collection with services like [Subsonic](https://www.subsonic.org/) for more than a decade.
However, usability has been limited to my nerdiest friends.
My goal over the past few months has been to both set up more services, and to make them more usable.

Setting up more services has been much easier thanks to Docker and Docker Compose.
Things got even better once I nailed secret management with [dcsm](/posts/secrets-in-docker-compose).
For usability, I wanted to create an SSO system and a login portal.
I brought up [Authentik](https://goauthentik.io/) for SSO, so now there's a self-service signup flow.
I had to modify several services to get them to support SSO.
For instance, I have [a PR](https://github.com/janeczku/calibre-web/pull/2899) to [Calibre Web](https://github.com/janeczku/calibre-web) to add SSO support.

A big milestone was announcing the project to my broader group of friends.
I did that a few weeks ago, and now have almost a dozen active users in the system!

### Travel ###

I'm still living in Sacramento, with regular trips to the Bay Area.
However, over the next month I have some big trips coming up.
First, I'm going to Cabo San Lucas for a cousin's wedding.
I'm hoping to get at least a couple of days of scuba diving while I'm there.

After that, I will be driving to Austin, Texas with a friend.
We'll be at the [Texas Eclipse Gathering](https://seetexaseclipse.com/), and then road-tripping back home.
Excited to do another long EV road trip, and am curious how the infrastructure has come along in the past year.
Fingers crossed that Rivian rolls out NACS charging on the Tesla network and ships me an adapter before we leave!

### Reading ###

I've been reading mostly fiction lately.
A big project for me was re-reading [Anathem](https://bookshop.org/p/books/anathem-neal-stephenson/8961850) by [Neal Stephenson](https://www.nealstephenson.com/).
It's been a decade since I read it the first time, and I enjoyed it even more the second time around.
It made me wish I was living in the Mathic world, spending all my time learning and debating ideas with my friends.
I also enjoyed the mind-bending multiverse hijinks around the concept of [Hylean flow](https://anathem.fandom.com/wiki/Hylean_Flow).

I also re-read [Recursion](https://bookshop.org/p/books/recursion-blake-crouch/9597794) by [Blake Crouch](https://www.blakecrouch.com/).
I pulled it up randomly in my library, and initially had no memory of reading it the first time -- a fun trip for a book all about memory!

Currently, I'm reading [The Deluge](https://bookshop.org/p/books/the-deluge-stephen-markley/18405115).
The book is quite well-written, with realistic characters, a good understanding of climate policy, and lots of fun insider-baseball politics.
On the other hand, is it explicitly a dystopian novel?
It certainly feels like one, and there's enough catastrophe to go around in the book, both for the planet and for the lives of the characters.
I generally avoid dystopian fiction, but now that I'm in it, I want to see how it turns out.

In the last few months I also plowed through all of the [Bobiverse](https://bookshop.org/p/books/we-are-legion-we-are-bob-dennis-e-taylor/6389676) books.
Just a fun, hard sci-fi romp through the galaxy.

### Future ###

I am still thinking about whether I want to go to grad school and do a career transition into energy engineering.
I still really want to work on the transmission and distribution grid.
I want to tackle [GETs](https://inl.gov/national-security/grid-enhancing-technologies/) and the problem of the [interconnection queue](https://www.utilitydive.com/news/energy-transition-interconnection-reform-ferc-qcells/628822/).
I am sure there is a lot of work for a skilled software engineer in this space.
If you work in the space or have ideas for me, please reach out!
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/rr-jira-pica.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Light Up Boat Parade 2023]]></title>
            <link>https://igor.moomers.org/posts/light-up-boats</link>
            <guid>https://igor.moomers.org/posts/light-up-boats</guid>
            <pubDate>Mon, 18 Dec 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[We decorated a boat for the Sausalito light-up boat parade!
]]></description>
            <content:encoded><![CDATA[
In October, I completed [ASA104](https://asa.com/certifications/asa-104-bareboat-cruising/) with [Modern Sailing](https://www.modernsailing.com/content/bareboat-cruising-asa-104).
I wanted to take out some of their larger cruising boats, but due to the [wind patterns in the San Francisco Bay](https://boardsportscalifornia.com/understanding-san-francisco-bay-area-weather-the-wind-beneath-our-wings/), there's no wind in the winter.

Sailboats are for sailing -- I am not a huge fan of just sitting around burning diesel for fun.
My excuse came in the form of the [Sausalito light-up boat parade](https://www.winterfestsausalito.com/).
I could get on the water in a big boat, and not feel **too** bad about just motoring around all day.
Plus, I had never been in a parade before!

## Decoration

Modern Sailing rentals are a 24-hour period starting at 9am.
Our plan was to get the boat first thing in the morning and spend the day decorating.
We would go to the parade at 6 pm, then sleep on the boat and take the decorations down in the morning.

Here's our final result:

<p>
<video controls muted loop disablepictureinpicture>
  <source src="/videos/lightupboat.webm" type="video/webm" />
  Download the <a href="/videos/lightupboat.webm">WEBM</a>.
</video>
</p>

## Rigging The Mainsail

A friend had a giant cache of [twinkly strings](https://twinkly.com/en-us/products/strings-multicolor).
Our plan was to hang them up in the mainsail triangle, and display some cool mapped patterns in 2D.
Here's our rigging plan:

![Rigging Plan](/images/light-up-boat-rigging.svg)

I was really worried about losing the main halyard, so we used a serious line with some extra-redundant knots to secure it at the mainsail clip and on the deck.
For the rigging, we used paracord with some alpine butterflies tied into it:

![Alpine Butterfly](/images/alpine-butterfly.jpg)

The first few twinkly strings, next to the mast, used the entire length of the string.
For the later strings, we could start at the top, go back down to the boom, then go back up to the top again.
This required hoisting the rigging, figuring out where the string would end up, and securing it on the rigging that ran along the boom.
Then we could lower the whole rigging, and secure the other end of the string in the loop that ran along the topping lift.

This was a huge pain.
Christmas lights **want** to get tangled, and hoisting them up and down gives them just the chance they're looking for.
It took us until the very end to figure out that we could just put extra paracord through the loops, and use it to hoist one string at a time.
Next time, we'll definitely just install extra paracord in every loop from the very beginning.

We ended up using 2 strings of 400 lights, and 1 string of 600 lights -- 1400 LEDs total on just the mainsail triangle.

## Mapping

We hoped to use the Twinkly app to map the lights to a 2D image.
This **absolutely** did not work.
First, the mast is really tall -- it was difficult to even get the whole set of lights in the frame.
To do that, we had to stand pretty far back, which made each LED hard to distinguish in the picture.
Finally, the strings swayed in the wind, and the whole boat rocked in the water -- no way to get a still image.

Thankfully, with just the default unmapped patterns, the Twinkly lights look pretty good.
However, I am now kind of obsessed with the idea of mapping the lights to a 2D image.
I think using a hybrid automatic-manual approach should work well.
I should be able to take a single photo as the base of the scene.
Then, I should be able to turn on sections of the lights, and then indicate their position in the base image.
It doesn't have to be perfect -- just good enough to get the general shape of the boat.

I would like to try writing some software for this in time for next year's parade.
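
Here's a rough sketch of the data structure I have in mind -- light up one section of the string at a time, click where it sits in the base photo, and record that position for every LED in the section. Everything below is hypothetical, not working code from the project:

```python
# hypothetical sketch of the hybrid manual-mapping idea: one clicked (x, y)
# position in the base photo gets assigned to every LED in the currently-lit section
mapping: dict[int, tuple[float, float]] = {}

def record_click(first_led: int, last_led: int, x: float, y: float) -> None:
    """Assign the clicked photo position to LEDs first_led..last_led inclusive."""
    for idx in range(first_led, last_led + 1):
        mapping[idx] = (x, y)

# e.g. the first 50 LEDs were lit, and I clicked near the base of the mast
record_click(0, 49, 0.48, 0.85)
print(mapping[25])  # (0.48, 0.85)
```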

## Other Lights

I bought some of [these lights](https://amzn.to/3RSKvv1) to put on the lifelines.
My plan was to control them with [WLED](https://kno.wled.ge/).
But I discovered that, though they are individually addressable, they have some bug in the implementation of the protocol that causes them to flicker unpredictably.

I did bring some of [these strings](https://amzn.to/3NI0LfP).
Unlike the previous lights, these don't explicitly advertise WS2812.
However, they do work well with WLED.
Their implementation of WS2812 is kind of [odd](https://todbot.com/blog/2021/01/01/ws2812-compatible-fairy-light-leds-that-know-their-address/).
Rather than shifting out the bits for the next light, they allow the controller to wiggle the entire common data line, and each LED knows its own address.
This means you cannot link multiple strands together serially -- the second string will just mirror the first.

I didn't have time to do anything fancy with these lights.
We used them with their default controllers using some built-in patterns.

## Future Work

For next time, I would dispense with the fancy Twinkly lights (which, wow, are pretty expensive!).
Instead, I would use the cheap strings off Amazon and write my own software to control them over WS2812.
Let me know if you have ideas for how to create a 2D mapping UI for this!

## The Parade

Oh yeah, there was a parade!
That part was super-fun.
We got to see all the other boats up-close, and listening to the marine radio chatter with the stressed-out parade organizers trying to keep everything going was pretty entertaining.
Next time I would like some other crew to feel confident driving -- turns out operating a 40' boat in close quarters with a bunch of other boats, in the dark, in a shallow marina, is stressful!
We didn't win any prizes, but we had a great time.
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/lights.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Docker Compose Secrets Manager]]></title>
            <link>https://igor.moomers.org/posts/secrets-in-docker-compose</link>
            <guid>https://igor.moomers.org/posts/secrets-in-docker-compose</guid>
            <pubDate>Fri, 15 Dec 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[A service and an approach for managing secrets in docker compose repos.
]]></description>
            <content:encoded><![CDATA[
TL;Dr: store your secrets in `git` alongside your `compose.yml` file.
My new service, [`dcsm`](https://github.com/igor47/dcsm), decrypts the secrets and templates them into your config files.

## Primer on `docker compose` Repos

Lately, there's been a thriving ecosystem for running self-hosted services using `docker`.
Packaging services with docker means abstracting away the complexities of configuring a local environment.
Updates are consistent across services.
Plus, there are lots of utilities to make life easier.
For instance, [`traefik`](https://traefik.io/) will automatically terminate SSL and reverse-proxy to your service -- no more manual certificate management.
As a result, I am running more and more services using `docker compose` files.

I consider each `compose.yml` file to define a "cluster" of services that are logically grouped.
For instance, I have a media cluster that handles movie ([jellyfin](https://jellyfin.org/)), music ([navidrome](https://www.navidrome.org/)), book ([calibre-web](https://github.com/janeczku/calibre-web)), and audiobook ([audiobookshelf](https://www.audiobookshelf.org/)) hosting services.

I keep each cluster in its own git repo.
The repo includes the `compose.yml` file and the configuration for all the services in that file.
Many services do not need any configuration beyond what is in the [`environment` key](https://docs.docker.com/compose/compose-file/compose-file-v3/#environment) of the `compose.yml`.
Often, though, a config file is required or is a more ergonomic way to specify the configuration.
For instance, all my clusters have a `config/traefik/traefik.yml` file to configure `traefik`.
I then bind-mount the config files into the container filesystem:

```yaml
volumes:
  - ./config/traefik:/etc/traefik
```

## How to Manage Secrets?

Suppose I need a credential inside that config file?
Before writing [`dcsm`](https://github.com/igor47/dcsm), I was at the mercy of the service author.
For instance, every piece of `grafana`'s configuration [can be overridden with environment variables](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#override-configuration-with-environment-variables).
So, I could check a `grafana.ini` file into the repo with most of my config.
Then, to add a secret (e.g., an OpenID-Connect client id/secret pair), I would:

1. create a `grafana/environment` file containing just the overridden secret keys
1. add the file to `compose.yml` under `env_file`:

```yaml
grafana:
  env_file:
    - path/to/grafana/environment
```

This is confusing -- now the configuration is split between several places.
Also, the `grafana/environment` file could not be checked into the repo.
Its management becomes out-of-band; as a DevOps practitioner, I don't like that.

Grafana is one of the better services here.
Lots of services require using a config file.
Sometimes, you can extract just the secret-containing part of the config and manage *that* out-of-band.
Then there are services like [`synapse`](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html), which require a bunch of secrets in a common config file and have no mechanism for either including environment variables in the config or sourcing sub-files.
Now, your entire config file cannot be checked into the repo.

## DCSM

[`dcsm`](https://github.com/igor47/dcsm) is a simple service containing some python code and [`age`](https://age-encryption.org/) for symmetric-key encryption.
To use DCSM, you add it to your `compose.yml`:

```yaml
  dcsm:
    build: .
    environment:
      - DCSM_KEYFILE=/example/key.private
      - DCSM_SECRETS_FILE=/example/secrets.encrypted
      - DCSM_SOURCE_FILE=/example/secrets.yaml
      - DCSM_TEMPLATE_DIR=/example/templates
    volumes:
      - ./example:/example
```

The variables `DCSM_KEYFILE` and `DCSM_SECRETS_FILE` are required for basic operation.
You may optionally set `DCSM_SOURCE_FILE` to tell `dcsm` about your unencrypted secrets source.
This allows you to use the `encrypt` and `decrypt` commands, though you can also perform those operations by running `age` locally.

Your secrets source is a `yaml` file containing your secrets.
For example:

```yaml
GRAFANA_OAUTH_CLIENT_ID: this_is_secret
GRAFANA_OAUTH_CLIENT_SECRET: "this is also a secret"
```

This file, along with your `DCSM_KEYFILE`, should be excluded from your repo via `.gitignore`.
The keyfile must be copied out-of-band between your dev environment and your cluster runtime machine.

You may set any number of directories with the environment variable prefix `DCSM_TEMPLATE_`.
In these directories, `dcsm` will find files ending with `.template` and replace template strings with secrets from your encrypted `DCSM_SECRETS_FILE`.
For example, here is that grafana config file:

```ini
[auth.generic_oauth]
enabled = true
client_id = $DCSM{GRAFANA_OAUTH_CLIENT_ID}
client_secret = $DCSM{GRAFANA_OAUTH_CLIENT_SECRET}
scopes = openid profile email
```
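
Conceptually, the substitution step is just a string replacement over the template files -- something along these lines (a minimal sketch of the idea, not the actual `dcsm` code):

```python
# minimal sketch of the $DCSM{...} substitution idea (not the actual dcsm code)
import re

def render(template: str, secrets: dict[str, str]) -> str:
    """Replace every $DCSM{KEY} marker with the corresponding secret value."""
    return re.sub(r"\$DCSM\{(\w+)\}", lambda m: secrets[m.group(1)], template)

print(render(
    "client_id = $DCSM{GRAFANA_OAUTH_CLIENT_ID}",
    {"GRAFANA_OAUTH_CLIENT_ID": "this_is_secret"},
))  # client_id = this_is_secret
```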

This approach enables you to keep your cluster repo consistent.
You can easily refer to a secret in multiple places.
Finally -- if you need to pass secrets as environment variables, you can just template an `env_file`.
For instance, your template could be:

```bash
GF_AUTH_GENERIC_OAUTH_CLIENT_ID=$DCSM{GRAFANA_OAUTH_CLIENT_ID}
GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=$DCSM{GRAFANA_OAUTH_CLIENT_SECRET}
```

If you store this file in your repo at `config/grafana/oauth.env.template`, then you could use it like so:

```yaml
services:
  dcsm:
    image: ghcr.io/igor47/dcsm:v0.3.0
    environment:
      - DCSM_KEYFILE=/secrets/key.private
      - DCSM_SECRETS_FILE=/secrets/secrets.encrypted
      - DCSM_SOURCE_FILE=/secrets/secrets.yaml
      - DCSM_TEMPLATE_DIR=/config
    volumes:
      - ./secrets:/secrets
      - ./config:/config

  grafana:
    image: grafana/grafana-enterprise
    restart: unless-stopped
    depends_on:
      dcsm:
        condition: service_completed_successfully
    env_file:
      - ./config/grafana/oauth.env
```

You can see that `grafana` has a `depends_on` condition on the successful completion of `dcsm`.
This allows `dcsm` to run first and template your config files with your secrets.
By the time the `grafana` service starts, the config files are ready for action!

## That's It

I wrote this tool to meet my own need, but I hope others will find it useful as well.
I think managing clusters via a configuration-as-code/infrastructure-as-code repo works pretty well.
Secret management was the missing piece -- but, with [`dcsm`](https://github.com/igor47/dcsm), no longer.
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/whales-with-secrets.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Soviet KGB Stories II]]></title>
            <link>https://igor.moomers.org/posts/soviet-kgb-stories-pt2</link>
            <guid>https://igor.moomers.org/posts/soviet-kgb-stories-pt2</guid>
            <pubDate>Thu, 21 Sep 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[A story of how my grandfather Lev was exiled to the uranium mines.
]]></description>
            <content:encoded><![CDATA[
<small>**Note**: There is a [previous post](/posts/soviet-kgb-stories-pt1), about my uncle.</small>

I never met my grandfather Lev -- he died (of stubbornness) just about a year before I was born.
But I heard lots of stories.
In the land of [блат (blat)](https://en.wikipedia.org/wiki/Blat_(favors)), the person in charge of the pharmacy containing scarce but potentially life-saving medication is a good person to have on your side.
And my grandfather, by all accounts, knew how to play the game.
In my home city of Nikolaev, everyone knew him, and everyone owed him a favor -- therefore, or possibly also, everyone respected him.

My grandfather could possibly have aspired to a greater status than a two-bedroom apartment in a [хрущёвка](https://en.wikipedia.org/wiki/Khrushchevka) and the once-yearly government-sponsored vacation to the Black Sea.
In his later years, he was offered an opportunity to join [The Party](https://en.wikipedia.org/wiki/Communist_Party_of_the_Soviet_Union) -- a key opportunity for advancement in the USSR.
But he turned it down, and this is the story of why.

## Some Flavor

Before we get into the meat of my story, an anecdote.

There's a common expression among Soviet Jews, which goes "Бьют не по паспорту, а по морде."
Transliterated -- "biut-ne-po-pasportu-a-po-morde".
Translated -- "they punch you, not in the passport, but in the face".

This is a reference to the ["Пятая графа"](https://ru.wikipedia.org/wiki/%D0%9F%D1%8F%D1%82%D0%B0%D1%8F_%D0%B3%D1%80%D0%B0%D1%84%D0%B0) -- the "fifth line" of the Soviet passport, which described the passport-holder's "Nationality".
For Soviet Jews, this line was filled in, "Jewish", and it served to close many doors.
But -- they don't punch you in the passport.
Anti-semites are keenly attuned to stereotypical ethnic features, and it was enough to simply look Jewish to get into trouble.

Lev, for what it was worth, didn't look particularly Jewish.
So, when he showed up at my grandmother's house to ask her father for her hand, he was at first confused for a government inspector coming for a surprise visit to their family pharmacy.
While they figured out what to do with him -- the chaos abated somewhat when my grandmother's precocious kid brother announced, "That's not an inspector, that's Sofia's boyfriend!" -- they bustled him into the kitchen.

My grandmother's grandmother was tasked with feeding Lev, which she went about grudgingly.
Because he didn't look Jewish, she felt free to mutter her complaints in Yiddish while she served him food.
"They put this goy in my kitchen, and I have to feed him?
What's next, he's gonna ask for an (alcoholic) drink?"

My grandfather, however, spoke fluent Yiddish.
He learned it at a yeshiva in Kherson, which he attended at the same time as my grandfather Semyon -- they actually knew each other in school, decades before my dad met my mom in Nikolaev.
So, hearing the muttering, he responded, "Actually, a shot of vodka wouldn't go amiss."
When the old woman realized that the suitor was actually Jewish after all, her reluctance instantly disappeared.
Moments later the table was covered with the best in the house.

## The War Years

On June 22nd, 1941, Germany declared war on the Soviet Union.
My grandfather Lev had just finished his first year of medical school, and he was immediately drafted.
He ended up serving as a medic on the front lines through the entire war.
After Germany surrendered in 1945, Lev was not immediately released from the military.
He ended up serving in the occupation for another year, helping rebuild medical facilities in a Germany that had been reduced to rubble by the Allied campaign.

When Lev finally returned to Odessa in October of 1946, his medical school had already finished more than a month of classes.
He was told to come back the following year to restart his education.
Lev was already way behind -- he would be four or five years older than his classmates -- and he didn't want to put life off for another year.
Wandering around Odessa, he saw an announcement that the pharmacy school was still recruiting students.
My grandmother had actually seen the same announcement when she showed up in Odessa to start school, a little late due to illness.
And that's why I'm here to write this story today!

## After School

My grandparents were married in 1949, and my uncle from [the previous story](/posts/soviet-kgb-stories-pt1) was born in 1950.
They moved to Nikolaev, where my grandfather immediately became the head of the pharmacological supply warehouse.
I tried to figure out why a kid right out of school was immediately in charge of a medical warehouse.
It seems to boil down to two reasons.
The first is a massive shortage of men in the aftermath of the war.
While Soviet casualties [are disputed](https://en.wikipedia.org/wiki/World_War_II_casualties_of_the_Soviet_Union), they total at least 9 million soldiers, all men.
The disputes all contend that the official figures are much too low, and there are estimates as high as 40 million military and civilian deaths -- twice the official figures.

The second reason is that the USSR had what were, by modern US standards, unreasonably generous medical leave policies.
There was a baby boom in the USSR, just as there was in the US, and women were going into декрет (maternity leave), making them unreliable employees.

## Arrest, Exile, and Release

As Lev was running the pharmaceutical warehouse, the USSR was in the grips of a paranoid Stalin.
As a result of the [Doctors' plot](https://en.wikipedia.org/wiki/Doctors%27_plot), many Jewish doctors and medical workers were "dismissed from their jobs, arrested, and tortured to produce admissions".

In 1951, Lev was using petty cash from his warehouse to purchase newspapers, which he hung up on the walls for public reading.
Because of this, he was accused by the local KGB of "misappropriation of public funds".
In a swift trial, he was pronounced guilty.

At this time, the USSR was in a Nuclear arms race with the USA.
Uranium ore was discovered in [Жёлтые Воды (Yellow Waters)](https://en.wikipedia.org/wiki/Zhovti_Vody), and the USSR needed people to mine the ore.
I cannot tell if the Yellow Waters were actually yellow because of yellowcake uranium, or if the name was a coincidence.
In any case, Lev was sent to this region of Ukraine to work in the mines.

Because of his medical training and experience, Lev was spared hard labor in the mines.
Instead, he provided medical services to the miners and surrounding community.
Nevertheless, he was not free to leave, and my grandmother could not visit him.

Instead, my grandmother expended much effort trying to free him.
She spoke to many lawyers, and visited many government officials.
She was finally told by a well-respected advocate in Kiev: "Child, you just have to wait.
There is no case against him, and they will let him out soon."

## Aftermath

Stalin died in 1953.
It was not until 1956 that his successor, Khrushchev, [denounced Stalin](https://www.britannica.com/event/Khrushchevs-secret-speech) and began the "de-Stalinization" of the USSR.
However, almost immediately after Stalin's death, many of the political prisoners arrested during his rule were released.
Among them, my grandfather Lev.

After his release, Lev was restored to his former rights and returned to his previous job.
He went on to make many advances in his field.
Among them, building one of the first pharmacies located inside a hospital, an innovation copied from the USA.

For his service to the USSR in both military and civilian roles, he was awarded several medals.
My mom likes to recount how, when he wanted to call someone in Moscow, he would connect to the operator and just say "This is Lev, connect me with so and so" -- an ordeal that was much more difficult for other people.

However, though he was liked and respected, he never forgave the USSR for sending him to prison for more than 2 years.
He did not join the party, and refused to get involved in any sort of politics.
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/uranium.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[A self-hosted URL shortener in Rust]]></title>
            <link>https://igor.moomers.org/posts/self-hosted-link-shortener</link>
            <guid>https://igor.moomers.org/posts/self-hosted-link-shortener</guid>
            <pubDate>Wed, 30 Aug 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[I wrote a self-hosted link shortener!
]]></description>
            <content:encoded><![CDATA[
I embarked on a yak shave so epic, it resulted in me writing an entire URL shortening service in rust.
I'm calling it `smrs` (get it? "`sm`all and in `r`u`s`t"!), and [here is the github repo for it](https://github.com/igor47/smrs).
It's hosted publicly, but I don't want to share the link because, from a few minutes of casual reading online, it looks like URL shorteners are frequently abused (see below).

## For the love of God, why?

The yak shave is that I want to write some microcontroller code, and as I don't really remember any `C`, I figured I'd better just learn embedded `Rust`.
I cracked open [the embedded Rust book](https://docs.rust-embedded.org/book/), and immediately came across:

> You are comfortable using the Rust Programming Language, and have written, run, and debugged Rust applications on a desktop environment. 

I was like, "Nope, I am not", and proceeded to the [general-purpose Rust book](https://rust-book.cs.brown.edu/) (I've been reading the Brown-hosted version because I want to take the little embedded quizzes).
The book is *really* good, but there's too much reading in between the practical sections, and [I learn by doing](https://psycnet.apa.org/fulltext/2014-55719-001.pdf).
So I resuscitated a previous idea of hosting my own link shortener, and here we are.

## Choices

This project has a really odd mix of new and old technology.
The modern approach is to run the project as a single binary which includes an embedded web server.
The binary is responsible for serving any static files, and also for generating the dynamic responses.

This seemed like too much work, and using an existing web server crate seemed like **too little** work.
So, I am running the service behind Apache and using good-old [CGI](https://en.wikipedia.org/wiki/Common_Gateway_Interface) for the dynamic content.
Remember CGI?
Instead of your application code running as a long-lived process, it's invoked by the web server for each request.
The details of the request are placed in the environment, and whatever is written to `stdout` is sent back to the client as the HTTP response.
It's probably slower than having the process already running (`fork` and `exec` are slow!), but it lets the program be really simple.
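
If you've never seen it, the whole mechanism fits in a few lines -- here's an illustrative CGI script in Python (not the `smrs` code, just a demonstration of the env-in, stdout-out contract):

```python
#!/usr/bin/env python3
# illustrative CGI script (not smrs): the web server puts request details in the
# environment and sends whatever we write to stdout back as the HTTP response
import os

print("Content-Type: text/plain")
print()  # blank line separates headers from body
print(f"{os.environ.get('REQUEST_METHOD', 'GET')} {os.environ.get('PATH_INFO', '/')}")
```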

For storage, I looked at a bunch of options (like `sled` or `leveldb`) but ended up with good-old `sqlite`.

Also, I'm probably doing a non-conventional thing with sessions.
I didn't want to implement logins, and for lots of use cases you don't even care about user-level persistence.
But I figured it might be nice, so instead I just expose the user's session ID for them to see.
If they want to return to the site later and see their short links, they can just "log in" with their session ID.

For the front-end, I wanted it to be as lightweight as possible.
I immediately discovered that I need to either have multiple pages, or write an SPA.
But for multiple pages, how do I share common elements like a header/footer?

This is where I learned about Apache's [SSI](https://httpd.apache.org/docs/2.4/howto/ssi.html) (server-side includes).
I factored my header out into a separate file, and was able to include it in each page with a simple `<!--#include virtual="/header.html" -->`.

I still had some javascript to write, and I attempted to just use plain JS with no libraries.
But then I discovered [Alpine.js](https://alpinejs.dev/), which is just **so** lightweight and easy to use.
For CSS, I used [Skeleton](http://getskeleton.com/) but I'm not super-excited about it.
This is because it doesn't have a responsive grid system -- once something is, say, 6 columns, it's **always** 6 columns.
I'm probably going to migrate this to [Bulma](https://bulma.io/) at some point to fix appearance on intermediate-size screens.

## Deploying

I run my self-hosted services via `docker-compose` on my server, and it was really easy to add this one.
I set up [my `Dockerfile`](https://github.com/igor47/smrs/blob/master/Dockerfile) for multi-stage builds -- my `dev` target just contains configured Apache.
I bind-mount my `htdocs` into the container, and iterating on the rust code means building it locally and copying into the container filesystem.

For production, my final stage builds the release version of the code and generates a container with the `htdocs` baked in.
I have this container built and tagged via [Github Actions](https://github.com/igor47/smrs/blob/master/.github/workflows/publish.yaml).
Then, in my `docker-compose`, I can just pull the image from `ghcr.io`.

I ran into trouble because I'm hosting this behind a Cloudflare proxy, and so `traefik` couldn't get an SSL certificate for it.
I had to add a custom `traefik` config for validating SSL via modifying Cloudflare DNS:

```yaml
  # for zones behind cloudflare proxied DNS, we use the cloudflare dns provider
  # see:
  #   https://www.techaddressed.com/tutorials/certbot-cloudflare-reverse-proxy/
  # this relies on credentials in a file on purr. see the `env_file` directive in
  # docker-compose for traefik
  cf:
    acme:
      email: admins@example.org
      storage: /acme/acme-cf.json
      dnsChallenge:
        provider: "cloudflare"
```

I then had to generate a Cloudflare API token with the `Zone.DNS` permission.
I stored that in a file on my server, and then added it to my `docker-compose` via the `env_file` directive:

```yaml
    env_file:
      - ${STORAGE}/traefik/cloudflare.env
```

## Link Shortener Criticism

There's a whole section on [Awesome Self-Hosted](https://awesome-selfhosted.net/index.html) for [URL shorteners](https://awesome-selfhosted.net/tags/url-shorteners.html).
This is where I came across [this Wikipedia section](https://en.wikipedia.org/wiki/URL_shortening#Shortcomings) which lists a bunch of criticisms of URL shorteners.
In fact, a bunch of the shorteners in the Awesome list have a publicly-hosted instance, but every one of those seems dead.
Others explicitly say they had to put their public version behind a password or login because of abuse -- for instance, [liteshort](https://git.ikl.sh/132ikl/liteshort) says about their live demo:

> (Unfortunately, I had to restrict creation of shortlinks due to malicious use by bad actors, but you can still see the interface.)

I put mine behind Cloudflare, but I'm still seeing a bunch of nonsense (like wordpress hack attempts) in the request logs.
So, I'm keeping the link a "secret" for now.
I might add a password system to create new links if I see a bunch of abuse.

]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/rusty-scissors.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[My Rivian R1S: Initial Thoughts]]></title>
            <link>https://igor.moomers.org/posts/my-rivian-r1s-initial-thoughts</link>
            <guid>https://igor.moomers.org/posts/my-rivian-r1s-initial-thoughts</guid>
            <pubDate>Fri, 19 May 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[I picked up my Rivian R1S just over a month ago; how is it going?
]]></description>
            <content:encoded><![CDATA[
I confirmed my pre-order for my Rivian R1S in February 2019.
In February 2023, I finally got a delivery window.
After some hassle, I drove away in the vehicle about a month ago.
Almost immediately, I went on a 2,500 mile road trip to Colorado through Utah and Nevada.
I'm at almost 4,000 miles on my odometer.
So... how's it going?

## How It Drives

The Rivian is pretty fun to drive.
It's really nice being able to just take on any road.
We went to a little hot springs in Colorado that had a really rough, rocky final stretch of road.
I would have hesitated to drive down that in my Subaru Outback, but with the Rivian there was no concern.
We also did several rounds of offroading in Nevada and Utah, and both were **really** fun.

Highway and city driving are also pretty good.
One nice thing is that EVs are *really* quiet, which translates into a much more pleasant highway experience.

## Charging

I had some battery and charger anxiety when considering buying an EV.
Those anxieties are totally gone now.
The 300-mile range means that it's actually kind of difficult to be too far from a DC fast charger.
[PlugShare](https://www.plugshare.com/) helps me be confident that the chargers on the map will actually work.
I haven't had to wait for a charging station yet.

The recommended route to Boulder from California would have taken us through Wyoming, which has few EV chargers.
We opted to take a slightly longer route through Green River instead.
Knowing what I know now, I think the Wyoming route would also be okay.

We were worried that we'd be waiting for the car to charge at our stops.
In fact, it's mostly been the other way, with the car being fully charged while S and I are still dithering somewhere.
On the drive back, because we were more comfortable with the charging infrastructure, we would stop for a quick bio break while the car gained 20% to 30% of charge, and then we'd head to the next charger.
It's much nicer to hang out at most chargers than at gas stations (though we **really** miss windshield cleaning supplies at most charging stations).

My body is much happier with more, longer stops and opportunities to walk around and stretch while the car is charging.

## Camping

Camping in the Rivian is much nicer than in my Subaru.
It's much more roomy, and has fewer hard edges with the seats down.
The glass roof is also quite lovely.

![My bed in the rivian](/images/rivian-bed.jpg "Rivian Bed")

There's more storage space (the frunk!), so we didn't need to bring a cargo rack.
The door opens in two halves, and I really enjoy sleeping with the bottom half closed.
It feels like my bed is less likely to slide out of the car, and we can still keep the top half open for a nice breeze.

I really like all the camping settings in software.
For instance, self-leveling is better than having to look for correct-sized rocks to put under the car.
On the other hand, I've found the software quite buggy (see below).
Our first attempt to use self-leveling in Nevada resulted in the car constantly adjusting its height until the compressor overheated, and then we just had to wait for it to cool down and try again.
On another occasion in Utah, I reset self-leveling while packing the car, and then when I got in it wasn't reset and I had to reset it again.

While I appreciate all the attempts to control lighting while camping, they, too, are very buggy.
For instance, there's a "keep screens off" setting, which claims you'll need to press the brake to turn the screens back on.
In fact, the screens come back on when you touch them.
The light controls are sometimes-broken and sometimes-buggy.
For instance, the tailgate light just keeps turning on whenever I open a door, despite me being in camp courtesy mode and also explicitly turning that light off several times.
To avoid that bright light coming on whenever I exit the car to pee, I've resorted to just setting the car to "do not use energy" mode.
It's nice that this exists -- but it also means that all the other lights don't work.

Finally, it seems like even in "do not use energy" mode, the windows still work.
This is great -- in my Subie, I would have to insert the key and turn the car on to get fresh air or keep out bugs/the cold.
However, at least once, the windows *do not* work in that mode -- another bug?

## Driver+

I'm used to the adaptive cruise control (ACC) from my Subaru.
The Rivian's Driver+ adds more effective lane-keeping.
This is great when it works, mostly on straight interstate highways.
It does not work on any roads that are not the interstate.
It turns off when you change lanes.
It turns off in less-than-ideal visibility, like when it's raining ("camera blurry").
It turns off for some, but not most, tunnels.
It also sometimes turns off for no reason, at the oddest time, with a loud beep, and then you swerve and catch the car before it smashes into something.

Part of the reason we didn't bring the cargo rack is that using a "rear accessory" turns off all self-driving features, including good-old adaptive cruise control.
The Subaru still allowed me to use ACC with my cargo rack plugged in (it has brake and turn lights), so on a longer road trip with more outdoor/camping time, I would be pretty annoyed about that.

A common complaint on forums is about how Driver+ mutes your music for a moment whenever anything changes.
This is indeed pretty annoying.
Finally, I tried their lane assist, which is not full lane keeping, but does nudge you back into the lane if you depart it.
This was pretty aggressive and I turned it off.
They claim to have made it better in the recent update, but I haven't tested it yet.

## The Nav

The screens in this car are trying to kill me.
They're so big and shiny and full of buttons that change position.
As I navigate this 4-ton monstrosity down the public roadways, the screens are telling me, "Look at us, not the road!"

The in-car nav system is basic.
You **have** to use it if you want to keep track of your state of charge, or have the car pre-condition the battery.
But it barely knows about traffic.
It sometimes doesn't work (like, it won't actually give you directions, or it will just sit there trying to compute the route).
You can't just tap a destination and have it route you there.
You can't add side trips (any new destination replaces the previous one).
It's always routing me to the back of any business I try to go to.
It includes some non-existent chargers, and excludes some existing ones.
It doesn't know about the state of most chargers (e.g., Electrify America ones, which are most common on road trips).
It doesn't know about speed traps or construction.

I installed a phone holder on my dash, but honestly having 3 screens to look at is too many, and so I end up just using the car nav system while disliking it.
Much has been said about Android Auto and how Rivian is not going to add it.
I think this is a customer-hostile move -- I just don't see how it benefits the customers, only the company's own dreams of world domination through software.

## Music

I have a version of the car with the Meridian-branded sound system, and it's ... okay.
I mean, it sounds fine, somewhat better than the Subaru, but mostly I think because the car is pretty quiet.
The bluetooth in the car seems fairly reliable, better than the Subaru.
Audio books work well in the `voice` EQ preset, and I tend to use the `rock` preset for all music.

I also use the in-car version of Spotify, which is barely passable.
Basic things -- for example, searching for a song, and then adding it to the current playlist -- are not possible.
But it's nice that there's music without having to connect my phone to the car.

## Camp Speaker

This thing is maddening, and I honestly wish it didn't exist, rather than existing in its current, broken state.
It sometimes works okay, and sometimes it refuses to connect to my phone.
It's so finicky and frustrating that S had to prevent me from just flinging it into the forest.

One time, I was having some people over, and I wanted to put some music on the camp speaker.
I proceeded to spend 20 minutes trying to get the speaker to stay connected to my phone before finally admitting defeat.
It did this thing where it would *almost* connect, like just for a moment, and then disconnect again, that was close enough to working that I Just. Kept. Messing. With. It.
On another occasion, I finally got connected and was playing music, and then it just stopped working for no reason and I couldn't get it to work again.
On yet a few more occasions, it powered on and made its loud boot-up noises while we were sleeping in the car and the speaker was inserted into its speaker-slot.
This was alarming and embarrassing (because there were some people sleeping in the car next to us).

In short, I hate the camp speaker.

## Access

I like having the app as a backup way to get into the car.
But I would prefer to use the key fob, which unfortunately is the worst.
First, it has four identically-shaped, black-on-black buttons.
After a month, I still don't know which one does what.
Good luck trying to figure out which side of the fob even has the buttons in the dark.

Once you figure out what button to use -- good luck getting the car to respond.
I just love standing in the rain with my friend and her 5-year-old, all of us getting wet while I repeatedly hit the unlock button while the car *thinks about something*.
S also pointed out that this makes her feel unsafe.
Imagine a lone woman walking to her car on a dark street -- she wants that car to freakin' unlock!

I don't understand why this could work instantly in my Subaru, but sometimes takes a minute on my Rivian.
I bet it has something to do with the software.
Speaking of which...

## The Software

Okay, so, I'm a software guy.
This car is chock full of software.
And -- it's pretty bad.
It's buggy as all hell.

I gave some examples already, such as the tailgate light or the display not staying off, or the nav being unreliable.
Other examples:
* all of the lights will just come on for no reason
* the lights button on the rear screen and the front screen are not aware of each other
* the car forgets about its leveling state
* the bluetooth will just randomly turn back on, even after it's been turned off
* sometimes battery pre-conditioning doesn't activate automatically; no way to do it manually
* sometimes the car will refuse to charge, and you have to hard-reboot it

Oh, Rivian support recommends a hard-reboot for most troubleshooting.
I was worried enough about the update [bricking my car](https://www.reddit.com/r/Rivian/comments/136y8gd/r1s_bricked_after_accepting_update_last_night/) that I avoided doing it until we returned from our road trip.

I'm honestly worried and annoyed about the state of the software.
Progress seems slow -- for instance, a major revision between January and May in Rivian software-land was hiding the trip odometer deeper in a settings sub-menu, despite people on forums asking for the trip odometer to be *more* accessible.
It's not clear how to report software issues.
I have been using the `Service` requests feature in the app, but clearly software issues are not service items.
Are these reports going anywhere?
I don't know.

There is a Rivian employee who occasionally posts on Reddit, and people leave comments like "I really hope this person sees this comment".
People repeatedly report the same software issue over and over, and there's no way to know if Rivian already knows about it or is planning to fix it or what.

I guess I bought this very expensive piece of beta software, and I was kind of resigned to the early-adopter experience and to helping improve the product.
But actually, the other half-finished software projects I use have public issue trackers and roadmaps.
Rivian has none of that.
They won't even tell me whether anyone is hearing me, much less whether they plan to fix things.
The end result is that I'm much more frustrated than I anticipated.
On the basis of the software bugs alone, I would NOT recommend the Rivian to most people.

## Support

When we picked up the car, we noticed a problem with the headliner.
There was no availability for an appointment at the service center for several weeks.
So, the car is finally going in to get that fixed the week after next.
This might be a blessing in disguise, since I've found a bunch of other issues that can be fixed in the same visit (dead USB-C port, broken floor mat pin, inoperable lighting, etc...).
Though, I guess it would be better if the car *hadn't* come with a bunch of issues?

However, I'm also scared.
People are saying that [the service centers are chaotic and overwhelmed](https://www.reddit.com/r/Rivian/comments/13l4nci/rivian_please_work_on_your_service_centers/).
Also, it seems like even minor accidents can result in [huge repair bills](https://www.theautopian.com/heres-why-that-rivian-r1t-repair-cost-42000-after-just-a-minor-fender-bender/).
I'm worried that, if I need immediate maintenance, I might be out of luck.
Thankfully, this is an adventure mobile and I don't critically need it, but it would suck to miss a trip.

## Minor Peeves

The car goes out of its way to hide kilowatts from you -- range is measured in `%` or miles.
It does show efficiency in miles per kWh, but not how many kWh are left in the battery.
I would really like to connect the speed of the charger I'm pulling up to -- invariably quoted in kW -- to the capacity of the battery, which is a mystery and nowhere in the UI.
I would also *love* to see a live indication of how much power the car is using while camping.

There is no cruise control resume.
If you brake because a car swerved into your lane, or because you stopped at a sign on a long stretch of highway, you'll have to re-set your previous cruise control speed manually.

The camera button is hidden in an overflow menu.
If you need help navigating into a tight parking spot, you'll have to pause and click around mid-park.

The windshield wipers are kind of hard to use manually.
The control is small, and the first tap only shows you the current setting, so you always need at least two taps.
I would love to just leave it on auto, but it's often way too fast in low-rain conditions.

The AC controls decide on their own whether the car is heating or cooling.
I really miss just having a fan blow outside air.
This would be especially nice while camping, where I don't want to use power for heat or AC, but a little extra airflow would be lovely (and help keep bugs off me!).

## Summing Up

I am definitely really enjoying having an EV.
It's also really fun to have a very capable off-road vehicle.
On the other hand, I'm finding it frustrating to be a Rivian early-adopter.
The car is buggy, and there's not really a good communication channel to the company to help resolve the bugs.
I fear that I'll be stuck living with these bugs forever.

Hopefully, in the next year, Rivian will improve the quality of the software (though I think some things, like Android Auto, are never going to happen).
They might also add features I'm really hoping for, such as V2H/V2G.
I'm also excited about all of the other electric SUVs and adventure vehicles coming onto the market in the next few years.
Hopefully, among them will be less-buggy, more-polished alternatives to the Rivian, and those might be worth waiting for.

Finally, I'm also just pretty scared of the high-software closed-source proprietary-nonsense vehicle future that we're entering.
What happens when Rivian burns through its cash and runs out of money?
Will I be left with an expensive brick?
Why does my car need always-on internet connectivity?
Who are they selling my location data to, or sharing it with?
I'm sad that the EV transition is also being used as an opportunity to take control away from vehicle owners and transfer it to car companies and their big-business allies (in Rivian's case, Amazon).

I really wish there were a car company that was transparent and consumer-friendly.
Alas, I don't think that's Rivian.
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/rivian-in-the-wild.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Purchase Allocation via Graph]]></title>
            <link>https://igor.moomers.org/posts/purchase-allocation-via-graph</link>
            <guid>https://igor.moomers.org/posts/purchase-allocation-via-graph</guid>
            <pubDate>Thu, 11 May 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Recoolit is focused on transparency, and every purchase is allocated to specific molecules recovered in the field. This post describes how we do that.
]]></description>
            <content:encoded><![CDATA[
<small>**Note**: This is cross-posted to the [Recoolit blog](https://www.recoolit.com/post/purchase-allocations-eng).</small>

When you [buy carbon credits from Recoolit](https://www.recoolit.com/buy), you are buying a specific amount of prevented atmospheric warming.
We prevent warming by preventing the release of refrigerants into the atmosphere.
(Note: if you want to learn more about how refrigerants cause warming, check out [our post on refrigerants](/posts/refrigerants-what-are-they).)
Our purchase receipts are designed to be transparent -- you see exactly how much warming you are preventing, and exactly how.
This post explains the technical details of how we create this transparency.

## Brief Overview of Recoolit

When refrigerators, air conditioners, or other kinds of heat pumps reach end-of-life or need maintenance, the refrigerants inside need to be removed from the system.
This is because refrigerant is at high pressure inside the system, and needs to be depressurized before the system can be serviced safely.
It is common for refrigerants to be vented into the atmosphere during this process.
Recoolit provides technicians with the tools, know-how, and incentives to capture these waste gasses instead of releasing them.
Then, we destroy the captured gasses in a high-temperature incinerator, permanently preventing their release into the atmosphere.

Our main mechanism for incentivizing refrigerant capture is to pay technicians for the refrigerants they capture.
We also have to pay for the cost of destroying the refrigerants, including getting the destruction facility inspected and certified to international standards.
Finally, we have to cover our operational expenses.
This includes buying and maintaining the pumps and cylinders that we use to capture and store refrigerants.
We maintain several depots, where technicians can borrow the equipment needed for recovery.
Finally, we pay for a warehouse where we store refrigerants before they are destroyed.

We cover all of these costs by selling carbon credits to individuals and organizations that want to offset their carbon emissions.

## Why Transparency?

The world has almost no way to pay for pollution remediation.
Sometimes, governments will force polluters to pay for the cost of cleaning up their pollution.
Other times, governments will directly pay for cleanup using tax dollars, especially when the pollution is a public health hazard and there is no obvious responsible party who can be forced to tackle the cost.

So, even with the growing awareness that climate change is a big, looming problem, there are too few ways to get money to organizations that are tackling the issue.
The voluntary carbon credit markets -- where individuals and companies pay for carbon credits out of a sense of social responsibility, and not for any regulatory reason -- are one of the few ways to pay for climate change remediation.

However, the voluntary carbon credit markets have been a bit of a mess.
There are no international agreements or standards on how to quantify the impact of carbon credits.
A few private organizations have stepped in to fill this gap, but their standards are not universally accepted.
Finally, some of the largest private organizations -- known as registries -- have faced controversy.
For instance, Verra, one of the largest registries, [has been criticized](https://www.theguardian.com/environment/2023/jan/18/revealed-forest-carbon-offsets-biggest-provider-worthless-verra-aoe) for allowing carbon credits to be sold for projects that were already underway, and for projects that were not actually preventing warming.

For these reasons, Recoolit's founder Louis was determined for the company to be as transparent as possible from day one.
We collect detailed data on every step of our operational process.
When you make a purchase, we show you all of the data we've collected that pertains to the molecules that went into your purchase.
Finally, we show as much data as possible on [our public registry](https://registry.recoolit.com/registry), so that anyone can see exactly what we are doing.

## What Is Our Data?

Every Recoolit carbon credit begins with a recovery.
This is where a technician captures refrigerants from a refrigerator, air conditioner, or other heat pump.
At this step, we collect photos of the equipment that's being serviced, the reason for the refrigerant recovery, the amount of gas recovered, and the type of gas recovered (though we don't always know the exact type of gas at this point).

Next, technicians return the cylinder used for the recovery to one of our depots.
We verify the amount of gas recovered, and we also test the gas using a gas analyzer.
We pay the technician for the amount of gas they collected.
This payment compensates the technician for the time and effort they spent performing the recovery.

Because we want to return the smaller recovery cylinders back into the field, we will often consolidate the gas from multiple recovery cylinders into a larger storage cylinder.
We cannot mix different types of refrigerants, so we have to keep track of the type of gas in each cylinder.
Every time gas is transferred, we keep detailed records on the source and destination cylinders and weights.
Some small amount of gas is always lost during transfers, and the losses are not included in the carbon credits we sell.

Next, we transport the gas to our destruction facility.
Every time we transport gas, we weigh and test the cylinders at both ends.
We maintain a chain of custody for the cylinders at all times, using signed transport manifests.
Some of the refrigerants we transport are becoming expensive, because they are no longer allowed to be produced under international agreements.
Our procedures protect against loss or theft of refrigerants during transport.

Finally, our refrigerants go through the destruction process.
First, we take a sample of each cylinder that has arrived at the destruction facility.
The sample is lab-tested under rigorous standards, to confirm the exact makeup of the contents of the cylinder.
Next, the cylinder is hooked up to a high-temperature incinerator.
In Indonesia, we partner with cement kiln operators, because their facilities reach the high temperatures needed to destroy refrigerants and need only minimal modifications to do so.

After destruction, the cylinders are shipped back to us.
We vacuum-test the cylinders to make sure they're not leaking, and use them for future recoveries or consolidations.

## Building the transfer graph

When you buy a carbon credit from Recoolit, you are buying a specific quantity of a specific gas that was destroyed.
In order to show you all of the data that went into your destruction, we need to trace the path of the gas from the recovery to the destruction.
We call this data structure the "transfer graph", and it is the core of our transparency system.

The transfer graph is a directed acyclic graph (DAG).
In this graph, the edges are transfers from a source to a destination node.
Each edge has a weight, which is the amount of gas transferred, as well as a gas type.

Because cylinders are reused, the nodes are actually a specific cylinder ID during a specific time interval.
A node is created when gas is first transferred into it, and "closes" when all of the gas is transferred out and we vacuum out the cylinder.
A subsequent transfer into the same cylinder ID would create a new node.

Each node can have multiple incoming and also outgoing edges.
This is because we sometimes do partial transfers.
Finally, each node can have a series of "events" associated with it.
Events include things like "the gas was tested", "the cylinder was transported", or "the cylinder was weighed".
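
To make this a bit more concrete, here's a simplified sketch of the structure in Python.
The names and types are illustrative only -- our real system lives in a database, not in dataclasses -- but the shape is the same: nodes are cylinder-intervals with events, and edges are weighted, typed transfers between them.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A specific cylinder during a specific interval of use.

    A node opens with the first transfer into the cylinder and closes
    when the cylinder is emptied and vacuumed out.
    """
    cylinder_id: str
    opened_at: str
    closed_at: str | None = None
    events: list[dict] = field(default_factory=list)  # e.g. {"type": "gas_tested", ...}

@dataclass
class Edge:
    """A transfer of gas from a source node to a destination node."""
    source: Node
    destination: Node
    gas_type: str       # e.g. "R-22"
    weight_grams: int   # amount of gas that moved in this transfer
```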

## An example

This might be easier with an example, so let's use some real data from our public registry.
I bought some credits from Recoolit, and [here is my receipt for that purchase](https://registry.recoolit.com/purchases/f436f5ca-a6fe-49c4-b10c-f4e46cafcb8d).
This is a view of the same data in our internal system:

![Igor's purchase graph](/images/igor-purchase-graph.png)

To allocate this purchase, we first look through all of our destructions to find one that has enough gas to cover the purchase.
We might need to combine multiple destructions to cover the purchase.
In this case, we find that we destroyed about 50kg of R-22 from cylinder `berkabut-panas-bebek`.
R-22 is such a high-GWP refrigerant that only 511 grams of it were needed to cover my purchase of 1 tonne of CO2e.
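
(For the curious, the arithmetic behind that 511 grams is just the purchase size divided by the gas's global warming potential. The GWP value in this little sanity check is simply backed out from the numbers above, not a figure quoted from our methodology.)

```python
# Back-of-the-envelope: grams of refrigerant needed to cover a CO2e purchase.
def grams_needed(tonnes_co2e: float, gwp: float) -> float:
    return tonnes_co2e * 1_000_000 / gwp

print(round(grams_needed(1.0, 1957)))  # ~511 grams of R-22 per tonne of CO2e
```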

Next, we do a depth-first search through the graph, starting at the destruction node, and looking for a path that has enough gas to cover the purchase.
We see that about 10 kg of R-22 arrived in `berkabut-panas-bebek` from `dingin-kering-harimau`, so we allocate 511 grams of that 10kg.
Finally, we see that 5.8 kg of R-22 was recovered directly into `dingin-kering-harimau`, so we allocate 511 grams of that 5.8kg.

This was a fairly easy case, as it only involved a single trajectory through the graph.
Here's an example of a more complicated sale, which involved multiple kinds of gas allocated through multiple destructions:

![A more complicated purchase graph](/images/complicated-purchase-graph.png)

In this case, the purchase of 30 tonnes of CO2e was covered by 3 different destructions.
We destroyed 13235 grams of R-410a and 193 grams of R-32 to cover this purchase, and these gasses were recovered by two technicians on 3 different occasions.
Three of the five recoveries were on the same day into three different cylinders, indicating a large recovery job that filled up multiple cylinders!

## Allocating purchases

We allocate purchases using a depth-first search through the graph, starting at the destruction node.
Every time we find a node that has enough gas to cover the purchase, we recursively look through the source nodes.
The recursion terminates at a recovery node.
We then propagate the list back up through the stack, allocating the purchase to each node in the path.

Allocating the purchase means creating edges.
For each purchase, we create a `sale` node in the graph.
For each node that contributes gas to the purchase, we create a `sale` edge, from the source node to the `sale` node.
To find all the nodes involved in allocating a sale, we look for all nodes connected to the `sale` node through a `sale` edge.

Each `sale` edge includes the amount of gas that was allocated to the sale.
To figure out if a node still has enough gas to cover the sale, we subtract the sum of all `sale` edges from the weight of the node's outgoing transfer edge.
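
Here's a minimal, runnable sketch of that allocation logic.
Everything about it -- the plain-dict graph representation, the node names, the numbers -- is made up for illustration; it's the shape of the search that matters.

```python
# Illustrative only: a tiny transfer graph as plain dicts.
# Each node's outgoing transfer weight, and the nodes that fed gas into it.
outgoing_grams = {"recovery": 5_800, "storage": 10_000, "destroyed": 50_000}
sources = {"destroyed": ["storage"], "storage": ["recovery"], "recovery": []}
sale_allocations = {}  # node -> grams already promised to earlier sales


def remaining(node):
    """Unallocated gas: the node's outgoing transfer minus existing sale allocations."""
    return outgoing_grams[node] - sale_allocations.get(node, 0)


def allocate(node, grams, path=()):
    """Depth-first search from a destruction node back toward a recovery.

    Returns the path of nodes that can each contribute `grams`, or None.
    """
    if remaining(node) < grams:
        return None
    path = path + (node,)
    if not sources[node]:      # recursion terminates at a recovery node
        return path
    for source in sources[node]:
        found = allocate(source, grams, path)
        if found is not None:
            return found
    return None


path = allocate("destroyed", 511)
for node in path:
    # recording the allocation is what creates the `sale` edges
    sale_allocations[node] = sale_allocations.get(node, 0) + 511
print(path)  # ('destroyed', 'storage', 'recovery')
```

In the real system, each of those recorded allocations is a `sale` edge pointing from the contributing node to the `sale` node for the purchase.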

## Formatting for display

We've already discussed all of the data we collect during operations to enable this kind of transparency.
You now also know how our system allocates your purchase to destructions and recoveries.
The final piece is presenting this data in a way that is easy to understand.

In your receipt, we show you a path-decomposed version of the transfer graph.
A path is a linked list of edges and nodes, starting at a recovery and ending at a destruction.
However, in your purchase subgraph, a single node or edge might be involved in multiple paths.
When we do the decomposition, we clone the shared nodes and edges, so that each path has its own copy.
Here's an example of a non-decomposed graph:

![Non-path decomposed graph](/images/transfer-path.png)

You can see that this includes two recoveries -- one in the blue box, and one in the red box.
The green box includes the consolidation and destruction nodes that are shared between the two paths.
When we display it, it would look more like this:

![Decomposed paths](/images/transfer-path-decomposed.png)

There are two paths in this graph -- the one on the left, and the one on the right -- and the nodes in green are duplicated between the two paths.
Here's how the same data might look in your purchase receipt:

![Paths in the purchase receipts](/images/transfer-path-final.png)

Doing this is surprisingly non-trivial because it's not clear, just from the subgraph, how many paths through a particular node there are.
To make it easier, we actually keep track of the paths when we construct the graph.
Each time we begin trying to allocate gas from a destruction, we generate a path identifier.
When we find a path that works, we store the path identifier in the `sale` edges that track the allocated purchase.
This means that a node might actually have multiple edges connecting it to a `sale` node, each with a different path identifier.
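
Here's a small sketch of what that looks like in practice.
The edge tuples and names below are invented for illustration, but the idea is the same: group a sale's edges by path identifier, and the shared nodes naturally get cloned into every path they appear on.

```python
from collections import defaultdict

# Illustrative `sale` edges recorded during allocation:
# (contributing node, sale id, path identifier, grams allocated)
sale_edges = [
    ("recovery-blue",  "sale-1", "path-a", 300),
    ("recovery-red",   "sale-1", "path-b", 211),
    ("consolidation",  "sale-1", "path-a", 300),  # shared node: one edge per path
    ("consolidation",  "sale-1", "path-b", 211),
    ("destruction",    "sale-1", "path-a", 300),
    ("destruction",    "sale-1", "path-b", 211),
]

def decompose(sale_id):
    """Split a sale's subgraph into display paths, keyed by path identifier."""
    paths = defaultdict(list)
    for node, sid, path_id, grams in sale_edges:
        if sid == sale_id:
            paths[path_id].append((node, grams))
    return dict(paths)

print(decompose("sale-1"))
# {'path-a': [('recovery-blue', 300), ('consolidation', 300), ('destruction', 300)],
#  'path-b': [('recovery-red', 211), ('consolidation', 211), ('destruction', 211)]}
```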

## Wrapping up

As you can see, we've put a lot of thought into how to make our data as transparent as possible.
Our goal is to create the highest-quality carbon credits, with the greatest possible assurance to buyers that their purchase is actually making a difference.
If you like what we're doing here, and want to support us, we urge you again to [buy some carbon credits](https://registry.recoolit.com/buy)!
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/transfer-graph-2.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Refrigerants: What Are They?]]></title>
            <link>https://igor.moomers.org/posts/refrigerants-what-are-they</link>
            <guid>https://igor.moomers.org/posts/refrigerants-what-are-they</guid>
            <pubDate>Mon, 08 May 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Recoolit prevents global warming by collecting and destroying refrigerants. But what even are refrigerants? Read to find out what refrigerants do, the different kinds that exist, why they can be so concerning, and how the world has managed the risk.
]]></description>
            <content:encoded><![CDATA[
<small>**Note**: This is cross-posted to the [Recoolit blog](https://www.recoolit.com/post/refrigerants-what-are-they).</small>

At [Recoolit](https://www.recoolit.com), our mission is to reduce climate impact by collecting and destroying waste refrigerants.
This mission is often unclear to people who aren't familiar with the refrigeration industry.
We often hear questions like, "What are refrigerants?" or "Why are refrigerants a problem?"
We've skipped over such questions on our [explainer page](https://www.recoolit.com/our-work) because they're quite technical, and we don't want to overwhelm our customers.
This post is for those special nerdy few who want to know all the gory details!

## Heat pumps -- How do they work?

Refrigerators and air conditioners are all a type of heat pump -- a device that moves heat from one location to another.
They do this by means of a fluid that is pumped through a closed loop.

First, the gaseous fluid is compressed, which causes it to heat up and become much warmer than the surrounding air.
Next, the hot gas is pumped through a heat exchanger -- like a radiator on a car -- where it sheds heat into the surrounding air and condenses back into a liquid.
In a heat pump on "warming" mode, this heat exchanger is located inside a building or room, and this warming is the desired effect.

The liquid is then forced through an expansion valve, where the sudden drop in pressure allows it to expand.
The expansion cools the fluid dramatically.
It's then pumped through a second heat exchanger, where it absorbs heat and boils back into a gas, while the air around it becomes cooler.
In a heat pump on "cooling" mode, this second heat exchanger is inside the refrigerator, freezer, or too-warm room or building.

The fluid is then pumped back to the compressor, and the cycle begins again.
Mechanically, all heat pumps rely on just these few components -- a compressor, an expansion valve, heat exchangers, and the fluid (sometimes a gas, sometimes a liquid) that runs between them.
Physically, this process relies on just a few basic principles of nature -- the ideal gas law, plus the laws of thermodynamics.

## What are refrigerants?

Refrigerants are the fluid used inside heat pumps to move heat from one place to another.
There are many considerations when picking a substance to use as a refrigerant.
For instance, a common early refrigerant was ammonia (R-717).
Ammonia boils at a low temperature, which makes it easy to turn into a gas.
It does not freeze until a very low temperature, so there's no risk of it freezing inside the heat pump during the expansion phase.
And it has a fairly high specific heat, which means that it can absorb and release a lot of heat.

Unfortunately, ammonia is highly toxic and corrosive.
Other early refrigerants, such as sulfur dioxide (R-764) and methyl chloride (R-40), were also toxic.
To avoid the safety issues associated with early refrigerants, scientists began experimenting with other substances.
In 1928, Thomas Midgley, Jr. invented the first chlorofluorocarbon (CFC) refrigerant, which was marketed by DuPont as Freon-12 (R-12).
R-12 was cheap to produce, non-toxic, non-flammable, non-corrosive, and it had a low boiling point and a high specific heat.
These properties led to R-12 becoming the most popular refrigerant in the world, with global production peaking in the 1980s.

## The Montreal Protocol

When searching for safer replacements to early refrigerants, scientists looked for substances which were highly stable, since stable substances pose less of a health risk to humans.
This made R-12 an attractive choice, since it was highly stable and non-toxic.
However, this stability also proved to be a problem.
In the 1970s, scientists studying the ultimate fate of CFCs like R-12 discovered that these gasses could make their way up to the stratosphere.
There, they could be broken down by ultraviolet radiation, releasing chlorine atoms (the first C in CFC and HCFC).
The chlorine would then react with, and destroy, atmospheric ozone.

In response to these findings, in 1987 the international community adopted the Montreal Protocol on Substances that Deplete the Ozone Layer.
The agreement was initially signed by 46 countries, and has since been ratified by 198 parties, including all member states of the United Nations.
It sets out a schedule for the phase-out of the production and consumption of ozone-depleting substances (ODSs), including CFCs like R-12 and HCFCs like R-22.

The phase-out schedule differed for different substances and in different places.
For instance, R-12 was phased out in developed countries in 1996, and in developing countries in 2010.
Meanwhile, R-22, an HCFC common in home and commercial heat pumps, was phased out in developed countries in 2010, but it continues to be in use in developing countries until 2030.

## GWP and the Kigali Amendment

If you recall, one of the properties of a good refrigerant is its ability to absorb and release heat.
Refrigerants continue to play this role even after they've been released into the atmosphere.
In particular, they absorb and release heat in the form of infrared radiation, which is the same type of radiation that is absorbed and released by greenhouse gasses like carbon dioxide and methane.

The heat-trapping ability of a greenhouse gas is measured by its global warming potential (GWP).
The standard basis of comparison is the most common greenhouse gas, carbon dioxide (CO2), which has a GWP of 1.
The GWP of Freon, or R-12, is 10,900.
This means that, released into the atmosphere, R-12 traps 10,900 times as much heat as an equivalent amount of CO2.
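
To make that concrete, here's the back-of-the-envelope math for venting a single kilogram of R-12, using the GWP quoted above:

```python
gwp_r12 = 10_900       # GWP of R-12, as above
kg_vented = 1.0
kg_co2e = kg_vented * gwp_r12
print(f"{kg_co2e / 1000:.1f} tonnes of CO2-equivalent")  # ~10.9 tonnes
```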

CFCs and HCFCs phased out under the Montreal protocol were often replaced by hydrofluorocarbons, which have many of the desirable properties of a refrigerant, but without the ozone-destroying chlorine.
For example, an HFC called R-134a is a common replacement for R-12 in automotive air conditioners.
However, R-134a has a GWP of 1,430 -- much less than R-12, but still substantially heat-trapping.

When the Montreal protocol was adopted in 1987, the focus was on the ozone-depleting properties of CFCs and HCFCs.
However, as the world began to phase-out these substances, it became clear that their global warming potential was also a problem.
In response, in 2016 the international community adopted the Kigali Amendment to the Montreal Protocol.
The Kigali amendment is meant to prevent additional global warming from refrigeration by phasing-out the production and consumption of hydrofluorocarbons (HFCs), which are the most common replacements for CFCs and HCFCs.
The amendment has, so far, been ratified by [148 parties](https://ozone.unep.org/all-ratifications). 

Like the Montreal Protocol, the Kigali Amendment sets out a schedule for the phase-out of HFCs, but this schedule differs from place to place.
In the developed world, the phase-out began in 2019, and is scheduled to complete by 2036.
In the developing world, however, the phase-out is not scheduled to end until 2045, with some countries receiving additional extensions to 2047 or later.

## How Recoolit Fits In

Refrigerant pollution is a major global problem, and the world is working hard to solve it.
However, as you can see, even the phase-out of ozone-depleting substances like R-22 will not be complete until 2030.
With high-GWP HFCs like R-134a, the process has barely begun, and will last for decades.

What do we do in the meantime?
This is where Recoolit comes in.
While the world works to phase out harmful refrigerants, we can work to reduce the amount of refrigerant pollution that is released into the atmosphere.

Certainly, some of this release is accidental or the result of equipment malfunction.
However, a significant portion of refrigerant pollution is the result of intentional venting.
This is because, without Recoolit, technicians simply have no other way to dispose of refrigerant properly.
Even well-intentioned technicians, who are concerned about pollution and environmental impacts, have no other choice when a system must be drained for maintenance or repair.
We've seen technicians vent refrigerant through a hose inserted into a bucket of water, in the (mistaken) belief that this will somehow "scrub" the refrigerant.

Additionally, the process of recovering refrigerant is time-consuming and expensive.
The international protocols do not provide any funding for the transition to cleaner refrigerants, or to mitigate the damage from refrigerant pollution.
This is where <em>you</em> come in.
Now that you know about the problem, you can help us solve it.
Fund our work by [buying our carbon credits](https://registry.recoolit.com/buy).
Tell your friends and colleagues about the problem, and about our solution.
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/refrigerants.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[An Open Letter to GPE]]></title>
            <link>https://igor.moomers.org/posts/open-letter-gpe</link>
            <guid>https://igor.moomers.org/posts/open-letter-gpe</guid>
            <pubDate>Fri, 09 Sep 2022 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
In Cory Doctorow's book [Down and Out in the Magic Kingdom](https://bookshop.org/books/down-and-out-in-the-magic-kingdom/9781250196385), the main character (Jules) has this to say about himself:

> [my compulsion] was Beating The Crowd, finding the path of least resistance, filling the gaps, guessing the short queue, dodging the traffic, changing lanes with a whisper to spare -- moving with precision and grace and, above all, _expedience_.
> I spied a queue ... that was slightly longer than the others, but I joined it and ticced nervously as I watched my progress relative to the other spots I could've chosen.
> I was borne out, a positive omen for a wait-free World, and I was sauntering down Main Street, USA long before my ferrymates.

Reading this passage, I realized that I was not alone in my weird obsession.
I hate lines, the waiting in of, and like Jules I will obsessively attempt to pick the shortest queue to optimize my wait time.

As you can imagine, Burning Man, with its notorious entry and exodus lines, is a kind of special torture for me.
I struggle constantly to remain patient and silence my overeager mind as it attempts to compute my rate of progress relative to my line-mates and my destination.
I become short and distracted with my car mates, and annoyed at the perceived inefficiencies in the system.

In order to channel my frustration into a productive direction, back in 2012 I joined the Gate, Perimeter, and Exodus (GPE) department.
In the decade since, I've worked just about every position in the department -- apex, airport, lanes, perimeter, and this year I did a stint in the Traffic Operations Center (TOC).
I worked enough shifts in 2019 for a staff-priced ticket, and enough this year for a staff credential for my next burn.

To some extent, joining the department (as well as simply becoming a decade older and a little bit more patient) has helped me deal with the entry/exit crawl.
When the line totally stops moving at 6 pm, I know that there's a shift change, and we'll get moving soon, and I can reassure people around me about this.
What I hoped for, however, was a sense that there's some giant master plan that's helping to move me out of a 10-hour traffic jam as quickly as possible.
This sense continues to elude me, and is the reason I'm writing this open letter.

I do not mean to throw shade on any people in the department.
I understand first-hand how difficult these shifts are, standing out in a white-out in 100+ degree heat, breathing dust and exhaust, dealing with cranky and cantankerous burners while trying not to get crushed by traffic.
I also understand the difficulty that the leaders of the department have, doing this thankless job in exchange for meal pogs and t-shirts, trying to take care of their volunteers while balancing the demands of the org, various state agencies, and the forces of pure chaos.
But I cannot help but get the sense that something is not working quite right.
I am 200% open to the possibility that, actually, everything is going as well as possible, and it's just that my spidey sense is off here.
I know that when other department members express concerns on the gate list, they sometimes get (vehemently) dismissed as "armchair quarterbacks" who, because they weren't working that specific shift or that specific role, have no right to an opinion.
But in this case, I think it's a communication problem.
If even long-time department volunteers (including this one) are frustrated and confused by the operation of the department, what of the participants?
These are the people who, after being stuck in a 10-hour traffic jam for inscrutable reasons, unload their anger on the front-line Exodus volunteers.
Everyone here deserves better -- and even if this is *just* a communication problem, that's still a real problem that, I believe, demands attention.

## Entry

My entry into Burning Man was more than 3 hours long -- on Wednesday pre-event!
As others have noted on the gate list, the second and third lines from the left on the way in merged right before apex, meaning those two lines moved half as fast as the other lines.
I attempted to reassure my car mate for almost an hour that this must be an illusion, and that no line moves faster through Apex than any other line, but my poor choice of lanes cost me a lot of extra wait time, and anxiety that we would miss our camp's placers for the night and would have to sleep in our truck instead of setting up camp.
The department doesn't want participants changing lanes, because it messes up the cone placement.
The best way to prevent lane-jumping is to make sure the lanes are actually fair!

Next, I had to pick up my staff credential from the box office.
On the way out of the Will-Call lot, we again had a massive line stretching back into the lot.
Two volunteers at the front were letting cars out of Will-Call, only into the two rightmost lanes, which Apex was also using.
This resulted in an additional 30 to 45 minutes of wait out of the lot.
It was incredibly frustrating to have waited at the Apex line already, only to have cars that arrived hours after us hit the lanes ahead of us.
Apex can hold traffic and empty out the Will-Call line.
Why not do this?

## TOC

I worked a TOC shift on Sunday of Temple Burn.
For 6 hours, I had control of [this twitter account](https://twitter.com/bmantraffic) -- the first time I've ever posted anything on Twitter!
This was my chance to understand exodus, and to help tens of thousands of people to leave the Burn in the easiest way possible.

I do not believe that, through any of my tweets, I helped anyone make better decisions.
Some of my tweets were of the inane variety, like "don't break down in the lanes".
I had a campmate whose car broke down on the way in, and he was just going to leave when it was time to leave -- the tweets had no bearing on his decision.

The most useful tweets could have been wait time estimates, and I believe that some of those were purely made up.
I tweeted that the wait was [6 hours](https://twitter.com/bmantraffic/status/1566535630493913089), and tweeted again that the wait was [1 hour](https://twitter.com/bmantraffic/status/1566539015972474881) just 20 minutes later!
The amount of information coming to the TOC from folks on the ground is minuscule -- reporting wait times is not a huge priority, even though this is the one number everyone in the city wants to know.
We communicated this number to BMIR *maybe* twice during my six-hour shift, and once it was only because we had [totally stopped all traffic](https://twitter.com/bmantraffic/status/1566571695158153216) and asked them to report it.
I didn't have a radio in my truck, but campmates reported that BMIR was more interested in their programming than in keeping folks informed about exodus -- just as well, since I don't believe they have any special information.
At least on Sunday afternoon, *nobody* seemed to have any accurate information about the wait time in Exodus.

My TOC shift lead was warm, personable, efficient on the radio, and a great person.
However, I was sometimes shocked at their claims and decisions.
At the beginning of my shift, someone radioed into TOC, asking, "BMIR is reporting a 3.5 hour wait time -- do you know where they got that number?"
My shift lead said, "Yeah, that makes sense, because gate road is 4 miles long and the speed limit is 5 miles an hour."
After some back-and-forth, the person on the radio signed off, unconvinced by this obviously faulty math, but clearly unwilling to continue taking up airtime.
My shift lead then told me to [tweet that number](https://twitter.com/bmantraffic/status/1566522222767812608)!

## Exodus

I left on Sunday night, shortly after my TOC shift ended.
Because I sent [this tweet](https://twitter.com/bmantraffic/status/1566539015972474881) about using all the lanes, I was able to satisfy my compulsion and skip a ton of traffic by using the empty lanes.
The pulses themselves, however, drove me nuts.
Specifically, in all my pulses, I observed the final portion of gate road completely draining of all cars -- nobody else merging onto the highway.
Exodus would then continue to hold traffic for anywhere from 20 to 40 minutes, for no reason I could discern.
I sort of assumed this was because traffic was getting stuck on 447, but my brothers in black running Exodus that night assured me this was not happening.

It was sad to watch participants get into their vehicles and start them as they watched the road drain, then shut them down again 5 minutes later when the line showed no signs of movement.
Keeping these people informed would help everyone.

## Wrap-Up

My exodus took 9 hours (though it would've been 7 if my janky borrowed truck hadn't refused to start at the beginning of my last pulse!)
To everyone who helped out, including the volunteers who brought me a jump box -- THANK YOU!
I know people who spent 11 hours in the full heat of Monday afternoon.
These anecdotes are, sadly, the best data we have!
Where are the charts of wait time by departure time over the years of exodus, the charts that would help level out traffic flows out of the city?
If your answer is "leave Tuesday morning", then you clearly don't understand the reality of participants who have to get back to their lives and jobs, but first must drive a theme camp home, unload it, and de-dust it.

I can anticipate the reaction to this essay among the GPE crew.
At worst, I'll be dismissed as ignorant, not someone who truly understands the process and the constraints.
Guilty!
To these people, I would say that, if a 10-year department veteran is confused and frustrated by gate operations, then there's a real problem here.
At best, some GPE folks will tell me that, if I think I can do better, I should sign up -- as though there's a slot in the shift system for "Run everything as you see fit".

My goal is not to blindly criticize, but to contribute constructively.
Ultimately, I would love to just relax and enjoy the Gate process.
I think a little bit of information would help me and the other participants to do just that.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Peter Eckersley]]></title>
            <link>https://igor.moomers.org/posts/peter-eckersley</link>
            <guid>https://igor.moomers.org/posts/peter-eckersley</guid>
            <pubDate>Thu, 08 Sep 2022 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
I was at Burning Man 2022 when I heard that my dear friend [Peter](https://en.wikipedia.org/wiki/Peter_Eckersley_\(computer_scientist\)) was, initially, in critical condition.
I learned later that evening that he had passed away.
In the midst of a difficult exodus full of logistics and dust storms, I kept finding myself with tears in my eyes, in a state of shock and disbelief that such a light in the world as Peter had gone out.
I was grateful to learn the news while surrounded by friends and people who loved Peter.
Our impromptu vigil at the Temple on Friday night was heartwarming and healing.

I first met Peter back in 2009, and we ended up living together at the 1355 shortly after that year's Burn.
In the decade+ that I've known him, Peter has been much more than a friend.
He has been a collaborator, a mentor, and a source of inspiration.
He introduced me to good coffee.
I have learned about the Bay area's best bike rides by riding them with Peter.
He showed me all the best dumpling restaurants.
I learned about [giving what you can](https://www.givingwhatwecan.org/) from him, and made a commitment to donate a percentage of my income thanks to his example.
Peter helped to bring me and my last long-term partner together -- a relationship that lasted 4 years.
He taught me about jasmine green pearls, and toast with a generous amount of butter.
I cannot count the number of evenings we spent talking over the problems of the world, and how I would always learn something from him, shift my perspective just a little bit.
He taught me that it's possible to reason about the world.

I last saw Peter about two weeks ago, when he came over to hang out while we prepared for Burning Man.
Inspired by one of our projects, he tried, as usual, to wrangle us into making another one on the spot.
That evening, in my dining room over Vietnamese food, I complained about the difficulty of fundraising for a climate startup I've been involved with.
Though Peter had already introduced me to three potential investors over the last few months, he said to me, "I will work harder for you."
This may be one of the last things he said to me, and perfectly sums up his generosity and his unwavering enthusiasm.

As I reflect on my relationship with Peter, I can't help but feel that I took his presence for granted.
He was just such a pillar, a constant of the world, always available for dinner or a chat.
I don't think I've ever told him just how much I appreciated him.
His passing is a reminder to me, to take every chance I get to share my appreciation with the people in my life.

Peter's passing also leaves a hole in the world.
Who is going to make sure AI is on our side?
Who will take a stand for privacy and security?
Who will bring people together to solve seemingly-intractable problems?
Where will all the quirky ideas come from?

I think the best way that I can honor Peter's memory is to take on some of the work he has left behind.
I want to throw one extra dinner party.
I want to go on one more extravagant bike ride, which is nothing more than an excuse to stop for coffee and banana bread.
Most of all, I want to remain engaged and enthusiastic in the world.
If I've learned anything from Peter, it's that any problem is solvable through reason and cooperation.

Peter, I will miss you always.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The War in Ukraine]]></title>
            <link>https://igor.moomers.org/posts/war-in-ukraine</link>
            <guid>https://igor.moomers.org/posts/war-in-ukraine</guid>
            <pubDate>Mon, 07 Mar 2022 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
I was born in [Mykolaiv, Ukraine](https://en.wikipedia.org/wiki/Mykolaiv).
I lived there until age 9, when my family emigrated to Los Angeles.
Over the years, much of my immediate family has likewise emigrated to the US.
However, I still have many extended family members there.

I haven't been back to Ukraine since I left as a child.
My parents, however, have returned several times to visit friends and family.
I feel strong personal connections both to the city itself, and to the well-being of my many family members living there.
Watching the [Battle of Mykolaiv](https://en.wikipedia.org/wiki/Battle_of_Mykolaiv) take place, both through media and through family updates, has therefore been stressful, distracting, and unsettling.

## Family Update ##

As of today, I have three separate groups of family members who've been affected by the war in Mykolaiv.
My closest relative, an uncle, was living in the city but is an Australian citizen.
Around February 25th, he took himself and much of his Ukrainian family to Poland.
We had confirmation that he arrived in Warsaw on March 4th.

More family members have fled to western Ukraine.
Adult males are currently prohibited from leaving Ukraine, so the family is staying in the country to stay together.
Another big group has a house with a basement, which they've been using as a bomb shelter.
They've been descending there whenever the air raid sirens go off.

## Predictions ##

I planned a family gathering for the past weekend, and naturally, Ukraine was all we could talk about.
After much kitchen-table debate where people were making informal/implicit predictions, I tried to make things more concrete by formalizing them and writing them down.
Since so many folks have been asking me what I think about this conflict, I figured I would share my predictions publicly.

### The End of the Conflict ###

A common question I get is about the end-game of the war.
I think Russia will, eventually, seize Kiev and install a puppet regime.
This regime will be highly unpopular, but it will remain in power through Russian military and intelligence support.
The closest analogs for this, for me, are Syria and Belarus -- both places where an unpopular leader has resisted all attempts at regime change.
In Syria in particular, the cost was the destruction of a large part of civilian infrastructure.
Ukraine is already there after two weeks of bombing, and it will continue to get worse.

[This article](https://carnegieendowment.org/2022/03/03/how-does-this-end-pub-86570) confirmed my existing biases about the end-game.
I agree that the task, for the west, is to avoid escalation at all costs, even though the outcome for Ukrainians seems pretty dire.
The alternative -- a possible nuclear confrontation -- is worse, for everyone.

### Putin and His Circle ###

Many of my family members were convinced that this is the end game for Putin personally.
I disagree.
Putin is powerful, paranoid, and ruthless.
If he's able to prop up extremely unpopular dictators in other countries, he'll definitely be able to do it at home.
I predict that he will remain in power until he dies, and his death will likely be of natural causes.

My family members were wondering about the oligarchs, who are having assets seized and being sanctioned by foreign governments.
What's the point of being a billionaire if you can't jet around the world on your private plane to your private yacht?
There was a "surely, they'll kick him out now that he no longer benefits them" thread.
Again, I disagree.
The oligarchs serve at Putin's mercy, as he showed with [Khodorkovsky](https://en.wikipedia.org/wiki/Mikhail_Khodorkovsky).
I also recommend the book [Nothing Is True and Everything Is Possible](https://bookshop.org/books/nothing-is-true-and-everything-is-possible-the-surreal-heart-of-the-new-russia/9781610396004) for an in-depth look at how Putin plays his allies against each other, all the while securing even more power for himself.

### Energy/Climate ###

Russia can balance its budget with an oil price of [$45 per barrel](https://www.bloomberg.com/news/articles/2019-08-22/putin-s-budget-has-lowest-break-even-oil-price-in-over-a-decade) (paywall).
The price is north of $100 at the moment.

There are serious people in the EU who are thinking about [a future without Russian gas](https://www.bruegel.org/2022/02/preparing-for-the-first-winter-without-russian-gas/).
Given the politics in the West, this might actually happen, but the impact to Russia will be minimal.

> crude oil accounted for $110.2 billion, oil products for $68.7 billion, pipeline natural gas for $54.2 billion and liquefied natural gas $7.6 billion

([source](https://www.reuters.com/markets/europe/russias-oil-gas-revenue-windfall-2022-01-21/)).
So, even if we assume all pipelines stop flowing, Russia will still find a ready export market for the remaining 80% of its fossil fuels.
While the conflict might accelerate renewables transition in the EU, it also has a bunch of negative knock-on effects.
Most especially, it makes burning coal a better deal -- something [the EU is already struggling with](https://www.newyorker.com/magazine/2022/02/07/can-germany-show-us-how-to-leave-coal-behind).
In addition, fracking becomes more profitable again, as does refining poor-quality petroleum such as tar sands.

The US administration was already walking a fine line between advocating a renewables shift while also protecting low gas prices at all costs.
The calculus there has become harder, and the administration will face tough choices.
Approve more drilling and face the anger of the base?
Deny new oil and gas leases, and face the political consequences?

## Political Opportunity ##

This war is very prominent in public consciousness, and people want a frame in which to think about it.
I recommend using the following frame whenever talking to anyone about this conflict.

__This War Is a Climate War__.
Putin is just one more character in the cast of fossil-fuel-powered dictators.
He's been wreaking havoc on the world stage for 20 years, and the world rolled over and took it because he supplied the gas.
Every bullet shot at a Ukrainian citizen, every missile hitting a residential building, every tank and warplane was paid for when Westerners pumped their gas and turned up their heat.

__Ukrainian refugees are climate refugees__.
They were displaced because of fossil fuels.
They join migrants from the Syrian civil war, and the people coming to the US from South and Central America -- all people displaced by our addiction to fossil fuels.

__The climate crisis is a refugee crisis__.
Millions of Ukrainians fleeing to the EU is a foreshadowing of what's to come.
We are reacting today in the best possible climate -- a very unpopular war, a very definite and very unpopular aggressor, very sympathetic (because white, Christian) victims.
How we treat these folks is the best-case scenario, and it [could be better](https://www.reuters.com/world/europe/britain-may-ease-immigration-rules-ukrainian-refugees-sun-2022-03-07/?taid=62260ffc18c5730001d4f729).
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Words I Avoid]]></title>
            <link>https://igor.moomers.org/posts/words-i-avoid</link>
            <guid>https://igor.moomers.org/posts/words-i-avoid</guid>
            <pubDate>Fri, 24 Sep 2021 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
Language shapes thought -- a.k.a, [the Sapir-Whorf hypothesis](https://www.sciencedirect.com/topics/psychology/sapir-whorf-hypothesis).
There's lots of good evidence for this -- for instance, a very popular study on how Russian's multiple words for shades of blue allow speakers to [more rapidly distinguish between those shades](https://www.newscientist.com/article/dn11759-russian-speakers-get-the-blues/).

Differences in thought patterns might arise not just between speakers of different languages, but between individuals speaking the same language.
I'm specifically interested, here, in [cognitive distortions](https://www.theschoolofmomentum.com/post/cognitive-distortions) -- a kind of dialect spoken by people who are clustered together, not through physical geography but on a memetic landscape.
Cognitive distortions [are correlated with depression](https://www.nature.com/articles/s41562-021-01050-7).
There are even claims that [we might be getting more depressed as a society, based on increasing prevalence of cognitive distortions in public text](https://www.pnas.org/content/118/30/e2102061118).

This is interesting, but is a post-hoc justification for a practice that I noticed myself independently adopting.
Language forces a frame of thought, sometimes those frames are not helpful, and language can be a clue that a specific frame is arising.
When I notice myself using these words, this is a clue to myself to pay attention and possibly to re-frame my thinking.
Without further ado, my listicle.

## SHOULD ##

This is the most common one I notice in myself, and in many of the people in my life.
Often used like "I *should* stop being on the computer so much" or "I *should* exercise more".
An old housemate used to react to *should*s by saying, "Don't *should* all over yourself".

I dislike the way *should* imposes an obligation.
It's not that I *should* go brush my teeth -- it's that I *want* to have healthy teeth and good breath.
Reminding myself of my motivations is often helpful in getting myself to actually do the task.

I also don't like imposing *should*s on others.
Instead of saying "You *should* read X", I could say "I think you would like X".
While reframing in this way, I sometimes realize that I'm not even sure the person would even enjoy X -- I end up simply telling a story, like "I enjoyed X, because..." -- allowing me to share excitement and enthusiasm without adding a TODO to my interlocutor's list.

## MAKE[s] [ME/YOU] FEEL ##

I suffer from the influence of what a wise person I know once called "the control virus" -- the mistaken desire, and belief in the ability to, control outcomes in the world.
For instance, I cannot control the world's climate, or the way the world responds to climate change.
What I can control is how I engage with that problem, how I show up in relationship to it.
Do I avoid thinking about it?
Do I let it dominate my life and take away my joy?

A global catastrophe is a more obvious example.
But the control virus shows up more frequently in interpersonal interactions.
I can't control whether or not my housemate washes their dishes, for instance.
This is insidious because it seems like, if I just ask them "correctly" -- if I say just the right words that would inspire just the right feelings of solidarity and guilt -- then maybe the dishes will get washed after all.
In other words, I would get the outcome that I wanted.

Thinking about life in terms of desired outcomes is a recipe for dissatisfaction.
There is gratitude when my housemate chooses to wash their dishes, while if I somehow controlled their actions via the right words, all I get is smug self-satisfaction.
And what of the alternate scenario?
If they didn't wash their dishes after our conversation, I can be mad at myself for not saying those magic correct words, and mad at the housemate for depriving me of my desired outcome.

I find it helpful, through life, to remind myself that I am fundamentally not in control of outcomes.
All that I can choose -- and even that, with great difficulty -- is how I feel in a given moment.
I can feel drained, avoidant, anxious, and withdrawn from the problem of climate change -- or I can be motivated to engage through feelings of love, the joy of building and creation.
I can be frustrated at my housemate, or I can be curious and supportive of the challenges in their life, and inspired to build a more harmonious household (including by being less frustrated in general).

I think the freedom to choose my own feelings in the moment is the greatest possible freedom.
It often seems non-existent -- always, the temptation to react in a way that I might later regret, not in alignment with the kind of person I would like to cultivate.
So, it is difficult enough to obtain this freedom without constantly, voluntarily surrendering it to others.

This is why *makes me feel* is such an agency-robbing expression.
My usual retort to this, inside my own head, is "nobody can *make* you feel anything!"
People show up in the way that they choose, and then I feel about it the way that I feel.
Sometimes, my reframing of this phrase takes an [NVC](https://www.cnvc.org/learn-nvc/what-is-nvc) turn -- "when you leave your dishes unwashed, I feel...".
When talking of others, I can pivot "I'm sorry I made you feel X" into an opportunity to think about exactly what I said or did that provoked a reaction.
For instance, seeing that a person is angry, I can take a moment to think about what I said which provoked anger.
Saying "I'm sorry I said X", instead of "I'm sorry I made you angry" acknowledges the person's feelings and my role in them, while leaving intact their agency in the situation.

## JUST ##

I got this from one of my housemates, who pointed out that the word *just* is often doing a lot of heavy lifting in a sentence.
Having trouble with your boss/spouse/child/pet/inanimate object?
Why not *just* ...?
This can often be quite patronizing.
There's an impression that the solution to a problem is trivial, deadening curiosity through a mistaken belief that you already know the right answer.

There's no catch-all reframing for *just*, but many paths depending on the conversation and relationship.
Is your manager being a jerk in meetings?
A common script devolves into naive solution-ism -- "You could *just* complain to **their** manager!"
When you notice that *just*, that's the cue.
One option is sympathy -- "Damn, that sounds terrible, I would be so mad in your situation."
Another option is collaborative problem solving, but with curiosity -- "Does **their** manager know about this behavior? How would she react if she knew?"

## Putting it all together ##

An Igor of a few years ago might say something like "My manager makes me feel so angry in meetings! I should just go complain to the VP."
A hopefully-more-self-aware Igor of today might notice several opportunities for deeper reflection here.
Why am I feeling angry about the situation?
How can I work, both internally and externally, towards a different reaction in that situation?
Do I want to bring external parties to help mediate the conflict?
Would that help the situation, or escalate it?

The goal is not analysis paralysis.
Instead, I want to give myself the chance to avoid reacting to situations, because my reactions often have the opposite effect from the one I desire.
If I avoid reacting and create spaciousness even in difficult situations, I can use the spaciousness to choose a path most likely to help me feel best in the future.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Kitchen Table Decisions]]></title>
            <link>https://igor.moomers.org/posts/kitchen-table-decisions</link>
            <guid>https://igor.moomers.org/posts/kitchen-table-decisions</guid>
            <pubDate>Mon, 09 Aug 2021 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
I've been spending a bunch of time, both personally, and professionally, thinking about the clean energy transition and "kitchen table decisions".
Some examples of kitchen-table decisions:

* where to live
* how to heat the home/heat water for home use
* whether to get solar panels
* whether to use an electric car
* whether to get battery storage
* how to insulate the home
* how to do laundry and drying
* how to cook food in the home
* what kind of food to eat
* what kinds of vacations to take
* how and when to shop for consumer goods

These are in (very rough) order of importance in terms of climate impact.
My professional thoughts on this have mostly come from my involvement in the [Rewiring America](https://www.rewiringamerica.org/) project.
I've been working on data mining and analysis to quantify the impact of some of these decisions.
For instance, I helped put together [a map](https://map.rewiringamerica.org/) which shows which households would most benefit from switching to [heat pump](https://www.carrier.com/residential/en/us/products/heat-pumps/what-is-a-heat-pump-how-does-it-work/)-based heating, and also to quantify the [CO2e](https://coolerfuture.com/en/blog/co2e) savings from such a transition.

Personally, I am thinking of these projects as a co-owner at [Chrysalis](https://chrysalis.community/).
Because we are co-housing, we get a multiplier effect from many of these decisions.
For instance, a single heat pump space heater or water heater for our one household does the work of what would otherwise take three or four "normal" nuclear-family households' worth of equipment.
Our group is pretty committed to making long-term investments that benefit the environment, and we're in the process of making decisions on many of them.

As we do the research and make big investments, I would also like to document what we are learning.
I'm envisioning this post being the entry-point for more in-depth discussions.
Please get in touch if you have valuable experiences to share, or would like to know more about some of these subjects.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[x86_64 on an Apple M1 MacBook]]></title>
            <link>https://igor.moomers.org/posts/navigating-arch-on-osx</link>
            <guid>https://igor.moomers.org/posts/navigating-arch-on-osx</guid>
            <pubDate>Tue, 11 May 2021 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
I started a new job, and this job came with a new computer -- an Apple M1 MacBook.
I had to do quite a bit of work to get my usual dev environment set up, mostly having to do with the transition from the `x86_64` architecture to `arm64`.
Some tips and tricks are documented here.

## Two `homebrew`s

I found [this stack overflow question](https://stackoverflow.com/a/64951025/153995) invaluable.
On my computer, I had already run the normal homebrew install command:

```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

This installed the native (`arm64`) homebrew into `/opt/homebrew`.
To also add an x86_64 version of homebrew, I did:

```bash
cd /usr/local
sudo mkdir homebrew
sudo chown :staff homebrew
sudo chmod g+w homebrew
curl -L https://github.com/Homebrew/brew/tarball/master | tar xz --strip 1 -C homebrew
```

I then followed the stack overflow question, and added an alias to my `.bash_profile`:

```bash
alias brow='arch --x86_64 /usr/local/homebrew/bin/brew'
```

## Python with `asdf` / `pyenv`

Apparently, Python 3.9.1 and newer [natively supports arm64](https://github.com/pyenv/pyenv/issues/1768).
Alas, I needed Python 3.8.9, which is what my co-workers use:

```bash
$ asdf install python 3.8.9
python-build 3.8.9 /Users/igor47/.asdf/installs/python/3.8.9
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew

<-- snip -->

BUILD FAILED (OS X 11.2.3 using python-build 1.2.27-29-gfd3c891d)

<-- snip -->

configure: error: Unexpected output of 'arch' on OSX
```

This still does not work if you explicitly specify an arch:

```bash
$ arch -x86_64 asdf install python 3.8.9
python-build 3.8.9 /Users/igor47/.asdf/installs/python/3.8.9
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew

<-- snip -->

Installing Python-3.8.9...
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
WARNING: The Python readline extension was not compiled. Missing the GNU readline lib?
ERROR: The Python ssl extension was not compiled. Missing the OpenSSL lib?

BUILD FAILED (OS X 11.2.3 using python-build 1.2.27-29-gfd3c891d)
```

You'll need to install the `x86_64` versions of readline and openssl to make progress!
The second install of brew to the rescue (notice, I'm using my alias, `brow` and not `brew`, below):

```bash
$ brow install readline
$ brow install openssl
$ brow install xz
```

Now, you need to point your compiler and linker at those libraries.
I figured I might need to do this again, so I am using my [direnv](https://github.com/asdf-community/asdf-direnv) setup to make this easier.
I created a directory -- `mkdir ~/x86_64` -- and added a `.envrc` that looks like this:

```bash
use asdf
export BROW="/usr/local/homebrew"
export LDFLAGS="-L${BROW}/opt/openssl@1.1/lib -L${BROW}/opt/readline/lib -L${BROW}/opt/xz/lib"
export CPPFLAGS="-I${BROW}/opt/openssl@1.1/include -I${BROW}/opt/readline/include -I${BROW}/opt/xz/include"
export PATH="${BROW}/bin:$PATH"
export ARCHPREFERENCE="x86_64"
```

A quick `direnv allow`, and now, when I `cd ~/x86_64`, I am in a good place to do Rosetta-type things:

```bash
$ uname -a
Darwin planetarium.local 20.3.0 Darwin Kernel Version 20.3.0: Thu Jan 21 00:06:51 PST 2021; root:xnu-7195.81.3~1/RELEASE_ARM64_T8101 arm64
$ cd ~/x86_64
direnv: loading ~/x86_64/.envrc
direnv: using asdf
direnv: loading ~/.asdf/installs/direnv/2.28.0/env/3601927912-4079855243-2014015008-304321844
direnv: using asdf rust 1.52.1
direnv: using asdf python 3.8.9
direnv: using asdf direnv 2.28.0
direnv: export +ARCHPREFERENCE +CARGO_HOME +CPPFLAGS +LDFLAGS +RUSTUP_HOME ~PATH
$ arch uname -a
Darwin planetarium.local 20.3.0 Darwin Kernel Version 20.3.0: Thu Jan 21 00:06:51 PST 2021; root:xnu-7195.81.3~1/RELEASE_ARM64_T8101 x86_64
```

Now, from this directory, I can easily install my Python.
I didn't have to specify `-x86_64` to `arch`, because this is already set by my `ARCHPREFERENCE`.

```bash
$ arch asdf install python 3.8.9
<-- snip -->
Installed Python-3.8.9 to /Users/igor47/.asdf/installs/python/3.8.9
```
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Arch Linux Configuration]]></title>
            <link>https://igor.moomers.org/posts/arch-linux-config</link>
            <guid>https://igor.moomers.org/posts/arch-linux-config</guid>
            <pubDate>Wed, 27 Jan 2021 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
After encountering some issues with Ubuntu on my Thinkpad X1 Carbon 6th-gen, I went back to Arch Linux.
So far, I'm loving it, but I did have to figure out a few things.
This post documents what I did to customize the machine to my liking.

## Re-mapping CapsLock to Ctrl 

My brain is very used to the caps lock key being the control key.
When this is not remapped, I end up constantly switching into all-caps mode and being very confused when things don't work correctly.
I used [interception](https://gitlab.com/interception/linux/tools) and the [caps2esc](https://gitlab.com/interception/linux/plugins/caps2esc) plugin to make the re-mapping work in a virtual console as well as in X.

First, install `caps2esc` (I use the [yay](https://github.com/Jguer/yay#installation) package manager); from the `yay` listing, I picked the `community/interception-caps2esc` package, which was option `1`.

```bash
$ yay caps2esc
```

Next, configure the re-mapping.
I use `-m 1` to disable the mode where pressing `esc` turns on caps lock, since I like the escape key to just remain the escape key and don't really need caps lock.
This configuration goes into the file `/etc/interception/udevmon.yaml`:

```yaml
- JOB: intercept -g $DEVNODE | caps2esc -m 1 | uinput -d $DEVNODE
  DEVICE:
    EVENTS:
      EV_KEY: [KEY_CAPSLOCK]
```

Finally, enable and activate the `udevmon` service, and enjoy life without caps lock:

```bash
$ sudo systemctl enable udevmon
$ sudo systemctl start udevmon
```

## Auto-suspend on low battery

Occasionally, I leave my laptop with the lid open for a reason.
Other times, I just forget about it.
I always feel bad coming back to a dead computer with a battery at 0%, since this can [reduce the lifetime of the battery](https://electronics.stackexchange.com/questions/164103/if-li-ion-battery-is-deeply-discharged-is-it-harmful-for-it-to-remain-in-this-s).

To prevent this, I have a script which will auto-suspend my computer if the battery level drops too low.
I use this script (which I put into my `~/bin/auto_suspend.sh` and made executable with a `chmod u+x`):

```bash
#!/bin/bash

battery_level=`cat /sys/class/power_supply/BAT0/capacity`

if [ "$battery_level" -le 5 ]
then
  notify-send "Battery critical. Battery level is ${battery_level}%! Suspending..."
  sleep 5
  systemctl suspend
elif [ "$battery_level" -le 8 ]
then
  notify-send "Battery low. Battery level is ${battery_level}%!"
fi
```

To run this script periodically, I used [systemd timers](https://wiki.archlinux.org/index.php/Systemd/Timers) (since Arch does not come with a cron daemon installed in the base system).
First, I created a unit file for my auto-suspend service, in `~/.config/systemd/user/auto_suspend.service`:

```
[Unit]
Description=Checks battery and suspends if low

[Service]
Type=oneshot
ExecStart=/home/igor47/bin/auto_suspend.sh
```

Next, I created a timer which will periodically activate this service (in `~/.config/systemd/user/auto_suspend.timer`):

```
[Unit]
Description=Check battery level and auto-suspend

[Timer]
OnBootSec=1m
OnUnitActiveSec=1m

[Install]
WantedBy=timers.target
```

My config will activate the timer 1 minute after boot-up, and also 1 minute after every activation.
I then enable the timer:

```bash
$ systemctl --user daemon-reload
$ systemctl --user enable auto_suspend.timer
$ systemctl --user start auto_suspend.timer
```

You can check the status of the timer like so:

```bash
$ systemctl --user list-timers
```
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Minimum Viable Air Quality Monitoring]]></title>
            <link>https://igor.moomers.org/posts/minimal-viable-air-quality</link>
            <guid>https://igor.moomers.org/posts/minimal-viable-air-quality</guid>
            <pubDate>Sun, 13 Sep 2020 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
If you, like me, live in the Bay Area, you may have woken up to something like this last week:

![Outdoor Air Quality](/images/minimal-aq-outside.jpg)

This is my back yard, but things looked just as dire in the house:

![Indoor Air Quality](/images/minimal-aq-inside.jpg)

If you checked a website, like [PurpleAir](https://www.purpleair.com/map?opt=1/mAQI/a10/cC0#8/38.138/-121.702) or [AirNow](https://fire.airnow.gov/?lat=37.86988000000008&lng=-122.27053999999998&zoom=12), you would see scary numbers and dire warnings about staying indoors and avoiding the air outside.
However, how *is* the air inside your house?
Are you actually much safer?

Thankfully, answering this question has gotten cheaper and easier in recent years.
Low-cost air quality sensors have been built into reasonably inexpensive hardware.
The [PurpleAir indoor sensor](https://www2.purpleair.com/products/purpleair-pa-i-indoor), for instance, is only $200.

The heart of that device is a Plantower laser dust sensor which can be [had on AliExpress for about $12](https://www.aliexpress.com/item/32639894148.html).
With PurpleAir, you're paying for more than just the sensor.
That device includes a [BME280](https://learn.adafruit.com/adafruit-bme280-humidity-barometric-pressure-temperature-sensor-breakout) temperature/pressure/humidity sensor, a microcontroller that can access a WiFi network, and GPS to localize the device on a map, all conveniently packed into a nice-looking enclosure.
You pay for integration of all that hardware, firmware to make it work together, and software that integrates the data on the back-end and gives you a nice map to view it on.
You're also paying for hosting to allow PurpleAir to store your data and display it to others.
So, by buying one, you're not only becoming better-informed, but you're also helping folks in your neighborhood be better-informed too -- a *positive* externality, for once!

However, if you're brave enough, there are advantages to building the hardware yourself.
Getting several sensors inside and outside your house can help you understand more about the air you're breathing.
It can help quantify interventions, like installing an air purifier or running [a box fan](https://www.texairfilters.com/how-a-merv-13-air-filter-and-a-box-fan-can-help-fight-covid-19/).
If you have a large house, you can see how quality differs throughout the house.

Also -- while for most readers of this blog, $200 is probably not a lot of money, it *is* a lot of money for *most* people.
The folks who are most regularly affected by poor air quality are exactly those who cannot just blow $200 to satisfy their curiosity.
Making air quality monitoring much, much cheaper can potentially make life much better for those people.

This post will help you build a minimum viable sensor that works with your laptop.
By using a pre-existing computer, you can avoid having to pay for additional hardware like a microcontroller with WiFi.
Your computer is probably already on WiFi, so configuration is also minimized.

You can also avoid sending data to a cloud.
This is a double-edged sword.
On the one hand, cloud hosting costs are eliminated.
On the other hand, you lose the positive benefits of open data.
Also, if you want visualizations, you have to make them yourself.

Anyway, on to the construction!

## Hardware 

The bill of materials for this project includes two things:
* Plantower PMS7003 -- maybe [this one](https://www.aliexpress.com/item/32784279004.html)
* A USB-to-TTL cable like [this one](https://amzn.to/2GYAYAD)

This should run you about $20, all told.
I also needed the following tools and supplies:
* soldering iron and solder
* heat shrink tubing and a lighter
* wire clippers/strippers

The PMS7003 sensors I had did not come with the little breakout board, and this device has *very* small pins.
To connect to it, I used some 30AWG wire-wrap wire and a wire-wrapping tool.
If you get a PMS7003 with a breakout and cable, you won't need this.

My first task was connecting to the PMS7003.
Here's the pinout, from the [data sheet](https://download.kamami.com/p564008-p564008-PMS7003%20series%20data%20manua_English_V2.5.pdf):

![PMS7003 pinout](/images/minimal-aq-pms7003-pinout.gif)

You'll only need 4 wires -- for power, ground, serial Tx, and Rx.
I stripped the tiniest bit of insulation on my wire wrap wire, tinned both the wire and the pin, and touched them together with the soldering iron (very carefully, to avoid bridging the pins).

![soldering wires](/images/minimal-aq-wires.jpg)

Here's a (blurry) photo with all 4 wires connected:

![soldering wires](/images/minimal-aq-all-wires.jpg)

Next, I stripped my USB TTL cable and tinned the exposed multi-strand wires.
Tinning just means touching the soldering iron to the wire and allowing some solder to flow onto the wire.
This makes the wire stiff, so I can wire-wrap onto it using my wire-wrap tool.

![USB TTL](/images/minimal-aq-usb-ttl-tinned.jpg)

Next, I wire-wrapped the exposed, tinned wires.
Black and Red are for ground and power, and get connected together.
On my TTL cables, `green` is for `Tx` (transmit) and `white` is for `Rx` (receive).
You need to connect the `Tx` of the PMS7003 to the `Rx` of the TTL cable, and vice versa.
I made it easier for myself by making my `white` wire on the PMS7003 be the `Tx` pin (Pin9), so I could just do white-to-white:

![Completed Wiring](/images/minimal-aq-wired-up.jpg)

After wire-wrapping, I soldered the wires together (belt *and* suspenders!)
Finally, I put some heat-shrink tubing over the wires and shrunk it using a lighter.
I then taped the device right to the USB plug with electrical tape.
Be sure to avoid covering up the air intake and exhaust ports on the PMS7003.
Also, be mindful of USB polarity.
The orientation that I have in the photo is probably the way your USB port is aligned on your computer, so the PMS7003 ends up on top of the USB plug.

![Completed device](/images/minimal-aq-complete.jpg)

Here it is, plugged into my computer and ready for software integration:

![Plugged into computer](/images/minimal-aq-in-computer.jpg)

## Software

Next, you'll need some software to integrate with this hardware.
The cables that I got use the Prolific Technology PL2303 chipset.
This is already supported on Linux; if you plug it in and run `dmesg | tail`, you'll see something like this:

```
[1457958.125315] usb 1-2: new full-speed USB device number 38 using xhci_hcd
[1457958.281810] usb 1-2: New USB device found, idVendor=067b, idProduct=2303, bcdDevice= 4.00
[1457958.281816] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[1457958.281820] usb 1-2: Product: USB-Serial Controller
[1457958.281823] usb 1-2: Manufacturer: Prolific Technology Inc.
[1457958.283810] pl2303 1-2:1.0: pl2303 converter detected
[1457958.285306] usb 1-2: pl2303 converter now attached to ttyUSB0
```

Looks like it got recognized correctly, and is available at `/dev/ttyUSB0`.
On OSX, you will probably have to [install a driver](http://www.prolific.com.tw/US/ShowProduct.aspx?p_id=229&pcid=41), but you can ignore the warning about restarting your computer.
On these machines, the device will probably show up at `/dev/tty.usbserial`.

While it has power, the PMS7003 will be outputting a continuous stream of binary data containing the particulate readings onto that TTY device.
The [data sheet](https://download.kamami.com/p564008-p564008-PMS7003%20series%20data%20manua_English_V2.5.pdf) specifies the protocol:

![PMS7003 protocol](/images/minimal-aq-pms7003-protocol.png)
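
If you want a rough picture of how that protocol looks in code, here's a minimal sketch of reading a single frame with `pyserial`; this is not the `mini-aqm` implementation, and the field offsets are worth double-checking against the data sheet:

```python
# Minimal sketch: read one 32-byte PMS7003 frame over serial (pip install pyserial)
import struct
import serial

def read_frame(port: str = "/dev/ttyUSB0"):
    """Return the atmospheric PM readings from a single frame, or None on error."""
    with serial.Serial(port, baudrate=9600, timeout=2) as ser:
        # sync on the two start characters 0x42 0x4D ("BM")
        for _ in range(64):
            if ser.read(1) == b"\x42" and ser.read(1) == b"\x4d":
                break
        else:
            return None

        body = ser.read(30)  # frame length (2) + 13 data words (26) + checksum (2)
        if len(body) < 30:
            return None

        data = struct.unpack(">13H", body[2:28])
        checksum, = struct.unpack(">H", body[28:30])

        # the checksum covers every frame byte except the checksum itself
        if checksum != (0x42 + 0x4D + sum(body[:28])) & 0xFFFF:
            return None

        # data words 4-6 are PM1.0 / PM2.5 / PM10 under atmospheric conditions (ug/m3)
        return {"pm1.0": data[3], "pm2.5": data[4], "pm10": data[5]}

if __name__ == "__main__":
    print(read_frame())
```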

I recommend using my [mini-aqm repo](https://github.com/igor47/mini-aqm), which includes a Python implementation of this protocol.
You will need a recent Python (I tested with `3.8.3`; anything above `3.6` should probably work).
Run these commands to grab the repo, install the code, and begin reading data:

```bash
git clone https://github.com/igor47/mini-aqm.git
cd mini-aqm/
pip install poetry
poetry install
poetry run ./main.py
```

And you should see the output:

```
beginning to read data from /dev/ttyUSB0...
PM 1.0: 32  PM 2.5: 54  PM 10: 73  AQI: Unhealthy for Certain Groups
PM 1.0: 31  PM 2.5: 54  PM 10: 73  AQI: Unhealthy for Certain Groups
```

Here's a screenshot:

![Runtime Screenshot](/images/minimal-aq-screenshot.png)

`mini-aqm` tries to print informative error messages.
If `main.py` exits immediately without printing any air quality measurements, read the error message and try to resolve it.
If you suspect a hardware issue, use a multi-meter to check for a short between pins.

## Visualizations

I am running [telegraf](https://www.influxdata.com/time-series-platform/telegraf/) on my laptop.
I've configured telegraf to read data from the device and store it in [influxdb](https://www.influxdata.com/products/influxdb-overview/), for graphing with [grafana](https://grafana.com/).
Using this stack, here's a visualization of the past few months of PM data in my workshop:

![Last Few Months in Particulates](/images/minimal-aq-last-3-months.png)

Using this setup, it's convenient to perform experiments and look at results.
For instance, here is what happened with indoor air quality when we turned on our central fan system, which pushes air through two MERV13 filters:

![Central Air](/images/minimal-aq-central-fan.png)

Here's where we take a box fan with a MERV13 filter and run it near the sensor:

![Box Fan](/images/minimal-aq-box-fan.png)

Here's what happens when we just point a normal fan at the device, with no filter:

![Normal Fan](/images/minimal-aq-normal-fan.png)

I noticed that the air quality in my bedroom was not great.
The basement door is right outside my bedroom door, and is pretty leaky, so I started keeping my bedroom door closed.
I also taped over my old, leaky windows with masking tape:

![Taped-over windows](/images/minimal-aq-tape-on-windows.jpg?cache=no)

These interventions had a real effect!

![Normal Fan](/images/minimal-aq-window-tape-effect.png)

Finally, it *is* possible to have *good* air quality.
I'm currently sitting in my taped-up room, with doors closed, right next to a box-fan-with-filter:

![Good Setup](/images/minimal-aq-getting-to-good.jpg)

The results, with other rooms in the house and outside the house on my Grafana dashboard:

![Normal Fan](/images/minimal-aq-all-together.png)

Setting up `telegraf`, `influxdb`, and `grafana` is beyond the scope of this post.
If you do go this route, however, the `mini-aqm` code is already writing a log of collected data into a `measurements.log` file.
You can run the collector while you're working on the visualization setup, and then import the "historical" data you've collected when you're done.

## What's Next?

I'd like to help you breathe better air.
Reach out if you need help building these devices, or have any questions at all.
Also, if you're in the Bay Area, I have a few spare PMS7003 devices.
If you'd like one without waiting for them to be shipped from China, please reach out, and I can leave one on my porch for you to come grab.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Kubernetes for ETL]]></title>
            <link>https://igor.moomers.org/posts/building-etl-kubernetes</link>
            <guid>https://igor.moomers.org/posts/building-etl-kubernetes</guid>
            <pubDate>Thu, 10 Sep 2020 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
From December 2018 until May of 2020, I worked as a software engineer at [Aclima](https://aclima.io/).
While I was there, I ended up building an in-house [ETL](https://en.wikipedia.org/wiki/Extract,_transform,_load) system written entirely in Python on Kubernetes.
Though fairly generic, this system is, and will likely remain, closed-source.
However, I still learned a lot -- about ETL, Kubernetes, data science workflows -- and this post is an attempt to summarize those learnings.

I would love to reflect generally on my time at Aclima, similar to my [reflections on leaving Airbnb](/thoughts-on-leaving-airbnb).
I found it difficult to approach this as a single post, so I'm going to do it in pieces, of which this is one.
Stay tuned for more posts about other things I learned in the last 18 months.

## What's Aclima ##

When I joined, Aclima had just completed its Series A, and was focused on building and scaling a new product.
The product's customers were regulators, such as the Bay Area's own [BAAQMD](https://www.baaqmd.gov/) (or "the district").
These folks are in charge of making the air that citizens in their districts breathe as clean as possible.
To do this, they need *data* -- how good is the air, what are some problem areas or pollutants, what is the effect of various interventions.

Traditionally, that data has primarily come from permanent EPA monitoring sites.
These contain very expensive regulatory- or lab-grade equipment that has been strenuously vetted to give accurate readings.
This equipment has to be secured and properly maintained to retain accuracy.
As a result, there are not that many such regulatory sites -- here's a map of the Bay Area ones from the [airnow.gov](https://fire.airnow.gov/?lat=37.86988000000008&lng=-122.27053999999998&zoom=12#) website:

![Regulatory sites in the Bay Area](/images/epa-sites-bay-area.jpg)

The problem is that, while the reference stations give very accurate data, that data is only very accurate for the small area immediately adjacent to the regulatory site.
Air quality, on the other hand, can differ dramatically, even block-by-block.
It's strongly affected by the presence of [major roadways](https://www.epa.gov/air-research/research-near-roadway-and-other-near-source-air-pollution), [toll booths](https://www.macfound.org/media/files/HHM_Research_Brief_-_Living_Along_a_Busy_Highway.pdf), [restaurants](https://www.theguardian.com/environment/2019/oct/10/restaurants-contribution-to-air-pollution-revealed), features of the landscape, and many other factors.

Aclima's goal was to quantify this local variability by collecting "hyperlocal" air quality measurements.
Even with the advent of [low-cost sensors](https://www.alibaba.com/product-detail/PLANTOWER-Laser-PM2-5-DUST-SENSOR_62480702957.html) and [cheap connected devices](https://en.wikipedia.org/wiki/ESP32), installing and maintaining thousands of devices on every street would be a challenge.
Instead, Aclima builds out vehicles equipped with a suite of sensors, and then drives those vehicles down every block -- many times, to collect a representative number of samples.

## Overall Architecture ##

What are the technical details around implementing such a data collection system?
There are lots of moving -- sometimes, literally moving -- pieces here.
First, a bunch of hardware must be spec'ed, sourced, integrated, and tested.
The hardware needs software which can collect data from a suite of sensors, and then the software needs to report this data back to a centralized backend.
The data must be collected, stored, and processed.
Finally, there's a presentation or product layer, which allows customers to gain insights from the pile of data points.

Over my time at Aclima, I worked on all of these components.
For instance, I built a tool which generated [STM32](https://www.st.com/en/microcontrollers-microprocessors/stm32-32-bit-arm-cortex-mcus.html) firmware and then flashed it onto boards using [dfu-util](http://dfu-util.sourceforge.net/).
I experimented with rapidly prototyping low-cost hardware (post coming soon).
I also worked on fleet-level IOT device management and data collection.

However, most of my time at Aclima was spent on the pipeline for processing the data that our vehicles and sensors collected.
I'd like to focus this post on the technical details behind this component.

## Why Kubernetes? ##

My immediate project was taking some code written by a data scientist, which had only ever been running on her laptop, and making it run regularly and on some other system.
This involved configuring an environment for this code -- the correct versions of python, libraries, and system dependencies.
I would need to keep the environment in-sync between the OSX laptops of other engineers and data scientists, and the cloud environment where the code ran in production.
I wanted the ability to have representative local tests, but also a rapid-iteration development environment.
Finally, I didn't want to introduce too many workflow changes -- only the barest minimum necessary.

The choice of tooling was already somewhat constrained.
Aclima was already running in [Google Cloud](https://cloud.google.com/), and many of their existing services ran in pods on Google's [Kubernetes Engine](https://cloud.google.com/kubernetes-engine/).
Most of my experience was with raw EC2 instances configured with [chef](https://medium.com/airbnb-engineering/making-breakfast-chef-at-airbnb-8e74efff4707), and I was unfamiliar with the Docker ecosystem.
However, I was also unfamiliar with Cloud ETL tools like [Dataflow](https://cloud.google.com/dataflow/) and was reluctant to jump into an ecosystem new to both me and my colleagues.
The other obvious choice would have been [Airflow](https://airflow.apache.org/); Google even has a [hosted version](https://cloud.google.com/composer/).
However, at the time (early 2019), Airflow did not have good support for Kubernetes, and I would have had to configure instances for code to execute on.

In any case, my first task was fairly simple -- take a Python program which is already running locally, and make it run in the cloud.
I wrote a Dockerfile to correctly configure the environment, copied the code into the docker image, and uploaded it to [GCR](https://cloud.google.com/container-registry/).
I used the [Kubernetes CronJob controller](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) to get it to run daily, and [K8s secrets](https://kubernetes.io/docs/concepts/configuration/secret/) to manage credentials for the process.
By configuring the [cluster autoscaler](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler), we could avoid paying for cluster resources when we didn't need them.
Whenever the job ran, if the cluster didn't have enough resources, GKE would automatically add them, and then clean up after the job completed.

The local setup for users involved installing docker and the [gcloud](https://cloud.google.com/sdk/gcloud/) tool, and getting `gcloud` authenticated.
`gcloud` takes care of managing permissions for the K8s cluster/the [`kubectl` command](https://kubernetes.io/docs/reference/kubectl/overview/), which is invoked to deploy the `CronJob` and `Secret` manifests.
To shield users (and myself!) from the raw `docker` and `kubectl` commands, I immediately added [`invoke`](https://www.pyinvoke.org/) to the data science repo.
After getting the ETL job running locally, the workflow to deploy it to production was a simple `inv build` and `inv deploy`.
I also created convenience tooling, like `inv jobs.schedule`, to do a one-time run of the job in the cloud, and `inv jobs.follow` to tail its output.
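
To give a flavor of that tooling, here's a rough sketch of what such a `tasks.py` might look like; the image name and commands below are illustrative, not the actual Aclima setup:

```python
# Illustrative invoke tasks; IMAGE is a hypothetical GCR image name
from invoke import task

IMAGE = "gcr.io/my-project/etl:latest"

@task
def build(c):
    """Build the ETL docker image and push it to GCR."""
    c.run(f"docker build -t {IMAGE} .")
    c.run(f"docker push {IMAGE}")

@task
def deploy(c):
    """Apply the CronJob and Secret manifests to the cluster."""
    c.run("kubectl apply -f k8s/")
```

With something like this in place, `inv build` and `inv deploy` are just thin wrappers over `docker` and `kubectl`.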

These initial steps were simple and easy enough that we decided to continue with Kubernetes for a while, and reconsider when we hit snags.

## Scaling Up to Multiple Jobs ##

One job does not an ETL pipeline make.
A few changes were required to add a second job to the pipeline.
First, we wrote the second job in the same repo as the first.
We standardized on a job format -- a Python class with a signature like so:

```python
class PerformType(Protocol):
    def __call__(
      self, start: pendulum.DateTime, end: pendulum.DateTime, config: Dict[str, Any]
    ) -> None:
        pass

class JobType(Protocol):
    perform: PerformType
```

We then created a standard `main.py` entrypoint which would accept parameters like the job name and options, and dispatch them to the correct job class.
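
As a sketch of the idea (the registry, the toy job, and the `yesterday`/`today` helpers here are illustrative, not the actual Aclima code), the entrypoint might look something like this:

```python
# Illustrative main.py: look up a job class by name and call its perform()
import argparse
import json
from typing import Any, Dict

import pendulum

class EchoJob:
    """A toy job satisfying the JobType protocol above."""
    @staticmethod
    def perform(start: pendulum.DateTime, end: pendulum.DateTime, config: Dict[str, Any]) -> None:
        print(f"processing {start} .. {end} with {config}")

JOBS = {"echo": EchoJob}  # every job class gets registered here

def to_time(value: str) -> pendulum.DateTime:
    # convenience helpers so users can say "yesterday" instead of a timestamp
    if value == "yesterday":
        return pendulum.yesterday()
    if value == "today":
        return pendulum.today()
    return pendulum.parse(value)

def main() -> None:
    parser = argparse.ArgumentParser(description="dispatch an ETL job by name")
    parser.add_argument("job", choices=JOBS)
    parser.add_argument("--start", default="yesterday")
    parser.add_argument("--end", default="today")
    parser.add_argument("--config", default="{}", help="job options as JSON")
    args = parser.parse_args()

    JOBS[args.job].perform(to_time(args.start), to_time(args.end), json.loads(args.config))

if __name__ == "__main__":
    main()
```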

The Kubernetes work is a little more complicated.
The jobs look basically the same, but their arguments -- in K8s-speak, the pod spec container args -- are different.
The solution is to template the K8s manifests, but templating a data structure (in this case, `yaml`) like a string (e.g., with [jinja](https://jinja.palletsprojects.com/en/2.11.x/)) is a recipe for disaster, and popular tools like [jsonnet](https://jsonnet.org/) seem quite heavyweight.

We managed to find [json-e](https://json-e.js.org/), which hit just the right note.
This allows you to template and render a pod manifest:

```yaml
spec:
  containers:
    - name: {$eval: 'container_name'}
      image: {$eval: 'image'}
      args: {$eval: 'args'}
```

with something like:

```python
pod_manifest = jsone.render(
  YAML(typ="safe").load(open('pod_manifest.yaml')),
  {
    'container_name': job_name,
    'image': 'latest',
    'args': [job_name, start_time, end_time, job_config]
  }
)
```

You end up with a valid pod manifest as a data structure.
You can then render it into your `CronJob` manifest:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
spec:
  schedule: {$eval: schedule}
  jobTemplate:
    spec:
      template: {$eval: pod_manifest}
```

in the same way:

```python
cron_manifest = jsone.render(
  YAML(typ="safe").load(open('cron_manifest.yaml')),
  {
    'schedule': '0 2 * * *',
    'pod_manifest': pod_manifest,
  }
)
```

The resulting `cron_manifest` can be serialized to YAML again, and passed to `kubectl apply` to update your resources:

```python
with NamedTemporaryFile(mode="w") as ntf:
  # dump() writes to a stream, so pass the (text-mode) temp file directly
  YAML(typ="safe").dump(cron_manifest, ntf)
  ntf.flush()

  os.system(f"kubectl apply -f {ntf.name}")
```

Of course, we updated our `inv` tooling to support gathering input from users about which jobs they wanted to deploy, and with which arguments.
We also provided helpers for the `main.py` dispatcher, so you could specify arguments like `yesterday` to a job and have it run over the previous day.

## Inter-job Dependencies ##

Soon enough, we had dozens of jobs, and they had inter-job dependencies.
You can only go so far by saying "job A runs at 2 am, and usually finishes in an hour, so we'll run job B at 3:30 am".

This was time, again, to re-examine existing popular open-source tooling, and we seriously considered [Argo](https://argoproj.github.io/argo/).
But again, some functionality was missing -- specifically, we got bit by [this bug](https://github.com/argoproj/argo/issues/703#issuecomment-494183536) preventing the templating of resources, which was a step back from being able to template any part of the manifest using `json-e`.

Instead, we ended up creating two new concepts.
The first, a `workflow`, listed all the jobs that depended on each other, along with their dependency relationships, encoding a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph).
It's easy to parse a YAML list, and verify that it is indeed a DAG, with [networkx](https://networkx.github.io/documentation/stable/reference/algorithms/dag.html).
Creating a workflow specification also gave us a place to keep track of standard job parameters, such as command-line options or resource requirements for a job.
(As an aside, resource management for jobs was a constant chore, especially as the product was scaled up and data volumes increased.)
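
As a sketch of that parsing-and-validation step (the file layout and keys here are illustrative), building the graph with `networkx` looks something like this:

```python
# Illustrative workflow loader: parse a YAML job list and confirm it forms a DAG
import networkx as nx
from ruamel.yaml import YAML

def load_workflow(path: str) -> nx.DiGraph:
    with open(path) as f:
        jobs = YAML(typ="safe").load(f)  # a list of {name, depends_on, params, ...}

    graph = nx.DiGraph()
    for job in jobs:
        graph.add_node(job["name"], **job.get("params", {}))
        for dep in job.get("depends_on", []):
            graph.add_edge(dep, job["name"])  # edge points from dependency to dependent

    if not nx.is_directed_acyclic_graph(graph):
        raise ValueError(f"workflow {path} contains a cycle")

    return graph
```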

The second concept was a `dispatcher` job.
Instead of actually doing ETL work, this type of job, when dispatched using a `CronJob` resource, would accept a workflow as an argument, and then dispatch all the jobs in that workflow definition.
A job would not get dispatched until the jobs it depended on completed successfully.
This allowed us to schedule a workflow to run daily instead of scheduling a pile of jobs individually.
Most of the code was recycled from the code users already ran locally to schedule their jobs -- the `dispatcher` was doing the same thing, just from *inside* Kubernetes.

## Monitoring via Web UI ##

As the complexity of the ETL pipeline grew, we began encountering workflow failures.
We needed an audit log of which jobs ran for which days, so we could confirm that we were delivering the data we processed.
We also needed tooling to reliably re-process certain days of data, and keep track of those re-processing runs.

By this point, the ETL system we were building had run for a year without requiring any UI beyond the one provided by GKE and `kubectl`.
However, to manage the complexity, it seemed like a UI was needed.
I ended up building one using [firestore](https://cloud.google.com/firestore/), React, and [ag-grid](https://www.ag-grid.com/).

Previously, the `dispatcher` job that ran workflows was end-to-end responsible for all the jobs in the workflow.
It ran for as long as any job in the workflow was running.
If any of the jobs failed, the `dispatcher` would exit, and the subsequent jobs would not run.
Likewise, if the `dispatcher` itself failed, the remaining workflow jobs would be left orphaned.

Instead, we turned the workflow `dispatcher` into something that merely manipulated the state in `firestore`.
The job would parse the workflow `.yaml` file into a DAG, and then create `firestore` entries for each job in the graph.
The `firestore` entry would include job parameters like command-line arguments and resource requests, as well as job dependency information.
Jobs at the head of the DAG would be placed into a `SCHEDULED` state, while jobs that were dependent on other jobs were placed into a `BLOCKED` state.

An always-running K8s service called `scheduler` would subscribe to `firestore` updates and take action when jobs were created or changed state.
For instance, if a job was in the `SCHEDULED` status, the `scheduler` would create pods for those jobs via the K8s API, and then mark them as `RUNNING`.
If a task finished (by marking itself as `COMPLETED`), the `scheduler` would notice, and clean up the completed pods.
It would also check whether any jobs were `BLOCKED` waiting on the completed job, make sure all their dependencies had completed, and place them into the `SCHEDULED` state.
If a job marked itself as `FAILED` (via a catch-all exception handler), we had the option to track retries and re-schedule the job.
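
Here's a much-simplified, polling sketch of that state machine; the real `scheduler` subscribed to `firestore` updates, and the collection name, field names, and `create_pod_for` helper below are illustrative stand-ins rather than the actual code:

```python
# Illustrative scheduler loop: move jobs through BLOCKED -> SCHEDULED -> RUNNING
import time
from google.cloud import firestore

db = firestore.Client()
jobs = db.collection("jobs")

def create_pod_for(job_doc) -> None:
    # stand-in for the K8s API calls that launch a pod for this job
    print(f"would create pod for {job_doc.id}")

def tick() -> None:
    # launch anything that is ready to run
    for doc in jobs.where("status", "==", "SCHEDULED").stream():
        create_pod_for(doc)
        doc.reference.update({"status": "RUNNING"})

    # unblock jobs whose dependencies have all completed
    for doc in jobs.where("status", "==", "BLOCKED").stream():
        deps = doc.to_dict().get("depends_on", [])
        if all(jobs.document(d).get().get("status") == "COMPLETED" for d in deps):
            doc.reference.update({"status": "SCHEDULED"})

if __name__ == "__main__":
    while True:
        tick()
        time.sleep(10)
```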

Because the `scheduler` became the only thing that interacted with the K8s API, it made building user-facing tooling easier.
Those tools merely had to manipulate the DB state in `firestore`.
This enabled less technical users, without `gcloud` or `kubectl` permissions, to create, terminate, restart, and monitor job and workflow progress.

From what I can tell, components like the `scheduler` are often built using K8s [operators](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) and [custom resources](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).
However, we found that just running a service with permissions to manage pods is sufficient.
This avoids having to dive too deep into K8s internals, beyond the basic API calls necessary to create and remove pods and check on their status.

## Parallelism ##

Some jobs in our system were trivially parallelizable -- e.g., because they processed a single sensor's data in isolation.
The system of using the `scheduler` to run additional jobs unlocked infinite parallelism inside jobs.
For instance, we could schedule a job like `ProcessAllSensors`.
This job would first list all sensors active during the `start`/`end` interval, and then could create a child job in `firestore` for each sensor.
Creating child jobs was as simple as writing a job entry into `firestore`, with the ID of the sensor to process.
Parallelism was limited only by the auto-scaling constraints on the K8s cluster.

I created an abstraction called `TaskPoolExecutor`, based on the existing [`ThreadPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor).
Each job submitted to the `executor` would run in a different K8s pod.
Not only did this make data processing much faster, it made it more resilient, too.
Previously, if a particular sensor failed in its processing, restarting that processing was complicated.
In the new system, the existing retry system baked into the `scheduler` could retry individual sensors, without the parent job even knowing about it.

## Takeaway ##

My focus at Aclima was on business objectives -- making sure data is delivered, data scientists and other technicians are productive, and we can manage our fleets of vehicles and devices.
In 18 months, I was able to build an infinitely-auto-scalable, reliable parallel job scheduling system and UI accessible to non-technical users.
I was able to build this one step at a time, from "how do I regularly run one cron job" all the way to "how do I parallelize and monitor a run of a job graph over a year of high-precision sensor data".

I think this is a tribute to the power of the abstractions that Kubernetes provides.
It's an infinite pile of compute, and it's pretty easy to utilize it.
This experience has sold me on the premise, and I would definitely use Kubernetes for other projects again.

## Do Differentlys ##

The scheduler and all the code that interacts with K8s was written in Python, mostly because the data processing code was also written in Python.
However, as soon as I wanted to add a UI, I had to begin writing Javascript.
This meant I had to share at least some data structures -- for instance, the structure of a `job` in firestore -- between the two languages.
In the future, if I'm writing a UI, even a CLI, I will consider strongly whether I should just write it in JS to begin with.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AWS MFA on the CLI with `direnv`]]></title>
            <link>https://igor.moomers.org/posts/aws-mfa-cli-direnv</link>
            <guid>https://igor.moomers.org/posts/aws-mfa-cli-direnv</guid>
            <pubDate>Fri, 28 Aug 2020 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
You might already be using multi-factor authentication (MFA) for logins to your AWS account.
This will cause AWS to prompt you for your MFA token when you log in via the web console.
However, if you use AWS via command-line tools (e.g., `terraform` or `aws s3`), you might have issued yourself access keys.
Those are single-factor, and if they leak, anyone on the internet can use them to do horrible things to your account.

We can make your admin AWS accounts safer by requiring MFA, even for API requests.
First, put your account, and the accounts of all other admins in your AWS account, into a group like `AdminMFA`.
This group should have a policy that looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "*"
      ],
      "Resource": "*",
      "Condition": {
        "Bool": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}
```

## Direnv Config ##

Now, you'll need a mechanism to authenticate via your MFA token, get a session token, and put that session token into your environment.
I do this using [`direnv`](https://direnv.net/).
I've only recently learned about `direnv`, and I'm already using it, in combination with [`asdf`](https://asdf-vm.com/#/core-manage-asdf-vm), to replace [`rbenv`](https://github.com/rbenv/rbenv), [`pyenv`](https://github.com/pyenv/pyenv), and [`nodenv`](https://github.com/nodenv/nodenv).
`direnv` and `asdf` setup is beyond the scope of this post, but you can check out [my dotfiles repo](https://github.com/igor47/dotfiles) to get an idea of how I have it configured.

Here's my configuration in a `.envrc` file of a [terraform](https://www.terraform.io/) repo for a project hosted on AWS:

```bash
use asdf

export MASTER_AWS_ACCESS_KEY_ID=AKIA<redacted>
export MASTER_AWS_SECRET_ACCESS_KEY=<redacted>
export AWS_MFA_ARN=arn:aws:iam::<redacted>:mfa/igor
export AWS_SESSION_FILE="${HOME}/.config/aws/session-${MASTER_AWS_ACCESS_KEY_ID}"

watch_file $AWS_SESSION_FILE
direnv_load ~/bin/aws_load_session
```

This file causes 4 environment variables to be exported into my environment whenever I `cd` into this repo's directory.
The `MASTER_AWS_ACCESS_KEY_ID` and `MASTER_AWS_SECRET_ACCESS_KEY` are just the access key ID and key that I created for my account via IAM.
I've prefixed their usual environment variable names with `MASTER` to distinguish them from the session-specific keys created by authenticating with MFA.
The `AWS_MFA_ARN` variable contains the ID of my MFA token.
You can get this from [your security credentials page](https://console.aws.amazon.com/iam/home#/security_credentials), under the `Multi-factor authentication (MFA)` section.
Finally, the `AWS_SESSION_FILE` variable will keep track of where my MFA session is stored in my filesystem.

The next two lines handle reloading the MFA session.
I've told `direnv` to reload my local environment whenever the contents of the file at `$AWS_SESSION_FILE` change.
Next, we use `direnv_load` (from the [direnv stdlib](https://direnv.net/man/direnv-stdlib.1.html)) to load the environment exported by my `aws_load_session` script.

## Session-management Scripts ##

I have two custom scripts to manage the MFA session.
The first is `aws_get_session`, and it's responsible for prompting me for my MFA token, creating an MFA session, and storing it into the `AWS_SESSION_FILE`.
I run this script whenever my MFA session expires.
Here's the script:

```bash
#!/bin/bash

TOKEN=$1
shift

if [[ -z $TOKEN ]]; then
    echo "Usage: aws_get_session <mfa token value>"
    exit 1
fi

set -u

mkdir -p `dirname ${AWS_SESSION_FILE}`
unset AWS_SESSION_TOKEN

AWS_ACCESS_KEY_ID=${MASTER_AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY=${MASTER_AWS_SECRET_ACCESS_KEY} aws sts get-session-token --serial-number $AWS_MFA_ARN --token-code ${TOKEN} > ${AWS_SESSION_FILE}
echo "saved session info to ${AWS_SESSION_FILE}"
```

The other script, `aws_load_session`, loads the MFA session into my environment.
It's run by `direnv`, whenever the `AWS_SESSION_FILE` changes.
Here's the script:

```bash
#!/bin/bash

set -u

if [[ ! -f ${AWS_SESSION_FILE} ]]; then
  echo 'No session found; did you run `aws_get_session <mfa token>` ?'
  exit 1
fi

export AWS_ACCESS_KEY_ID=`cat ${AWS_SESSION_FILE} | jq --raw-output .Credentials.AccessKeyId`
export AWS_SECRET_ACCESS_KEY=`cat ${AWS_SESSION_FILE} | jq --raw-output .Credentials.SecretAccessKey`
export AWS_SESSION_TOKEN=`cat ${AWS_SESSION_FILE} | jq --raw-output .Credentials.SessionToken`
direnv dump
```

Both of these scripts depend on having `aws` and `jq` installed and in your `PATH`.

## Example Session ##

Here's how this looks in real use, with a terraform repo that stores its state in AWS S3.

```bash
igor47@fortress:~/repos/terraform/roots/prod {master} $ terraform plan

Error: error using credentials to get account ID: error calling sts:GetCallerIdentity: ExpiredToken: The security token included in the request is expired
	status code: 403, request id: abfd729b-4dad-41f4-857f-2539170f68a9


igor47@fortress:~/repos/terraform/roots/prod {master} $ aws_get_session 123456
saved session info to /home/igor47/.config/aws/session-AKIA<redacted>
direnv: loading ~/repos/terraform/.envrc
direnv: using asdf
direnv: loading ~/.asdf/installs/direnv/2.21.2/env/733966593-20565860-1008169379-2914714444
direnv: using asdf python 2.7.18
direnv: using asdf python 3.8.3
direnv: using asdf nodejs 12.13.1
direnv: using asdf ruby 2.7.1
direnv: using asdf direnv 2.21.2
direnv: using asdf terraform 0.12.29
direnv: export +AWS_ACCESS_KEY_ID +AWS_MFA_ARN +AWS_SECRET_ACCESS_KEY +AWS_SESSION_FILE +DD_API_KEY +DD_APP_KEY +MASTER_AWS_ACCESS_KEY_ID +MASTER_AWS_SECRET_ACCESS_KEY +NPM_CONFIG_PREFIX +RUBYLIB ~AWS_SESSION_TOKEN ~PATH

igor47@fortress:~/repos/terraform/roots/prod {master} $ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
```

Here, `terraform plan` fails because my MFA session has expired.
I re-run `aws_get_session` to update my `AWS_SESSION_FILE`.
`direnv` notices that the file has been updated, and reloads the environment.
I can then continue using `terraform` as normal.
As a bonus, in any other shell, the session will also be re-loaded automatically whenever I get a new prompt.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Website Scalability]]></title>
            <link>https://igor.moomers.org/posts/website-scalability</link>
            <guid>https://igor.moomers.org/posts/website-scalability</guid>
            <pubDate>Tue, 04 Aug 2020 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
When I was [at Airbnb](https://igor.moomers.org/thoughts-on-leaving-airbnb), I learned a lot about how to scale a website.
I've recently been working in that space again, helping other websites to scale.
I figured it might be useful to write down some of what I've learned in this space.
This is useful for me, to clarify my thinking, but is also useful for collaboration.
If my colleagues understand how I think about scaling, it would help set the context for what I'm working on, and why.

## A Basic Web App ##

To think through various scaling scenarios, let's imagine a basic hello-world web app.
When you come to the site, it renders an HTML page containing something like `<h1>Hello, World!</h1>` and sends it back to your browser, which displays it.
How does this app scale?

### Caching ###

This initial version of the app is quite static, and you could utilize caching to help you scale.
Caching here would involve routing requests to your app through a CDN, like Akamai or Cloudflare or Cloudfront.
The CDN would keep a copy of your “Hello World”, and when visitors ask for it, it would come from the CDN’s servers, not your server.

Suppose your app, instead of printing “Hello World”, printed `“Hello World, today is <day of week>”`.
This is still almost entirely static – the text only changes once a day.
You will need to carefully configure cache-control headers, so that the CDN mostly serves the file from its servers, but will occasionally come back to your servers to retrieve an updated version of the page.
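
As a minimal sketch (using Flask here purely for illustration; your app may use something else entirely), the day-of-week page might set its headers like this, telling the CDN it may re-serve the cached response for up to an hour:

```python
# Illustrative "Hello World, today is <day of week>" app with a Cache-Control header
from datetime import date
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/")
def hello():
    response = make_response(f"<h1>Hello World, today is {date.today():%A}</h1>")
    # tell the CDN (and browsers) this page may be cached for up to an hour
    response.headers["Cache-Control"] = "public, max-age=3600"
    return response

if __name__ == "__main__":
    app.run()
```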

### Horizontal Scaling ###

Suppose your app prints out `“Hello, World! It's <hour>:<minute>:<second> where I’m at.”.`
Also, assume you have to render this server-side (as a client-side app, you could do the rendering in JS, and then this app just becomes the perfectly-cached one from the section above).

This app would be perfectly horizontally scalable.
You do this by putting a load balancer in front of your server, and then adding more, identical web servers as needed.

If you run a single-threaded web server, it can only serve one visitor at a time.
When a second visitor shows up while your server is busy rendering “Hello, World” for the first visitor, the second visitor has to wait – to get into a queue.
If you have a lot of visitors showing up at the same time, the queue will get quite long – and then some of those visitors will give up before seeing “Hello, World”.
Some of them give up on their own, because the page isn’t doing anything; for the really patient ones, their browser will eventually give up for them.

If you were running your single-threaded web server on a 16-CPU machine, or if the reason it took 200ms to serve “Hello, World” is because you called `sleep(0.196)` in your web server code, your server would not *look* overloaded.
You wouldn’t see excessive CPU or memory or IO usage.
The only way to tell that visitors are giving up is by looking at metrics, such as queue size, timed-out connections, number of requests at the load balancer vs. your web app, etc.

If you looked at these metrics, and discovered that you were in fact failing to greet a bunch of visitors, then you might engage in performance engineering.
You would say, “why does it take 200ms to serve this page?"
If it took less time, we'd be able to serve more visitors total.
This might lead you to finding the `sleep(0.196)`, or switching to a more performant programming language or architecture.
Alternatively, you might realize that you have a bunch of under-utilized resources on your server – say, 15 additional CPUs – and then you might use a threaded or forked web server to serve more visitors concurrently.

OR – you might not do any of that.
Instead, you might just decide, since your app is so horizontally scalable, to launch a bunch more web servers behind your load balancer.
The end result would be the same – more satisfied visitors to your site.
But you might end up paying more money for all those extra servers.
However, you would save all the time and money you spend combing through the code looking for sleep, or tuning your server software for concurrency.
This is a relevant trade-off, and should be considered when you want to scale your web app.

### State and Vertical Scaling ###

Suppose that when a visitor came to your page, you would log their IP address and the time of their visit.
Then, you would display either “Hello, visitor, for the first time!” or “Hello, visitor, welcome back!”, depending on whether you had or had not previously seen their IP address.

This version of the web app is stateful – it retains state between requests -- and thus is no longer purely horizontally scalable.
To realize this, think about where you would store the state.
You could store it directly on the server which renders “Hello”.
But, if you scale by launching additional servers, then a visitor might get different servers on different requests, and would get the wrong “Hello” message and be sad.

Typically, looking up some data in a database is much faster than rendering a complicated web page, and so you end up with a two-tier web infrastructure.
The first tier is a bunch of horizontally scalable, stateless web servers.
These may have been optimized to some extent, or else just scaled as needed, according to the necessary trade-off (discussed above).
The second tier is the database, which is heavily optimized to be as fast as possible.
Web optimizations are harder, because each website is different, while “retrieve some data” is pretty generic and can be iteratively optimized over time.

However, the database server can no longer be easily scaled horizontally.
Once you are using all the memory and all the CPUs on your database, you would be up against a wall.
In this case, you might decide to scale vertically – by getting a bigger, badder, beefier database server.

In this scenario, your goal is to squeeze the most possible out of the resources on your database server.
You’ll want to watch for high CPU usage, running out of memory, or saturating your disk IO or network buses.
These would alert you that you need to either scale your database server, or reduce the load from your application.

### Scaling through load shedding ###

Suppose that your simple site now has two pages.
The first page is that same one that says “Hello, visitor, for the first time!” or “Hello, visitor, welcome back!”, while the second page shows a cute random kitten.
The kitten page scales horizontally (each server has its own kitten repository), but the “Hello” page still requires a DB read/write before you can render it.

Suppose that your DB server is having trouble keeping up with all the demand for the “Hello” page.
As the DB server slows to a crawl, the “Hello” page takes longer and longer to render.
The kitten page would keep working quickly but, alas, visitors to the kitten-page are stuck in line behind the “Hello”-page visitors, and nobody is getting either “Hello”ed or kittened.

Kind of like how an escalator that fails becomes stairs, your kitten-and-hello site, when it fails, should become just a kitten-showing site, instead of no site at all.
You can do that if you quickly turn away the “Hello” visitors when you realize that the DB server is having problems.
This can be accomplished through automation, called “circuit breaking”, in the DB layer; as a bonus, it might even help the DB layer automatically recover from transient spikes.

This pattern is often useful with dependencies that scale neither horizontally nor vertically.
For instance, suppose that a page on your site will offer visitors the ability to enter their phone number, and then receive a “Hello, World” as a text message through Twilio.
Twilio becomes a dependency, but you can neither add more horizontal Twilio capacity, nor increase the size of Twilio’s machines when they’re overloaded.
In fact, the only way you know that Twilio is overloaded is when your own site is totally down – visitors can’t get a “Hello” page and they can’t get a kitten, because there are too many people waiting for Twilio API calls that never complete.
If you used a circuit breaker around calls to Twilio, then you might be able to quickly show those visitors an error message, while the other visitors continue getting “Hello” and kittens.

Coming full circle, caching can also be a form of load shedding.
You may be able to serve visitors a cached version of the page which is not strictly correct (for instance, the time on the "Hello" page is stale), but still more useful than an error page.

### Dependencies as bottlenecks ###

Life was easy for your awesome “Hello, World” site, so long as you could just horizontally scale it.
But, as soon as you began introducing dependencies – DB servers, Twilio APIs – things got a whole lot more annoying.
What’s clear is that any dependency ought to be treated with suspicion – these are what’ll getcha on big launch night.

In fact, only three things will break your site:
    
* You yourself, by accidentally breaking it. This usually happens through a deploy.
* Some malicious actors, by breaking it on purpose
* A non-horizontally-scalable dependency of your site, which breaks suddenly in response to increased traffic

If you want your site to stay up, you have to engineer around all three failure causes.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Syncthing]]></title>
            <link>https://igor.moomers.org/posts/my-syncthing-setup</link>
            <guid>https://igor.moomers.org/posts/my-syncthing-setup</guid>
            <pubDate>Sat, 25 Jul 2020 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
[Syncthing](https://syncthing.net/) is a file synchronization tool.
I decided to try it after seeing [this post](https://tonsky.me/blog/syncthing/) on [hacker news](https://news.ycombinator.com/item?id=23537243).
Many posts have been written about how awesome it is, and this is another one of those -- I'm really having fun with it.

## Setup ##

I mainly run `syncthing` on three devices -- my Android phone, my server, and my laptop.
I ended up switching to it because I upgraded my laptop to Ubuntu 20.04, and this [broke Unison](https://unix.stackexchange.com/questions/583058/unison-and-version-compiler-conflicts/583377#583377), which I had previously been using to synchronize a few folders between my server and laptop.
After wasting several hours without solving the issue, I gave up on it altogether, and I'm glad I did.

Setup was straightforward on my Ubuntu laptop.

On the server, I had to do a few manual steps.
After adding the apt repo and installing the package, I copied a systemd unit file from [here](https://computingforgeeks.com/how-to-install-and-use-syncthing-on-ubuntu-18-04/).
I wasn't familiar with the `@.service` and the `User=%i` syntax of unit files, and I still can't find it documented anywhere.
This confused me for a bit, but eventually I got the file named properly, reloaded unit files with `systemctl daemon-reload`, and got the service running with `systemctl enable` and `systemctl start syncthing@igor47.service`.
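For anyone else who gets confused here: the trailing `@` makes this a template unit, and systemd substitutes whatever follows the `@` for `%i` when the unit starts, so `syncthing@igor47.service` runs as user `igor47`. A minimal unit along these lines (saved as `/etc/systemd/system/syncthing@.service`) does the job; the exact flags are just a sketch:

```ini
# /etc/systemd/system/syncthing@.service -- a sketch; adjust paths and flags to taste
[Unit]
Description=Syncthing for user %i
After=network.target

[Service]
User=%i
ExecStart=/usr/bin/syncthing -no-browser -no-restart
Restart=on-failure

[Install]
WantedBy=multi-user.target
```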

Next, I wanted to get into the configuration web GUI.
I used a local tunnel:

```bash
$ ssh -L 4567:localhost:8384 <server>
```

I was then able to visit localhost:4567 in my local browser and configure `syncthing` on the server.
I picked a custom port for the `syncthing` protocol, and punched a firewall hole for incoming connections on that port.
Also, I picked a custom port for the web GUI server, so I wouldn't conflict with other users who might want to enable their own `syncthing`.
I set the web GUI to only listen on localhost, and then added a reverse proxy to this port from my web server config.

```apacheconf
  ProxyPass /syncthing/ http://localhost:12345/
  ProxyPassReverse /syncthing/ http://localhost:12345/
```

On my phone, I wanted `syncthing` to be able to write stuff onto the SD card.
Apparently, this is [not currently possible](https://github.com/syncthing/syncthing-android/wiki/Frequently-Asked-Questions#what-about-sd-card-support).
I worked around it by granting Syncthing root permissions, which works for me on my rooted Lineage android build.
YMMV.

## What I use `syncthing` for ##

* removed google photos from my phone and allowed syncthing to sync photos
  directly to my laptop. This is especially handy when I use my phone as a
  scanner (to take photos of documents for archival), since I then immediately
  have them available for email. It's nice to be off Google photos -- one step
  closer to a google-free life!
* syncing my documents folder between laptop and server
* local cache of music. I previously used
  [dsub](https://f-droid.org/en/packages/github.daneren2005.dsub/) to play my
  music collection, and occasionally had to fight its cache system to convince
  it that I really wanted it to cache my entire music library. Now, I just
  `syncthing` my music collection onto the SD card in my phone, and then play
  it with
  [Pulsar](https://play.google.com/store/apps/details?id=com.rhmsoft.pulsar&hl=en)
* I use a text-based email reader ([mutt](http://www.mutt.org/)) which I access
  while SSHed into my server. Dealing with attachments can be annoying.
  Previously, I would save them to a web scratch folder and open them in a
  browser. Now, I simply keep a `syncthing`ed scratch folder and throw them
  into there -- they're immediately accessible on my laptop.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Reliable SSH Tunnel for Raspberry Pi]]></title>
            <link>https://igor.moomers.org/posts/embedded-system-ssh-tunnel</link>
            <guid>https://igor.moomers.org/posts/embedded-system-ssh-tunnel</guid>
            <pubDate>Sat, 03 Aug 2019 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
You install [raspbian](https://www.raspberrypi.org/downloads/raspbian/) on a brand-new [Raspberry Pi](https://amzn.to/2GMdnjA).
When you plug it into power and the ethernet jack, it's online, but how do YOU get into it?

Over the years I've resorted to:
* giving my Pi a static IP -- which breaks when I put it on a different network
* scanning the network with `nmap`
* running a DHCP server with `dnsmasq` on my laptop's ethernet port (probably a USB one) and then sharing my wireless connection with the Pi to get it online

Recently, I decided I'd like to just get my Pi online and have it open a reverse-tunnel to itself.
I found a few guides to do this, but none quite put all the pieces together.
There is even a [paid service](https://www.pitunnel.com/) to do this!

However, this is actually quite easy.
I put the script necessary to do this, plus the instructions, in [this repo](https://github.com/igor47/pitunnel).
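If you just want the gist, the underlying mechanism is an SSH reverse tunnel from the Pi to a machine you control; the hostnames, users, and ports below are placeholders:

```bash
# on the Pi: expose the Pi's own sshd as port 2222 on a server you control
ssh -N -R 2222:localhost:22 tunnel@your-server.example.com

# then, from that server, connect back into the Pi
ssh -p 2222 pi@localhost
```
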
Hope it helps!
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Different Approaches to Environmentalism]]></title>
            <link>https://igor.moomers.org/posts/differences-in-environmentalism</link>
            <guid>https://igor.moomers.org/posts/differences-in-environmentalism</guid>
            <pubDate>Mon, 13 May 2019 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
I've been on a climate-change book-reading spree lately.
Since March, I've read:
* [The Weather Makers](https://amzn.to/2Q4dla5)
* [An Inconvenient Sequel](https://amzn.to/2W3uJBs)
* [Drawdown](https://amzn.to/2Hi7L1i)
* [The Uninhabitable Earth](https://amzn.to/2Q5uiB1)
* [Climate: A New Story](https://amzn.to/2HfKT2e)
* [Falter](https://amzn.to/2JDLnRo)

That's not even counting books that are a bit about climate change.
For example:

* [Braiding Sweetgrass](https://amzn.to/2HgGYlK)
* [Parable of the Sower](https://amzn.to/2HjUqWh) (particularly scary and prescient)

My goal has been to find a book I can recommend as a book-club book for [Spaceship Earth](https://spaceshipearth.org), but I haven't succeeded yet.
Some books, like *Uninhabitable Earth*, are too gloomy and [disempowering](https://igor.moomers.org/individual-action-and-climate).
Some, like *Drawdown* or *The Weather Makers*, are too dry.
Some, like *Inconvenient Sequel*, are too rah-rah and sound like an infomercial (there are several sections of *Inconvenient Sequel* that I would like to excerpt and redistribute, though).
*Falter* starts out strong but manages to be too unfocused, too gloomy, and too cheery all at the same time.

A book that I really struggled with, and one that I've been cautiously recommending to a few select folks, is *Climate: A New Story*.
For most people in my community, it's a little too... umm... maybe [woo](https://rationalwiki.org/wiki/Woo)?
Or anyway I can imagine that it would create cognitive dissonance.
It definitely did for me, which I enjoyed but I recognize that others might not enjoy.

Listening to *Falter* today, I came across a section that's almost diametrically opposed to the *Eisenstein* book.
In this section, Bill McKibben is gushing about solar panels, and how they are going to transform Africa.
He follows a salesperson from a solar company as he goes to remote villages to try to sell the solar panels:

> Fossouo was born in Cameroon and went to school in Paris, but his real education seems to have come in the seven summers he spent in the United States selling books for Southwestern Publishing, a Nashville-based titan of door-to-door marketing. (Rick Perry is another alum; ditto Ken Starr.) “I did Los Angeles for years,” he said. “‘Hi, my name is Max. I’m a crazy college student from France, and I’m helping families with their kids’ education. I’ve been talking to your neighbors A, B, and C, and I’d like to talk to you. Do you have a place where I can come in and sit down?’”
>
> All selling, he insists, is the same: “It starts with a person understanding they have a problem. Someone might live in the dark but not understand it’s a problem. So, you have to show them. And then you have to create a sense of urgency to spend the money to solve the problem now.”
>
> …
> This prospect is a farmer and a schoolteacher, and we settle down in his classroom, which has a few low desks with slates—literal shards of slate—resting on top. Max quickly figures out that the man has two wives, and he starts sprinkling their names liberally through the conversation. “There’s no pressure. It’s okay. I don’t want to sell you anything,” he says, as they move through the steps familiar to anyone who’s seen an infomercial.
> 
> … 
> The customer is resistant, but Max tries angle after angle. “You have to think big here. When I talked to your chief, he said, ‘Don’t think small.’ If your kid could see the news on TV, he might say, ‘I, too, could be president.’”
> “This is great,” the man says. “I know you’re trying to help us. I just don’t have the money. Life is hard, things are expensive, sometimes we’re hungry.”
> Max nods, helpful. “What if I gave you a way to pay for it, so the dollar wouldn’t even come from your pocket. If you get a system, people will pay you to charge their phones. Or, if you had a TV, you could charge people to come watch the football games.”
> “I couldn’t charge a person for coming in to watch a game,” the man says. “We’re all one big family. If someone is wealthy enough to have a TV, everyone is welcome to it.”

It was super-interesting to read this section after having read the following passage from *Climate: A New Story*.
I think comparing them will give you a sense for the different perspectives:

> Economic growth means the growth in goods and services exchanged for money. Therefore, a remote village in India or a traditional tribal area in Brazil presents a big growth opportunity, because the people there barely pay for anything. They grow or forage their own food. They build their own houses. They use traditional healing methods to treat their sick. They make their own music and drama. Imagine the development expert goes there and says, “What a tremendous market opportunity! These backward people grow their own food—they could buy it instead. They cook their own food too—restaurants and supermarket delis could do it for them much more efficiently. The air is full of song—they could buy entertainment instead. The children play with each other for free—they could enroll in day care. They accompany adults learning traditional skills—this society could pay for schooling. When a house burns down, the community gets together to rebuild it—if we can unravel those ties of mutual aid, there’s a big market for insurance. Everyone has a strong sense of social identity, a strong sense of belonging—they could buy brand name products instead. Everyone is joyful and content—they could be buying a semblance of that through legal and illegal drugs and other forms of consumption.”
>
> Okay, I’m getting a bit dizzy with visions of riches, but you get the idea. The question is: how are these people going to pay for all that? Easy. They earn money by converting local natural resources and their own labor into commodities. The rainforest becomes a palm oil plantation. The mountain becomes a strip mine. The river becomes a hydroelectric plant. The population abandons their traditional ways and goes to work in the money economy. A few become doctors, lawyers, and engineers. The rest migrate to the slums.
>
> In a nutshell, this is the process called “development.” It is what development loans have funded for more than half a century. It accompanies an ideology that says that money equates with well-being, that development along the model of the West is a good thing (or an inevitable thing), that a high-tech life is superior to a life close to nature. These assumptions are difficult to refute using logical arguments. Usually, shedding them requires spending time in less developed cultures, witnessing the joy and depth of aliveness there, and seeing their beauty erode as they modernize.

There's definitely more to say about this, but I wanted to note this down while it's in my mind.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Individual Action and Climate Change]]></title>
            <link>https://igor.moomers.org/posts/individual-action-and-climate</link>
            <guid>https://igor.moomers.org/posts/individual-action-and-climate</guid>
            <pubDate>Sat, 11 May 2019 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
Can your individual action make a difference for climate change?
Often, there's little room for such action in the climate change discourse.
Take Project Drawdown, which aims to rank the best climate-friendly interventions available.
Some (such as their #1 intervention, "Refrigerant Management") are targeted at small groups of specialists in a specialized industry.
Others, such as #3 -- Reduce Food Waste -- seem more approachable for individuals looking to make a difference.
But even there, the recommendations are not actionable for most people:

> There are numerous and varied ways to address key waste points.
> In lower-income countries, improving infrastructure for storage, processing, and transportation is essential.
> In higher-income regions, major interventions are needed at the retail and consumer levels.
> National food-waste targets and policies can encourage widespread change.

It seems that to help, you should either be a grocery magnate or a politician.
Failing that, you might try to lobby or influence your local politicians or [greengrocers](http://openproduce.org/#contact).
This view is well-expressed by David Wallace-Wells in his best seller "Uninhabitable Earth":

> …the climate calculus is such that individual lifestyle choices do not add up to much, unless they are scaled by politics.

Later on:

> accusations of individual irresponsibility were a kind of weaponized red herring, as they often are in communities reckoning with the onset of climate pain.
> We frequently choose to obsess over personal consumption, in part because it is within our control and in part as a very contemporary form of virtue signaling.
> But ultimately those choices are, in almost all cases, trivial contributors, ones that blind us to the more important forces.

What are the "more important forces"?
Politics:

> Eating organic is nice, in other words, but if your goal is to save the climate your vote is much more important.

Wallace-Wells is particularly dismissive of individual action, even calling it "virtue signaling".
Bill McKibben, founder of 350.org, expressed a more charitable version of this view on a [recent episode of KQED Forum](https://www.kqed.org/forum/2010101870787/bill-mckibben-warns-of-dire-consequences-of-unchecked-climate-change), right here in the Bay Area.
Several locals called in to the show to ask how they can help.
A typical caller is Jenny from Petaluma (around 31:30 in the show):

> I'm willing to upend my life to be a part of really solving this problem in my own small way.
> I'd love to hear how to do that sensibly.

McKibben responds in [a typical way](https://www.youtube.com/watch?v=DYLWZPFEWTw):

> There are a number of individual actions that are useful.
> Eating lower on the food chain.
> Putting solar panels all over your own roof.
> Figuring out how to get around on public transit.
> Not jumping on an airplane just because you want to get to some place that's a little warmer than the place you are now.
> But, let me add this caution.
> Climate change is a math problem, and at this late stage in the game you can't make the math work anymore one Tesla at a time, one vegan meal at a time.
> My house is covered with solar panels, I'm proud of them, I don't try to fool myself that this is how we're going to stop climate change.
> The most important thing an individual can do is be a little less of an individual and join together with others in movements of a size enough to make a difference.

If you take McKibben's advice and visit the [Bay Area 350.org website](https://350bayarea.org/), their actions -- writing letters, organizing rallies, and convincing local governments to declare a state of emergency -- are all oriented around politics.
The biggest call to action on their page is the Donate button.

## What About Money? ##

I live in the Bay Area, where [one out of 11,000 people is a billionaire](https://www.vox.com/recode/2019/5/9/18537122/billionaire-study-wealthx-san-francisco) and [everyone is really busy all the time](https://thebolditalic.com/why-are-san-franciscans-so-goddamn-busy-all-the-time-the-bold-italic-san-francisco-2e15a498d750).
Maybe one way that these rich, busy people can help is by throwing a few bucks towards a cause?

I saw an extreme version of this approach [articulated on the SSC subreddit recently](https://www.reddit.com/r/slatestarcodex/comments/bm3j6x/how_to_effectively_buy_carbon_offsets/emtimk7?utm_source=share&utm_medium=web2x):
The OP is asking about carbon offsets, and another poster expresses skepticism that those are effective.
The OP replies:

> I agree it's not the systemic solution, but for now I'm just looking for a personal moral offset.

In today's world (`/me shakes fist at the kids on their escooters`), where most life concerns are outsourced, this kind of thinking makes sense.
What if I just go about my life as normal, but I pay someone else to clean up the mess?
This is a diametric opposite of Jenny from Petaluma, who was willing to "upend [her] life" -- u/BistanderEffect is unwilling to change in almost any way, but is willing to spend a little money.

In the Bay Area, an even more pragmatic approach is available.
For instance, Malcolm Handley, founder of [Strong Atomics](https://strong-atomics.com/), once told me that he realized he personally knows several of the Bay's many billionaires.
Perhaps the most effective way to help with climate change, he told me once, is to convince some of them to spend a bit of their money to fund, say, fusion reactors.

In a more cliche version of Bay Area thinking -- maybe we don't even have to spend any money at all?
After all, brilliant people like Elon Musk are working on climate solutions, like a fancy electric car, that actually *make* money.
All we have to do is wait a little while, and The Market and technology will be our salvation.

## I Disagree ##

I've outlined some typical memes in our culture around climate change which I find fundamentally unsatisfying.
Politics is, of course, necessary, but it's also exhausting and it's very difficult to keep focused and motivated.
I could write a separate blog post specifically on my view of politics as an infinite game of tug-of-war; everyone pulling as hard as they can in their own direction has given us the current stalemate.

The several versions of "someone else will do something" -- be it greengrocers, farmers, refrigerant technicians, or Elon Musk -- are quite disempowering.
Surely, if you believe that climate change is a big deal, just sitting back and doing nothing won't be the right course of action for you.
The most complicated argument to counter is the one around outsourcing the problem.
Shouldn't it be enough to just donate a few bucks to some organizations, and maybe buy some carbon offsets?

## Living the Difference ##

Two facts:

1. To stop the worst effects of climate change, we have to leave fossil fuels in the ground.
2. There is no electric passenger air service.

Taken together, these two facts imply that passenger air service is impossible to do in an ecologically friendly way right now.
We need to both plant a bunch of trees AND stop flying -- one does not excuse the other.
In the same way that you cannot "pay" for flights with trees planted in the ground, you cannot "pay" for climate change with money.
We definitely have to spend money to ameliorate climate change, for instance by building solar and wind farms.
But we cannot just pay the universe to take out our CO2 for us, the way we pay a plumber to fix a leak.

There are too many complicated ideas here for me to unpack all of them.
For instance, humans compare themselves to others, and evaluate their own status based on their relative status in their social circle.
When we argue for carbon taxes, we might suspect that they would make certain activities -- like flying -- more expensive and therefore less commonly-practiced.
But maybe if *everyone* had to fly less, it wouldn't be as big a deal?
You might still fly more than your friend Bob, and that's good enough.

This then starts involving complicated ideas of climate justice.
What if we pass [high enough carbon taxes](https://www.jbs.cam.ac.uk/fileadmin/user_upload/research/workingpapers/wp1109.pdf) to double the cost of flights?
Do the rich people still get to fly as much as they want, while the poor and middle class folks get even less access to flights?
Carbon taxes would also raise the price of food -- do the rich still get to eat as much as they want while the poor starve in greater numbers?
Advocating for solutions like carbon taxes -- the means -- lets you avoid thinking about their consequences -- the actual ends we're pursuing, and what those look like.

I think we should reverse our thinking and start with the end.
What does a world where climate catastrophe has been averted look like?
Maybe people fly less.
Maybe we eat less meat and more plants.
Maybe our energy is produced through renewable sources.

We don't have to wait for governments to force us to make these changes.
We can just start living as though the future were already here.

## Enter the Spaceship ##

I don't want you to keep doing whatever you want in the hopes that someone else will solve climate change.
But I also don't want you to decide that, so long as you're pure and holy, you've done your part.
Bill McKibben is not wrong to say that we need collective action, but there might be kinds of collective action that he hasn't anticipated.

This is where the idea of [Spaceship Earth](https://spaceshipearth.org) comes from.
You should be living as though we've already decided to stop pumping petroleum from the ground and instituted carbon taxes.
But you need to get everyone you know to start living like that, too.
Passing carbon taxes is one way to get them to start living like that.
But Spaceship Earth is another way, and unlike passing carbon taxes, you can start playing Spaceship Earth *right now* (well, once we're done building it).

]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Platform for Issue Journalism]]></title>
            <link>https://igor.moomers.org/posts/issue-journalism-platform</link>
            <guid>https://igor.moomers.org/posts/issue-journalism-platform</guid>
            <pubDate>Wed, 01 May 2019 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
The cover story on NYT today is headlined "Profitable Giants like Amazon Pay $0 in Taxes, and Voters are Sick of It".
Okay, cool, that sounds interesting.
I would like for corporations to pay their fair share of taxes.
I read the article, and there are many stories of laid-off workers, of political strategy, Democrat vs. Republican priorities, and how to defeat Trump in 2020.
But what do *I* do about this issue?
The main takeaway seems to be to vote for Sanders in 2020, or perhaps more immediately, to go donate to his campaign.
But this is left as an exercise for the reader.

I think this is a problem.
It seems like the goal of this article is to get you to be vaguely informed, but we are in an age of information abundance.
Is this information *useful* to me?
Does it connect me to my peers?
How does it *make me feel*?

Also, suppose you really care about the unfairness of corporate taxation, or the plight of the Rohingya.
How do you actually follow the story?
Probably, the Gray Lady would like me to just read the paper cover to cover every day in the hopes of spotting stories I care about, but ain't nobody got time for that.

Stories about horrible tragedies, like the genocide of the Rohingya in Myanmar, are an even more extreme example of this problem.
If you read such a story, the main takeaway seems to be "I feel terrible and the world is a terrible place."
There is rarely anything actionable to do after reading it, and if you really care there's no good way to follow up or stay informed.

## An Idea

What if all the news you read was oriented around action?
For instance, if you care about corporate taxes, you can subscribe to the corporate taxes "issue".
The issue would *only* ever get updated if there's a concrete action you can take on that issue, right at that moment.
The issue would have moderators, and they would do the work of screening out potential actions.
If they were convinced that an action is likely to make a dent in the issue, they would post it, and then you could participate right away.

This is a sort of news organization, but one with a very strong and clear editorial voice.
However, I think this is fine -- I think most people would prefer a strong and clear editorial voice to feigned objectivity.
It's also a little like a subreddit, except without random distractions, and maybe a more limited role for discussion.

My current project, [Spaceship Earth](https://spaceshipearth.org), came out of this idea.
I took a stab at trying to build the overall platform, but it was too big a project for me to tackle.
Plus, when I reflected, I realized I would want to focus on the climate change issue anyway.
Finally, Stacey and I had the additional insight that we don't need to wait for new actions you could take on the issue -- climate change already has a ton of actions you could take right now.
Although, of course, for Spaceship Earth, we do plan to create "one-off" missions whenever something pops up, like a vote in congress, or in your state legislature.
(We will need local moderators who can be on top of such developments).

I think someone should build this.
It might well be me, if we manage to get Spaceship Earth up and running.
But if you're gonna build it and you want help, let me know!
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Mailman Behind HTTPS]]></title>
            <link>https://igor.moomers.org/posts/mailman-behind-https</link>
            <guid>https://igor.moomers.org/posts/mailman-behind-https</guid>
            <pubDate>Fri, 23 Feb 2018 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
When [Let's Encrypt](https://letsencrypt.org/) became available, I moved most of the vHosts I run on this web server behind HTTPS.
This included my [Mailman](http://www.list.org/) web interface.
However, this broke the admin interface for my mailing lists.
Even though I changed the `DEFAULT_URL_PATTERN` in `/etc/mailman/mm_cfg.py` to `'https://%s/'`, the submit button on the admin interface still took me to `http://mailman.moomers.org`.

I spent way too long debugging this, which is why I'm writing this post.
It turns out that the `InitVars()` function in `MailList.py` is only called once, when the list is created, and the resulting information is stored in the `config.pck` file for the list.
Because I created the lists over HTTP, back in the day, the url in `config.pck` was still `http://mailman.moomers.org` for all my old lists.

To fix this problem, I first wrote the correct url to a file:

```bash
echo "web_page_url = 'https://mailman.moomers.org/'" > /tmp/newurl
```

I then ran the following little bash script to fix all my lists:

```bash
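# load the corrected web_page_url into every list on this server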
for i in $(list_lists -b); do config_list -i /tmp/newurl -v $i; done
```

Hopefully, if you're having the same problem as me (mailman's admin page still submits to HTTP instead of HTTPS), you might come across this page and save yourself some trouble!
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Belden Spa Summer 2017]]></title>
            <link>https://igor.moomers.org/posts/my-projects-belden-spa</link>
            <guid>https://igor.moomers.org/posts/my-projects-belden-spa</guid>
            <pubDate>Sat, 20 Jan 2018 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
The Spa was a project for [False Profit's](http://www.false-profit.com/) [Priceless](http://priceless.false-profit.com/) event in 2017.
As in past years, we (meaning myself, [Jered](https://github.com/jeredw) and [Abi](https://www.linkedin.com/in/abi-kelly-8698ab36/)) wanted to create a space where participants could interact in a quiet setting (no loud music) that wasn't someone's camp (so, there's opportunity for a chance encounter).

The Spa was a two-part project -- a Finnish-style dry sauna, and an accompanying welcome space.
The concept for the welcome space was a fancy spa-type lobby -- think soft white lighting, comfy furniture, and cucumber water dispenser.
We ended up spending most of the time on the sauna, and the spa itself ended up being a last-minute cobbled-together afterthought.

# The Sauna #

![All set up](/images/sauna-complete.jpg)

This is a photo of the completed sauna, currently set up in my back yard.
It's a small wooden building, with a floor plan of 8' by 8'.
The construction is fairly standard, with the added constraint that the building is modular and can be disassembled, transported, and re-assembled on-site.
It is also meant to be installed on uneven terrain, and so should be somewhat level-able.

I spent much time agonizing over the floor.
I wanted it to be unfinished wood, but it was going to get a lot of water and sweat spilled on it, and I wanted to be able to hose it off to clean it.
I eventually settled on building a little patch of deck, and I chose untreated redwood for the material.
Redwood is relatively inexpensive, rot and mildew-resistant, and looks and smells good.

![Sauna deck](/images/sauna-deck.jpg)

This is a photo of the deck somewhat-finished (the floor boards are not yet nailed down here).
I used 2x6s for the structural outside components.
I raised them off the ground on short segments of 4x4, to allow for leveling and to create airflow underneath that would allow the boards to shed water and to dry.

![Sauna deck feet](/images/sauna-deck-feet.jpg)

I read a lot of advice on the internet to not use 1x4s for decking, since it's too flexy and feels unstable.
I decided to use them anyway, because it's much less expensive and I was worried about weight (remember, this is meant to be moved to different parties).
I used a fairly dense grid of 2x3s to give a rigid support for the deck:

![Sauna deck support](/images/sauna-deck-support.jpg)

The walls are fairly conventionally framed out of 2x4s:

![Sauna walls framed out](/images/sauna-walls-framed.jpg)

I used [aluminum foil vapor barrier](https://superiorsaunas.com/collections/foil-vapor-barrier/products/aluminum-foil-vapor-barrier) on the inside face of the walls.
On top of the vapor barrier, I used [this cheap, thin cedar planking](https://www.homedepot.com/p/1-4-in-x-3-5-in-14-sq-ft-Western-Cedar-Planks-6-Pack-8203015/202106509).
This was the most expensive part of the sauna, and I agonized for a long time over the material choice.
In the end, it works fairly well, however you can definitely tell when you're leaning against a part of the wall that's just cedar on top of vapor barrier -- there's a lot of give.
It's much nicer to lean against the wall in a place where there's a stud, and it's pretty easy to find the studs with your back.

Inside, the walls are insulated with denim insulation.
On the outside face, I used Tyvek sheeting, and then the exterior walls are made of the thinnest plywood I could find, painted with exterior-grade paint:

![Sauna walls insulated](/images/sauna-walls-insulated.jpg)

Here you can see the walls laid out on the ground, with the plywood, tyvek, and insulation.
In the next shot, you can see the walls set up with vapor barrier, before the cedar is installed.

![Sauna walls before cedar](/images/sauna-walls-precedar.jpg)

The walls themselves just sit on the deck, like so:

![Sauna walls on the deck](/images/sauna-walls-on-deck.jpg)

To keep them aligned, we drilled holes through the bottom plate of each wall and into the 2x6 redwood of the deck.
We then installed bolts, serving as pegs, through the wall bottom plates.
To seat the wall correctly, you have to maneuver the wall until the bolt pegs fall into their holes.
In some of the photos you can see the cabinet handles we added to the outside walls, to make this maneuvering possible.
The walls are heavy enough that this is an incredibly annoying, difficult, and dangerous task -- everyone always wants to hold the wall from underneath, but if you successfully align the pegs with the holes then the wall falls on your hand. It's the worst part of the assembly process.

The structure that is formed when the walls sit with their pegs in the holes on the deck is already fairly rigid.
However, to avoid torque on the 2x6s of the deck, we wanted to bolt the walls together, too.
To do that, we installed [tee nuts](http://amzn.to/2DYnRKc) inside the wall.
After putting two walls onto the deck, we would bolt the corner together; you can see that if you look carefully at the corners in the next photo.

![Sauna walls bolted together](/images/sauna-walls-bolted.jpg)
![Sauna wall bolts](/images/sauna-wall-bolt-closeup.jpg)

The ceiling was made from 2" structural foam.
We taped two 4'x8' pieces together and then sandwiched it -- cedar planking on the inside, and two 2x3s on top.
We trimmed the foam until it fit snugly on top of the walls, inside the exterior plywood.
Over the summer, the sauna had no roof (our contingency for rain during an install was to throw a tarp over the whole thing).
I installed some galvanized steel panels over the building just in time for the first real rain of the season.

## Layout and Benches ##

I drafted several potential layouts of the sauna on paper before building benches.
This is the layout we settled on:

![Sauna layout](/images/sauna-layout.jpg)

In many saunas, the benches hang from the walls, but we needed flat-pack walls free of hardware and flat-pack benches.
We made the bench tops out of western red cedar 1x4s -- quite expensive.

![Sauna bench tops](/images/sauna-bench-tops.jpg)

The structure beneath the cedar is just normal 2x4s.
Weight is borne by rectangles made of 2x4s, which are bolted to the bench tops.

![Sauna benches installed](/images/sauna-benches-installed.jpg)

## The Stove ##

I spent a lot of time deliberating over how to heat the sauna.
Other saunas I've experienced in remote places (like at Burning Man) have been wet -- steam is manufactured using a giant propane burner and a pot of water, and piped indoors to heat the room.
Most dry saunas are heated with electric stoves, but I wouldn't have the 220V hookup that's usually necessary.
I thought about using propane indoors, but didn't want to risk carbon monoxide poisoning.
I went through several fancy design iterations.
One particularly insane idea involved heating a metal plate inside the sauna from the outside with a propane torch.

Eventually, I decided to fabricate a wood-burning stove.
I modeled mine on the principles of rocket mass heaters to ensure good draft through the room.

![Sauna stove riser](/images/sauna-stove-riser.jpg)

Here, you can see the J-shaped riser where the combustion happens.
This was fabricated from scrap 5x5 mild steel.
The riser was installed in a larger box, which is the surface that actually heats the sauna:

![Sauna stove bottom](/images/sauna-stove-bottom.jpg)
![Sauna stove heat exchanger](/images/sauna-stove-exchanger.jpg)

The inside of the barrel has some 1x1 tubing for structure, and the outside is made of 16ga plate.
The top plate of the box, which receives the brunt of the heat coming out of the exchanger, was made from a thick piece of plate.
I initially tried to insulate the riser with a mix of portland cement and perlite, but I wasn't seeing the rocketing I wanted.
Using [2" ceramic fire blanket](http://amzn.to/2BjMWfB) created a much better effect.

![Ceramic insulation around the riser](/images/sauna-stove-insulation.jpg)

I created a 6" diameter exhaust port by rolling the same 16ga plate.
I used 3 90-degree elbows, two 4' long pieces, and a single 1' piece of single-wall stainless steel chimney pipe to vent to the outside.
I used single-wall because i wanted the portion of the chimney inside the sauna to contribute to heating the sauna.
As an aside, finding single-wall chimney pipe in the Bay Area is quite difficult -- and it's quite slow and expensive to ship.
I eventually got lucky with [London fireplace](https://londonchimney.com/) in Mill Valley.
Here's the stove undergoing testing in the back yard.
You can see it got quite warm (although we've since learned to get it much hotter with proper feeding).

![The stove, assembled in the back yard](/images/sauna-stove-assembled.jpg)
![Taking the stove's temperature](/images/sauna-stove-temperature.jpg)

Running a rocket stove is pretty different from conventional stoves.
It eats fuel very quickly, so it must be constantly fed.
This is actually nice for an install at a party; if the operator wanders off or becomes incapacitated, the stove soon shuts itself off.
Now that this sauna lives in my back yard, it's a bit of a chore to constantly run outside for more fuel.

It's also been a struggle finding the correct fuel to burn.
Rocket stove communities talk about burning thin branches, but I don't have access to those in the city.
At Priceless, I mostly burned scraps from construction and [these fatwood firestarter sticks](http://amzn.to/2F04cIQ) (we went through the whole 25 pounds in a weekend).
I tried burning fuel pellets, but those clump up on the bottom and there's not enough air flow for combustion.
Currently, I burn hardwood kindling I split with [a hatchet](http://amzn.to/2G5BM1M), and add a stick or two of fatwood to keep things burning hot enough for a good sweat.

## Sauna Design Verdict ##

Overall, the sauna works very well.
We use it several times a week in my back yard.
There are a few gotchas with the design that I would iterate on further.

First, the stove.
It works very well and is fun to use, but it's not really enough thermal mass to heat the room.
Adding a bunch of river rocks on top helps a lot, but the top of the stove is flat, meaning I can't fit that many rocks.
In a v2, I would add a hopper on top to contain several layers of rocks.

The top of the stove gets so hot that water I pour onto it beads off the surface and sputters off.
I would also extend the walls up a bit, to keep those water beads contained until they fully evaporate.
Also, the stove doesn't radiate very well -- if I didn't pour water onto the stove, the room would never get hot enough.
For v2, I would create a more heatsink-like surface, with more surface area.
Finally, I'm not sure how long the stove will last.
[Rocket stove forums](https://permies.com/f/260/rocket-mass-heaters) claim that the [high heat of the riser causes rapid oxidation and failure of the steel](https://permies.com/t/52544/metal-burn-tunnel-heat-riser).
I regularly see the portion of the burn tunnel that extends past the barrel glow red, [meaning a temperature of at least 1200°F](https://en.wikipedia.org/wiki/Red_heat).
I haven't yet seen any spalling inside the stove from these temperatures -- possibly because it's only in-use about 4 to 6 hours a week -- but I'm not optimistic about the lifespan of the stove.

The decking in the room works well as a floor -- it's easy to clean -- but the gaps between floorboards create most of the draft inside the room.
There is a very strong temperature gradient between the floor and the upper bench.
Covering the floor with lock-together foam squares would significantly improve insulation, but since the room gets hot enough I haven't bothered.
However, better insulating the floor might reduce the intensity at which I run the stove, prolonging its lifespan.

The walls of the sauna are the biggest design failure of sauna v1.
It's easy to underestimate just how heavy a framed 2x4 and plywood wall is, especially when you have to load it onto a truck or carry it through a forest.
At 8' x 7', the walls are also not particularly convenient to transport.
For instance, box trucks are usually neither 7' tall nor 7' wide.
(Aside: you'd think I'd have learned my lesson here after building [a boat that, at 16' wide, was wider than any boat launch in Chicagoland](http://boat.moomers.org)).

Also, the fasteners between the walls and the deck, and between the walls in the corners, didn't work all that well.
The wall pegs were extremely difficult to line up with their slots on the deck -- it's hard to adjust a 200# wall 1/2 inch to the left when you have no good way to hold it upright.
I used a lot of hidden fasteners because I worried too much about the exterior appearance of the sauna.
In hindsight, nobody cares about that.
In a v2, I would make the walls entirely out of foam sheets -- two 4'x8' sheets per wall, just as we made the ceiling of the existing sauna.
Transporting the ceiling was always a welcome reprieve for the build crew after transporting the walls.
To assemble a structure from the foam-and-cedar modules, I would use some kind of easily-visible exterior fasteners, maybe even just clamps.

# Belden Spa #

For the structure of the spa, I originally envisioned an organic dome in tension, made from pencil rod.

![Spa dome concept](/images/spa-dome-sketch.jpg)

However, when Jered and I prototyped the design, it was unclear how to keep the tension from deforming the entire structure.
In the image below, you can see that the tensioned overhead X wants to turn the floor plan into an oval rather than a circle.

![Spa dome prototype](/images/spa-dome-prototype.jpg)

I'd still love to create a dome of organic shapes (to counteract the traditional regular geodesic dome shape), but it might be easier at a place like Burning Man, where rebar in the ground can counteract the tension and create a rigid base for the rest of the structure.

Next, we tried to build [a stardome](http://stardome.jp/index-en.html), but we had trouble sourcing the appropriate building materials.
Maybe 6" diameter bamboo is easy to come by in Japan, but not in the Bay Area.
I did buy a bunch of thinner bamboo poles.
Because we had them, we attempted to use them.
We prototyped a space made of several free-standing structures, arranged so as to enclose an area.

![Bamboo pyramids](/images/spa-pyramids.jpg)

In the end, we decided that bamboo is not good structural material.
It was too light, broke too easily, and bent too easily.
It would not support fabric in tension without deforming.

At this point, we were running out of time and the sauna was consuming too much design energy.
We decided to bring a bunch of fabric, some spools of paracord, a pile of 7' 2x3s (super-cheap at home depot), and a nail gun and just improvise on the spot.
We ended up with a pretty nice structure, but as per usual I have no photos of my actual install on-site.
If anyone has photos of the Belden Spa, please contact me so I can put them on this page!
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Reflections on Leaving Airbnb]]></title>
            <link>https://igor.moomers.org/posts/thoughts-on-leaving-airbnb</link>
            <guid>https://igor.moomers.org/posts/thoughts-on-leaving-airbnb</guid>
            <pubDate>Wed, 26 Apr 2017 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
April 3rd, 2017 was my last day at Airbnb.
These last 4 ½ years were an intense, wild adventure, and a very important part of both my career and my life.
As I move on, I want to reflect on this experience while it is still fresh in my mind.
Some of the things I want to focus on: what I was able to accomplish while at the company, what I think I could have done better, and my reasons for leaving.

## So.... what would you say you did around here? ##

I joined Airbnb in September of 2012.
At that time, the company was maybe 500 people, the product team around 70 people, and the engineering team around 40.
I was recruited by [a very good friend](https://www.linkedin.com/in/raphaeltlee) who wanted someone to replace the [previous one-and-only infrastructure engineer](https://www.linkedin.com/in/jasondusek/).

When I started, I joined the data infrastructure team run by [Flo Leibert](https://www.crunchbase.com/person/florian-leibert).
I was interested in the Hadoop ecosystem, and felt that this technology was going to become increasingly important going forward.
However, shortly after joining, I went through the first Airbnb Sysops training, which was recruiting people to join the volunteer on-call rotation.
During this training, it became apparent that the production infrastructure had some serious unsolved issues, and was in need of a lot of attention.
By December of 2012, I moved from the data side of the infra to the production side -- specifically, the SRE team -- where I directed my attention for the remainder of my time at Airbnb.

### Configuration Management ###

We began with configuration management.
At that time, Airbnb had [a mechanism for launching new instances](http://airbnb.io/cloud-maker/), but it was unclear how those instances would be configured.
Also, it was unclear how to make new instances and previously-existing instances the same, and no audit trail of configuration changes.
With [Martin Rhoads](https://www.linkedin.com/in/martin-rhoads-a63a0027/), we decided to introduce [Chef](https://www.chef.io/) to solve some of these problems.

We first tried a standard Chef-Server approach, but ran into annoying versioning and clobbering issues.
Shortly, we decided to convert to a Chef-Solo approach based around a monorepo.
The approach we settled on is documented in [this blog post on Chef at Airbnb](https://medium.com/airbnb-engineering/making-breakfast-chef-at-airbnb-8e74efff4707).
It remains in use at Airbnb today, and has also been adopted by several other companies.

Since we had dumped Chef-Server, we now needed an inventory system -- ideally, one that supported custom metadata.
I wrote a very simple proof-of-concept one called [optica](https://github.com/airbnb/optica), which (surprisingly) remains in-use today.
(In fact, because optica is now queried from many places, it has become quite embedded, and any replacement would have to re-implement its rudimentary API.)

Also, since we were no longer using `knife ec2`, we needed a tool for launching and bootstrapping instances.
Martin wrote another proof-of-concept, called [stemcell](https://github.com/airbnb/stemcell).
This has been refactored several times to support more advanced features, but also continues to be in-use at Airbnb today.
While it remains possible to use stemcell on an engineer's laptop (and indeed, this would be required to re-bootstrap the infrastructure in case of catastrophic failure), most engineers probably shouldn't have the AWS credentials to launch instances.
Instead, engineers at Airbnb use stemcell though a web service UI.
The web interface helps avoid tedious command-line invocations, is responsible for authorization, and eases other common cluster management tasks (AZ balancing, scaling up/down, cluster-wide chef runs).

### Service Discovery ###

Around the same time as we were introducing configuration management, in the spring of 2013, we had an additional problem.
There was growing consensus that we couldn't (and shouldn't) write all of our code inside our Rails monolith.
However, we didn't have the tooling to build an SOA.
Configuration management (how to configure the instances running individual services) was certainly part of the problem, but another part was connecting the services together.

We had already written some services in Java, using the Twitter Commons framework which included a service discovery component.
However, this service discovery had to be implemented in every service, and required ZK and Thrift bindings inside that service.
A team was working on a NodeJS service for the mobile web version of the site, and NodeJS had neither of these available at the time.

We decided that we would abstract this problem away -- first, with [Synapse](https://github.com/airbnb/synapse) as a service discovery component, and then with [Nerve](https://github.com/airbnb/nerve) for service registration.
The entire system is called SmartStack, and the design is more comprehensively justified in the [SmartStack blog post](https://medium.com/airbnb-engineering/smartstack-service-discovery-in-the-cloud-4b8a080de619#.m0x2ks9ja).

SmartStack was very easy to deploy incrementally via our new configuration management system.
Registering a service via `nerve` and making it available through `synapse`/`haproxy` required only configuration changes in the Chef monorepo.
Actual deployment of SmartStack involved merely changing where a service finds its dependent services, and also killing any retry or load balancing mechanisms (since these would now be handled by `haproxy`).
By the end of the summer of 2013, all of our services were communicating via an HAProxy, and we were able to kill lots of Zookeeper and server-set-management code in the Rails monolith and other services.
SmartStack also remains in-use at Airbnb today.

### Load Balancing ###

Once our services were talking internally through HAProxy, we wanted to bring the same approach to our upstream load balancing.
At the time, all traffic inbound to Airbnb always went directly from an [ELB](https://aws.amazon.com/elasticloadbalancing/) to a Rails monolith instance, and managing the set of instances registered with the ELB was a manual process.
Initially, we planned an ambitious service that would accept all incoming traffic and would be able to mutate it -- for instance, handling authentication and session management and setting authoritative headers for downstream services.
However, we quickly learned that writing a proxy service that could handle Airbnb traffic, even in 2013, was nontrivial.
After several attempts to deploy a Java version, we punted and instead deployed [Nginx](https://www.nginx.com/resources/wiki/).

The Nginx instances, collectively known as Charon, became our front-end load balancer.
The Charon instances were discovered by Akamai through DNS, where they were entered manually.
After inbound traffic arrived at a Charon instance, it would be routed to the correct service based on request parameters -- most often the hostname in the request headers.
`HAProxy` took traffic for a specific service (by port number on `localhost`) and load balanced it to the actual instances providing that service.
Once this system was deployed, I was able to kill our ELBs.
It was very convenient to have service routing be consistent throughout the stack -- an instance would receive traffic if and only if `nerve` was active on that instance, whether or not it was a "backend" or "frontend" service.

This system worked well enough until Spring 2016.
At that time, several problems arose.
The biggest was that all traffic bound for the Charon boxes was coming from Akamai, and Akamai was not doing a good job load-balancing between the Charon instances.
Since some of these instances were receiving a lion's share of the traffic, and since `haproxy` is single-threaded, we were seeing traffic queueing due to high CPU usage on those instances.
Scaling the Charon cluster wasn't helping, since we would still end up with individual hot instances.

Akamai claimed that [Route53](https://aws.amazon.com/route53/) [weighted resource sets](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted) were to blame, since they return only a single IP address every time a name is resolved.
To avoid Akamai internally caching a single IP, we switched to vanilla `A`-record sets, which return all the IP addresses for a name with each request.
We hoped that this would result in Akamai traffic being balanced between Charon nodes, but the approach did not work.
Eventually, we resorted to re-introducing ELB into our stack, this time as a way to load-balance between Charon instances.
Insert yo-dawg joke about load balancers here.

As part of this project, we spent a lot of time manually managing DNS or ELB registration for Charon instances.
To ease this burden, I wrote a service called `Themis`, which read `nerve` entries from Zookeeper and then took action when the set of these entries changed.
I wrote actions to manage ELB registration, or to create Route53 entries for either multi-IP `A` records or weighted record sets.
As a bonus, this made our stack fully consistent.
Now, even a load balancer instance would only receive traffic if and only if `nerve` was up on that instance.
This system remains in use at Airbnb today.
Alas, I did not have a chance to open-source Themis before I left Airbnb; hopefully, someone at the company takes that on as a project.

### Internal Load Balancing ###

While Charon was handling our load balancing needs for production, we were starting to deploy a lot of internal services, too.
To make writing internal services easier, [Pierre Carrier](https://github.com/pcarrier) and I launched Dyno in the fall of 2013.
This was Nginx configured, just like Charon.
If an engineer marked a service as an internal web service, then `<service name>.dyno` took you to an instance of that service.

Dyno eventually added an authentication mechanism, so internal services didn't have to write their own authentication code.
While the `.dyno` instances were initially manually entered into DNS, once Themis became available we allowed it to handle DNS registration for those boxes as well.
Today, Airbnb engineers regularly interact with dozens of dyno services.

### Monitoring ###

In the beginning of 2014, our systems monitoring was pretty spotty.
At that time, we were using [Scout](http://server-monitor.pingdom.com/) for instance monitoring, but monitoring was inconsistently available.
Also, Scout was a dead-end for metrics -- it only supported system-level stats that its agent was able to collect.
At the time, several cool monitoring SaaS companies were getting started, and I embarked on a project to evaluate our options.

In the end I ended up choosing [DataDog](https://www.datadoghq.com/).
A strong reason was DataDog's very good `haproxy` integration.
This integration allowed us to have metrics for how much traffic each service was getting, where this traffic was coming from, and the distribution of response sizes, result codes, and other interesting statistics.
Another reason was that the [DataDog agent](https://github.com/DataDog/dd-agent) accepted [StatsD metrics](http://docs.datadoghq.com/guides/dogstatsd/), so we could monitor instance statistics like CPU, memory, and other resource utilization alongside our own custom metrics.
Furthermore, DataDog had [server-side CloudWatch scraping](http://docs.datadoghq.com/integrations/aws/), which meant we could see CloudWatch-specific information like RDS utilization stats alongside all other metrics (and avoid asking engineers to log into the AWS console just to see CloudWatch metrics).
Finally, DataDog had a [very comprehensive API](http://docs.datadoghq.com/api/), so it seemed possible to do more automation as time went on.

In early 2014, I rolled out DataDog and removed Scout from all of our systems.
I would be a primary point of contact for managing Airbnb's relationship with DataDog until my departure from the company.
As I was leaving Airbnb, there were teams contemplating what it would look like for us to run at least some of our own monitoring tools.
However, on the whole the decision to use DataDog worked out.
Despite some bumps, the product scaled well with Airbnb, and the company was very responsive in managing problems and rolling out new features.
I strongly recommend DataDog.

I also became a primary point of contact for anything monitoring-related at Airbnb.
After rolling out the `dd-agent` to systems, I added [DataDog's StatsD client](https://github.com/DataDog/dogstatsd-ruby) to many of our applications.
I encouraged other developers to liberally instrument their code with `statsd` calls.
Additionally, I ended up writing lots of internal documentation on monitoring best practices.
I also developed monitoring curriculum, originally for the SysOps group but later as a bootcamp class for all new hires.
I encouraged engineers to formulate hypotheses about what could be causing issues, and then to use the available monitoring tools to test these hypotheses.
Since it is impossible to formulate such hypotheses without at least some understanding of the overall infrastructure, my bootcamp class became a primer on both the Airbnb infrastructure as a whole and the monitoring tools that illuminate that infrastructure.

### Alerting ###

After migrating metric monitoring to DataDog, I was able to tackle alerting.
I had several strong requirements.
I wanted alerts to be defined automatically, so new hosts get alerts as they're spun up and alerts are cleaned up when hosts go away.
I wanted the notifications to be automatically routed to the right people.
Finally, I wanted alerts to be configuration-as-code, not created via manual clicking in the UI.

First, I had to take a stab at the ownership issue.
This was a constant problem during my tenure at Airbnb, and we never really solved it.
As people moved around and teams formed and dissolved, systems would become orphaned, and the maintenance burden would fall on "whoever cares most".
However, I at least made an initial system for assigning ownership, even if the data in that system was not always consistent.
I stored the information in the Chef repo, inside the role files which defined instances, and made it available for querying via `optica`.

Next, I created [Interferon](https://github.com/airbnb/interferon).
This project uses a Ruby DSL to programmatically create alerts.
I chose a DSL because I wanted complicated filtering logic, and had found myself inventing mini-programming languages inside pure-data formats like JSON or YAML.

As an example of how Interferon was used, let's take CPU utilization.
I was able to write a single alert file which specified a CPU utilization threshold.
Whenever Interferon ran, it would pull a list of all hosts from inventory and make sure that each host had a CPU alert, cleaning up stale alerts for any terminated hosts.
The alerts would be routed to the owners for each host, and the alert file could be modified to explicitly filter out any systems where high CPU usage is expected.
Because all changes are code changes, the usual review process applies, so there are no surprise new alerts and no alerts suddenly going missing.
Also, writing alerts in a file encourages developers to write longer, more informative alert messages, and the DSL allows information about hosts to be encoded in the alert, making alerts more actionable.

I gave [a talk about this work](https://www.usenix.org/conference/srecon15/program/presentation/serebryany) at SREConf 2015, which includes more details if you're interested.
There's also a [blog post about Interferon/the Alerts Framework](https://medium.com/airbnb-engineering/alerting-framework-at-airbnb-35ba48df894f) on the Airbnb blog.  

### Product Work ###

In the middle of 2014, I was becoming burned out on SRE work.
Also, although I had been running systems at Airbnb, I hadn't done any work on the product -- I didn't even know how to work with Rails.
To change things up, I transitioned to a product team which was building an experimental cleaning integration for Airbnb.
I worked with a front-end engineer for six months to build this product, and we launched it in several markets.
However, in the end it wasn't viable and was shut down.

### Developer Happiness ###

In early 2015, it was clear that the cleaning product I had been working on wasn't going to ship, and the team would dissolve.
I was looking around for new problems to focus on, and they weren't hard to spot.

Shipping code to the Airbnb Rails monolith was becoming increasingly difficult.
Build and test times were increasing rapidly.
Spurious test failure was a constant problem and required regular re-builds, further delaying shipping.
The build-test-deploy pipeline was unowned, meaning that whenever it broke it was up to whoever was most frustrated to fix it.
Overall, the experience of being a product engineer was quite frustrating because of gaps in tooling, documentation, ownership, and communication.

So, when my product was finally terminated, [Topher Lin](https://github.com/clizzin) and I started the Airbnb Developer Happiness team.
Our broad mandate was to work on whatever was causing the most frustration.
Our initial top target was build times, but we envisioned tackling a wide range of issues around internal communication and tooling.
To understand our problem space and to get buy-in for our projects, we began conducting the Airbnb developer survey.
I collected and analyzed the data, which showed widespread frustration with our tooling and infrastructure.

I spent most of my remaining time at Airbnb working in this problem space.
The team Topher and I started ended up expanding to more than 20 engineers on at least 4 sub-teams.
Although we never got time to work on many of the broader problems we initially envisioned tackling (like internal communication practices), the intersection of people and tooling is the area I remain most passionate about.

### Build System ###

The Developer Happiness Team's initial target was slow build times, at that time creeping into 30-minute-plus territory.
At the time, we were using [Solano](https://www.solanolabs.com/), a third-party ruby testing platform, to run any commit-time tasks including builds.
We had hacked building an artifact into this system as a fake test.
We were also using Solano to build non-ruby projects, including all fat JARs from our Java monorepo.
Solano was running on AMIs provided by the company, and we didn't understand the build environment, how to debug any problems or build failures, or how to control system dependencies for builds.

We decided that we would start by moving builds to our own hardware, where we could optimize the environment.
Since we would end up with multiple systems performing build and test tasks, we decided to create a unified UI where all such tasks could be collected and visualized.
I also began evaluating multiple build systems to replace Solano, with an eye towards a system which supported arbitrary pipelines to support optimizations to the Java builds as well as Ruby builds.

A build system is just an executor which performs tasks in response to events, usually commit events.
We already had a system that fed all [webhook events](https://developer.github.com/webhooks/) from Github Enterprise into RabbitMQ, providing a convenient trigger.
We were already very familiar, too, with [Resque](https://github.com/resque/resque), a Ruby task executor for delayed or long-running tasks which we used throughout our production infrastructure.
Finally, we were tired of writing build tasks as shell scripts, which can't be tested and which integrated with the build system by making API calls via `curl`.
We envisioned instead a small library of common tasks, written as Ruby functions with good test coverage, and which could report their status, progress, and results directly into log systems and databases.

These design considerations led us to decide to roll our own build system.
We built it into Deployboard, the tool we were already using to deploy the builds.
Instead of learning about new deployable builds via API calls, Deployboard would now generate them using Ruby executed in response to RabbitMQ events.
It would display any progress and error logs.
The end result was the Deployboard Build System -- built in less than 4 months by just three engineers, who were also supporting frequently-failing CI for an engineering team of 400+.

We migrated the Airbnb Rails monolith to this system in summer of 2015.
This system immediately improved the speed and reliability of builds by an order of magnitude.
In November of 2015, I wrote a Ruby test splitter which allowed arbitrary parallelism on our monolith's RSpec suite, and migrated the tests from Solano to Deployboard as well.
This improved test times from 30+ minutes to around 10 minutes, reduced spurious failures, and made test result tracking easier.
By March of 2016, we completely terminated Solano, migrated all builds to Deployboard, and introduced and integrated Travis CI for testing most projects except the Rails monolith and the Java monorepo (which were tested in Deployboard for performance reasons).
The combination of Deployboard Build System and Travis CI remains in use for all projects at Airbnb today.

There are a few talks about Deployboard online.
One is [a talk Topher and I gave at Github Universe 2015](https://www.youtube.com/watch?v=4etQ8s74aHg).
There is also [a talk I gave at FutureStack 2015](https://blog.newrelic.com/2015/12/15/airbnb-democratic-deploys-futurestack15-video/).
However, Deployboard was unfortunately never open-sourced, mostly due to lots of Airbnb-specific code and its use of O2, an internal Bootstrap-style CSS framework.

### Ruby Migration ###

In mid-2016, after a brief break to focus on load balancing and Themis, I embarked on what became my final project at Airbnb.
At the time, we were still using Ruby 1.9.3 on Ubuntu 12.04 for all of our projects.
In general, we had no story around how to upgrade system dependencies of any kind for our projects.
My goal was to create such a mechanism, and then to use it to upgrade the Ruby version for our Rails monolith.

Our build artifacts were generated directly on build workers, using system versions of any dependencies.
They were deployed as tarballs to instances, which were required to have matching versions of these system dependencies.
Upgrading such dependencies had to happen in concert between the build and production systems -- a difficult operation that would be more difficult still to roll back in case of trouble.
We had no way of even tracking what system dependencies a given artifact required.

I began by tagging builds with system dependencies used to create the build -- things like ruby version, NodeJS version, and Ubuntu version (as a shorthand for any dynamic library dependencies).
Next, I rebuilt our deploy system UI, which previously asked engineers to pick a specific build artifact to deploy.
The new UI asked engineers to pick a specific version (SHA) of the code, which may be associated with any number of build artifacts.
Finally, I modified the system which actually performed deploys on instances.
Previously, that system would receive a specific artifact (e.g. a tarball of Ruby code along with its `bundle install`ed dependencies) and then go through the steps (untar, link, restart) to deploy that artifact.
In the new version, the system would receive a list of possible artifacts, along with their tags.
It would then compare local dependency versions with the tags on artifacts, and pick an artifact that matched the system (or error out if, for instance, the system Ruby version didn't match the Ruby version tag on any of the available artifacts).
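
To make that matching step concrete, here is a minimal, purely illustrative sketch in shell -- the real logic lived in Deployboard and was written in Ruby, and the artifact names and tag format here are invented:

```bash
#!/bin/bash
# Hypothetical sketch of artifact selection: pick the artifact whose Ruby-version
# tag matches the Ruby installed on this instance. Names and tags are made up.
local_ruby="$(ruby -e 'print RUBY_VERSION')"

for artifact in app-ruby-1.9.3.tgz app-ruby-2.1.10.tgz; do
  tag="${artifact#app-ruby-}"   # strip the prefix...
  tag="${tag%.tgz}"             # ...and the suffix, leaving just the version tag
  if [ "$tag" = "$local_ruby" ]; then
    echo "deploying $artifact"
    exit 0
  fi
done

echo "no artifact matches local Ruby $local_ruby; refusing to deploy" >&2
exit 1
```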

This system allowed me to build the Rails monolith concurrently for Ruby 1.9.3 and Ruby 2.1.10.
It also allowed me to have web workers for both versions of Ruby in production -- each worker, upon receiving a deploy, would pick a correct artifact.
I also began running tests for the monolith under both versions of Ruby, fixing any spec failures that were version-dependent.

By February of 2017, this preliminary work was completed.
I began running some upgraded Ruby workers to watch for unexpected errors, and also to compare performance between the populations.
The vanilla Ruby 2.1.10 build actually had *worse* performance than the [Brightbox PPA](https://www.brightbox.com/docs/ruby/ubuntu/) build of Ruby 1.9.3 we had been using.
In the end, I created a custom build of Ruby 2.1.10 with several performance patches.

In March 2017, I performed the Ruby upgrade for the monolith.
Also, the system dependency upgrade system I created was used to upgrade our Ubuntu version from 12.04 to 14.04.
Other engineers were beginning to use the system to upgrade other system dependencies, including NodeJS versions for Node projects and Ruby versions of other services in our SOA.
After completing the migration and documenting the work, I announced my departure.

### Non-Technical Projects ###

Besides the big chunks of code I wrote while at Airbnb, I was also involved in lots of non-technical (or at least, non-coding) projects.
In hindsight, some of those projects were arguably more important than any of the strictly technical work that I did; see the post-mortem section below for those thoughts.
It seems worthwhile to document those here, too, while I still remember them.

One big area of focus was SysOps.
I spent a lot of time on-call, especially during the hectic years in 2013 and 2014 when we were growing rapidly and our infrastructure was in flux.
I eventually transitioned into a leadership role of the SysOps group.
This involved organizing training for new members, planning the on-call schedule, and running the weekly postmortem meetings.
The SysOps group was incredibly successful, and I frequently hear astonishment from my peers when I tell them that Airbnb has a strictly volunteer on-call rotation.
The group was so successful that we eventually had more people who wanted to be in the on-call rotation than we could fit into slots during a 6-month period.
We ended up reducing on-call shift duration from a week to just two days.
In 2015 several long-tenured members, including me, began stepping back from the group to allow newer engineers to take the lead.

Another big focus was our overall technical vision.
I was a member of Tech Leads, an initial stab at such a vision, in 2013.
However, when [Mike Curtis](https://www.linkedin.com/in/curtismike/) became VP of Engineering, he dissolved the group as part of his efforts to abolish any hierarchy among engineers.
Still, we needed a way to collectively decide how to evolve our infrastructure.
I took the lead on an initial stab at such a system, called Tech Decisions, in late 2013, but that system was too bureaucratic and never had much adoption.

In 2014, a crisis around whether and how we run an SOA precipitated another attempt.
We held a series of meetings to come up with a shared set of principles for our infrastructure, which formed the basis of any future decision-making.
I drafted several of the principles we eventually settled on.
We also created a structure called the Infrastructure Working Group, which worked to influence individual teams and engineers to make technical decisions in accordance with the principles.
I was heavily involved with the group, at least until the Developer Happiness Team began taking all my time.

I was very involved with our Bootcamp efforts for new hires.
I participated in the meetings that created the Airbnb Bootcamp.
Afterwards, I ended up regularly teaching two of the sessions.
The first was on monitoring our infrastructure.
The second was titled "Contributing and Deploying", and covered the developer workflow from committing code (including how to write good commit messages -- a personal quest) to getting that code successfully out in production.
This was the only mandatory session of the bootcamp.

Finally, as a technical leader and senior engineer, I spent a lot of time on mentorship and code review.
During our Chef roll-out, I ended up reviewing almost every pull request to our Chef repo in an effort to broadly seed Chef best practices.
Later, as I transitioned to working primarily on Deployboard, I reviewed most PRs to that repo, trying to ensure consistent architecture, style, and test coverage.
As the Developer Happiness/Infrastructure group of teams grew and hired many new engineers, I worked to get them up to speed on the codebase and to become productive and self-sufficient contributors.

Lastly, I spent a large amount of time maintaining what I call "situational awareness".
This meant engaging with the firehose of stuff that the Airbnb engineering team was doing, from project proposals and infrastructure decisions down to individual pull requests, email threads, and even Slack conversations.
I attempted to inject vision and guidance wherever I could, connect the dots between disparate projects, and in general to be helpful.
For instance, I could tell an engineer that a project they were trying to accomplish would become easier when another engineer on a different team completed a different project.
I could catch PRs that were likely to cause problems, or connect outages to specific changes.
This connector role was performed by several people in the engineering organization, and these people never got the credit they deserved for this thankless and never-ending task.

## What Didn't Work? ##

While I knew that I had been incredibly productive at Airbnb, writing it all down really put things in perspective.
Looking back over what I worked on, I see that my technical projects -- Chef, SmartStack, Deployboard, the work on Monitoring -- were very successful.
They made life easier for other engineers, and have survived the test of time to continue providing value.

However, I always had more grand ambitions than to just accomplish a specific project.
I had a vision for how I wanted our infrastructure and our engineering team to function, and I did not succeed, in most cases, in making that vision a reality.

A great example of this is our original vision for the Developer Happiness team.
We did not set out to become the CI team, although that's where we eventually ended up.
We wanted to improve documentation, communication, and make being an Airbnb engineer easier and more fun.
I ran the engineering team survey and collected pain points, but never had enough bandwidth to address more than a few of these pain points.

Why didn't I have enough bandwidth?
I think it's because I failed to navigate the transition from individual contributor to technical leader.
The easiest way for me to get things done at Airbnb was to just do the things I thought needed doing.
When our builds were slow, I jumped in with both feet and made them faster -- by writing a build system and a test splitter and a Javascript UI for result visualization, etc...

In the meantime, I let things fall to the floor.
I ignored the structural problems that led to the situation, or merely kvetched about them and left others to try to solve them.
I spent too much time writing code, and not enough time influencing or guiding other contributors -- which would have allowed me to focus on a broader range of problems.
I focused on my ability, as an IC, to definitively solve smaller (though usually still very important) problems.
This took away from my ability to address the larger ones.

This wasn't *entirely* my fault.
Airbnb could have done better to support my transition.
The engineering team is completely flat -- even though we have engineering levels, they're supposed to be secret.
Expectations around what engineers should be focusing on at each level are vague at best, and there's no consensus among managers about what makes a more senior engineer.
Mike Curtis once told me, in a one-on-one, that it should be possible to reach the highest engineering levels by focusing on deep technical work, but I think that assertion would come as a surprise to most of his engineering managers.

I had always wanted to make technical leadership an explicit role, not something a few engineers did in their spare time.
I didn't succeed in making this vision a reality.
I don't think I really even tried.

If I could change anything about my time at Airbnb, it would be a change in focus.
I wish I was more focused on building relationships, and less on accomplishing objectives.
In the end, when I was leaving, it was the relationships that I did manage to build that persisted in my life -- the technical stuff is all someone else's problem now.

## Why I Quit ##

It took about six months for the Developer Happiness team to go from three to four engineers.
This was an incredibly stressful time.
Between broken builds, broken tests, and bugs in the code, it could take an entire day to ship a single changeset.
We were concerned about a code backlog so deep we couldn't clear it in one day.
We were performing heroics to keep the system running, at the same time trying to build and roll out a replacement, all with minimal headcount.
Yet, we didn't seem to get the recognition and support we deserved from engineering management.

From the founding of the team, our manager was part-time, also managing a separate product team.
In the summer of 2015, he approached me about potentially taking over as the full-time manager.
However, I was in the middle of several deep technical projects, which I felt couldn't lose me as an IC.
Also, I felt I was a role model for the parallel IC career path at Airbnb.
I wanted to continue being an example of a successful IC who got things done without transitioning.

As a result, I declined the offer.
Instead, another recently-hired team member took on the management role, and we also added a newly-hired project manager.
Together, the two of them came up with a broad roadmap for the team.
They organized a process to plan specific projects to meet that roadmap.

This process turned out to be incredibly frustrating for me.
I was spending all my time putting out fires and shipping high-impact code, but the new roadmap had no room for several of the projects I thought were most important, including some that I was in the middle of working on.

What's more, this situation epitomized the disconnect between the Airbnb engineering team's vision for technical leadership, and the reality.
It seemed to me that senior engineers should have a lot of input over our planning process.
Instead, I faced what I now recognize as the structural disadvantage of remaining an IC.
The new manager and PM had all their time to influence, plan, and decide.
As an IC, I had to split my time between those activities and actually participating in the tech process -- coding, code review, maintenance, mentoring.
To live up to our goals, the management team would need to put a lot of effort into active engagement and deference.
This just didn't seem likely.

The combination of high-stress firefighting, loss of control in planning, and disillusionment with our ideals led me to utterly burn out.
I ended up taking a two-month leave of absence in March 2016.
By the time I returned from leave, I was even further sidelined in my own team.
Instead of continuing to fight for influence, I decided to step aside.

The Observability team I joined consisted of some of the most senior engineers working on some of the deepest technical problems of the infrastructure.
I focused on a single project, the Ruby upgrade, and led it through to completion almost single-handedly.
However, it was clear to me that this wasn't the kind of work I wanted to be doing.
I enjoyed the social aspects of my job as much as, or more than, the deep technical ones.
I am most passionate about the interface of people and technology, and I wanted to be a technical leader.
It felt like the right moment to move on from Airbnb.
I announced the completion of the Ruby upgrade, and my departure, in the same all-hands meeting.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Setting up IMAP]]></title>
            <link>https://igor.moomers.org/posts/setting-up-imap</link>
            <guid>https://igor.moomers.org/posts/setting-up-imap</guid>
            <pubDate>Mon, 20 Mar 2017 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
Recently, I wanted the ability to more reliably check email while on my phone.
So far, I had gotten by with [VX ConnectBot](http://connectbot.vx.sk/), which I would use to SSH into my phone and connect to my `tmux` session running `mutt`.
But I wanted to be able to check personal email while on the go without a laptop more often, and I wanted the ability to access attachments without having to forward them to my gmail address.

It turns out that setting up IMAP is easier than I thought, because the software is so good.
What's difficult is navigating the maze of confusing, overlapping email standards and options.
Here's how I did my setup, but YMMV.

## Migrate existing mail

I decided to run [Dovecot](https://www.dovecot.org/) as an IMAP server.
Dovecot has excellent documentation.
For instance, on their page about [MailLocation](http://wiki.dovecot.org/MailLocation), I learned that it is "not possible to mix maildir and mbox formats".

This was going to be a problem because I use [Maildir](http://www.qmail.org/man/man5/maildir.html) for my personal email folders, but [Postfix](http://www.postfix.org/), my SMTP server, delivers to an [mbox](https://en.wikipedia.org/wiki/Mbox) file as its inbox.
I was going to need to standardize on one or the other.

I chose Maildir, IMHO a superior format that is more reliable without complicated locking.
Maildir seemed like the correct choice if I wanted to continue using `mutt` locally while also accessing those same emails via Dovecot remotely.
So, I began by using [mb2md](http://batleth.sapienti-sat.org/projects/mb2md/) to migrate all of my existing messages to a local Maildir.

I installed this program into `/usr/local/bin` by downloading it from the link above.

```bash
$ cd /usr/local/bin
$ wget http://batleth.sapienti-sat.org/projects/mb2md/mb2md-3.20.pl.gz
$ gunzip mb2md-3.20.pl.gz
$ chmod a+rx mb2md-3.20.pl
$ ln -s mb2md-3.20.pl mb2md.pl
```

I ran it like so:

```bash
igor47@purr:~/procmail $ mb2md.pl -s /var/mail/igor47 
Converting /var/mail/igor47 to maildir: /home/igor47/Maildir
Source Mbox is /var/mail/igor47
Target Maildir is /home/igor47/Maildir 
2766 messages.
```

Afterwards, I emptied my pre-existing `mbox` inbox to prevent confusion:

```bash
$ echo > /var/mail/igor47
```

## Procmail to local Maildir

Next, I wanted to ensure that new mail would continue to be delivered to my Maildir folder.
I was already using [Procmail](https://wiki.archlinux.org/index.php/Procmail) to filter spam and other kinds of messages, but I didn't have a final fallback rule.
This meant that any mail not delivered to a specific location by Procmail would come back to Postfix, which delivered it to the `mbox` inbox.
To resolve the situation, I added a new catch-all rule to my `procmailrc`:

```bash
$ echo INCLUDERC=${PMDIR}/rc.final >> ~/procmail/procmailrc
```

`rc.final` looks like so (my `procmailrc` sets `$MAILDIR` to `~/Maildir`):

```
:0:
$MAILDIR/new
```

As always, [this reference](http://www.zer0.org/procmail/quickref.html) is inestimably helpful to write these obscure Procmail filter rules.

## `mutt` uses new inbox

Next, mutt needed to be told where my mail would now arrive.
I set the following variable in my `.muttrc`:

```
mailboxes ~/Maildir
```

Note that I am continuing to get an error (`/var/mail/igor47 is not a mailbox`) when I first open mutt, but it seems to cause no trouble after I open the correct inbox.

## SSL certs for mail

I used [Let's Encrypt](https://letsencrypt.org/) to get SSL certs for my mail server.
Because Let's Encrypt uses HTTP to authenticate that you really own the domain, I first needed my mail server (`mail.moomers.org`) to be accessible on an HTTP port.
I did this in Apache by making `mail.moomers.org` a [`ServerAlias`](https://httpd.apache.org/docs/2.4/mod/core.html#serveralias) for the [www.moomers.org](https://www.moomers.org) virtual host.

That done, I invoked Let's Encrypt like so:

```bash
$ letsencrypt certonly -a webroot -d mail.moomers.org -w /var/www/moomers.org/htdocs
```

Once the cert was acquired, I double-checked that automatic renewal works, too:

```bash
$ letsencrypt renew --dry-run
```

[This article was very helpful for configuring Dovecot/Postfix for SSL](https://ubuntu101.co.za/ssl/postfix-and-dovecot-on-ubuntu-with-a-lets-encrypt-ssl-certificate/).

## Configure Dovecot

I was ready to [install Dovecot](https://help.ubuntu.com/community/Dovecot):

```bash
$ aptitude install dovecot-imapd
```

I wanted system users to also be Dovecot users, but I didn't want passwords to be transmitted unencrypted over the internet.
I modified `10-auth.conf` (all of the config files here are relative to `/etc/dovecot/conf.d`) like so:

```
disable_plaintext_auth = yes
auth_mechanisms = plain login
```

To enable SSL, I set these in `10-ssl.conf`:

```
ssl = yes
ssl_cert = </etc/letsencrypt/live/mail.moomers.org/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.moomers.org/privkey.pem
```

I wanted Postfix to SASL-auth against Dovecot (so, Dovecot users, who are system users, are also Postfix users).
I set this in `10-master.conf`:

```
  # Postfix smtp-auth
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }
```

Finally, I wanted Dovecot to read my Maildir inbox.
I set this in `10-mail.conf`:

```
mail_location = maildir:~/Maildir
```

I was ready to start dovecot:

```bash
$ service dovecot restart
```
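
Before pointing a real client at it, a quick way to check that Dovecot accepts system credentials is `doveadm`, which ships with Dovecot; it prompts for the password and reports whether the lookup succeeded (the username here is just mine):

```bash
$ sudo doveadm auth test igor47
```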

## Configure Postfix

We use [SASL](http://www.postfix.org/SASL_README.html) to allow postfix to authenticate users.
Given that we've already configured Dovecot, above, we can skip straight to [here](http://www.postfix.org/SASL_README.html#server_sasl_enable) in the Postfix documentation.
We also need to [enable TLS on postfix](http://www.postfix.org/TLS_README.html).

In the end, my config (the relevant parts) looks like this:

```
# Enable TLS using Let's Encrypt certs:
smtpd_use_tls=yes
smtpd_tls_cert_file=/etc/letsencrypt/live/mail.moomers.org/fullchain.pem
smtpd_tls_key_file=/etc/letsencrypt/live/mail.moomers.org/privkey.pem
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

# Disable Poodle
smtp_tls_security_level = may
smtpd_tls_security_level = may
smtp_tls_mandatory_protocols=!SSLv2,!SSLv3
smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3
smtp_tls_protocols=!SSLv2,!SSLv3
smtpd_tls_protocols=!SSLv2,!SSLv3

# Changes to SSL Ciphers
tls_preempt_cipherlist = yes
smtpd_tls_mandatory_ciphers = high
tls_high_cipherlist = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA256:ADH-AES256-GCM-SHA384:ADH-AES256-SHA256:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-RSA-AES256-SHA384:ECDH-ECDSA-AES256-SHA384:AES256-GCM-SHA384:AES256-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-DSS-AES128-SHA256:ADH-AES128-GCM-SHA256:ADH-AES128-SHA256:ECDH-RSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-RSA-AES128-SHA256:ECDH-ECDSA-AES128-SHA256:AES128-GCM-SHA256:AES128-SHA256:NULL-SHA256

# Enable SASL
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous, noplaintext
smtpd_sasl_tls_security_options = noanonymous
smtpd_tls_auth_only = yes

# Permit SASL-authenticated users to relay mail
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
```

## Mail Client

First, I made sure that the Dovecot IMAP port (143) was accessible from the internet through the firewall on my server.
I didn't need to punch a hole for port 25, because it was already open to allow SMTP traffic from the internet.
I did notice, while testing, that my ISP (AT&T) blocks outbound port 25 from my local network.
I had to VPN out from my devices to test things, and will continue to need to do that to send email while my phone is on my local network.

To configure my mail client, I selected IMAP.
My user name is my local system unix account, and my password is my normal unix password.
Set up your client for incoming mail via IMAP on port 143, using `STARTTLS`.
Outbound mail goes through port 25, also via `STARTTLS`.
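
To double-check both STARTTLS endpoints from outside, `openssl s_client` works well (substitute your own hostname, and keep in mind the outbound port 25 blocking mentioned above); each command should print the Let's Encrypt certificate chain and drop you into a live session:

```bash
$ openssl s_client -connect mail.moomers.org:143 -starttls imap
$ openssl s_client -connect mail.moomers.org:25 -starttls smtp
```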
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Backups to another server]]></title>
            <link>https://igor.moomers.org/posts/backups-to-another-server</link>
            <guid>https://igor.moomers.org/posts/backups-to-another-server</guid>
            <pubDate>Thu, 22 Dec 2016 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
I've been running a personal Linux server for just about ten years.
This makes me a [Gladwellian expert](http://gladwell.com/outliers/the-10000-hour-rule/) at the task!
One of the chores of running a personal server is backups.
Recently, I got access to a second machine with enough disk space to back up at least the most important files.
This documents how I set up [duplicity](http://duplicity.nongnu.org/) to regularly back up the data to the remote machine.

I was inspired by other guides, primarily [this one by Marc Gallet at zertrin](https://zertrin.org/how-to/installation-and-configuration-of-duplicity-for-encrypted-sftp-remote-backup/).
Marc offers a script which makes using `duplicity` easier, especially if you're backing up to S3.
I also found the [duplicity man(1) page](http://duplicity.nongnu.org/duplicity.1.html) a useful reference.
[This guide by Justin Ellingwood](https://www.digitalocean.com/community/tutorials/how-to-use-duplicity-with-gpg-to-securely-automate-backups-on-ubuntu) is also substantively similar.

My focus was on security on both ends.
I wanted to make sure that:
1. The remote machine, being untrusted, cannot be used to read the content of the backups
2. The backup user cannot be used to get access to anything else on the remote machine

(1) is ensured by using duplicity and pgp-encrypting the backups.
(2) is ensured by creating a very limited remote user which can only access the backup data.
I also create a local user dedicated to running backups, to encapsulate the configuration.
You can probably use `root` to run the backups, in which case you can skip the user and `sudo` configuration below.
But I don't like to keep too much configuration under the root user.

## Setting up users for secure communication

On the local machine, let's set up a user that will own the backup process.
Since I'm backing up the `moomers.org` server, I named the user `moobacker`.

    igor47@local:~$ sudo useradd -N -m -s /bin/false moobacker

I prevent the creation of any groups for the user (`-N`), cause `useradd` to create a home directory in `/home/moobacker` (via `-m`), and set the shell to `/bin/false` (via `-s /bin/false`) to limit what this user can do.
Note that setting the login shell to `/bin/false` is not foolproof -- someone can still run `/bin/bash` as the user if they manage to log in, for instance.
However, we prevent any logins as this user by not setting a password (never running `sudo passwd moobacker`) and by not setting up an `.ssh/authorized_keys` file.
The only way to run commands as that user is via `sudo` or from cron.

We will need `moobacker` to have an ssh key to get into our remote server.

    igor47@local:~$ sudo -u moobacker ssh-keygen -t rsa
    igor47@local:~$ sudo cat /home/moobacker/.ssh/id_rsa.pub

Don't pick a passphrase for the key -- we won't be using it interactively.
I originally used the newer elliptic-curve ssh key type (via `-t ed25519`), but switched back to RSA after the Paramiko SSH implementation in `duplicity` had trouble with this key type.
We `cat` out the public component of the key we just created; we're going to need it shortly.

Next, we should set up a user on the remote machine who will own the backups.
I named the user `purrbackups` because I'm backing up a server called `purr`.
On the remote server, do the following:

    igor47@remote:~$ sudo useradd -N -m -s /bin/false purrbackups
    igor47@remote:~$ sudo -u purrbackups mkdir /home/purrbackups/.ssh
    igor47@remote:~$ echo 'from="<local-server-ip>" <public key>' | sudo -u purrbackups tee /home/purrbackups/.ssh/authorized_keys

This will allow the `moobacker` user from the local server to log into the `purrbackups` account on the remote server.
Copy-pasta the actual SSH public key in place of `<public key>`.
Also, replace `<local-server-ip>` with the actual IP address of the source server.
The `from=` option in `authorized_keys` only allows this user to log in from that IP address for greater security, even if the SSH private key leaks out.

At this point, you should test the setup so far by trying to SSH from the local server to the remote server as the two new users we set up:

    igor47@local:~$ sudo -u moobacker ssh purrbackups@<remote>

After accepting the host keys, you should connect and then immediately get a `Connection closed` error.
Congratulations -- we got the two machines talking to each other!

We should still lock down the `purrbackups` user so it can only be used for sftp purposes.
To do this, let's edit the `/etc/ssh/sshd_config` file.
Find a line containing `Subsystem sftp` and make sure it is uncommented (no `#` at the beginning) and looks like this:

    Subsystem sftp internal-sftp

Then, at the end of the file, add a section like so:

```
Match User purrbackups
  ForceCommand internal-sftp
  ChrootDirectory /home/purrbackups
  AllowAgentForwarding no
  AllowTCPForwarding no
  X11Forwarding no
```

This will only allow the `purrbackups` user to use SFTP (for file-copying purposes), and will further only allow it access to its own home directory.
Finally, set permissions on the `Chroot` directory, and add another directory for backup purposes:

    igor47@remote:~$ sudo chown root:root /home/purrbackups
    igor47@remote:~$ sudo chmod 755 /home/purrbackups
    igor47@remote:~$ sudo mkdir /home/purrbackups/backups
    igor47@remote:~$ sudo chown purrbackups /home/purrbackups/backups
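
Before testing, check the new config for syntax errors and restart `sshd` so the `Match` block takes effect (on Ubuntu/Debian the service is named `ssh`):

    igor47@remote:~$ sudo sshd -t && sudo service ssh restart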

Now, test this config again via `sftp`:

    igor47@local:~$ sudo -u moobacker sftp purrbackups@<remote>

You should get an SFTP prompt.
Hooray!

## Allow local user to run backups

We want the backup user to be able to access files owned by other users on the system.
This means the backup user will need to run the backup program as root.
Let's set up a backup script for this purpose.

```bash
#!/bin/bash

whoami
```

Save this as `/home/moobacker/backup.sh`, and then set permissions appropriately:

    igor47@local:~$ sudo chown root:root /home/moobacker/backup.sh
    igor47@local:~$ sudo chmod 755 /home/moobacker/backup.sh

Next, let's allow the backup user to run this as root.
Use `visudo` to edit `/etc/sudoers.d/backups`:

    igor47@local:~$ sudo visudo -f /etc/sudoers.d/backups

The contents should be:

```
moobacker ALL = (root) NOPASSWD: /home/moobacker/backup.sh
```

This will allow the `moobacker` user to invoke the script as root.
Test it like so:

    igor47@local:~$ sudo -u moobacker sudo /home/moobacker/backup.sh

The output should be the word `root`.

## Set up GPG key for `duplicity`

We will use this key to encrypt the backups.
Since the key will need to be distributed to other systems (so you can decrypt your backups in case you need them), set a passphrase on the private key.
You can pass this passphrase to the backup script when it runs.

    igor47@local:~$ sudo -H -u moobacker gpg --gen-key

Accept all the defaults.
You can pick some values for the name and email addresses.
The command will take a while to generate the entropy required for the keys -- you can run some random tasks in the meantime (like `aptitude update`).
Note the `-H` option to `sudo` -- we need this, otherwise `gpg` won't know where the home directory is and won't save the resulting keys.

Normally, private keys should remain on the system where they were generated.
In this case, you'll want to copy the private key to another system so you can decrypt your backups if necessary.
I used the `ccrypt` program to encrypt the keyring tarball with a symmetric key:

    igor47@local:~$ sudo -u moobacker tar -czf - /home/moobacker/.gnupg | ccencrypt > /tmp/moobacker.gnupg.tgz.cc

You can then distribute the file freely.
You'll need the passphrase to the `.cc` file containing the GPG private key, as well as the GPG key passphrase, to recover your backups.
If you don't have the `ccencrypt` binary, it can be had on most distros by installing the `ccrypt` package.
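
For completeness, here is roughly what recovery on a fresh machine would look like -- the file names match the commands above, the restore target is illustrative, and `duplicity` will prompt for the GPG key passphrase:

```bash
# decrypt and unpack the keyring backup (prompts for the ccrypt passphrase)
ccdecrypt moobacker.gnupg.tgz.cc
tar -xzf moobacker.gnupg.tgz    # unpacks home/moobacker/.gnupg under the current directory
export GNUPGHOME="$PWD/home/moobacker/.gnupg"

# pull /etc back from the remote into a local directory
duplicity restore sftp://purrbackups@remote/backups/etc /tmp/etc-restore
```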

## Set up backups

Let's put all of the pieces together.
Remember the script we created earlier, in `/home/moobacker/backup.sh`?
Here's what the final version of mine looks like:

```bash
#!/bin/sh

set -o errexit
set -o nounset

encryption_key_id="DC3EEE04"
duplicity="duplicity --verbosity error --no-print-statistics --encrypt-key $encryption_key_id"
remote="sftp://purrbackups@remote/backups"

# back up /etc
$duplicity /etc ${remote}/etc

# back up /var
$duplicity /var ${remote}/var

# back up /home
$duplicity /mnt/raid/home ${remote}/home
```

The `encryption_key_id` can be found like so:

    igor47@local:~$ sudo -H -u moobacker gpg --list-keys

I picked the ID of the `sub` key, which is typically used for encryption.
For initial runs of the script, you might wish to use a more verbose output mode, so you can see any errors:

```bash
duplicity="duplicity --encrypt-key $encryption_key_id"
```

Perform an initial run (or two) of the script:

    igor47@local:~$ sudo -H -u moobacker /home/moobacker/backup.sh

If this succeeds, it's time to add the script to a crontab to run regularly.

    igor47@local:~$ sudo -H -u moobacker crontab -e

My `moobacker` user's crontab looks like this:

```cron
MAILTO="admins@example.org"

# m h  dom mon dow   command
23 23 * * 0 sudo /home/moobacker/backup.sh
```

This will run backups at 23:23 every Sunday.
I expect the backup script to produce no output -- any output indicates errors.
The `MAILTO` setting will send any error output to me via email.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What is to be done?]]></title>
            <link>https://igor.moomers.org/posts/what-is-to-be-done</link>
            <guid>https://igor.moomers.org/posts/what-is-to-be-done</guid>
            <pubDate>Wed, 28 Sep 2016 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
I'm a technologist -- someone who builds tools and systems.
I'm also not someone who draws a sharp distinction between work and life.
I view life as a series of projects I care about, the only distinction being that sometimes people want to pay me money to work on some of these projects.

I am motivated in my work by the premise that what I am doing is somehow helping to improve the world.
In the past year, I experienced a crisis of faith, where I stopped believing this premise, and as a result my work (and so also my life) became meaningless to me.

This post is a somewhat-personal account of how this crisis came about, how I dealt with it, and what I concluded in the end.
I wrote this for me, so I have a record of the journey.
But if you, too, are a technologist who is finding herself depressed and uninspired, then maybe this could help.
Or maybe you're just trying to figure out where best to apply yourself?
If so, [skip to the end](#ok-but-whats-to-be-done).

## Catch-22 ##

I am an environmentalist as well as a technologist.
I really, really like this awesome planet we're all riding around on; so beautiful, so full of neat things.
My environmentalism came into direct conflict with my profession, as it became clear to me that we're using technology to make a mess of things.

For instance, [we create nuclear fuel which will last for 10,000 years and has to be stored in complicated facilities lest it poison us all](https://www.damninteresting.com/this-place-is-not-a-place-of-honor/).
We release chemicals into the environment which are [turning amphibians female](http://www.newsweek.com/female-frogs-estrogen-hermaphrodites-suburban-waste-369553).
[Our bees are dying](https://en.wikipedia.org/wiki/Colony_collapse_disorder), probably [thanks to our pesticides](https://en.wikipedia.org/wiki/Neonicotinoid#Bees), but [our pine beetles are thriving (and killing all the trees)](http://www.tahoedailytribune.com/news/california-tahoe-area-tree-deaths-climb-to-record-levels-thanks-to-bugs-drought/) because of global warming.

Technology is not just causing environmental degradation, but seems to be eating away at the very fabric of society.
We can [build weapons which may destroy us all](https://en.wikipedia.org/wiki/Nuclear_warfare).
At the same time, we have isolated ourselves in [filter bubbles](https://www.techopedia.com/definition/28556/filter-bubble), wherein conflict and rhetoric can escalate until [the use of those weapons doesn't seem so bad](http://www.politicususa.com/2016/08/03/trump-asks-if-nuclear-weapons-them.html).
At a time when we're faced with huge collective challenges, technology seems to [have taken away even our ability to agree on 'facts'](http://www.newyorker.com/magazine/2016/03/21/the-internet-of-us-and-the-end-of-facts).

So, here was the catch-22.
How could technology be both the cause of *and* the solution to all of our problems?
If I continued working on improving technology, wouldn't I be hastening the very outcomes I decried?
But if I refused to work on technology any further, then what was I to do instead?
I could retreat into the woods and hide from the world, but that didn't seem like the solution to *any* problem -- not even my distinctly personal one.

## Descent ##

This crisis of confidence was the cause of (or caused by, or just correlated with) some major sad times in my life.
I felt paralysed into inaction.
I felt as though I should be working to fix the problems I saw in the world, but there was no action to take that wouldn't make things worse.
During this period -- roughly, autumn 2015 to summer 2016 -- I felt like an automaton going through the motions of life, while inside I felt nothing but dread.
During this time, my relationship with my long-term partner disintegrated.
The shared house community I had been living in fell apart as well, and I moved into an apartment by myself for the first time in 14 years.
I burned out at work, and went on leave to attempt to recover my sanity.

## Escape ##

Today, I am feeling much more optimistic about the world, and my role in it.
A few things really helped me to overcome this malaise.

### Spirituality and Inward Focus ###

There were two disjoint sets of problems.
One set included the problems I discussed above -- environmental degradation, political gridlock, the threat of catastrophe.
But a totally different set of problems was internal -- my own sad, depressed mental state.

I could do nothing about the former until I addressed the latter.
That, itself, was a useful realization.

I had several tools to address my internal problems.
One learning, which I picked up from the various books on communication I had been reading (especially [Non-Violent Communication](http://amzn.to/2dveZlp) and [Crucial Conversations](http://amzn.to/2dmUanB)), was that I chose my reactions.
A person in an interaction with me could not "make" me upset -- they did whatever, and then I made myself upset by reacting.
Likewise, global warming didn't "make" me depressed -- I could react with either ennui or with determination when confronted with such a problem, and it was up to me to choose which.

Another tool, which I gained from my healing circle, was insight into the nature of control.
It's what the shaman I worked with would call the "control virus".
I was upset because I felt like all of these huge world problems were beyond my control.
But -- of course they were!
Most things are beyond our control, even when we think they're not.
Just about the only thing we *can* control is how we react, which actions we ourselves take.
It's not up to me to solve all of the world's problems, but it's up to me to do the best I can, to *be* the best I can.
This realization may sound trite.
But, as with many important realizations, there's a huge difference between hearing or reading or knowing it, and having it really sink into your bones.

I group realizations like this into an overall spiritual journey.
Any work I do in the external world is secondary to work I do on the inside, on my own consciousness.
It is the latter which enables the former.
Whatever your focus -- environmentalism, or poverty, or health and healthcare -- it must start with inward healing.

### Community ###

Another amazing source of healing while I dealt with my crisis was my community.
My healing circle has provided me with many mental tools and insights.
I meet regularly with a group called HeartTribe, the members of which support one another in making the positive changes they wish to see in their lives.
Also, my general group of close friends is wonderful, and supported me during the dark times.
What was most helpful was feeling like I was not alone in my spiritual journey, but that these wonderful people were there with me.

Also, many of the people I'm close to are also technologists who have struggled with some form of the same crisis.
I was able to learn how my peers were able to create meaning in their lives.
I could learn about the types of projects which were inspiring them, and become infected by their enthusiasm.
I am very fortunate to be surrounded by so many generous people doing so many cool things.

## Ok, But What's To Be Done? ##

Ultimately, a powerful realization (which came from my friend Other Igor) was that there's no way back.
We cannot just shut off our technology -- we're too dependent on it, now.

Thus, we must move forward.
We have to create technology which preserves the environment, respects human values, and enables us to be the best species we can be.
If we simply accept this premise, then there's no time for despondency, because there's so much work to do.

The problem then becomes (as usual) how to pick the biggest thing to work on.
Bret Victor already has [an amazing list of priorities in climate change](http://worrydream.com/ClimateChange/).
I've also been thinking of projects in FinTech and in digital communication and privacy which could have a large impact.
I plan on writing more about my ideas for improving communication, but currently I'm excited about [sandstorm](https://sandstorm.io/).

Finally, an important source of inspiration for me has been a careful re-reading of [Meditations on Moloch](http://slatestarcodex.com/2014/07/30/meditations-on-moloch/).
This is a brilliant piece of writing, which really gets at all of the underlying issues I am concerned about -- really, all the same issue, *Moloch* (*whose mind is pure machinery!*).

If you are looking for inspiration, reading this essay should fill you with a thousand ideas.
There is a ton of room for attacking the underlying coordination problems that the author discusses.
But his final conclusion suggests that *the most important project* is AI.
For a dispirited technologist, this is an incredibly energizing conclusion.
You can join the ranks of thousands of other engineers and scientists who have worked on this problem, and there's room for everyone no matter your specific skill set.
Probably, even a crusty systems guy like me can contribute here.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The 12V Music Manifesto]]></title>
            <link>https://igor.moomers.org/posts/the-12v-music-manifesto</link>
            <guid>https://igor.moomers.org/posts/the-12v-music-manifesto</guid>
            <pubDate>Mon, 19 Sep 2016 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
Party people, we need to talk.
Every time I go to a renegade event -- a party on the beach, or in the forest, or in an abandoned tunnel -- there is always a generator involved.
The generator powers the sound system, but it also creates a steady hum of undesirable noise and a plume of smelly smoke.
The generator inevitably runs out of gas, causing the party to pause for a minute while it's topped off.
Oh, and topping off the generator is always done using [those annoying CARB cans](http://www.gad.net/Blog/2012/11/22/one-mans-quest-for-gas-cans-that-dont-suck/), in the dark -- and so fuel is always spilled *everywhere*.

## Generator Power Is Inefficient ##

Actually, amps run on DC power.
Inside your amp is a [rectifier](https://en.wikipedia.org/wiki/Rectifier) which turns the 120VAC line supply into DC, and then probably a [voltage regulator](https://en.wikipedia.org/wiki/Voltage_regulator) which dissipates some of that power to get a level usable by the [op-amps](https://en.wikipedia.org/wiki/Operational_amplifier) and [transistors](https://learn.sparkfun.com/tutorials/transistors) in the amp.

Let's assume that the efficiency of your small generator is [a generous 20%](https://settysoutham.wordpress.com/2010/05/26/portable-generators-about-half-as-efficient-as-power-plants/), and then [80% for the full-wave rectifier](http://www.brighthubengineering.com/consumer-appliances-electronics/96645-efficiency-of-ac-rectifiers/), and then say another 90% for the voltage regulator.
Then, only `0.20 * 0.80 * 0.9 ≈ .15` or 15% of the power in the gasoline you're burning is actually used to power your sound system.

"Okay, okay, we get it -- you hate generators," you're saying at this moment.
"But like what are we supposed to do?"

## 12V Sound Systems ##

Actually, I love my [Honda EU2000](http://amzn.to/2cE5yk1).
It's quiet, lightweight, runs for a long time on very little gas, and has proven extremely dependable.
It's just that I don't need my generator unless I plan to be running sound for many, many days -- like, a week and a half of Burning Man.
But I can run a great-sounding system for ***about 24 hours*** off a single [120 amp-hour deep-cycle battery](http://amzn.to/2dc97Kk).

Because people often ask me how I manage to run such a sound system without a generator, I am writing this post to break down my whole system.
Read on to learn about each component.

### The Battery ###

The 12V batteries that go in your car are ***not correct*** for this task.
Those batteries are meant to start your car's engine, and then be immediately topped off by the alternator.
If you try to draw power off them, they will quickly die, and if they remain dead they will calcify and will need to be replaced.

The batteries I use are sold as either deep-cycle or marine batteries.
These are meant to be used on boats or in RVs, where you're expected to run appliances off the batteries without the engine running.
They're also often used as part of energy storage for a solar system.

The capacity of deep-cycle batteries is measured in [amp-hours](https://en.wikipedia.org/wiki/Ampere-hour) (or `Ah`).
A `120Ah` battery will run a device which consumes 120 amps for an hour, 60 amps for two hours, etc...
I recommend buying the 120 amp-hour ones because they're not much heavier or more expensive than smaller batteries, and give you more capacity.
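
As a sanity check on the runtime claims in this post, here's a back-of-the-envelope estimate in Python. The average current draw is an assumption picked to be roughly consistent with the ~24-hour figure I quoted above, not something I've measured -- real draw depends entirely on volume and program material.

```python
# Back-of-the-envelope battery runtime estimate.
# ASSUMPTION: an average draw of ~4 amps at 12V (about 50W); this is an
# illustrative guess, roughly consistent with the ~24-hour figure above.
battery_capacity_ah = 120     # the 120Ah deep-cycle battery
usable_fraction = 0.8         # don't drain a deep-cycle battery all the way down
average_draw_amps = 4.0       # hypothetical average current draw of the system

runtime_hours = battery_capacity_ah * usable_fraction / average_draw_amps
print(f"estimated runtime: {runtime_hours:.0f} hours")  # -> 24 hours
```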

I buy my deep-cycle batteries at Costco, because they're readily available there, and relatively inexpensive (around $150 per battery).
Also, Costco will let you trade them in if they lose their capacity too quickly.
I've owned maybe fifteen of the Costco-branded or Interstate batteries over the last decade.
Most of them have lasted three to four years with no problems, and when they stop holding their charge you can trade them in for new ones.

I charge my batteries using a couple of [these Duracell battery chargers](http://amzn.to/2cZGlzX).
At $50 each, these are the most economical 12V battery chargers I've found.
Also, their maximum charge rate is 15 amps, which means one charger can take a `120Ah` battery from fully empty to totally full in about 8 hours (overnight).
Even at Burning Man, [Camp Warp Zone](http://igor.moomers.org/warpzone/) never ran any lights or art pieces off the generator.
We run everything off 12V power, and several times during the burn we run the generator just to recharge all the batteries.

### The Amp ###

The heart of your 12V sound system is a 12V amp.
These amps are made for automotive audiophiles, and so there's a huge range of options to choose from.
I've been using [this Sound Storm 2000W amp](http://amzn.to/2cLB0KO).
I like it because it's totally sealed and fairly compact, and it sounds fine.
One complaint I have is that it puts out a hiss when no audio is playing or connected.

If you want to add a subwoofer to your system, you'll need an additional amp for that.
Amps designed for subwoofers are called "monoblock" amplifiers, because they only have one channel -- unlike stereo amps, which have a left and right channel.
I use [this Audio Pipe 1500W monoblock amp](http://amzn.to/2cDiAeo), which sounds great and comes with nice features like a built-in crossover and a subsonic filter to protect your subwoofer.
One potential problem is that the Audio Pipe is actively cooled with fans which suck air from the environment.
This will probably cause problems over time in dusty outdoor environments.

It's hard to find suitable monoblock amps for hi-fi applications because of impedance mismatch.
Most automotive subwoofers are rated at either 4 Ohm or 2 Ohm.
Alternatively, because of the tight space requirements inside cars, several smaller 8 Ohm or 4 Ohm subs might be installed in parallel, halving the impedance.
As a result, automotive monoblock amps typically put out their rated power at these low impedances.
On the other hand, most performance subs for clubs are 8 Ohm.
So, if you want to match the power supplied by the amp to the power requirements of the sub, you have to buy wildly overpowered monoblock amps.
For example, to power a 1500W subwoofer, you might need an automotive amp which claims to be rated at 4000W, because that rating will typically be at 2 Ohms.
To get the power rating at 8 Ohms, you have to halve that number twice, leaving 1000W.
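
Here's that derating rule as a small Python sketch. It leans on the common rule of thumb that a solid-state amp's output power roughly halves each time the load impedance doubles -- an approximation, not an exact spec, but close enough for shopping.

```python
# Approximate an amp's output power into a higher-impedance load.
# RULE OF THUMB: output power roughly halves each time load impedance doubles.
def power_at_impedance(rated_watts: float, rated_ohms: float, load_ohms: float) -> float:
    return rated_watts * rated_ohms / load_ohms

# A "4000W at 2 Ohm" automotive monoblock driving an 8 Ohm club sub:
print(power_at_impedance(4000, 2, 8))  # -> 1000.0, matching the example above
```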

Keep in mind that power ratings for both speakers and amps are typically inflated.
There's no way your amp can supply 1000W *continuously*, but neither can your sub sink that much power continuously.
If it looks like your amp is slightly underpowered, that's okay -- just don't try to compensate by turning up the gain, or you'll [risk blowing your speaker cones](http://www.bcae1.com/2ltlpwr.htm).

### Speakers ###

This is a contentious issue -- everyone has their preferred speakers.
I've found that my system sounds okay with just four of [these Behringer B212XLs](http://amzn.to/2cXPMxe).
I like them because they're very lightweight, made of rugged plastic, and fairly inexpensive.
I don't worry about them getting beat up in the back of a van, or sitting directly on the dirt at the party.
They sound much better at higher volumes, which is perfect for parties.

I don't own my own subwoofer yet, but I had great success driving a friend's [Behringer B1800X](http://amzn.to/2cE7qcx).
I wouldn't buy this sub for myself, though, because it's a little too large and unwieldy.
I've been eyeing the [Peavy 118D](http://amzn.to/2cZJdg6) because it weighs a few pounds less, and it can run in both active and passive mode.
I could run it in passive mode off batteries during parties, and use the built-in amp if I wanted to blast bass in my one-bedroom apartment.

You'll probably want to make up your own mind on which speakers to buy.
Either way, to run a 12V system, the important thing is to buy either ***passive*** speakers or ones which, like the Peavy I linked, can be run in both active and passive mode.

### Extra Goodies ###

I keep a few more components which I think make the system run smoother.
One is a capacitor -- I use [this 2-farad model](http://amzn.to/2cLDd9j).
I got one of these after noticing that at our theme camp at BM2014, where I was running all of the lighting as well as the sound system off a single battery, the lights would dim whenever the bass kicked in.
Having a large capacitor helps smooth out such problems, and is probably more gentle on the battery.
I doubt if the capacitor has any impact on sound quality, since the amps themselves should already have large-enough internal capacitors to smooth out their own power demands.
I like that the capacitor comes with a built-in volt meter, so I can keep an eye on the charge of the battery and avoid draining it too far.

Another useful component is [this electronic crossover](http://amzn.to/2cyrk3E).
It gives better control over the distribution of signal than the built-in crossover in the monoblock amp, and it also allows me to connect an additional amp if I want to run even more speakers.
I usually configure the sound so that two of the Behringers sit *behind* the DJ but still facing forward, so the DJ gets good monitors but they're still contributing to the party.
But the crossover makes it easy to connect dedicated monitors if desired.

Finally, it's helpful to have some power plugs.
I use [this cigarette lighter block](http://amzn.to/2dc9iVN) to provide USB power, which is nice to charge phones and also to run [little USB lights](http://amzn.to/2dc7WdK) (super-helpful when you need to plug and unplug stuff, or to see the DJ equipment).
I also like to keep a small inverter on-hand, [like this one](http://amzn.to/2cE4ahA).
That's helpful to power any laptops, mixers, or DJ controllers with dedicated power.
However, here, too, I recommend buying devices which natively support 12V.
For instance, I am planning to get the [Xone:23](http://amzn.to/2dc9sNb) as my next mixer because it runs on 12V, so I can just plug it right into the battery without needing an inverter.


## How to Wire Everything Together ##

There are two separate wiring paths -- one for power, and one for signal.
In either case, there's no need to buy into the cable hype.
For instance, if you're shopping for automotive audio, you'll often be told that you need to use at least 4GA wire.
This might be true inside actual cars, where the cable runs might have to be 20 or even 30 feet long.
However, in your sound system, if you keep your power runs to around 6 feet, you can get away with 12GA or 10GA wire, which is much cheaper and much easier to work with than the thick stuff.
I bought a couple of [spools of primary wire](http://amzn.to/2d6d1Iu) in the correct colors, and those have worked fine for me.
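
To see why run length matters more than blanket "4GA or bust" advice, here's a rough voltage-drop calculation in Python. The per-foot resistances are standard approximate values for copper wire; the 30 amp draw is just an illustrative assumption.

```python
# Rough voltage drop across a 12V power run.
# Resistances are approximate ohms per foot for copper wire (standard AWG values).
# ASSUMPTION: a 30 amp draw, purely for illustration.
OHMS_PER_FOOT = {"4GA": 0.000249, "10GA": 0.000999, "12GA": 0.001588}

def voltage_drop(gauge: str, run_feet: float, amps: float) -> float:
    # factor of 2: current makes a round trip through the + and - conductors
    return 2 * run_feet * OHMS_PER_FOOT[gauge] * amps

for gauge in ("12GA", "10GA", "4GA"):
    print(f" 6 ft run, {gauge}: {voltage_drop(gauge, 6, 30):.2f}V drop")
    print(f"25 ft run, {gauge}: {voltage_drop(gauge, 25, 30):.2f}V drop")
# A 6 ft run loses only about half a volt even with 12GA wire; the 20-30 ft
# runs inside a car are where the thick stuff actually earns its keep.
```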

Automotive amps always have a `remote` terminal, usually next to the 12V `+` and `-` terminals.
This is meant to attach to the head unit in the car, so you can control power to the amps without digging around in the trunk.
I wire the `remote` terminal to the `+` terminal of the amp via a [toggle switch like this one](http://amzn.to/2dc79cN).
This allows me to wire up everything, double-check it, and only *then* to try to power up the amps.

For signal, I use tons of [these composite cables](http://amzn.to/2cNIbiH) in lengths of 3' or 6'.
To connect from your amps to your speakers, you'll usually need some random connector.
For instance, my speakers use normal XLR connectors, while that Behringer sub used some annoying Neutrik connector.
In all cases, the side of the cable which connects to the amp is just bare wire, so you'll just need to cut off any connector and wire it straight into your amp.

## Yes, But How Does It Sound? ##

My system has been used in several art installations where sound was nice to have but which weren't full-on dance parties.
In these cases, I just use the four 12" speakers and the amp and keep it simple.
My setup has also been the primary system for several full-blown outdoor dance parties of approx. 50 people.
I have no problem filling a forest clearing with beautiful, clear sound.
I never hit the gain limits on the equipment -- I've always had more volume headroom than I've used.

I can keep the setup running off a single 120-amp-hour battery for two nights, although I like to swap the battery out after one night to avoid draining it too far and damaging it.
Of course, with this setup, there's no generator noise -- all you hear is the music!

![All set up](/images/soundsystem.jpg)
![Rocking out](/images/djing.jpg)

## Updates ##

Dr. Niels has built a version of this system.
He has [a spreadsheet outlining the components he used](https://docs.google.com/spreadsheets/d/1q0VkUu1GSiJAtObuSfRULU4YRTkQmtfJzZ-F6q1a--8/edit?usp=sharing).
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Congenital Heart Disease in the USSR]]></title>
            <link>https://igor.moomers.org/posts/heart-disease-in-the-ussr</link>
            <guid>https://igor.moomers.org/posts/heart-disease-in-the-ussr</guid>
            <pubDate>Mon, 12 Sep 2016 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
I recently had heart surgery to fix a [leaky mitral valve](http://www.heart.org/HEARTORG/Conditions/More/HeartValveProblemsandDisease/Problem-Mitral-Valve-Regurgitation_UCM_450612_Article.jsp).
When investigating my condition and the surgery online, I found individual people's accounts of their progression to be most helpful to understand what I'll be going through.
These accounts are unfortunately rare -- they mostly seem to crop up in individual posts in random forums in the dusty corners of the web.
I plan on documenting my full journey once I've made some additional headway in my recovery.

However, this was not my first experience with heart problems.
As an infant, I was diagnosed with [coarctation of aorta](https://en.wikipedia.org/wiki/Coarctation_of_the_aorta) and was the subject of the very first [balloon angioplasty](https://en.wikipedia.org/wiki/Angioplasty) performed on an infant in the USSR.
While I had heard bits and pieces of the story over the years, my parents' coming to stay with me during my recent recovery was a great opportunity to hear the whole thing.
I took notes!

This is my parents' account of that adventure.
I obviously find it autobiographically fascinating, but I think it might be interesting to other readers as well.
I especially found insights into the functioning of the Soviet medical system enlightening.
Read on, if you care!

### Spring 1984 ###

In 1984, when I am about 6 months old, my mom takes me to the local regional hospital in Nikolayev, Ukraine.
She is concerned because she had performed the (I think?) [Barlow maneuver](https://en.wikipedia.org/wiki/Barlow_maneuver), but could not get my hips to lay flat as they're supposed to.
In the hospital, the doctors get an X-Ray, but see no physiological pathology.
Instead, they suspect a neurological problem, and so refer her to a neurologist.

My family was well-known in the medical community of Nikolayev.
Both of my mom's parents were pharmacists in a country with chronic shortages of critical medication.
If you were sick, it definitely helped to be on my grandparents' good side, and so they were owed favors by many people in town.
This fact helped my mom and me to quickly be seen, first by the on-duty physician who did the X-Ray and then by the neurologist.

Additionally, a decade before this story begins, my cousin Boris was born to my mom's brother and his wife.
Boris and his mother had [incompatible blood types](http://www.cerebralpalsy.org/about-cerebral-palsy/risk-factors/blood-incompatibility).
As a result, Boris was born with jaundice, but the cause was not diagnosed and his condition was allowed to progress until he suffered brain damage.
Boris remains severely handicapped today.

So, when my mom and grandma show up at the neurologist's office, they are quickly recognized.
It takes only a minute for the neurologist to make her diagnosis.
"You understand that I can't write you a prescription for these, but I'll give you a list because I know you can get them," she says to my grandmother, handing her a long list of medications.
Her diagnosis: hydrocephaly.

At home, after looking up hydrocephaly in the family encyclopedia, my mom has a mild nervous breakdown and stops producing breast milk.
My dad's mom Anika, also a doctor (of optometry), is dispatched from nearby Kherson to come provide moral support.
It is mid-spring -- a cold, rainy time -- and on top of my hydrocephaly I have picked up a cold.
Anika tells my mom to chill out about the hydrocephaly, but to be more concerned about my cold progressing to pneumonia.

The on-call physician at the local clinic is summoned for a house call -- typical for the USSR at the time.
"Mamochka, why did you call me", he asks.
"Well, the weather is cold, the child is sniffly, we were worried about pneumonia," says Mom.
"Nah, he doesn't have pneumonia," announces the physician.
"He has a congenital heart defect."

At this point, I've been diagnosed by roving bands of doctors to have pneumonia, hydrocephaly, and a heart defect.
Everyone is freaking out.
To instil calm, my mom calls the father of her best friend, Issac Issacovich, who is one of the most well-regarded cardiologists in Nikolaev.
His opinion is that the on-call physician is a dick.
"Yes, he has a murmur, but that's [common in infants](http://kidshealth.org/en/parents/murmurs.html)," he says.
"You need to wait until he's at least a year old. If the murmur persists, then we can investigate further".

### Fall 1985

My heart murmur persists, and so my family is referred to doctors in Kiev who may be able to diagnose the cause.
Kiev is no Nikolayev (a regional back-water), but my family is still well-connected there.
Antonina Grigorivna, a good friend of my grandfather, is the head of the 4th Municipal Pharmacy, which served the Communist Party apparatus of Ukraine -- the party bosses and their families.
As a result we are able to get an appointment very quickly, and the doctors in Kiev perform an ultrasound -- the first I'd had at that point!
(Aside: In the US at that point, prenatal screening with ultrasound [was routine](http://www.ob-ultrasound.net/history1.html)).

The ultrasound allowed the cardiologist in Kiev to definitively diagnose me with coarctation of the aorta.
However, surgeries to fix the condition were not performed anywhere in Ukraine.
Using our connections to Antonina Grigorivna, my parents are able to secure a referral from the Ministry of Health to the main Soviet hospital in Moscow.
It is late fall, and nobody wants to travel to Moscow for the full winter experience, so my family makes plans to go there in the Spring.

### Spring 1986

In the spring of 1986, my mom, along with my dad's mom Anika, fly to Moscow and take up residence at my aunt's house.
Her hard-won referral from the Ministry of Health in hand, my mom shows up at the hospital, and attempts to make an appointment to see a doctor.
She is rebuffed at the reception.
"Why are you here?" asked the woman behind the counter.
"You should have mailed us the referral and waited for us to summon you."

The next day, my mom purchases a fancy chocolate bar in a Moscow department store.
Over the objections of my grandmother and my aunt ("We have never done such a thing, and never would!"), she puts a 50-ruble bill inside the wrapper.
The median monthly salary in the USSR at this point is around 75 rubles.
Unsure of herself (because she has always just gotten by on family connections), she shows up back at the hospital, confronts the same receptionist, and slips the chocolate and money into the front pocket of her white coat.
Now, the answer is "I'll see what I can do."

The cardiologists in Moscow perform a doppler echo, and confirm that I definitely need surgery to fix the coarctation.
They schedule a date to admit me to the hospital.
However, in the chill of the Moscow spring, I get sick again.
My family and the doctors decide that I should come back to Moscow in June, when I am feeling better and the weather is improved.
We fly back to Nikolayev from Moscow on the infamous date of [April 26th, 1986](https://en.wikipedia.org/wiki/Chernobyl_disaster).

### June 1986

Back home, my family begins scheming for how to get me the best care during the surgery.
Invoking the extended family network, they involve Aunt Lora.
Lora is both my grandmother's cousin, and also my grandmother's brother's wife's sister-in-law.
It gets better: Lora's childhood friend lives in Moscow, and is neighbors with the mother of [Gennady Khazanov](https://en.wikipedia.org/wiki/Gennady_Khazanov), a famed Soviet Comedian.
Khazanov's mother is friends with the family of Professor Falkovsky.
I was able to find [a reference to Falkovsky](http://articles.dailypress.com/1990-11-13/news/9011140436_1_soviet-health-care-soviet-doctors-soviet-central-asia) as the "director of the cardiosurgery department at the Soviet Academy of Medical Science."

So, when my mom and I show up in Moscow in June, we have at least some tenuous connection to the people at the top.
I am immediately admitted to the hospital.
There, using some newly-purchased markers from my mom, a girl in my ward and I cover each other in little red dots, creating a chicken pox scare.
The nurses demand payback of the half-litre of medical-grade ethanol they used to wash the dots off us (remember, people drink this stuff straight!)

That night, the Soviet surgeons perform their first balloon angioplasty on an infant.
Rather than using the femoral artery, as is common for adults, they gain access via the brachial artery in my left arm.
After the procedure, they accidentally suture closed the artery, cutting off blood flow to my left arm, and send me to recovery.

The next morning, Professor Falkovsky and Dr. Leo Bakeria (apparently, [Russia's chief cardiologist](http://rbth.com/articles/2012/08/30/a_walk_in_the_park_with_russias_world-renowned_cardiologist_17813.html)) make the rounds of the hospital.
I had been complaining of arm pain all night -- my earliest memory may be of standing in my hospital bed with my arm in pain, trying to get the attention of a doctor through the glass door.
However, the on-call doctors don't realize that anything is wrong.
Thankfully, the two chief cardiologists recognize the problem with my arm, and proceed to raise hell.
By the time my mom arrives in the hospital in the morning, I am already in surgery again.
Dr. Bakeria re-opens my incision, cuts out the dead portion of the brachial artery, and connects the two remaining sections together to save my arm.

### The Recovery

I am transferred to the ICU.
Mom shows up there with Aunt Lora, and they bribe the head of the ICU (Dr. Yuri Buziashvili) 200 rubles more to ensure quality care.
My dad was already planning to come to Moscow, but my mom calls his boss at the ship-building plant, who puts him on the next plane.
Because family is not allowed to visit patients in Soviet hospitals, Mom temporarily enrolls as a hospital worker so she can spend time with me while I recover.

Although gangrene starts in my arm, it slowly recovers.
I lost a lot of range of motion in it -- for instance, when crawling I kept the left hand balled up in a fist.
To help in the recovery, Prof. Falkovsky connects my mom to a woman named Valentina Nikolayevna.
She is a neurologist, but she also practices alternative medicine -- probably not legal, but she has cover from her husband, a KGB officer.
I spend many hours at her apartment in Moscow, getting acupuncture.
My reward for good behavior during the treatment is to play with her son's remote-controlled walking robot, a toy light years ahead of anything I had access to.

### The End

In the end, this is a happy story.
Thanks to my family connections and a bunch of money, my coarctation is fixed, and I am able to grow up normally.
Even my left arm is not much of an impediment.

The hero of this story is my mom.
It couldn't have been easy to deal with all the doctors, or to travel around all over the place with a child, bribing every other person.
Her dedication saved my life -- thanks, Mom!
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Recovering OpenID]]></title>
            <link>https://igor.moomers.org/posts/recovering-openid</link>
            <guid>https://igor.moomers.org/posts/recovering-openid</guid>
            <pubDate>Sat, 23 Apr 2016 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
I recently became locked out of [my StackOverflow account](https://stackoverflow.com/users/153995/igor-serebryany).
This was because, back in the day when I first created the account, I set it up to authenticate via OpenID.
However, I never ran my own OpenID provider, which seemed like a huge hassle.
Instead, I started out delegating to an OpenID provider called `MyOpenid`.
This worked for a while, but eventually this provider went out of business.
I then delegated to Google, which actually acted as an OpenID provider for a little while.
However, it seems that in the past few years Google also stopped providing any sort of OpenID services.

I had a long-lived session with StackOverflow, but at some point it expired.
So, unable to log in, I found myself using StackOverflow a lot less.
Several times, I had urges to add an answer to a question, or to post my own question/answer pair on some obscure issue, but I would just skip it because I was logged out.

I finally decided to recover my access to my account.
I did it using [local-openid](https://bogomips.org/local-openid/).
Here's how!

1. Install local-openid -- I just did `gem install local-openid`.

2. Start the local-openid server.
  I just ran `local-openid`, which booted up a WEBrick server on port 4567.

3. Forward traffic to the local-openid server from apache.
  I did this in my `<VirtualHost>` section:

   ```apacheconf
   ProxyPass / http://localhost:4567/
   ProxyPassReverse / http://localhost:4567/
   RewriteEngine on
   RewriteCond %{HTTP:Authorization} !^$
   RewriteCond %{QUERY_STRING} openid.mode=authorize
   RewriteCond %{QUERY_STRING} !auth=
   RewriteCond %{REQUEST_METHOD} =GET
   RewriteRule (.*) %{REQUEST_URI}?%{QUERY_STRING}&auth=%{HTTP:Authorization} [L]
   ```
  I made sure that this was the only ProxyPass directive that was uncommented, and then ran `apache2ctl restart`.

4. Attempt to log in via my OpenID at StackOverflow.
  This caused some output like so to be printed out from the running server:

   ```
   localhost - - [23/Apr/2016:19:59:48 CDT] "GET /xrds HTTP/1.1" 200 567
   - -> /xrds
   Not allowed: 172.5.245.84
   You need to put this IP in the 'allowed_ips' array in:
    /home/igor47/.local-openid/config.yml
   ```

5. Edit my `~/.local-openid/config.yml` file.
  This file had an `allowed_ips` section into which I added the IP address that was making requests.
  I also saw a section like so:

   ```yaml
   https://stackoverflow.com/users/authenticate/:
     assoc_handle:
     updated: 2016-04-24 01:01:39.873137626 Z
     expires: 1970-01-01 00:00:00.000000000 Z
     session_id: 1461459588.9306.0.1628395514528984
     expires1m: 2016-04-24 01:02:39.873137626 Z
   ```
  I removed the `expires` key and renamed the `expires1m` key to `expires`.

6. Reload the auth page in the browser. Voila! I was logged in!

Hopefully this helps you if you also lost access to some old OpenID account and would like to regain access.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[SmartStack vs. Consul]]></title>
            <link>https://igor.moomers.org/posts/smartstack-vs-consul</link>
            <guid>https://igor.moomers.org/posts/smartstack-vs-consul</guid>
            <pubDate>Thu, 01 May 2014 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
I am one of the primary authors of Airbnb's SmartStack, which is composed of two pieces: [nerve](https://github.com/airbnb/nerve) and [synapse](https://github.com/airbnb/synapse).
When we released this software, we documented a lot of the reasoning behind it in a [very comprehensive post](http://nerds.airbnb.com/smartstack-service-discovery-cloud/) on service discovery.
I recommend reading that post carefully to understand why we made the design decisions we did.

Recently, I've been getting a lot of questions on how SmartStack compares to [Consul](http://www.hashicorp.com/blog/consul.html), which is an alternative take on service discovery from the amazing guys at [HashiCorp](http://www.hashicorp.com/).
I am excited to see more people taking on this operational challenge.
In general, better service discovery will lead to more available SOA infrastructures, which makes for a better web experience for all web users.
Also, it will lead to a better engineering experience for the people maintaining those SOAs.

Recently, HashiCorp put out [a comparison between Consul and SmartStack](http://www.consul.io/intro/vs/smartstack.html), which gets some things right but also some things wrong.
This post aims to complement HashiCorp's comparison from my perspective.
Of course, I welcome constructive criticism of the opinions expressed here.

## The Gossip Protocol ##

> Consul uses an integrated [gossip protocol](http://www.consul.io/docs/internals/gossip.html) to track all nodes and perform server discovery.
> This means that server addresses do not need to be hardcoded and updated fleet wide on changes, unlike SmartStack.

This is a fair criticism of SmartStack -- the addresses of the [Zookeeper](https://zookeeper.apache.org/) machines must be statically configured.
Of course, [Serf must also be bootstrapped](http://www.serfdom.io/intro/getting-started/join.html) with at least one existing node to join the cluster.
If all of the bootstrapped nodes you have hard-coded into your configuration management system (like [Chef](http://www.getchef.com/chef/) or [Puppet](http://puppetlabs.com/)) die, new nodes will not be able to join the cluster.

Really, there are two choices here.
The first is statically hard-coding a list of Zookeeper instances and relying on Zookeeper.
The second is static configuration of bootstrapping information for Serf and relying on Serf's [gossip protocol](http://www.serfdom.io/docs/internals/gossip.html).

The gossip protocol is a modified version of [SWIM](http://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf).
Consul uses this not just for bootstrapping but to propagate ALL information, including the availability information you're trying to discover.
I have many unanswered questions about the gossip protocol.
For instance, in the case of a network partition, it seems like a partitioned-off node will be alternately marked suspected-down and then back up by different group members.
This may result in a partitioned-off node never leaving the cluster.

In the end, replacing ZooKeeper with Serf may be a viable option for SmartStack.
I would welcome pull requests to [synapse](https://github.com/airbnb/synapse) that use Serf, or maybe Consul, as a [service watcher](https://github.com/airbnb/synapse/tree/master/lib/synapse/service_watcher) instead of [Zookeeper](https://github.com/airbnb/synapse/blob/master/lib/synapse/service_watcher/zookeeper.rb).

## Service Discovery ##

> For discovery, SmartStack clients must use HAProxy, requiring that Synapse be configured with all desired endpoints in advance.
> Consul clients instead use the DNS or HTTP APIs without any configuration needed in advance.
> Consul also provides a "tag" abstraction, allowing services to provide metadata such as versions, primary/secondary designations, or opaque labels that can be used for filtering.
> Clients can then request only the service providers which have matching tags.

The first sentence here doesn't really even make sense.
Sure, Synapse must be configured to discover the services you are going to want to talk to, but you could just as easily configure it to discover ALL of your services.
On the other hand, explicitly specifying which services you are going to want to talk to from which box is extremely useful, because it allows you to build [a dependency graph of your infrastructure](/images/airbnb-infrastructure-oct13.png).
I view this as a benefit, not a drawback.

Another benefit is using [HAProxy](http://haproxy.1wt.eu/#desc) to actually route between services.
Whenever a service inside Airbnb talks to a dependency via SmartStack, that service knows nothing about the underlying implementation.
The ability to avoid writing a client (even a simple, HTTP client) for service discovery into each application was a fundamental design goal for us.
If you want a third-party application you didn't write to run on your network and consume Consul information, you must use DNS.
However, DNS is even worse -- when, how, and for how long will DNS resolutions be cached by your underlying libraries or applications?

Instead of insisting on a simple HTTP API, Consul provides you with the ability to do complex tag-based discovery.
It is almost certainly a mistake to utilize these features.
Your infrastructure should aim to be as simple and flat as possible.
A service instance is a service instance, and if it's different then it is a different service!
If you find yourself 6 months in, only talking to instances of service Y which provide property X from some unknown number of clients which have requirement X hardcoded into an HTTP request buried in their codebase, you are going to wish that you hadn't done that.

Finally, HAProxy is an extremely stable, popular, well-tested, well-utilized, fundamental component of the internet which provides amazing introspection.
That we use HAProxy means that synapse and zookeeper can just go away, and your service will keep on working (although it won't get updates about new or down instances).
Using connectivity checks in HAProxy means that we can survive network partitions -- instances which remain registered but are unreachable will be taken out of rotation by HAProxy.
Using HAProxy's [built-in load balancing algorithms](http://docs.neo4j.org/chunked/stable/ha-haproxy.html) meant that we didn't have to write them.
Using HAProxy's [built-in status page](http://haproxy.1wt.eu/img/haproxy-stats.png) means we can easily see what's happening on a particular box with that box's service dependencies.
Using HAProxy's logging, we can see a detailed history of communications between services.
And using monitoring tools that scrape and aggregate HAProxy's stats, we can get instant insight into what kinds of load services are seeing, from which kinds of other services.

Many of these advantages can again be gained by configuring synapse to use Consul as a discovery source.
But I strongly feel that synapse/HAProxy combo is better in many ways than Consul, and urge you to consider the benefits I've outlined above.

## Health Checking ##

> Consul generally provides a much richer health checking system.
> Consul supports Nagios style plugins, enabling a vast catalog of checks to be used.
> It also allows for service and host-level checks.

The current list of health checks in nerve is minimal at best, although it's been sufficient for our needs here at Airbnb.
I like the simple model, of nerve doing a direct check on a service from the machine it's running on.
Conceptually, it's easier to wrap your head around.
Why is this box deregistered?
Because it failed its nerve health check?
Or because Nagios is down or overloaded, or because the application pinged your service to ask to be deregistered and then kept running, or for some other unseen reason?

Although I would discourage the use of complex health checks, I can see the advantages, and I would welcome PRs to nerve to add better health checking.

## Multi-DC ##

> While it may be possible to configure SmartStack for multiple datacenters, the central ZooKeeper cluster would be a serious impediment to a fault tolerant deployment.

I am not certain that I would want to run a UDP-based gossip protocol across the public internet.
Running a Zookeeper cluster across the public internet is also not an ideal situation.

I think that the correct approach is to provide mostly-local service clusters per datacenter.
A single, global Zookeeper cluster will contain only the list of services that are truly cross-DC (like the front-end load balancers), while most services only talk to services inside their local DC.
Assuming a flat cross-DC topology is setting yourself up for much higher than necessary latency.

Of course, with Consul you could probably configure your services to discover only dependencies tagged with your local datacenter.
But this reaches into the realm of configuration management, and at that point both Consul and SmartStack become equivalent -- a Chef change is a Chef change.

## Summing up ##

I love Hashicorp, and I think Serf is a great idea, implemented well.
I think that from an operations perspective, SmartStack has a bit of an edge on Consul.
I am happy to have the opportunity to engage in dialog like this, and I'm excited about how much easier it's getting all of the time to operate internet infrastructure.
If you have comments, or corrections, please do get in touch!
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Third-party domain delegation considered harmful]]></title>
            <link>https://igor.moomers.org/posts/third-party-domains</link>
            <guid>https://igor.moomers.org/posts/third-party-domains</guid>
            <pubDate>Wed, 26 Mar 2014 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
Your public domain name is the name that your users use to access your site.
For instance, if you're google, your public domain name is [`google.com`](https://www.google.com).
For Airbnb, it would be [`airbnb.com`](https://www.airbnb.com).

## Delegation ##

Usually, when a user initiates a request for something under your domain name, one of your own servers will respond.
However, this is not always the case.
For instance, you may be using a bulk email provider like [sendgrid](https://www.sendgrid.com) to send email.
In that case, you might want to point `email.yourdomain.com` to sendgrid's servers.
Another example might be your blog, under `blog.yourdomain.com`, which is actually operated by [wpengine](https://wpengine.com).

There are typically two ways to do such delegation.
By far the most common is [DNS](https://en.wikipedia.org/wiki/Domain_Name_System)-based.
You simply edit the DNS entry for `thirdparty.yourdomain.com` to resolve to an [ip address](https://en.wikipedia.org/wiki/IP_address) for a server operated by the third party.

In the age of proxying web servers like [nginx](http://nginx.org/en/), another way has emerged to do delegation.
The domain name in question can point to your server, but it can then simply be proxied to the third party.
If you're reading this blog post at the domain [igor.moomers.org](http://igor.moomers.org), you are actually an end-user of such delegation.
The content you're reading actually comes from [igor47.github.io](http://igor47.github.io/).

## Delegation considered harmful ##

There are many reasons why the kind of delegation described above is a bad idea.
Most of them are security problems, but delegation can also cause usability issues.

Let's talk about each of the issues in turn.
I will use the example website at `yourdomain.com` with a third-party delegation to a blog provider at `blog.yourdomain.com`.

### Session Leakage ###

If you provide your users with a session cookie, anyone who has this cookie can trivially impersonate the user.
It is very common to serve traffic under `www.yourdomain.com` but set a session cookie for `*.yourdomain.com`.
When your users read your blog, at `blog.yourdomain.com`, the blog provider will get a copy of all of your session cookies.

An attacker that compromises the systems of the blog provider can now steal the identities of all of your users who have visited your blog.
The blog provider is an attractive target, because all the sites that use this provider can be simultaneously compromised.

This attack can be mitigated by setting your cookies to specific subdomains, but this may not be possible if you operate on several subdomains.
This can also be mitigated by serving your production traffic on `https` only and setting your cookies' `secure` flag.
Then, your user's browser will not send the cookies to your blog provider.
Obviously, in this case you should NEVER give your blog provider an SSL certificate covering that subdomain.
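
As a minimal sketch of what host-scoped, secure cookie attributes look like, here's an example using Python's standard `http.cookies` module; the cookie name and domain are hypothetical examples, not anyone's real configuration.

```python
# Sketch: scope a session cookie so browsers will NOT send it to delegated subdomains.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-session-token"
cookie["session"]["domain"] = "www.yourdomain.com"  # limited to www, not *.yourdomain.com
cookie["session"]["path"] = "/"
cookie["session"]["secure"] = True    # only sent over https
cookie["session"]["httponly"] = True  # not readable by injected javascript

# prints a Set-Cookie header with Domain, Path, Secure, and HttpOnly attributes
print(cookie["session"].output())
```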

### Script Injection ###

Rather than stealing your users' sessions, attackers can force your users to perform actions on your site via the third-party site.
For instance, suppose that `blog.yourdomain.com` does not properly sanitize javascript in your blog's comments.
This is the equivalent of your own site allowing [javascript injection](https://en.wikipedia.org/wiki/Cross-site_scripting).
An attacker putting malicious javascript into the blog comments can force your users to perform actions on your main site.

### Session Clobbering/Cookie Fixation ###

If your blog provider happens to set the same cookies as you do, the collision can log out your users or ruin your tracking and analytics.
For instance, if both you and your blog provider use google analytics, you will both be attempting to set the various `_utm*` cookies.

The name collision can cause usability problems for your users.
However, this is also a potential security vulnerability given vulnerable browsers.
An attacker who can set cookies on your domain can set a sensitive cookie to a value he controls and then force you to make that value valid.
[Here is a description](http://homakov.blogspot.com/2013/03/hacking-github-with-webkit.html) of such an attack on Github.

### Personal Data Leakage (PII) ###

You might be leaking other data than your session cookies via additional cookies, such as tracking cookies.
For instance, your [mixpanel](https://mixpanel.com) cookie might contain a bunch of attributes you want to track about your user.
Even if you're diligent about setting proper settings on your session cookies, your mixpanel cookie is likely clear-text.
This means that your blog provider now has a large [PII](https://en.wikipedia.org/wiki/Personally_identifiable_information) dataset on your users.
This could get you into trouble if the blog operator experiences a breach of their log data.

### Social Engineering & Reputation ###

Users assume that content under `yourdomain.com` comes from you.
By gaining access to your delegated domains, an attacker can convince users to simply hand over their passwords or other information.
Additionally, you can gain a poor reputation if your delegated domains are defaced or simply contain negative content.
The internet will assume that you endorse that content.

## Workarounds ##

You could try to be very diligent about your security settings, and "do it right" with domain delegation.
However, there are many ways for you to fail here.
The best approach is to avoid delegating domains in the first place.
You should control all of the content that is served at `yourdomain.com` and its subdomains, and sanitize any user-generated content there.

### Separate top-level domain ###

Typically, for sites operated by third parties, you would use a separate top-level domain.
For instance, when [github](https://github.com) first launched [github pages](http://pages.github.com/), they were hosted under the main `github.com` domain.
However, github quickly [moved this content to its own top-level domain](https://github.com/blog/1452-new-github-pages-domain-github-io) at `github.io`.
Similarly, google began [hosting user-generated content at \*.googleusercontent.com](http://googleonlinesecurity.blogspot.com/2012/08/content-hosting-for-modern-web.html).

### Avoiding Migrations ###

It may be tempting to just use a single domain for all of your content initially.
However, my experience is that it is much more difficult to migrate than to make the right choice from the beginning.
You will not want to move `blog.yourdomain.com` to `blog.yourdomain.io` later on.
So, if you're just starting out, don't go down the wrong path -- resist the temptation to delegate!
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Recovering an Android Phone With a Broken Screen]]></title>
            <link>https://igor.moomers.org/posts/recovering-an-android-phone</link>
            <guid>https://igor.moomers.org/posts/recovering-an-android-phone</guid>
            <pubDate>Sat, 04 Jan 2014 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
Over the break, I dropped my phone on a tile floor and totally shattered the screen.
I can still see what's happening, but the digitizer is not working so I cannot unlock it or do anything.
I ordered a new phone, but in the meantime I would like to access my text messages (a lot of them NYE well-wishes from friends I don't see too often) and get to my Google Auth credentials.

There are a few people who have managed to get into their phones using adb shell.
[This post on reddit](http://www.reddit.com/r/Android/comments/1r2zha/) is the best writeup I've found.
However, they omit the steps they followed to enable adb debugging on a phone where this was disabled.

## Enabling ADB via Recovery Mode ##

This is what I did:

  1. Hold the power button until the phone reboots
  2. Hold down the "down volume" button until you get into the bootloader
  3. Use the volume and power buttons to boot into recovery mode
  4. Follow the instructions in step 1 of [the reddit post](http://www.reddit.com/r/Android/comments/1r2zha/) but also run `update global set value = 1 where name = "adb_enabled";` to enable debug mode

The debug mode will not persist after the phone reboots.
To fix this, I followed instructions [in this StackExchange question](http://stackoverflow.com/questions/13326806/enable-usb-debugging-through-clockworkmod-with-adb):

```bash
adb shell
mount /system
cd /system
cp build.prop build.prop.backup
echo persist.service.adb.enable=1 >> build.prop
```

If you edit `build.prop` via `adb pull` and `adb push`, remember to `chmod 644 build.prop` or your phone won't boot.

## Accessing the phone ##

Once I did this, my phone booted with USB debugging turned on.
Now, I wanted access to my phone.
There are many options for this -- screencasting, VNC server, etc... -- but the easiest solution is a bluetooth mouse.
Once you've got one paired with your phone, it's as though you've got your touchscreen back.

You will need to access your settings menu and pair the mouse.
From `adb shell`, you can use the `input tap` command to send screen tap events.
`input tap X Y` is a tap at the corresponding X and Y coordinate, with `0 0` being the upper left-hand corner.

I had an icon for the settings menu on my home screen, so I got to it by running `input tap 250 800`.
Once there, from `adb shell` run:

```bash
input tap 600 400  # for bluetooth
input tap 100 1100 # search for devices
input tap 100 1000 # to pick the first device that you've found
```

At the end of this process, you will have a pointer on-screen which works just like your finger.

## Moving to a new phone ##

I used Titanium Backup (I paid for the Pro version).
I used the mouse to do a backup of all of my apps and their data.
I also backed up my SMS/MMS and call log.
To do that, click `Menu` -> `Backup data to XML` and then pick a local location to save the XML files.

I copied the `TitaniumBackup` dir using `adb pull` and pushed it to the new phone with `adb push`.
I had to do the restore of the XML data separately, using `Menu` -> `Restore data from XML`.
I had to enable the advanced view in the file picker to find the location of my XML files on KitKat 4.4.2.

Afterwards, I had Google Play update all of those apps to newer versions.
This happened without problems.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Tech Interview Questions]]></title>
            <link>https://igor.moomers.org/posts/interview-questions</link>
            <guid>https://igor.moomers.org/posts/interview-questions</guid>
            <pubDate>Sat, 09 Nov 2013 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
As an engineer at [Airbnb](https://www.airbnb.com), I do a LOT of interviewing.
I talk to at least two to three people a week, but sometimes it's as many as 5 or 6.
All of my interview questions involve asking people a technical question that we can work through to generate real, working code.

Usually, I'll whiteboard the question and we'll spend a moment talking about possible approaches.
But the goal is to get the candidate to start writing code quickly so we can get to a solution.

We've all been faced with the terrible, knowledge-based, "I could look that up in 1 minute but I don't have a computer" question.
Worse are the gotcha questions that you wouldn't be able to solve unless you happen upon a moment of brilliant insight.
The questions I ask aim to avoid that.

Good questions are fun and engaging for candidates.
Good questions also always have a path forward.
If the candidate is stuck, I should be able to give a hint that allows them to get unstuck but that doesn't give everything away.

I like to arrive at some running code that solves at least a subset of the problem at the end of every interview.
If my question just wasn't going well with a candidate, getting something running keeps them from spiraling down a mental failure vortex, and allows them to relax and focus on the next interview.

## Preparing to Ask a Question ##

A lot of work happens before you ever see a particular interview question.
First, I myself have probably solved it in one or two possible ways.
The first time I solve it, I try to give myself the same constraints a candidate would have -- limited time, no previous knowledge, no specific preparation.

Next, if I'm the one who came up with the question, I will ask it to a few of my coworkers to get a basic calibration.
If someone else came up with the question, then I have probably sat in on (shadowed) a few interviews where the question was asked.

By the time I ask you, I am familiar enough to quickly know the various dead ends and blind alleys that you can fall into.
I know of a few ways to steer you towards something that would work.
Finally, I know how people of various experience and skill levels usually perform.
I know enough to be amazed at your quick and clean approach.
Alternatively, I've seen how good whiteboarding goes bad and results in spaghetti code that's impossible to debug.

## Don't Leak the Questions! ##

When you publicly post a question that you were asked after interviewing, you are undoing weeks of work for your fellow engineers.
In its place, you are creating weeks of new work as we develop, test, and calibrate new questions.

During that calibration, we might ask your fellow engineers sub-optimal questions.
They will have a bad time in their interviews, and will post rants about the sorry state of interviewing in engineering.
Also, because we're not calibrated, they might get undeserved feedback which causes them to miss out on their dream job.

Leaking interview questions is antithetical to what we're trying to do as engineers.
Our goal should be to spare our peers work -- that's what writing clean code and creating nice architecture is all about.
Once software engineering becomes [a real profession](http://michaelochurch.wordpress.com/2012/11/18/programmers-dont-need-a-union-we-need-a-profession/), there will be no need to ask our peers silly little puzzles to evaluate them.
But until then, let's be professional and JUST SAY NO.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Soviet KGB Stories Pt. 1]]></title>
            <link>https://igor.moomers.org/posts/soviet-kgb-stories-pt1</link>
            <guid>https://igor.moomers.org/posts/soviet-kgb-stories-pt1</guid>
            <pubDate>Sat, 14 Sep 2013 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
In 1986, my uncle Michael got summoned to the offices of the KGB in Nikolaev, Ukraine.
They wanted to know about his friend, Tolic Dobrusin.
Did Tolic have any relatives outside the country?
Michael pleaded ignorance, and the KGB let him go.

When Michael got home, he called his friend Tolic.
He told him that the KGB had been asking questions about him, and that they needed to talk.
Tolic came over to my uncle's (actually, most of my family's) apartment for an evening chat.
Over vodka and pickles, he admitted that he did indeed have a relative in Germany.
It was his uncle, with whom he recently met in St. Petersburg (then Leningrad).
However, Tolic did not give his uncle any military secrets, because his uncle was working as a cleaner in a hotel and didn't care about any of that stuff.

The next workday, when my uncle showed up to work to the shipbuilding plant, the KGB was waiting for him at the gate.
He was taken away to their HQ, where he was questioned about his meeting with Tolic.
My uncle tried to evade, by claiming that he was just trying to confirm the KGB's hunch about Tolic's relatives.
However, the KGB was not happy with having their investigation revealed, especially by a party member (my uncle had joined the communist party at the behest of his father, my grandfather).
"We didn't want you revealing our investigation.
We're going to put you away."

My mom's good friend, Ira, had a father who was a colonel in the KGB.
Around that time, Ira came to talk to my mom.
She was asking about whether our family was in serious trouble, but my mom was ignorant about the conversations her brother was having with the KGB.
Mom claimed she didn't know what Ira was talking about, which made Ira think that Mom was playing some kind of crazy political game with her.

In the meantime, my uncle's friend in the police department came to speak with him.
He told him that the KGB was sure to arrest him in the near future.
"Get out of here while you still can" was the message.

Michael was walking down the street when he saw a sign on a lamppost about workers needed in Deputatsk.
Deputatsk was a small settlement about two hours' flight north of [Yakutsk](https://www.google.com/maps/preview?hl=en&authuser=0#!q=yakutsk%2C+russia&data=!4m10!1m9!4m8!1m3!1d1951509!2d129.3333433!3d62.7799773!3m2!1i1438!2i802!4f13.1), which was home to one of the only tin mines in the USSR.
At incredible expense, tin was mined there.
The raw ore was loaded onto planes and shipped to Yakutsk for processing.
3 tons of ore yielded several kilograms of tin.
Deputatsk was a totally fabricated Soviet affair, with a single restaurant, a movie theater, block housing, and triple hazard pay for workers.

Before long, over a near-heart attack from my grandmother, my uncle found himself in Deputatsk.
He showed up to talk to the head of the electric plant there, and shortly realized that everyone in the settlement was doomed.
"Look," the head told him, "here's the deal.
The whole town survives on the electricity from the plant.
It produces electricity via diesel turbines manufactured at a plant in Nikolaev.
However, we have no access to maintenance materials or spare parts.
We are running on the edge of capacity, and no government plan has room for us to receive the necessary materials."

My uncle claimed to be able to use his personal connections in his home town to get the necessary parts.
However, it was going to take money.
He named the first price that came into his head: 10,000 rubles.
The plant head claimed that this would be no problem, and also approved any additional travel expenses.

My uncle moved into the dormitory while he waited for his wife to come join him in Deputatsk.
He quickly made friends with his dorm mates, and explained that he was about to go on a trip to the "mainland" to procure spare parts for the settlement's electric plant.
The housemates were desperate for any products from the mainland (top request: apples, vodka).
They quickly gathered another 2000 rubles each (which they all had because they were receiving triple hazard pay for living in that frozen hellhole).
They assured Michael that if he was unable to procure anything that would be fine, but they would like him to please try.

My uncle showed up back in his home town with close to 30k rubles in his pocket.
For that kind of money, in Nikolaev in 1986, he could buy a house, a car, a dacha, and furniture for all of them.
His wife convinced him that he had to do the right thing for the people counting on him.

Via his personal connections at the turbine plant, he was able to procure the necessary supplies (which did not appear on any Soviet production plan).
He packed them into suitcases, and set off for Moscow, an intermediate destination on the way to Siberia.
In Moscow, he still had several thousand rubles to spend.
He bought the produce his dorm mates were asking for.
He also managed to buy special checks which granted access to special stores selling rare foreign goods destined for sailors getting off long-haul voyages and for party officials.
In these stores (where he saw goods he'd never seen before), he purchased, among other things, several cases of French perfume.

My uncle got back to Deputatsk, and began handing out his bounty.
Word quickly spread around town about his perfume purchase, and girls from all over the settlement begged for a chance to buy some, at a price some 7 or 8 times what he had paid for it in Moscow.
Of course, this was speculation -- strictly forbidden in the USSR.

The next day, my uncle found two officers from the local police department visiting him at work.
They asked him about the goods he had brought back from Moscow, and he mentioned the perfume (which was insanely popular; almost all of the women in the settlement had bought some from him).
He spun the cops a yarn about how he was planning to profit from his trade.
However, his young beautiful wife told him that if he sold the perfume for even a ruble over what he bought it for, she would kick him out into the street.

Via some magical chance, the cops bought my uncle's story.
They told him to thank his wife for her good advice, and left.
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Github pages: proxies and redirects]]></title>
            <link>https://igor.moomers.org/posts/github-pages-proxying-and-redirects</link>
            <guid>https://igor.moomers.org/posts/github-pages-proxying-and-redirects</guid>
            <pubDate>Sun, 14 Jul 2013 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[
When I wanted to start a github pages blog, github was serving its page requests on `<username>.github.com`.
These days, they serve it on `<username>.github.io`, probably for security reasons (session cookies?).

Anyway, I wanted my page to be at `igor.moomers.org`, my own domain name.
I also wanted the same thing under `igor.monksofcool.org`, which is another domain name that is also my OpenID.
So, I configured apache to just proxy my domains to the github page.
Here is my config:

```apacheconf
   ProxyPass / http://igor47.github.com/
   ProxyPassReverse / http://igor47.github.com/
```

This change made my openid inaccessible.
I [had my OpenID settings in my page's `<head>`](https://github.com/igor47/igor47.github.com/commit/1b7b28605aa5b74e7f150e1a1a5e67f93b8d6138).
However, when I would sign in to my openid, [stackoverflow](http://stackoverflow.com/) would ask me if I wanted to create a new account for `igor47.github.io`.
In fact, I realized that even though I was proxying, I would get redirected to `github.io` in my browser.

It looks like the github pages server only serves a custom domain on one hostname.
They say as much [here](https://help.github.com/articles/my-custom-domain-isn-t-working#multiple-domains-in-cname-file).
So, I figured I could fix my problem by adding a [custom CNAME file](https://help.github.com/articles/setting-up-a-custom-domain-with-pages).
I tried that [here](https://github.com/igor47/igor47.github.com/commit/2d3ce308de32dd734d35633f32442db6759cec68).

This resulted in strange behavior.
Even going to `igor.monksofcool.org` in my browser produced an infinite 301 redirect loop back to `igor.monksofcool.org`!

This finally made me realize what was happening.
I was proxying all requests to `github.com` when I should have been using `github.io`.
Requests to `igor.monksofcool.org` were getting proxied to `igor47.github.com`, which is NOT the domain name in my CNAME file.
Github would redirect away, and hitting that domain would again issue the same improper sub-request, looping forever.

To fix the problem, I removed my CNAME file and fixed my apache config to proxy to `igor47.github.io`.
Suddenly, everything was magically working again!
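
For reference, the working config is roughly this (just the relevant lines -- a sketch, since the real vhost has more in it):

```apache
# proxy both of my domains to the github.io hostname, not github.com
ProxyPass / http://igor47.github.io/
ProxyPassReverse / http://igor47.github.io/
```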
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Now: February 2025]]></title>
            <link>https://igor.moomers.org/posts/now-feb-2025</link>
            <guid>https://igor.moomers.org/posts/now-feb-2025</guid>
            <pubDate>Thu, 20 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Now page for Feb 2025
]]></description>
            <content:encoded><![CDATA[
Welcome to my 2025 life update!
My previous now page [is here](/posts/now-march-2024).
Here is a [permalink to this page](/posts/now-feb-2025).

## Work

I'm still working at [Rock Rabbit](https://rockrabbit.ai).
Work has been pretty interesting and fun.
The core problem at the company is how to model the crazy world of incentives.

Incentives have really complicated eligibility criteria.
You have to get only specific equipment, with specific efficiency ratings.
You might only be eligible if you're replacing an existing appliance of a specific type.
There are different programs for emergency repairs vs. planned replacements vs. new construction.

It's also pretty complicated to show you how much money you might be eligible for.
The total might depend on everything from the number of condensers in your HVAC unit to whether you've programmed the appliance to make use of time-of-use (TOU) electricity rates.
Claiming the money is another exciting adventure.
Programs have all sorts of documentation requirements, and they range across the board in terms of systems for submitting and validating your claims.
Sometimes, you send your application to Susan, and she lets you know if she approved it.
Other times, there's a complicated portal, which requires a CAPTCHA for every interaction.

Finally -- the most interesting wrinkle is that we're not encoding all of these rules in software.
Instead, we have a team of analysts who are building and maintaining the incentive model.
The software has to provide mechanisms for analysts to express the complexity, and then the UI and functionality are driven by the metadata they construct.
This is the most interesting component of the system, and is keeping me pretty heads-down trying to figure out how to build meta-UIs and storage systems.

## Philanthropy & Thoughts on Climate

For the past few years, I've been trying to figure out how to effectively deploy philanthropic (501(c)(3)) funds in the climate space.
I've gone through interesting values-alignment exercises with [Founders Pledge](https://www.founderspledge.com/).
I joined [SV2](https://sv2.org/) as a partner.
I did a lot of personal research, and found worthwhile organizations like [Rising Sun](https://risingsunopp.org/), [Rewiring America](https://www.rewiringamerica.org/), or [Climate Cabinet](https://climatecabinet.org/).

At SF Climate Week 2024, I heard a great talk by [Dan Stein](https://www.linkedin.com/in/daniel-stein-8210a639/) of [Giving Green](https://www.givinggreen.earth/how-it-works).
I bought their core premise -- that a lot of climate innovation is downstream of policy, and that funding policy advocacy orgs can be a big lever.
As a result, a lot of my donations in 2024 were to Giving Green-recommended organizations: [The Good Food Institute](https://gfi.org/), [Project Innerspace](https://projectinnerspace.org/), and [Clean Air Task Force](https://www.catf.us/work/).

## Regime Change & Information Ecosystem

We had an election.
I had a pretty strong sense that Trump was going to win, and felt fairly disengaged from my options in the opposition party.
I remain a strong proponent of the Biden administration -- it accomplished a ton of amazing policy work, and it's pretty sad how little credit those people get for their enduring accomplishments, especially the IRA.
However, neither Biden as a candidate nor his appointed heir Harris inspired much confidence in me, and I stayed disengaged throughout the election.
Now, I, along with everyone else who was similarly disengaged, get to watch in horror as the incoming administration tears apart not only those accomplishments I was so happy about, but the entire federal government, plus the rule of law to boot.

My focus on climate change over the past decade was two-fold.
I love the natural world, and feel especially drawn to help protect it.
But also -- I've been viewing the climate crisis as both foundational and especially time-sensitive.
If people are missing health care, we can still pass health care policy next year, and those people will get health care.
If we wait a year on climate change, some things might change in irreversible ways.
Maybe the coral reef never recovers from the next bleaching event, or another species goes extinct.
Climate change itself becomes harder and more expensive to solve -- even more warming locked in, more stranded assets deployed, more infrastructure lock-in.
Plus, climate change is a tax on the world, making other problems harder to solve.
As we spend more resources on cleaning up from the next hurricane or mega-fire, we have fewer resources for all other problems, including the problem of climate change itself.

Watching the new regime take the reins, not only administratively but in the culture, has caused my thinking to jump another abstraction level.
While a lot of problems are downstream of climate change, climate change itself is downstream of the mess in our information ecosystem.
Epistemology is out the window, now.
We cannot hope to solve *any* problems without knowledge, consensus, and attention.
These resources are now under the control of a few demagogues, who will use them to centralize their own power.

I'm re-thinking my philanthropic commitments, and would love to re-direct my available funds towards projects that aim to build a strong information commons.
I've already been supporting some projects in this space -- notably, local journalism through [CalMatters](https://calmatters.org/), [CitySide Journalism](https://www.citysidejournalism.org/) (which publishes Berkeleyside), and the [Mercury News](https://www.mercurynews.com/).
However, now that the Trump administration is [actively](https://www.foxnews.com/media/george-stephanopoulos-abc-apologize-trump-forced-pay-15-million-settle-defamation-suit) [attacking](https://apnews.com/article/trump-meta-settlement-zuckerberg-capitol-riot-9939e52679364080c983e0cab739b805) [information sources](https://www.foxnews.com/media/trumps-lawsuit-against-cbs-expands-after-release-60-minutes-transcript-adds-paramount-defendant), I think there might be room for more structured defense of journalism, knowledge, and truth.

I'm open to recommendations, both in terms of orgs to fund, and in terms of places where technologists can contribute to the problem domain.

## Projects

A big project over the last few months has been moving back to Berkeley.
This has involved getting settled in a new house, and especially setting up a new garage.
I'm still working on getting my workshop settled; I got a WallBoard!

![A photo of my in-progress wallboard](/images/wallboard.jpg "Still in-progress")

I'm still working a bunch on my self-hosted setup.
Some big new additions have been [Rallly](https://github.com/lukevella/rallly) for a self-hosted Doodle alternative, and [Vikunja](https://vikunja.io/) as a self-hosted task tracker.
I tried to deploy a self-hosted temporary file sharing tool, but [ran into issues](https://github.com/eikek/sharry/issues/1597).

For real-world project ideas, one thing I noticed is that there are some really dark blocks in Berkeley.
I'm plotting guerrilla light installations, possibly powered by solar panels and batteries.
I now have some experience in unattended electronics after installing the Peter Bench in the deep playa at Burning Man 2024.

![The Peter Bench](/images/peter-bench-at-home.png "As usual, I didn't get on-playa photos")

I'm also plotting a sound-reactive light project for Priceless 2025.

### Shop Talk

Now that I'm living back in Berkeley, I'm really enjoying the chance to connect more with people.
In particular, I'm interested in connecting over ideas and creative projects.
I think it's helpful to create spaces like this explicitly -- I certainly often feel awkward getting deep into work or politics or philosophy with people, especially when other folks in the conversation might not be down for it, or if the context is not such that we can really get into it.

I'm hoping to spin up a recurring event called "Shop Talk".
I want to focus on one or two people presenting what they're working on, followed by Q&A and discussion.
Let me know if you're interested!

## Reading

Last year, my big epiphany was that the [Bobiverse](https://en.wikipedia.org/wiki/Dennis_E._Taylor#Bobiverse_%282016%E2%80%932024%29) books are really good, despite having an uninspiring title.
This year, I similarly discovered that the [Murderbot Diaries](https://en.wikipedia.org/wiki/The_Murderbot_Diaries) are really good -- I binged these books very quickly.
I really enjoyed finally reading [This Is How You Lose the Time War](https://en.wikipedia.org/wiki/This_Is_How_You_Lose_the_Time_War).
I can also recommend [the new Robin Sloan](https://www.robinsloan.com/moonbound/) and [the new Adrian Tchaikovsky](https://en.wikipedia.org/wiki/Alien_Clay).
Fantasy books haven't been appealing to me for the last few years, so I was surprised to enjoy [the Licanius Trilogy](https://bookshop.org/p/books/the-shadow-of-what-was-lost-james-islington/111298), which I devoured on vacation.
Finally, I re-listened to [the Delta-V series](https://www.penguinrandomhouse.com/series/NVD/a-delta-v-novel/#) by Daniel Suarez.
This was in the aftermath of the election, and provided amazing copium -- maybe we'll solve our terrestrial problems through space industry?

In non-fiction books, I enjoyed [Dopamine Nation](https://www.annalembke.com/dopamine-nation), which helped me better understand my own addictions (CANDY!) and have a language for the tools I use to control them (chronological, physical, and categorical self-binding).
Having previously read [Seeing Like A State](https://en.wikipedia.org/wiki/Seeing_Like_a_State), I picked up [Against the Grain](https://en.wikipedia.org/wiki/Against_the_Grain:_A_Deep_History_of_the_Earliest_States), which is another grand-narrative deep-history book in the style of Yuval Noah Harari or Bradford DeLong.
Speaking of Harari, I'm still working through [Nexus](https://www.ynharari.com/book/nexus/), which is full of amazing ideas and provides a way to conceptualize our current slide into autocracy in information-network terms.
I think it's one of those books that I could focus on better when reading, vs. when listening.

Finally, I spent a few months this past year working through [The Power Broker](https://en.wikipedia.org/wiki/The_Power_Broker).
I have a lot of thoughts on this book, which I want to write up as a separate blog post.

## Future

I'm feeling done with climate tech as a field, and feel ready to move on to something else, even while recognizing that I'm currently in the midst of solving interesting and useful problems in the space.
I'm still feeling a little over sitting at a computer all day, too, though I'm also feeling trapped by my comparative advantage in the space.
I'm really not sure what happens next!
Open to possibilities and suggestions!
]]></content:encoded>
            <enclosure url="https://igor.moomers.org/images/city-burns.png" length="0" type="image/png"/>
        </item>
    </channel>
</rss>