{
    "version": "https://jsonfeed.org/version/1",
    "title": "Igor47's Blog",
    "home_page_url": "https://igor.moomers.org/",
    "feed_url": "https://igor.moomers.org/feed.json",
    "description": "The feed for Igor's personal writing",
    "icon": "https://igor.moomers.org/images/myhead.jpg",
    "author": {
        "name": "Igor Serebryany",
        "url": "https://igor.moomers.org/"
    },
    "items": [
        {
            "id": "https://igor.moomers.org/posts/basic-tunnel",
            "content_html": "\nI occasionally need to allow some external service to interact with code that's running on my laptop.\nA few recent examples included building a UI in [Retool](https://retool.com/) against a local API and allowing webhooks from [Resend](https://resend.com) to hit my dev server.\nI've used [ngrok](https://ngrok.com/) and [localtunnel](https://localtunnel.github.io/www/) for this, but I've always wanted my own, self-hosted version.\n\nPopular options for self-hosting a tunnel service include [Pangolin](https://docs.pangolin.net/) and [frp](https://github.com/fatedier/frp).\nI recently saw [rustunnel](https://github.com/joaoh82/rustunnel) on [HN](https://news.ycombinator.com/item?id=47425918), and the discussion pointed me to [this giant list of other alternatives](https://github.com/anderspitman/awesome-tunneling).\n\nAll of these solutions tend to be pretty heavy-weight, because creating dynamic tunnels is actually pretty difficult.\nYou can use wildcard DNS to point `*.tunnel.example.com` to your service, but getting a wildcard cert from LE requires a DNS-based challenge.\nThis means your DNS must be hosted at some dynamic provider like Cloudflare, and you must give your tunnel service some sort of DNS provider API access and credentials.\nPangolin supports provisioning named (non-wildcard) SSL certs at tunnel creation time, but this involves some lag for the tunnel to spin up.\n\nIf you're a small-time dev with only an occasional need, you might be happy to just have a static tunnel name, like `mytunnel.example.com`.\nA static name also has a security benefit -- with dynamic tunnels, if you release a name that still has webhooks pointed at it, the next person to claim it could receive your traffic.\nHere's an approach I use, which works well for me because I already run a `docker compose` stack with an instance of `traefik` in it.\n\n## Simple Tunnel\n\nHere's a step-by-step guide for setting up your simple private tunnel.\nI'm going 
to give some TL;DR steps, followed by an explanation of what's happening if you're curious.\nThis setup is meant to run on a machine which has a public IP address and is already configured to send HTTP/s traffic from that IP to a docker compose stack fronted by `traefik`.\n\n### DNS\n\nFirst, pick a DNS name for your tunnel.\nUpdate your DNS provider to point the tunnel name at your `traefik` instance.\nWe're going to stick with `tunnel.example.com` for this tutorial.\n\n### Docker compose network\n\nWe need to give your compose stack's default bridge network an explicit, not an auto-assigned, subnet and gateway address.\nHere's an example config for the [top-level `networks` key](https://docs.docker.com/reference/compose-file/networks/), which you can customize:\n\n```yaml\nnetworks:\n  default:\n    driver: bridge\n    ipam:\n      config:\n        - subnet: 172.20.1.0/24\n          ip_range: 172.20.1.0/24\n          gateway: 172.20.1.1\n    driver_opts:\n      com.docker.network.bridge.name: \"tunnelnet\"\n```\n\nFor the bridge name, you can use whatever descriptive name fits your stack, or you can just omit it.\nFor the subnet, you can pick any available network in a [private network](https://en.wikipedia.org/wiki/Private_network) address range; I think a `/24` should be plenty for most compose stacks.\nYou'll probably need to take your compose stack down and then back up for this change to take effect.\n\n<details class=\"card mb-3\">\n<summary class=\"card-header h6 mb-0\">More details about docker networking</summary>\n<div class=\"card-body\">\n\nTo understand this, you need to understand docker's networking model.\nMost compose stacks run in bridge mode, and we're sticking with that here.\nIn bridge mode, when your stack boots up, docker creates a virtual [network bridge](https://en.wikipedia.org/wiki/Network_bridge) and allocates it a private network.\nContainers in the stack get allocated an address within this network, and traffic between containers is routed via 
this virtual bridge.\nIf you need outside traffic to get to a container in the stack -- for example, HTTP/s traffic intended for the stack's `traefik` instance -- then you can configure forwarding using the [`ports` key](https://docs.docker.com/reference/compose-file/services/#ports).\n\nWe're going to be telling traefik to forward our tunnel traffic to a port on the virtual bridge.\nThis means we need the bridge to have a specific -- not a randomly assigned -- gateway address.\n\nNote that docker also supports host-based networking, where instead of a virtual bridge your containers just listen on the host's network interfaces.\nIn this case, you don't need this bridge config.\nBut I don't recommend host-based networking, since it allows your compose services to access other, potentially private services running on the machine or in other compose stacks.\n\n</div>\n</details>\n\n### Configure Traefik\n\nWe need to tell Traefik to send traffic intended for the tunnel name to a specific port on the stack's bridge.\nThis requires setting up an entrypoint, a router, and a service; here's an example:\n\n```yaml\nentrypoints:\n  https:\n    address: \":443\"\n\nhttp:\n  routers:\n    tunnel:\n      rule: \"Host(`tunnel.example.com`)\"\n      service: \"tunnel\"\n      entryPoints: [\"https\"]\n      tls:\n        certResolver: le\n\n  services:\n    tunnel:\n      loadBalancer:\n        servers:\n          - url: \"http://172.20.1.1:8642\"\n```\n\nYou might need to restart your `traefik` instance for these changes to take effect.\n\n<details class=\"card mb-3\">\n<summary class=\"card-header h6 mb-0\">More details about traefik config</summary>\n<div class=\"card-body\">\n\nThe entrypoint tells traefik to listen on port 443.\nYou probably already have this configured if you're using Traefik for other services.\nNext, we define a tunnel router, which causes Traefik to provision an SSL cert for that name (this assumes you already have a `le` 
[`certificatesResolvers`](https://doc.traefik.io/traefik/https/acme/) configured in your Traefik static config).\nFinally, we tell Traefik to send traffic coming into this service to port `8642` on the stack's bridge interface.\nYou can pick a different port if you don't like `8642` -- just be consistent in the sections below.\n\n</div>\n</details>\n\n### Configure SSHD\n\nYour SSH server config is probably located in `/etc/ssh`.\nEdit `sshd_config` to include the line:\n\n```ini\nGatewayPorts clientspecified\n```\n\nMake sure there are no other `GatewayPorts` lines ahead of this.\nYou might need to restart SSHD for this to take effect:\n\n```console\n# /usr/sbin/sshd -t && systemctl reload ssh\n```\n\n<details class=\"card mb-3\">\n<summary class=\"card-header h6 mb-0\">More details about GatewayPorts</summary>\n<div class=\"card-body\">\n\nWe want our SSH remote tunnels to attach directly to the compose stack's bridge.\nHowever, by default SSHD only allows reverse tunnels to bind to the host's loopback interface.\nThis change to the config allows us to specify a different address -- in our case, the address of the stack's bridge -- for the remote tunnel to bind to.\n\n</div>\n</details>\n\n### Configure Firewall\n\nIf your server is like mine, it is running some sort of firewall to restrict which services can be accessed.\nIn our case, we need to allow `traefik` to access our tunnel port (`8642`) on the bridge interface.\nIf you're using `ufw`, run:\n\n```console\n# ufw allow in from 172.20.1.0/24 to 172.20.1.1 port 8642 proto tcp comment 'Tunneled traffic for tunnel.example.com'\n```\n\nIf you're using `iptables` or `nftables` directly, or some other firewall system, you'll need to translate that command.\n\n### Activate your tunnel\n\nOn your computer running the dev service you're trying to expose, run:\n\n```console\n$ ssh -N -R 172.20.1.1:8642:localhost:3000 tunnel.example.com\n```\n\nThis command exposes the local service on port `3000` to the internet at 
`tunnel.example.com`.\nCongratulations, your tunnel is now complete!\nWhen you're done with it, simply `Ctrl-C` the `ssh` command.\n",
            "url": "https://igor.moomers.org/posts/basic-tunnel",
            "title": "Local tunnel using SSH and traefik",
            "summary": "Expose your local dev services to the public internet via this basic SSH tunnel and traefik.\n",
            "image": "https://igor.moomers.org/images/basic-sweet-ass-tunnel.jpg",
            "date_modified": "2026-03-18T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/medical-chocolate",
            "content_html": "\nIt's basically impossible to get high-CBD chocolate.\nI've only really been able to find two brands of chocolate in a high-CBD formulation: [Kiva](https://www.kivaconfections.com/flavor/cbd-51-dark-chocolate), and [Revive Pure Life](https://www.revivepurelife.com/products).\nNeither is available from any dispensary in Los Angeles or the Bay Area, at least according to [Weedmaps](https://weedmaps.com/).\n\nThis is a problem for my family.\nMy mother has been in memory care for a few months now, but she continues to be very agitated there.\nShe's been prescribed two different kinds of atypical antipsychotics to help manage her agitation, but most of these medications are all off-label for dementia patients, and in fact come with [black-box warnings](https://publichealth.jhu.edu/2025/what-is-a-black-box-warning) recommending against their use due to increased risk of death.\nTheir efficacy is [dubious at best](https://pmc.ncbi.nlm.nih.gov/articles/PMC3516138/).\nThe FDA has only [recently approved brexpiprazole](https://www.fda.gov/news-events/press-announcements/fda-approves-first-drug-treat-agitation-symptoms-associated-dementia-due-alzheimers-disease) as the first atypical antipsychotic for dementia-related agitation, and that approval is [not without controversy](https://www.madinamerica.com/2023/05/fda-approval-antipsychotic-rexulti/).\n\nOut of a sense of helplessness and desperation, we decided to try cannabis.\nSurprisingly, this seemed to help!\nHowever, my mom is a pretty picky eater, and she wanted nothing to do with the typical gummies or weed candies.\nShe's obviously not going to smoke, and administering a tincture is challenging, given how much subterfuge is already required to get her to take all her other medication.\nPretty soon, her coffee is going to be more medication than actual coffee!\n\nShe *does* like chocolate, though, and I was initially able to source the Revive Pure Life at a dispensary not too distant from 
Mom's memory care facility.\nAlas, that appeared to have been the only available batch in Los Angeles.\nWhen we ran out a few weeks later, we were staring down the barrel of a major relapse in her behavior issues, and were totally helpless to source more chocolate.\nI attempted to contact the manufacturer directly, but never heard back from them.\nI also tried to talk several dispensaries into making a custom order for me -- but no luck there, either!\nIn desperation, I decided to just make my own chocolate.\n\n## Gathering Supplies\n\nIt turns out making chocolate is not too difficult.\nI ordered a few [chocolate molds](https://www.amazon.com/dp/B0DFH26GZS) and a [digital thermometer](https://www.amazon.com/dp/B0F5X4FM3Q) on Amazon.\nI also picked up [a large bar of chocolate](https://www.traderjoes.com/home/products/pdp/pound-plus-72-cacao-dark-chocolate-048875).\n\nNext, I needed cannabis.\nI decided to get tinctures of cannabis in MCT oil, since oil could dissolve in the chocolate without affecting the texture.\nI also wanted to match her previous 7:1 dosage from the Revive Pure Life, to avoid having to get a new script from her psychiatrist.\nThere are no 7:1 tinctures conveniently available, but I was able to get Papa & Barkley's [30:1](https://www.papaandbarkley.com/products/30-1-releaf-tincture) and [1:1](https://www.papaandbarkley.com/products/1-1-releaf-tincture) tinctures from [a nearby dispensary](https://maps.app.goo.gl/w8NPzro4BE1bAPCv5).\n\n## Making Chocolate\n\nI began with a test bar, both to figure out the quantity of chocolate per bar and to get the technique right.\nI heated the chocolate in a pot, set into another pot full of water, until it was 120°F.\nThen, I allowed it to cool until it was around 90°F before pouring into the mold.\nWaiting for it to cool down took a substantial amount of time.\nI weighed the pot before and after the pour, and determined that my molds comfortably fit around 65 grams of chocolate per bar.\n\nNow, it was 
time for some arithmetic.\nI wanted to match the Revive Pure Life bars: 7mg CBD and 1mg THC per serving, or 70mg CBD and 10mg THC per 10-square bar.\nI needed to figure out how much of my 1:1 and 30:1 tinctures to use to get the correct ratio in the chocolate.\nThankfully the tinctures were *very* well-labeled.\n\n![THC and CBD tinctures](/images/chocolate-tinctures.jpg \"CBD and THC tinctures for the chocolate\")\n\nTo figure out how much of each tincture to use, it is necessary to solve a system of two linear equations, one for the total THC and one for the total CBD.\nRather than walk through the math, here's a calculator:\n\n(Interactive calculator available at https://igor.moomers.org/posts/medical-chocolate)\n<!-- CHOCOLATE_CALCULATOR -->\n\nThis works out to 2.44ml of oil added to 65 grams of chocolate.\nHowever -- this is a problem.\nI'm using 72% chocolate, which contains about 30% fat (cocoa butter) -- about 20 grams per 65g bar.\nAdding 2.44ml of oil would increase the fat content by over 10%, likely resulting in a soft, greasy-feeling bar.\nTo make sure I ended up with a nicely tempered bar, I halved the oil, making the dose 2 squares instead of 1 square.\n\nTo make the medicated chocolate, I heated enough chocolate for 4 bars to around 120°F, and let it cool to 90°F.\nThen, I added my cannabis tincture and stirred gently but thoroughly.\nFinally, I poured the chocolate into the mold and allowed it to set at room temperature for quite a while.\n\n![The chocolate is poured into molds](/images/chocolate-poured.jpg \"Chocolate is poured into molds\")\n\nAfter letting it sit for a while, it solidified nicely and got a very pretty swirly texture.\nI wonder if this was the result of the tincture I added?\n\n![The chocolate has set](/images/chocolate-solidified.jpg \"Chocolate after it's solidified\")\n\n## Upshot\n\nI wrapped the chocolate I made in aluminum foil and proudly delivered it to my mom's memory care community.\nThe next day, I learned 
that the staff cannot legally administer this chocolate to my mom.\nApparently, the medicine cart cannot include an obviously home-made product.\nThey're only allowed to administer cannabis products that are \"commercially produced\".\n\nSo in the end, this project was, technically, a complete waste of time.\nStill, I had fun learning about chocolate!\nPlus, now I have a bunch of CBD chocolate for home consumption.\n\nWhen you're on a caregiving journey for a loved one with dementia, you quickly learn to take the small wins and look at the bright side.\nStay tuned for my next post -- producing professional-looking cannabis product wrappers.\n",
            "url": "https://igor.moomers.org/posts/medical-chocolate",
            "title": "Making Medical Chocolate",
            "summary": "Making medical cannabis chocolate for my mom's dementia.\n",
            "image": "https://igor.moomers.org/images/chocolate-header.jpg",
            "date_modified": "2026-01-28T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/political-fundraising-spam",
            "content_html": "\nThis is a post about how to preserve your privacy and avoid spam while still supporting Democratic campaigns and movements.\nThe TL;DR is that I want you to take action now.\nMy recommended actions:\n\n* Sign [petition 1](https://actionnetwork.org/petitions/pledge-to-stop-donating-to-candidates-who-wont-protect-your-email-address) and [petition 2](https://actionnetwork.org/petitions/tell-the-email-sending-platforms-dont-let-your-clients-spam-me).\n* See `Ready to do more?` at [ethicalemail.org](https://ethicalemail.org/) and send opt-out emails to democratic data brokers\n* (if you're in California) Sign up for [DROP](https://consumer.drop.privacy.ca.gov/)\n* Donate money to [Movement Voter Project](https://movement.vote/donate/)\n* Donate money to candidates via [Oath.vote](https://app.oath.vote)\n\nRead on for more details!\n\n## Background\n\nIn my phone's SMS app, the suggested first word for a response is `stop`.\nThis is because I get an ungodly amount of political spam.\nNo good deed goes unpunished, and my reward for caring about politics is to be forever blasted by messages with subject lines like `URGENT`, `Live Poll OPEN - For Democrats ONLY`, and `WE ARE BEGGING YOU`.\nThough I diligently `stop to end`, I continue getting on lists for newly-formed PACs or campaigns of the Democratic candidates for Comptroller of the City of Muncie, Indiana.\n\nNow, I don't have to tell you that things are pretty dire out there.\nAn unaccountable paramilitary force rampages around American cities, our \"leaders\" are engaging in massive corruption, and the global order is being dismantled for no reason.\nMeanwhile, our civilization has real problems; to name a few: climate change, the risks of AI, centralized wealth, rising authoritarianism, nuclear proliferation, novel or treatment-resistant pathogens.\nI would love to help elect leaders who are, you know, interested in some of these.\n\nOn the other hand, the spam...\n\n## Whence the 
spam\n\nI had assumed that the spam was the fault of [ActBlue](https://secure.actblue.com/directory).\nHowever, it turns out [this is not true](https://matthodges.com/posts/2024-08-25-actblue-isnt-selling-your-data/), a fact I discovered by reading [The Movement Voter Project's FAQ](https://movement.vote/faq/will-my-donor-information-stay-private/).\nInstead, when you donate to a candidate/campaign via ActBlue, they share your information with that specific candidate/campaign.\nThe campaign then later sells your data.\nSometimes, they sell it to other campaigns.\nOther times, it goes to [shady consulting firms extracting money from anxious seniors by using dramatic or false claims](https://data4democracy.substack.com/p/the-mothership-vortex-an-investigation).\n(Though, actually, that investigation into Mothership Strategies by Adam Bonica [did lead ActBlue](https://data4democracy.substack.com/p/the-mothership-vortex-a-quick-update) to implement some [policy changes](https://www.actblue.com/posts/actblue-takes-action-how-were-protecting-donors-from-deceptive-practices/)).\n\n## Stemming the Tide\n\nFrom Movement Voter, I learned about [Ethical Email](https://ethicalemail.org/), which is trying to organize resistance to the Big Blue Spam Machine.\nThe great thing about Ethical Email is they're very action-oriented.\nThey have two petitions I signed -- one pledging [not to support candidates who sell their donor data](https://actionnetwork.org/petitions/pledge-to-stop-donating-to-candidates-who-wont-protect-your-email-address), and one urging NGPVAN, which runs much of the Democratic Party's campaign tooling, [to do a better job preventing non-consensual spam](https://actionnetwork.org/petitions/tell-the-email-sending-platforms-dont-let-your-clients-spam-me).\n\nBeyond that, they also provide convenient templates to request data deletion from a few big Democratic data brokers (see the `Ready to do more?` section of Ethical Email).\nCalifornians like me have special rights under our 
[consumer protection act](https://oag.ca.gov/privacy/ccpa), and I invoked that in the messages I sent to the data brokers.\n(BTW, if you're a California resident, California runs a program called [DROP](https://consumer.drop.privacy.ca.gov/) which will automatically delete your data from a bunch of data brokers; sign up right now!)\n\n## Prevention: 16x Better Than Cure\n\nIf you're like me, you might want to donate to political candidates without also signing up for a bunch of spam from other candidates you know nothing about.\n*I want this too!*\nAlas, your candidate is probably too busy raising money and filming TikToks, and has left day-to-day campaign operations to the same consultants who brought us Mothership Strategies.\nWhat to do while the Ethical Email petitions get traction?\n\nWell, you might start by donating money to privacy-conscious organizations.\nThe main one I recommend is [Movement Voter Project](https://movement.vote/).\nI actually prefer organizations like MVP over giving money directly to campaigns (aka [hard money](https://govfacts.org/elections-voting/candidates-campaigns/campaign-finance-rules-disclosure/hard-money-vs-soft-money-how-campaign-finance-rules-shape-american-elections/)).\nI want to invest in building movements and organizations which survive past a single campaign, and can go on to build political coalitions.\n\nOn the other hand, voices I trust [strongly recommend the hard-money approach](https://www.slowboring.com/i/178237616/some-general-considerations).\nThrough that blog post, I learned about [oath.vote](https://app.oath.vote/), which not only researches effective candidates but advocates for donor privacy with those candidates.\nTheir donations are organized around specific causes, like [Protecting Democracy](https://app.oath.vote/donate?p=pd) or [Flip the House](https://app.oath.vote/donate?p=fh).\nThose are the two I personally donated to.\n\n## Act Now\n\nWhether you care about spam or nah, this is no time to sit on 
the sidelines.\nIf you have the means, you should be donating to anti-MAGA campaigns.\nI've suggested some privacy-preserving donation paths, and also some approaches to reclaim your privacy from data brokers.\nMatt Yglesias has [other recommendations](https://www.slowboring.com/i/178237616/our-top-recommendations-for-now).\nEither way, don't get caught up in analysis paralysis; take action.\n",
            "url": "https://igor.moomers.org/posts/political-fundraising-spam",
            "title": "Preventing Political Fundraising Spam",
            "summary": "Donating to political campaigns, without getting bombarded by apocalyptic messages from randos.\n",
            "image": "https://igor.moomers.org/images/political-spam.jpg",
            "date_modified": "2026-01-23T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/wled-christmas-lights",
            "content_html": "\nI've got some Christmas lights on the front of my house.\n\n![Lights on my house](/images/house-lights.jpg)\n\nI actually leave these on all year long, and use different colors/patterns at different times of the year.\nThese are WS2812 (or compatible) LEDs, which are readily available and cheap.\nThe [signal protocol](https://www.arrow.com/en/research-and-events/articles/protocol-for-the-ws2812b-programmable-led) for WS2812 lights is pretty ingenious, and it always amazes me just how fast modern electronics are.\nTo shift out 24 bits of color to each of 100 LEDs, it takes ~125 *micro*seconds.\nThis means you can run animations on these LEDs at around 8,000 frames per second!\n\nI think the lights on my house are [these ones](https://www.amazon.com/dp/B08KS4LXFD) from Amazon.\nI cut off the USB controller that comes with the lights, and solder my own 3-wire pigtail onto them.\nThese lights don't actually implement the standard WS2812 protocol.\nInstead of grabbing the first 24 bits and then shifting out the rest of the data to the next LED, the LEDs in these strings know their position in the string.\nThis means you can't connect two of them in series -- or rather, you can, but the second string will just display whatever the first string displays.\nAlso, if you cut off the first LED, then the first 24 bits will just go into the void.\nStill, it's hard to beat the price and ready availability.\n\nI control these using an [ESP32](https://www.espressif.com/en/products/socs/esp32) chip running [WLED](https://kno.wled.ge/).\nThis lets me use an app on my phone and pick from a bunch of pre-defined color palettes and patterns.\nI can set specific presets to turn on at specific times of day.\n\n## Weatherproofing\n\nOne problem I've had has been weatherproofing the ESP32 controllers.\nPeople generally expect that any amount of water combined with any kind of electricity or electronics will result in a catastrophic reaction.\nIn practice, I've 
found that most electronics, especially low-voltage devices, are reasonably water-resilient.\nFor instance, the LED strings I'm using are not explicitly rated for any kind of water resistance, but seem to do okay left out in the rain for multiple seasons.\n\nOn the other hand, getting rainwater into my microcontrollers has not worked out well for me.\nMy initial outdoor controller died after the first rain, losing my hard-programmed schedules and presets.\nHere's my v2:\n\n![V2 of my lights controller](/images/house-lights-v2.jpg)\n\nAll I had on hand was this metal enclosure.\nTo avoid the project board shorting out, I covered the bottom with tape and then hot glue.\nI also hot-glued the wires to avoid water build-up.\nThis version lasted through one wet season, but then shorted out soon after I moved to Berkeley.\n\nI decided v3 would be the final version, and got actual waterproof enclosures.\nI drilled a hole for the wires, and epoxied over the hole with waterproof epoxy.\nI dispensed with the project board, and just soldered the wires directly to the microcontroller.\nI also got a much smaller microcontroller, a [Seeed Studio XIAO](https://www.seeedstudio.com/Seeed-XIAO-ESP32C3-p-5431.html), to make sure I had space in the enclosure.\nHere's my v3:\n\n![V3 of my lights controller, top view](/images/house-lights-v3-top.jpg)\n![V3 of my lights controller, side view](/images/house-lights-v3-side.jpg)\n\nI'm using [this enclosure](https://www.amazon.com/dp/B07H5C8BB6), which ended up being generously large given how small the microcontroller is.\nThe black sticker on the lid is the WiFi antenna from the XIAO.\nOf the three pigtails, one is for power, and the other two are for driving two separate strings of LEDs.\n\n## Flashing\n\nI wanted to install WLED on my new ESP32C3 microcontrollers, but the [official directions](https://kno.wled.ge/basics/install-binary/) on the WLED site are pretty out-of-date.\nThe recommended approach is to use the [WLED web 
installer](https://install.wled.me/).\nHowever, this doesn't work in Firefox.\nTrying to use Chromium on my Arch Linux laptop also didn't work, giving me the error `Serial port is not ready. Close any other application using it and try again.`\nAsking an LLM to help me debug the issue was similarly unproductive.\n\nI eventually resorted to analyzing [the web installer's source code](https://github.com/wled-install/wled-install.github.io) to figure out what the website is doing.\nThe important file seems to be [`build.py`](https://github.com/wled-install/wled-install.github.io/blob/main/scripts/build.py).\nThe official [WLED releases](https://github.com/wled/WLED/releases) include a binary build for just WLED itself.\nHowever, for ESP32C3, there are at least 3 additional files necessary:\n\n* bootloader -- initializes the microcontroller. This comes from Espressif, the maker of the ESP32\n* partitions -- a map of the flash space, read by the bootloader to understand how to run the main code\n* boot_app0 -- used by the OTA update process to understand which version of the app to run, kinda like the slots in an Android filesystem\n\nAll of these files are available in the web installer's repo.\nMy job was to parse through `build.py` and the relevant `_template.json` file for my ESP32C3 microcontroller to figure out the correct files and flash offset locations.\nThis resulted in the following incantation to get WLED running on the controller:\n\n```console\n$ esptool --port /dev/ttyACM0 write_flash 0x0 bootloader_esp32c3.bin\n$ esptool --port /dev/ttyACM0 write_flash 0x8000 partitions_v2022.bin\n$ esptool --port /dev/ttyACM0 write_flash 0xE000 boot_app0_v2022.bin\n$ esptool --port /dev/ttyACM0 write_flash 0x10000 WLED_0.15.3_ESP32-C3.bin\n```\n\nThis took me the better part of 2 hours to figure out, so I'm writing it down to help you (who might be future me).\n\n## Powering\n\nWith the controller flashed and weatherproofed, the final boss is powering the whole setup 
outdoors.\nI've got 200 LEDs at, say, 50mA per LED, equalling 10A or 50W of (peak) power.\nI haven't been able to find any weather-resistant power bricks that can source that much current on Amazon.\nSomething like 3A or 5A is more typical, and even then the weather resistance is questionable, as is voltage sag at higher currents.\n\nI ended up getting a 12V power brick; those are readily available on Amazon at 3A and even 5A, with good reviews and (claimed?) UL listing.\nA smaller brick fits pretty well in my outdoor receptacle, which keeps it out of the direct rain.\nI then connected it to a [5V voltage regulator](https://www.amazon.com/dp/B0C4L66SZ9), which is potted and actually does seem pretty waterproof.\nWith 3A at 12V, I'm limited to 36W of power, somewhat below my estimated 50W.\nThankfully, you can set the current limits in WLED, and it will automatically limit LED brightness in software to avoid exceeding your power budget.\nIn practice, my LEDs seem bright enough.\n",
            "url": "https://igor.moomers.org/posts/wled-christmas-lights",
            "title": "My WLED Christmas Lights",
            "summary": "My setup, how to weather-proof it, flashing WLED on an ESP32C3, and power tips\n",
            "image": "https://igor.moomers.org/images/xiao-esp32c3.jpg",
            "date_modified": "2026-01-01T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/zod-schemas-htmx",
            "content_html": "\nThe last few months, I've been working on [CSheet](https://www.csheet.net), my LLM-assisted DnD-5e character sheet tracker.\nYou know that old chestnut about how AI is easy and AV is hard?\nI've had that same experience, but with incoming data parsing.\n\nI was expecting that a challenging bit would be designing, for example, a good data flow with `htmx`.\nOr maybe I would run into problems integrating complicated UI components (e.g. an image cropper) into a vanilla-JS app, given the dominance of front-end frameworks like React.\nAlso, I had never integrated an LLM into a product, and expected this to also be difficult.\n\nInstead, the piece that I've ended up iterating on the most has to do with how to receive and process incoming data.\nI wanted to leave a few breadcrumbs here for anyone who comes after (or, let's face it, their LLM agents).\n\n## React: The Usual Sitch\n\nThese days most web apps are written in React, and React has a somewhat-smaller data representation surface area, at least for incoming data.\nIt's of course impossible to avoid the browser's data representation, which is full of quirks.\nFor instance, numeric input elements are represented in the browser as strings, so you have to deal with converting between `\"5\"` and `5`.\nIncomplete inputs are `\"\"`.\nCheckboxes are `\"on\"` when checked, unless there's a `value` , and they're totally missing when unchecked.\n`multiple` inputs are just multiple key-value pairs.\n\nIn React apps, you usually try to parse the browser's `FormData` representation into JSON as soon as possible.\nYour APIs then accept JSON, which is sensible and has numbers, booleans, arrays, and objects.\nOn the server-side, you're usually still validating that the JSON's *shape* conforms to your expected shape.\nBut you're not dealing with the nitty-gritty of turning a boolean string `\"true\"`, which might be missing, into the boolean value `true`.\nThat step happens right next to the data, in 
the browser.\n\nNote that for outgoing data, the situation is quite reversed.\nIn REST APIs, you might have a dozen representations of your internal objects for specific API purposes.\nGraphQL is an attempt to deal with an exploding number of statically-defined representations, and it solves the problem by creating an unbounded number of dynamic representations.\nIn server-side rendered apps, you might be passing the exact same internal representation to all your view-rendering logic, although sometimes you still get an explosion for different permissions structures.\nBut I digress...\n\n## Parsing `FormData`\n\nWhen you're dealing with browser-submitted forms, you have to handle `multipart/form-data`-encoded request bodies.\nThis is a string representation containing a bunch of `key=value` pairs, which you're going to want to turn into some kind of object.\nThis step can be somewhat arcane; it's underspecified, and usually left to framework conventions.\n\nFor instance, let's say you have `<select multiple name=\"fruit\">` in your form, and the user only selects `apple`.\nYou get `fruit=apple` in your request body.\nRuby's [rack](https://github.com/rack/rack) will turn this into an object like `params.fruit = \"apple\"`.\nWhat if the user selects both `apple` and `orange`?\nWell, in that case you'll get `params.fruit = [\"apple\", \"orange\"]`!\nYou can solve this problem by using `name=\"fruit[]\"` instead, which `rack` uses as a hint to remove the `[]` and always turn `fruit` into an array.\nThis is pure convention -- the `[]` have no meaning outside the request parsing logic in that stack.\nYour favorite stack might also do this, but you don't know unless you look it up.\n\nIn Hono, the situation is actually even more annoying.\nYou would typically use [`parseBody`](https://hono.dev/docs/api/request#parsebody), which returns `Record<string, string | File>`.\nIf your user selects `apple` and `orange`, you'll get `params.fruit = \"orange\"`, totally dropping 
the `apple`.\nYou should invoke `parseBody({ all: true })` instead, in which case you'll get a `Record<string, string | File | (string | File)[]>`.\nYou don't need to add `[]` to your input `name`s -- Hono ignores this convention, and since it doesn't strip the `[]` you'll end up with inconvenient field names in your objects.\nThen there's also `{ dot: true }`, which treats `.` as special in your field names and turns dot-separated fields into nested objects.\n\n## Zod and Service Representation\n\nIn your services, you will probably want to use objects with sensible types.\nFor instance, you might want something like:\n\n```ts\nconst dietSchema = z.object({\n    fruit: z.array(z.enum([\"apple\", \"banana\", \"orange\"])).min(1),\n    perDay: z.number().int().positive()\n})\n```\n\nThis will produce a type like:\n\n```ts\ntype DietSchemaT = {\n  fruit: (\"apple\" | \"banana\" | \"orange\")[],\n  perDay: number,\n}\n```\n\nTo get from your `parseBody` representation to this typed representation, you have to both parse and validate.\nFor example, `perDay` might be any of:\n\n* (happy path) a string containing a number, like `\"5\"`\n* an empty string `\"\"`\n* some random string `\"bob\"`\n* something else totally unexpected\n\nI took several stabs at this problem in CSheet.\nInitially, I split up my validation and submission logic.\nI had my validation run on the unparsed schema, which is a `Record<string, ...>` type.\nThis meant doing a bunch of parsing directly in validation.\nFor example, I might do something like:\n\n```ts\nif (body.perDay) {\n  const perDay = parseInt(body.perDay, 10)\n  if (isNaN(perDay)) {\n    errors.perDay = \"Invalid number\"\n  }\n}\n```\n\nThen in my submission logic, I would attempt to parse using the `zod` schema, turning any parsing errors into form errors.\nAnd my business logic operated on the parsed service schema.\nThis is obviously a lot of duplication.\n\nIn version 2, I unified validation and submission.\nI added an `is_check` 
parameter to all my requests, representing either a request to validate or perform the submission.\nBoth began with attempting to parse the request body using the `zod` schema.\nThis meant I could lean on `zod` to perform the parsing:\n\n```ts\nconst dietSchema = z.object({\n  perDay: z.preprocess((val) => {\n    if (val === \"\" || val === null || val === undefined) {\n      return null\n    }\n    if (typeof val === \"string\") {\n      const num = Number(val)\n      return isNaN(num) ? val : num\n    }\n    return val\n  }, z.number().int().positive())\n})\n```\n\nThere are a lot of common patterns here, so I eventually factored these out into what I call [`formSchemas`](https://github.com/igor47/csheet/blob/main/src/lib/formSchemas.ts).\n\nWhen validating, an empty form field is not an error -- it just means the user hasn't gotten to the form field yet.\nTo deal with this, my version-2 schema definitions/service logic had two features:\n\n* I actually parsed twice -- for validation, using the [`partial`](https://zod.dev/api#partial) version of the schema\n* Since `zod` doesn't handle `z.preprocess(method, schema).optional()` well, I had to embed optionality into my `schema`\n\nThis eventually led me to V3: just define the schema the way it *should* be.\nArguably, I should have done this from the beginning.\nI still wanted to avoid displaying errors on missing fields after validation.\nBut I realized I could do this when rendering the form.\nIf the field has an error, I now check if (a) we were validating (vs submitting) the form and (b) the field is blank.\nIf both of those are true, I [ignore the error](https://github.com/igor47/csheet/blob/06df22002c79c2fad32388c98f3762e6d5871e07/src/lib/formErrors.ts#L67-L85).\n\n## Re-rendering the Form\n\nAfter validation, we re-render the form for `htmx`; the re-rendered form may contain new fields or errors.\nWe want to pass the user's answers back to the component, so that the form can be rendered with the old answers populated.\nWe have at 
least 3 choices for which representation to pass to the form component:\n\n1. generate a `FormData` from the string encoding\n2. the version we get from `parseBody`\n3. the parsed version we get inside the service\n\nI went back and forth several times on which representation to use.\nIt's tempting to use (3), the typed service representation.\nIt's strongly typed, and it seems somehow more correct to pass around a `{ perDay: Number }` type than a `Record<string, unknown>` type.\n\nHowever, using the typed representation in form rendering meant converting values back to their `FormData` types.\nI would have to do `value={String(values.perDay)}`.\nAlso, if parsing fails, I don't actually have a typed representation to use!\n\nI eventually settled on using (2) as the least of all evils.\nIt's an object, so for forms with nested fields or arrays I can at least do sensible iteration to render the form.\nIt also has the benefit of containing exactly what the user input, avoiding the unexpected UX of your values changing for you.\n\n## LLM Representation\n\nSince I have an LLM assistant with tool calling embedded into the system, my incoming request data might also be generated by the LLM instead of a form.\nI use [Vercel AI](https://ai-sdk.dev/) to interface with LLMs, and it accepts `zod` schemas to create descriptions for LLM tool-calling.\nMaking the schemas strict/non-optional helped tighten up what the LLM generates.\nNow, a field is marked optional only if it's actually optional in the tool call -- not because it might be optional during validation.\n\nFor `zod`, there's a distinction between input and output schemas.\nMy output schema is the strongly typed representation the service operates on.\nWhen I give this schema to the LLM, it generates strongly-typed (JSON) tool calls.\nHowever, my *input* schema is implicitly written to consider the output of browser forms.\nAs a result, since I perform schema parsing *inside* the service, I unfortunately have to 
convert the LLM's strongly-typed tool call into a \"stringly\"-typed input for the service.\nThe strings are then immediately parsed back into the strongly-typed representation.\nThere might be a good way to handle this, probably by parsing the form data outside the main service function, but I'm not sure if the added complexity is worth it.\n\n## Takeaways\n\nAfter three iterations, I landed on a pattern that works well:\n\n* Define schemas as they **should be** for your services (strict, typed, non-optional)\n* Use `z.preprocess` to handle the string→type conversions from `FormData`\n* Handle validation-vs-submission differences at render time, not in the schema\n\nThe same schemas work for both browser forms and LLM tool calls, which has been a fortunate convergence.\nI do find myself missing `pydantic`, which feels more ergonomic for parsing data thanks to its default coercion.\nHowever, `zod`'s `preprocess` works, and is definitely more explicit.\n\nI've factored the common patterns into [`formSchemas.ts`](https://github.com/igor47/csheet/blob/main/src/lib/formSchemas.ts).\nIf there's interest in packaging this as a standalone library, ping me on [the issue](https://github.com/igor47/csheet/issues/57).\n",
            "url": "https://igor.moomers.org/posts/zod-schemas-htmx",
            "title": "Data Boundary Layer for HTMX with Zod",
            "summary": "My experience creating a data boundary layer with Zod in Hono and HTMX.\n",
            "image": "https://igor.moomers.org/images/abstract-data-parsing-quilt.jpg",
            "date_modified": "2025-11-14T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/climate-week-2025",
            "content_html": "\nWe're in a critical moment for the climate.\n[2024 was the warmest year on record](https://www.nasa.gov/news-release/temperatures-rising-nasa-confirms-2024-warmest-year-on-record/), and we're continuing to hit [devastating](https://www.bbc.com/news/articles/cd9qy4knd8wo) [climate](https://content-drupal.climate.gov/news-features/event-tracker/hurricane-helenes-extreme-rainfall-and-catastrophic-inland-flooding) [disasters](https://www.theguardian.com/environment/2024/sep/27/heat-wave-death-record-southwest).\nMeanwhile, we are facing [the most climate-hostile administration](https://www.theguardian.com/environment/2025/may/01/trump-air-climate-pollution-regulation-100-days), even while international corporations [retreat](https://apnews.com/article/bp-oil-green-reset-1b9cfca4c2da138f83ace86b5945e053) from their [climate pledges](https://www.spglobal.com/market-intelligence/en/news-insights/articles/2025/1/with-jpmorgan-gone-all-major-us-banks-have-now-left-global-climate-alliance-85961423).\n\nWhat's a climate-concerned citizen to do in this time?\nTo find out, I went to two events at SF Climate week this year, both hosted by [Giving Green](https://www.givinggreen.earth/).\nThe first was a private panel specifically focused on philanthropy in the age of Trump.\nThe panelists included representatives from [outlier projects](https://www.outlierprojects.org/), [1.5Climate](https://onepointfiveclimate.org/), and [Skyline Foundation](https://skylinefoundation.org/).\nThough the event was ostensibly a panel, it really took the form more of a general conversation, with folks throwing ideas back and forth around the room.\n\nThe second was a [public event](https://lu.ma/674vd8e0) more general focused on \"how to do climate philanthropy\".\nThis was more of a panel and a Q&A format, with two parts.\nThe first, hosted by Giving Green, featured [Steve Newman](https://climateer.substack.com/), and Younger Family Fund advisor, [Nathan 
Aleman](https://www.linkedin.com/in/nathan-aleman-8b17082/) discussing their journey in climate philanthropy.\nThe second part was hosted by Breene Murphy of the [Carbon Collective](https://www.carboncollective.co/) and focused on a discussion of green impact investing with Dan Chu, Executive Director of the [Sierra Club Foundation](https://www.sierraclubfoundation.org/).\n\nI learned a lot, and I wanted to synthesize some learnings while they're still fresh in my mind.\n\n## What to do About Trump\n\nGiving Green's 2024 priorities included 8 areas where their research converges on high scale, reasonable feasibility of making a difference, and high need for additional financial contributions:\n\n![Chart listing Giving Green priorities](/images/giving-green-priorities-2024.png)\n\nTheir response to Trump was to take a basically defensive posture focused on advancing next-gen nuclear and advanced geothermal, which both have strong bi-partisan support.\nThe response in the room, and on the panel, was more varied.\nSome ideas I heard:\n\n* There is a lot of opportunity for what my colleague [Aimee](https://www.linkedin.com/in/aimee-gotway-bailey/) calls \"revenge policymaking\" in the states.\n  California recently became the 4th largest economy in the world, eclipsing Japan, and there's certainly a lot of appetite here and in other blue states (NY, Washington) to continue making climate progress.\n  This thread aligns well with movements like Abundance ([substack](https://substack.com/@modernpower), [book](https://bookshop.org/p/books/abundance-what-progress-takes-derek-thompson/20165403)), which are also oriented around building effective governance.\n  I broadly agree that Dems have a brand problem in state and local governments, a (not-unwarranted) perception of dysfunction that doesn't do us any favors when running for national office.\n  It seems like a good idea, in a moment when we're so disempowered nationally, to clean house locally.\n\n* At the same time, 
it took a lot of effort to get the IRA and IIJA passed federally, and it would not have been possible without a lot of infrastructure in DC.\n  To retreat from federal policy would cause that infrastructure to wither -- that is, to wither **more** given the attrition that's already happening in DC as a result of DOGE and other efforts.\n  People were particularly concerned about NOAA and the Loan Programs Office at the DoE.\n  Nobody thinks philanthropy can fully back-stop the massive federal spending cuts, but we can at least reduce the damage.\n\n* There is clear and growing need for adaptation on top of mitigation.\n  Outlier Projects in particular is focused on getting past the \"moral hazard\"/\"you're giving in\" reactions to approaches like [SRM](https://en.wikipedia.org/wiki/Solar_radiation_modification).\n  This is also a place where climate tech and philanthropy approach more traditional community building.\n  In a time of overshoot and climate disruption, community resilience might be more important than CDR.\n  My experience at [Recoolit](https://www.recoolit.com/) has me feeling skeptical that there's anyone out there willing to pay for the latter.\n\n* The US is just one emitter, and there's a whole world out there.\n  Another group I'm involved with, the [Founders Pledge Climate Fund](https://www.founderspledge.com/funds/climate-fund/about), has also been very focused on the growth of emissions in the developing world.\n  Opinions in the room mostly leaned towards \"We are American philanthropists, familiar with the US space and most interested in making a difference in the US\".\n  One bogeyman was a rumored upcoming executive order that would prohibit 501(c)(3) investments outside the US, though folks were both prepared for legal action in response, and convinced that there would continue to be *some* way to fund work outside the country.\n  Another concern was something along the lines of \"should we be telling people in other countries what to do\".\n  I 
strongly disagree with this objection, and feel that we on the left should have a much stronger focus on building and wielding political power.\n  There's a large space between wielding power coercively and being a strong advocate for things we believe in.\n  Our aversion to power has ceded the field to folks who are much less scrupulous about things.\n\n* Besides being defensive/reactive, a lot of folks in the room discussed going on the offense.\n  For instance, folks brought up coming up with our own version of Project 2025, so that we're ready with a comprehensive agenda when the moment comes.\n  Besides promoting green policies in more local governments, we can also work on blocking anti-green policies (\"owning the cons\"?).\n  One person in the room proposed going after cryptocurrencies and AI companies, both bases of power for our political opponents.\n\nThere were several additional threads in the room that I was unable to follow, given my position as an interested outsider in the space.\nFor instance, I was not aware of the retreat of [Breakthrough Energy Ventures](https://www.breakthroughenergy.org/) from the climate space, a development which seems to have left a large funding gap.\n\nOverall, this event was both informative and inspiring.\nIt helped me feel less alone in a turbulent and scary time.\nIt's good to look into the faces of dedicated, talented people who share my concerns, are thinking through solutions from so many angles, and who are both generous with their own ideas and so open-minded and receptive to others'.\nIt definitely felt like a community-building event -- I would love to remain connected with many of the folks I met!\n\n## Climate Philanthropy\n\nMy second SF Climate Week event began with [Dan Stein](https://www.linkedin.com/in/daniel-stein-8210a639/) hosting [Steve Newman](https://climateer.substack.com/) and [Nathan Aleman](https://www.linkedin.com/in/nathan-aleman-8b17082/).\nBoth are experienced climate 
philanthropists, and it was interesting to hear how they approach the problem space.\nTo some extent, there's a bit of confirmation bias here.\nI already generally agree with Giving Green's thesis about the importance of climate policy advocacy, and the speakers they brought in are preaching to the choir.\nNewman in particular mentioned several times that, had Giving Green existed at the start of his journey, he would have done much less research and much more of just letting Giving Green manage his climate philanthropy dollars.\n\nIt was interesting to hear Nathan Aleman discussing his climate philanthropy given the diverse political viewpoints in his family.\nI'm very worried about the continuing polarization/politicization of climate, and always looking for non-partisan climate opportunities.\nAleman mentioned [Deploy/US](https://www.deployus.org/) as the organization they fund, though he noted that the impact is not yet clear.\n[ClearPath](https://clearpath.org/) is another organization that seems both right-leaning and climate-focused.\n\nI was curious to hear how both panelists thought about the effectiveness of their philanthropy.\nFor metrics- and data-oriented people, it can be difficult to think through contributions whose effects might not be evident for decades.\nNewman mentioned that he generally attempts to establish that his grantees are experts, and then defers to their expertise.\nHe mentioned a useful heuristic of \"do they want to get off the phone with me to get back to work\", vs \"are they more focused on developing relationships with donors like me\".\n\n## Impact Investing\n\nThe final event of the day was a discussion between [Breene Murphy](https://www.linkedin.com/in/breene-murphy-climate-friendly-401k/) and [Dan Chu](https://www.linkedin.com/in/dan-chu-scf/).\nThis was probably the most informative and action-oriented event of the day.\n\nFor most of us, our savings and assets are invested for the purposes of maximizing financial 
return, agnostic of how that's accomplished.\nFolks are broadly aware that some of the organizations in our portfolios might not be, like, super cool or whatever.\nBut investing is simultaneously very complicated and connected with deep-seated feelings of security.\nLeave it up to the experts?\nAt best, we might shift assets into [ESG](https://en.wikipedia.org/wiki/Environmental,_social,_and_governance) funds.\nThose have become a [subject of controversy](https://www.volts.wtf/p/the-depthless-stupidity-of-republicans) in the past few years, while also having [dubious real-world impact](https://insights.som.yale.edu/insights/green-investing-could-push-polluters-to-emit-more-greenhouse-gases) or perhaps outright [greenwashing](https://fsc.org/en/blog/what-is-greenwashing).\n\nIs there a better way to leverage the power latent in the combined assets of climate-conscious investors?\nThis is the broad space of [impact investing](https://www.investopedia.com/terms/i/impact-investing.asp).\nThe Sierra Club's [Shifting Trillions](https://www.sierraclubfoundation.org/shifting-trillions) initiative is aimed at building a movement around this project.\nThe talk was both a primer in the space, and a set of real-world examples of Sierra Club's impact investing fund from Dan Chu.\n\nThere are a couple of approaches to the space:\n\n* **Divestment**, which aims to move money out of polluting companies.\n  There are a few organizations primarily focused on this approach.\n  For example, [Fossil Free California](https://fossilfreeca.org/) aims to divest California's pension funds from fossil fuel companies.\n  Both Breene Murphy and Dan Stein discussed divestment as \"the least effective\" option in the toolkit, and [other sources agree](https://www.gsb.stanford.edu/insights/why-divestment-doesnt-hurt-dirty-companies).\n\n* **Shareholder Activism**, where investors in polluting companies vote their shares for better outcomes.\n  A poster-child in this fight is the Engine1 
ETF, which famously [installed climate-aligned members on the board of ExxonMobil](https://engine1.com/transforming/articles/engagement-with-exxon-strengthened-company-value-for-shareholders/) (here, again, Dan Chu alluded to the relative ineffectiveness of such an approach; what's the purpose of installing climate activists on the board of an oil company?).\n  Arguably, as an individual investor, devoting any attention to your shareholder votes is [a huge waste of time](https://www.shareholderforum.com/access/Library/20240924_Bloomberg.htm).\n  However, by aggregating the votes of many smaller investors, it might be possible to force companies to take bigger action.\n  The panelists mentioned [iconik](https://www.iconikapp.com/) as a tool to help do this.\n  Apparently, opportunities for [impact via shareholder activism do exist](https://www.morningstar.com/sustainable-investing/4-climate-votes-that-matter-this-years-proxy-voting-season).\n\n* **Reinvesting** into climate solutions, where instead of just dumping specific \"bad\" companies, you focus on investing in \"good\" ones.\n  Early-stage investments especially can be a huge help here, but the deal flow for early-stage companies is a limiting factor.\n  With respect to shifting trillions, there must be somewhere for these large flows of money to go.\n  [Carbon Collective's CCSO](https://www.carboncollectivefunds.com/ccso/) ETF, from Breene Murphy's own firm, is one great example of this approach.\n  Another organization mentioned by several folks was [Prime Coalition](https://www.primecoalition.org/mission-and-vision).\n  Carbon Collective also has a [green bonds fund](https://www.carboncollectivefunds.com/ccsb/), theoretically enabling an entirely green balanced portfolio.\n\n* **Storytelling** as an action which enables all the others.\n  For instance, [fossil fuels have underperformed the market](https://ieefa.org/articles/another-bad-year-and-decade-fossil-fuel-stocks) recently, making them a comparatively bad 
investment.\n  Renewable energy, meanwhile, has [outperformed](https://blog.carboncollective.co/top-renewable-energy-stocks-beat-fossil-fuels/).\n  Making this story more salient in the minds of investors might help accelerate the shift.\n\nDan Chu highlighted a few examples of the dramatic impact that the Sierra Club's impact portfolio was able to have on the world.\nFor instance, they funded [Solar Holler](https://www.solarholler.com/), which helps decarbonization while building critical allies in red states.\nAnother powerful example was a non-recoverable grant they made to Standing Rock.\n\nDan had what I perceived as an interesting take on \"impact investing\".\nIn his (approximate) words:\n\n> If you're doing market rate of return, you're participating in the existing extractive economy\n\nThis is somewhat of a hard sell.\nI think the folks at Carbon Collective would like to argue that you can have your cake and eat it, too -- investing in values-aligned organizations while preserving your portfolio for continued activism, or maybe even building wealth.\nOther sources agree:\n\n<blockquote>\n  <p>Our investment conviction is that sustainability-integrated portfolios can provide better risk-adjusted returns to investors.</p>\n  <footer>— <cite><a href=\"https://ieefa.org/resources/ieefa-update-blackrock-investors-sustainable-portfolios-provide-stronger-risk-adjusted\">Larry Fink, BlackRock</a></cite></footer>\n</blockquote>\n\nI lean more towards Larry Fink's side than Dan Chu's here.\nI think climate investing has the potential to both make a difference and provide *better* returns.\nI'm down to put some money behind this conviction, and would like to encourage my friends and family to do the same.\n\n## Summing Up\n\nIn an unprecedentedly dark moment for climate change, I found my day at SF Climate Week inspiring and full of actionable insights.\nAn important next step for me is to continue to participate in this community.\nThis also feels like a good 
moment to step up my climate philanthropy, and encourage my fellow philanthropists to do the same.\nWe can't close the gaps left by the retreat of federal funding, but we should strive to preserve what we can.\n\nAdditionally, I have a ton of next steps on the impact investing front:\n\n* Look into shareholder activism using iconik\n* Look into shifting my assets, perhaps using something like [Values Advisor](https://valuesadvisor.org/)\n* Continue doing more early-stage climate investing\n\nIf you're looking for angel investors in your climate tech company -- get in touch!\n",
            "url": "https://igor.moomers.org/posts/climate-week-2025",
            "title": "Notes from SF Climate Week 2025",
            "summary": "My takeaways from the (few) events I attended at SF Climate Week 2025.\n",
            "image": "https://igor.moomers.org/images/green-vs-dystopia.jpg",
            "date_modified": "2025-04-25T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/solar-ev-at-burning-man-2024",
            "content_html": "\n# A Solar-powered EV at Burning Man 2024\n\nFor burning man 2024, I had two goals which sadly remain kind of unusual in 2024:\n* Drive my EV there\n* Power my camp with solar panels\n\nAs an auxiliary goal, remembering the crippling heat of 2022, I was hoping to be able to run the AC in the EV for a few hours a day to provide an air-conditioned hidey hole.\nI was able to accomplish all of these goals -- sort of!\nHere's some photos of my system.\nRead on to learn about the process and all the choices and pieces that went into the project.\n\n![The complete system at Burning Man](/images/bmsolar/system-at-bm.jpg)\n\n## Panel Mounting\n\nThe Rivian R1S has a glass roof, and is thus a greenhouse.\nAs an experiment, I parked in a sunny spot in my driveway and ran the AC on max.\nThe car remained somewhere north of 85, and the AC was clearly struggling.\n\nIt made sense to use the solar panels to shade the car, which then further implies just mounting the panels on the roof.\nSolar panel mounting is somewhat of a black art, so far as my internet research went.\nI eventually discovered unistrut, which is available [at my local Home Depot](https://www.homedepot.com/p/Superstrut-10-ft-12-Gauge-Half-Slotted-Metal-Framing-Strut-Channel-in-Gold-Galvanized-ZA1200HS-10/100125003).\nI could mount it to my cargo cross-bars using [t-channel bolts](https://www.rivianforums.com/forum/threads/crossbar-track-bolts-my-findings-what-are-you-using.14193/).\nIt comes in 10' lengths, and I needed somewhat more, but it was easy to get the necessary hardware to join a 2' length to the 10' length.\n\nAll the hardware for this at Home Depot was 3/8\", including these [square washers](https://www.homedepot.com/p/Superstrut-3-8-in-Square-Strut-Washer-Silver-Galvanized-5-Pack-ZAB2413-8EG-10/100390468) which I could use to hold the panels to the strut where two panels abutted.\nHowever, I could not use the square washers to hold the end of the panel -- they slipped 
off when not supported on both sides.\nI needed solar-panel-specific [end-clamps](https://amzn.to/4dshqy7).\n\nAs solar panels are all made in China, most solar hardware is metric, and I could only find those clamps with 8mm bolts.\nTo make life \"simpler\" for myself, I wanted to minimize the variety of fasteners.\nI \"imperialized\" the metric hardware by drilling the holes out for a 3/8\" bolt.\nI also had to use a Dremel to grind away some of the metal around the hole, so I could get a hex-head bolt to rotate in the tight space.\n\nThe end result looks like this:\n![car with panels mounted, in my driveway](/images/bmsolar/car-with-panels.jpg)\n\n## Solar Panel Selection\n\nCraigslist is a great place to get solar panels.\nMy original plan was to buy used panels, which tend to be quite cheap in $/watt, but also have a lower efficiency and so are larger for a given system wattage.\nSince I was constrained on space on the top of the car, I ended up getting new panels from a CL reseller.\nI got some pretty nice modern bifacial panels at 20.9% efficiency at about $0.31/watt.\n[Here's the datasheet](/images/bmsolar/solar-datasheet.pdf) for the panels I used.\n\nI wish I could have found panels that were longer and narrower.\nThis would have enabled me to put more panels on the roof, but also to have the panels shade the side of the car.\nAlas, panels seem to come in generally squarish dimensions.\nI ended up with a piece of aluminet hanging off the unistrut for side-shade.\n\nI also wish I could have had thinner, lighter, more flexible panels.\nBesides trying to save space and weight, I'm also pretty nervous about breaking those big sheets of glass.\nHowever, the glass panels with aluminum mounting borders are most common for industrial installations, and so are cheapest.\nThe flexible panel market is more for hobbyists, and those panels tend to be higher cost per watt, lower watts per m<sup>2</sup>, and less readily available for purchase.\n\n## 
Calculating Power Needs\n\nI estimated that, if I wanted to run the AC in the Rivian for a few hours a day, that would be my biggest power draw, far dwarfing any other loads I might need (e.g. lights, charging personal batteries, etc...).\nHow much power did I actually need?\nThis is not an easy question to answer.\nRivian studiously avoids any mentions of watts, volts, or amps in the in-car UI.\nAll battery displays are in %, with the total capacity of the battery (in kWh) nowhere to be found, or in miles, which are a meaningless unit only peripherally correlated to actual road miles given the wildly differing efficiencies across terrain, tires, driving style, and a thousand other factors.\n\nSo, how much power would I use keeping the AC on?\nI did a bunch of hacky tests, keeping the AC on in my driveway and checking the difference in % with my eyeballs every so often.\nBut I was only really able to answer the question when I learned about a [secret RiDE menu](https://www.rivianforums.com/forum/threads/latest-ride-menu-code.21755/).\n\nUsing this menu, I was able to learn that when awake, my Rivian R1S uses about 500W of power *baseline*, just sitting there.\nWith the AC on, the usage goes to about 2500W.\nThat was more than double the amount of solar I planned for, meaning that for every hour of running the AC, I would need about 2.5 hours of charging to keep the same SoC.\nIt's actually slightly worse because, when charging at L1/120V, the Rivian seems to put only about 60% of the power into the HV battery.\nPresumably, the rest is going into some combo of staying awake (500W!) 
and keeping a built-in inverter running.\n\n## Inverters and Batteries\n\nI spent a LOT of time researching solar inverters.\nThe first annoying part is that I had to get a battery.\nThis is frustrating, because my Rivian is already basically a giant battery on wheels.\nWhy did I need *another* battery?\n\nOne reason is that the R1S is a 400V architecture, and I didn't really see any inverters running at those voltages.\nAnother is that, while [Rivian continues promising V2G and bi-directional charging](https://enteligent.com/products/enteligent%E2%84%A2-tlcev-t1-trusted-charging-presale), this so far remains vaporware.\nWithout bidirectional charging, there's no way to use power when the sun is not shining.\nSo I would need a battery if we wanted to run any loads at night.\nBut also, most inverters need either a battery or grid power to function.\nThere do seem to be *some* off-grid solar EV chargers that don't require an intermediate battery, but these are rare.\nI found [this one](https://enteligent.com/products/enteligent%E2%84%A2-tlcev-t1-trusted-charging-presale), but it's in pre-order only and not yet generally available.\n\nIn the end, I went for [this solar inverter](https://richsolar.com/collections/inverters/products/nova-3k-3000-watt-48-volt-off-grid-hybrid-inverter), which was temporarily available from [ShopSolarKits](https://shopsolarkits.com/collections/off-grid-solar-inverters/products/3000-watt-48v-all-in-one-inverter) for only $400.\nI also snagged [this battery](https://www.amazon.com/gp/product/B0CP7FZC1P) for about $500.\nFinally, it seemed like the inverter didn't have a good way of showing the power in/out of the battery, so I last-minute purchased [this little power meter](https://www.amazon.com/dp/B013PKYILS) to install on the battery.\nThis last purchase was incredibly clutch, allowing me to track power consumption at a glance.\n\n## Wiring and Dust\n\nThe inverter I got was not rated for dust, and I didn't want it failing half-way 
through the burn.\nI decided to build an enclosure for it, with air filters on top and bottom and an auxiliary fan so it could shed heat while not getting too dusty.\nI began by mounting all the components on the back -- a friend helped with this.\n\n![Initial system being wired](/images/bmsolar/system-with-friend.jpg)\n\nThis allowed me to do the full system test.\nNothing exploded!\n\n![First system test](/images/bmsolar/first-system-test.jpg)\n\nI used three breakers -- one as a battery disconnect, one for the HVDC solar, and another for the inverter AC output.\nThere are not a lot of devices that run on 48V, so I added a [12V step-down regulator](https://www.amazon.com/gp/product/B07GPZWG1S) meant for golf carts.\nIn hindsight, I wish I had gotten a larger one so I could power more USB-C ports in parallel, but this one was okay.\n\nNext, I built a plywood box around the back plate.\nI used a pocket hole jig to join all the sides.\nThe molding you see on the bottom in this photo is where the bottom air filter is meant to rest.\n\n![Initial three-sided enclosure](/images/bmsolar/initial-enclosure.jpg)\n\nOn the front of the enclosure, I wanted a door so I could turn switches on/off and see inverter status.\nI routed an opening out of the front panel, and added molding so the door would have something to sit against as a dust barrier.\nI used velcro to keep the door closed.\n\n![Opening routed out of the front panel](/images/bmsolar/front-panel-opening.jpg)\n\nI also routed out openings for the 120V outlet, and for handles on the sides of the box.\nI used bolts through the back panel to bring out battery power, as well as a ground connection which I didn't end up using.\nI brought PV in and 12VDC out using [Anderson power pole panel mounts](https://www.amazon.com/dp/B097QG383J).\nHere's a final walkthrough of the system I made for the camp:\n\n<div class=\"d-flex justify-content-center\">\n<video controls disablepictureinpicture style=\"max-height: 800px\">\n  
<source src=\"/videos/solar-system-walkthrough.mp4\" type=\"video/mp4\" />\n  Download the <a href=\"/videos/solar-system-walkthrough.mp4\">MP4</a>.\n</video>\n</div>\n\n## So ... How Did It All Work Out?\n\nOn the whole, the system worked well.\nMounting the panels on the car was easy, and they felt secure.\nNothing died, and my air filters kept the dust out of the enclosure (although it wasn't a very dusty burn).\nI got surprisingly good power production, more than 1kW at peak.\n\nIt wasn't a very hot year, and we didn't end up needing the Rivian's AC.\nHowever, a lot of folks in camp had swamp coolers, and we ran a swamp cooler grid off the solar.\nIt's always convenient when your biggest loads run when the sun is shining!\n\nWe also had several mornings when we made \"solar waffles\" using electric waffle irons powered by the solar system.\nI found this incredibly satisfying, though opinions differed on whether this made the waffles taste any better.\n\nOn the other hand, some things didn't work well, mostly having to do with the Rivian.\nFirst -- we pulled a pretty heavy trailer, which really affected my range.\n\n![Rivian with the trailer](/images/bmsolar/rivian-with-trailer.jpg)\n\nAs a result, we arrived at camp with about 49% charge -- clearly not enough to get back to a charger in Fernley.\nAll week long, I kept trying to charge up the Rivian off solar, but all I managed to do was keep it at 47%.\nOne reason was that, although I set the Rivian to `stay off` mode, I failed to turn off an AC schedule that was set up around my partner's daily commute, and which cooled the car down for her in the mornings and afternoons.\nI discovered this on Thursday, when I went into the Rivian to grab something and found it refreshingly cool.\nChalk another one up to the Rivian's counter-intuitive UI!\n\nThe Rivian's portable charger, in 120V mode, allowed me to set the charge rate between 8A (960W) and 12A (1440W).\nEven at 8A, I would be draining the battery 
somewhat.\nWhenever I plugged the Rivian in, I had to remember to unplug it, which I sometimes forgot to do until too late in the day to fully re-charge the 48V battery.\n\nAs a result, on two mornings I found the 48V battery totally dead in BMS undervolt protection mode.\nThis is when I learned that my inverter wouldn't turn on without battery power.\nI ended up having to \"jump-start\" the inverter.\nI did this by wiring two 20V DeWalt batteries in series (40V total, enough to clear the inverter's 36V minimum-voltage threshold), connecting them to the battery terminals long enough for the inverter to boot, and then disconnecting them once the solar kicked in.\n\nMy conclusion was that 1kW of solar is enough to either run camp or charge my EV, but not both.\nOn the final day of the burn, I ended up plugging the Rivian into the Silicon Village grid, where a few plugs had freed up once a few of their RVs had left.\nIn hindsight, I would have had a better time with the system if I hadn't attempted to charge the Rivian off the solar at all.\n\n## What's Next\n\nI think the basics of the system are solid, but I definitely need MOAR SOLAR.\nWith another 5 panels, I could have enough power to charge the EV at max 120V speed, and still keep the camp fully powered.\nThe big question for me, then, is how to mount 8 solar panels, ideally in a way that also makes use of the shade they cast.\n\nI honestly think that building the mounting structure on playa is the biggest obstacle to adoption.\nLooking at a few big solar camps, I saw systems that varied wildly, mostly using steel tubing with some kind of panel mount clamps.\nMy experience locating the correct hardware even just for my unistrut helped me realize just how much work is involved in the hardware selection.\n\nI wonder if there's a market for easy plug-and-play systems for camps, including all the components -- panels, structural members, mounting hardware, inverter, and battery.\nThis is like a [black rock 
hardware](https://formandreform.com/blackrock-hardware/) for in-camp solar.\nIf you're interested, let me know -- maybe I can put this together?\n\nOverall, it was pretty encouraging to see *much* more solar at the burn this year, and commensurately fewer loud, smelly generators.\nHowever, there's still a lot of work to do.\nAlso, bringing an EV to the burn continues to be fairly challenging.\nI'm excited to keep iterating on the problem with my fellow burners!\n",
            "url": "https://igor.moomers.org/posts/solar-ev-at-burning-man-2024",
            "title": "A Solar-powered EV at Burning Man 2024",
            "summary": "How to go to Burning Man in an EV, and charge it with solar while you're there!\n",
            "image": "https://igor.moomers.org/images/bmsolar/back-of-system-burn.jpg",
            "date_modified": "2024-09-28T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/now-march-2024",
"content_html": "\n**Note**: This is an old NowNowNow post!\nThe next post after this one [is here](/posts/now-feb-2025).\nThe most recent post [is here](/now).\n\n---\n\nI'm finally getting around to making a [now page](https://nownownow.com/about).\nI've been meaning to do this for a while, ever since I heard about `now` pages from [Raph Lee](https://www.linkedin.com/in/raphaeltlee/).\nThanks as always, Raph!\n\n### Work ###\n\nDirect emissions from just residential buildings are [almost 6% of all CO2 emissions](https://www.iea.org/data-and-statistics/charts/global-co2-emissions-from-buildings-including-embodied-emissions-from-new-construction-2022) -- that's 2x the emissions from aviation.\nIndirect emissions -- that is, emissions due to energy use in residential buildings -- are another **11% of all** emissions.\nTo reduce building emissions, we have to electrify buildings and then/simultaneously de-carbonize the grid.\nThe [IRA](https://home.treasury.gov/policy-issues/inflation-reduction-act) has a number of provisions to accelerate building electrification.\nFor instance, sections [25C and 25D](https://assets.ctfassets.net/v4qx5q5o44nj/3FYfJiYMILiXGFghFEUx0D/279f180456183d560d9c68d4de8baa67/factsheet_25C_25D.pdf) provide generous tax credits for moving to heat pump or geothermal home heating.\n\nBesides the federal tax incentives, there are also local incentives at the state, county, city, and utility provider level.\nCombined, these can make the cost of projects like replacing fossil-gas furnaces with heat pumps much cheaper.\nHowever, actually getting these incentives is a complex process.\nThere are barriers at every step of the way:\n* Finding out about the incentives\n* Understanding the requirements\n* Applying for the incentives\n* Budgeting and executing the actual project\n\nTo help with this process, 
I've joined [Rock Rabbit](https://rockrabbit.ai), so far as a contract software engineer.\nAt RR, we've built a database of the available incentives, including their eligibility and application requirements.\nWe turned this data into a wizard which allows homeowners or contractors to plan a project, understand how much money they'll get back in rebates or credits, and smooth the application process.\nIn some cases, we can directly submit the application to an incentive provider and track the rebate progress.\n\nI've been playing a hybrid full-stack tech lead / eng manager role.\nOn the backend, I've implemented CI, cleaned up our infrastructure and deployment process, and added tests to help us be more confident that we're returning the correct set of incentives for a project.\nOn the front-end, I've built the scaffold for a web app (we've been mobile-only so far).\nI'm particularly excited about auto-generating an API client from our FastAPI/OpenAPI spec.\nThis allows us to keep backend Python types in sync with FE TypeScript types automatically.\n\n### Projects ###\n\nBesides this blog, my main project has been my self-hosted infra.\nIn my ideal world, there are no giant cloud service providers who make money by selling my data and my attention.\nI generally agree with the likes of [Yuval Noah Harari](https://www.ynharari.com/), [Jaron Lanier](https://www.jaronlanier.com/), or [Cory Doctorow](https://pluralistic.net/) that those business models of the internet are unsustainable, unethical, and harmful to individual and collective well-being.\n\nInstead, I want small groups of friends to collectively run personal infrastructure.\nThis is connected both with my ideas on electronic liberty, and also with my ideas of group cohesion and bonding.\nTraditionally, we've relied on our social groups for our survival.\nToday, we all work remotely for different organizations from our own bedrooms.\nWhen we have friends at all, it's merely for entertainment.\nI would like 
to bring back a world in which we depend on each other and collaborate to accomplish shared goals.\nDigital infra is a good place to start.\n\nMy personal cloud started with an email server back in 2003 or so.\nWe've been running a shared media collection with services like [Subsonic](https://www.subsonic.org/) for more than a decade.\nHowever, usability has been limited to my nerdiest friends.\nMy goal over the past few months has been to both set up more services, and to make them more usable.\n\nSetting up more services has been much easier thanks to Docker and Docker Compose.\nThings got even better once I nailed secret management with [dcsm](/posts/secrets-in-docker-compose).\nFor usability, I wanted to create an SSO system and a login portal.\nI brought up [Authentik](https://goauthentik.io/) for SSO, so now there's a self-service signup flow.\nI had to modify several services to get them to support SSO.\nFor instance, I have [a PR](https://github.com/janeczku/calibre-web/pull/2899) to [Calibre Web](https://github.com/janeczku/calibre-web) to add SSO support.\n\nA big milestone was announcing the project to my broader group of friends.\nI did that a few weeks ago, and now have almost a dozen active users in the system!\n\n### Travel ###\n\nI'm still living in Sacramento, with regular trips to the Bay Area.\nHowever, over the next month I have some big trips coming up.\nFirst, I'm going to Cabo San Lucas for a cousin's wedding.\nI'm hoping to get at least a couple of days of scuba diving while I'm there.\n\nAfter that, I will be driving to Austin, Texas with a friend.\nWe'll be at the [Texas Eclipse Gathering](https://seetexaseclipse.com/), and then road-tripping back home.\nExcited to do another long EV road trip, and am curious how the infrastructure has come along in the past year.\nFingers crossed that Rivian rolls out NACS charging on the Tesla network and ships me an adapter before we leave!\n\n### Reading ###\n\nI've been reading mostly fiction 
lately.\nA big project for me was re-reading [Anathem](https://bookshop.org/p/books/anathem-neal-stephenson/8961850) by [Neal Stephenson](https://www.nealstephenson.com/).\nIt's been a decade since I read it the first time, and I enjoyed it even more the second time around.\nIt made me wish I was living in the Mathic world, spending all my time learning and debating ideas with my friends.\nI also enjoyed the mind-bending multiverse hijinks of the [Hylean flow](https://anathem.fandom.com/wiki/Hylean_Flow) concept.\n\nI also re-read [Recursion](https://bookshop.org/p/books/recursion-blake-crouch/9597794) by [Blake Crouch](https://www.blakecrouch.com/).\nI pulled it up randomly in my library, and initially had no memory of reading it the first time -- a fun trip for a book all about memory!\n\nCurrently, I'm reading [The Deluge](https://bookshop.org/p/books/the-deluge-stephen-markley/18405115).\nThe book is quite well-written, with realistic characters, a good understanding of climate policy, and lots of fun inside-baseball politics.\nOn the other hand, is it explicitly a dystopian novel?\nIt certainly feels like one, and there's enough catastrophe to go around in the book, both for the planet and for the lives of the characters.\nI generally avoid dystopian fiction, but now that I'm in it, I want to see how it turns out.\n\nIn the last few months I also plowed through all of the [Bobiverse](https://bookshop.org/p/books/we-are-legion-we-are-bob-dennis-e-taylor/6389676) books.\nJust a fun, hard sci-fi romp through the galaxy.\n\n### Future ###\n\nI am still thinking about whether I want to go to grad school and do a career transition into energy engineering.\nI still really want to work on the transmission and distribution grid.\nI want to tackle [GETs](https://inl.gov/national-security/grid-enhancing-technologies/) and the problem of the [interconnection queue](https://www.utilitydive.com/news/energy-transition-interconnection-reform-ferc-qcells/628822/).\nI am sure 
there is a lot of work for a skilled software engineer in this space.\nIf you work in the space or have ideas for me, please reach out!\n",
            "url": "https://igor.moomers.org/posts/now-march-2024",
            "title": "Now: March 2024",
            "summary": "Now page for March 2024\n",
            "image": "https://igor.moomers.org/images/rr-jira-pica.png",
            "date_modified": "2024-03-17T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/light-up-boats",
"content_html": "\nIn October, I completed [ASA104](https://asa.com/certifications/asa-104-bareboat-cruising/) with [Modern Sailing](https://www.modernsailing.com/content/bareboat-cruising-asa-104).\nI wanted to take out some of their larger cruising boats, but due to the [wind patterns in the San Francisco Bay](https://boardsportscalifornia.com/understanding-san-francisco-bay-area-weather-the-wind-beneath-our-wings/), there's no wind in the winter.\n\nSailboats are for sailing -- I am not a huge fan of just sitting around burning diesel for fun.\nMy excuse came in the form of the [Sausalito light-up boat parade](https://www.winterfestsausalito.com/).\nI could get on the water in a big boat, and not feel **too** bad about just motoring around all day.\nPlus, I had never been in a parade before!\n\n## Decoration\n\nModern Sailing rentals run for a 24-hour period starting at 9am.\nOur plan was to get the boat first thing in the morning and spend the day decorating.\nWe would go to the parade at 6 pm, then sleep on the boat and take the decorations down in the morning.\n\nHere's our final result:\n\n<p>\n<video controls muted loop disablepictureinpicture>\n  <source src=\"/videos/lightupboat.webm\" type=\"video/webm\" />\n  Download the <a href=\"/videos/lightupboat.webm\">WEBM</a>.\n</video>\n</p>\n\n## Rigging The Mainsail\n\nA friend had a giant cache of [twinkly strings](https://twinkly.com/en-us/products/strings-multicolor).\nOur plan was to hang them up in the mainsail triangle, and display some cool mapped patterns in 2D.\nHere's our rigging plan:\n\n![Rigging Plan](/images/light-up-boat-rigging.svg)\n\nI was really worried about losing the main halyard, so we used a serious line with some extra-redundant knots to secure it at the mainsail clip and on the deck.\nFor the rigging, we used paracord with some alpine butterflies tied into it:\n\n![Alpine Butterfly](/images/alpine-butterfly.jpg)\n\nThe first few twinkly strings, next to the mast, used the entire 
length of the string.\nFor the later strings, we could start at the top, go back down to the boom, then go back up to the top again.\nThis required hoisting the rigging, figuring out where the string would end up, and securing it on the rigging that ran along the boom.\nThen we could lower the whole rigging, and secure the other end of the string in the loop that ran along the topping lift.\n\nThis was a huge pain.\nChristmas lights **want** to get tangled, and hoisting them up and down gives them just the chance they're looking for.\nIt took us until the very end to figure out that we could just put extra paracord through the loops, and use it to hoist one string at a time.\nNext time, we'll definitely just install extra paracord in every loop from the very beginning.\n\nWe ended up using 2 strings of 400 lights, and 1 string of 600 lights -- 1400 LEDs total on just the mainsail triangle.\n\n## Mapping\n\nWe hoped to use the Twinkly app to map the lights to a 2D image.\nThis **absolutely** did not work.\nFirst, the mast is really tall -- it was difficult to even get the whole set of lights in the frame.\nTo do that, we had to stand pretty far back, which made each LED hard to distinguish in the picture.\nFinally, the strings swayed in the wind, and the whole boat rocked in the water -- no way to get a still image.\n\nThankfully, with just the default unmapped patterns, the Twinkly lights look pretty good.\nHowever, I am now kind of obsessed with the idea of mapping the lights to a 2D image.\nI think using a hybrid automatic-manual approach should work well.\nI should be able to take a single photo as the base of the scene.\nThen, I should be able to turn on sections of the lights, and then indicate their position in the base image.\nIt doesn't have to be perfect -- just good enough to get the general shape of the boat.\n\nI would like to try writing some software for this in time for next year's parade.\n\n## Other Lights\n\nI bought some of [these 
lights](https://amzn.to/3RSKvv1) to put on the lifelines.\nMy plan was to control them with [WLED](https://kno.wled.ge/).\nBut I discovered that, though they are individually addressable, they have some bug in the implementation of the protocol that causes them to flicker unpredictably.\n\nI did bring some of [these strings](https://amzn.to/3NI0LfP).\nUnlike the previous lights, these don't explicitly advertise WS2812.\nHowever, they do work well with WLED.\nTheir implementation of WS2812 is kind of [odd](https://todbot.com/blog/2021/01/01/ws2812-compatible-fairy-light-leds-that-know-their-address/).\nRather than shifting out the bits for the next light, they allow the controller to wiggle the entire common data line, and each LED knows its own address.\nThis means you cannot link multiple strands together serially -- the second string will just mirror the first.\n\nI didn't have time to do anything fancy with these lights.\nWe used them with their default controllers using some built-in patterns.\n\n## Future Work\n\nFor next time, I would dispense with the fancy Twinkly lights (which, wow, are pretty expensive!).\nInstead, I would use the cheap strings off Amazon and write my own software to control them over WS2812.\nLet me know if you have ideas for how to create a 2D mapping UI for this!\n\n## The Parade\n\nOh yeah, there was a parade!\nThat part was super-fun.\nWe got to see all the other boats up-close, and listening to the marine radio chatter with the stressed-out parade organizers trying to keep everything going was pretty entertaining.\nNext time I would like some other crew to feel confident driving -- turns out operating a 40' boat in close quarters with a bunch of other boats, in the dark, in a shallow marina, is stressful!\nWe didn't win any prizes, but we had a great time.\n",
            "url": "https://igor.moomers.org/posts/light-up-boats",
            "title": "Light Up Boat Parade 2023",
            "summary": "We decorated a boat for the Sausalito light-up boat parade!\n",
            "image": "https://igor.moomers.org/images/lights.png",
            "date_modified": "2023-12-18T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/secrets-in-docker-compose",
"content_html": "\nTL;DR: store your secrets in `git` alongside your `compose.yml` file.\nMy new service, [`dcsm`](https://github.com/igor47/dcsm), decrypts the secrets and templates them into your config files.\n\n## Primer on `docker compose` Repos\n\nLately, there's been a thriving ecosystem for running self-hosted services using `docker`.\nPackaging services with docker means abstracting away the complexities of configuring a local environment.\nUpdates are consistent across services.\nPlus, there are lots of utilities to make life easier.\nFor instance, [`traefik`](https://traefik.io/) will automatically terminate SSL and reverse-proxy to your service -- no more manual certificate management.\nAs a result, I am running more and more services using `docker compose` files.\n\nI consider each `compose.yml` file to define a \"cluster\" of services that are logically grouped.\nFor instance, I have a media cluster that handles movie ([jellyfin](https://jellyfin.org/)), music ([navidrome](https://www.navidrome.org/)), book ([calibre-web](https://github.com/janeczku/calibre-web)), and audiobook ([audiobookshelf](https://www.audiobookshelf.org/)) hosting services.\n\nI keep each cluster in its own git repo.\nThe repo includes the `compose.yml` file and the configuration for all the services in that file.\nMany services do not need any configuration beyond what is in the [`environment` key](https://docs.docker.com/compose/compose-file/compose-file-v3/#environment) of the `compose.yml`.\nOften, though, a config file is required or is a more ergonomic way to specify the configuration.\nFor instance, all my clusters have a `config/traefik/traefik.yml` file to configure `traefik`.\nI then bind-mount the config files into the container filesystem:\n\n```yaml\nvolumes:\n  - ./config/traefik:/etc/traefik\n```\n\n## How to Manage Secrets?\n\nSuppose I need a credential inside that config file?\nBefore writing [`dcsm`](https://github.com/igor47/dcsm), I was at 
the mercy of the service author.\nFor instance, every piece of `grafana`'s configuration [can be overridden with environment variables](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#override-configuration-with-environment-variables).\nSo, I could check a `grafana.ini` file into the repo with most of my config.\nThen, to add a secret (e.g., an OpenID-Connect client id/secret pair), I would:\n\n1. create a `grafana/environment` file containing just the overridden secret keys\n1. add the file to `compose.yml` under `env_file`:\n\n```yaml\ngrafana:\n  env_file:\n    - path/to/grafana/environment\n```\n\nThis is confusing -- now the configuration is split between several places.\nAlso, the `grafana/environment` file could not be checked into the repo.\nIts management becomes out-of-band; as a DevOps practitioner, I don't like that.\n\nGrafana is one of the better services here.\nLots of services require using a config file.\nSometimes, you can extract just the secret-containing part of the config and manage *that* out-of-band.\nThen there are services like [`synapse`](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html), which require a bunch of secrets in a common config file and have no mechanism for either including environment variables in the config or sourcing sub-files.\nNow, your entire config file cannot be checked into the repo.\n\n## DCSM\n\n[`dcsm`](https://github.com/igor47/dcsm) is a simple service containing some python code and [`age`](https://age-encryption.org/) for symmetric-key encryption.\nTo use DCSM, you add it to your `compose.yml`:\n\n```yaml\n  dcsm:\n    build: .\n    environment:\n      - DCSM_KEYFILE=/example/key.private\n      - DCSM_SECRETS_FILE=/example/secrets.encrypted\n      - DCSM_SOURCE_FILE=/example/secrets.yaml\n      - DCSM_TEMPLATE_DIR=/example/templates\n    volumes:\n      - ./example:/example\n```\n\nThe variables `DCSM_KEYFILE` and `DCSM_SECRETS_FILE` are 
required for basic operation.\nYou may optionally set `DCSM_SOURCE_FILE` to tell `dcsm` about your unencrypted secrets source.\nThis allows you to use the `encrypt` and `decrypt` commands, though you can also perform those operations by running `age` locally.\n\nYour secrets source is a `yaml` file containing your secrets.\nFor example:\n\n```yaml\nGRAFANA_OAUTH_CLIENT_ID: this_is_secret\nGRAFANA_OAUTH_CLIENT_SECRET: \"this is also a secret\"\n```\n\nThis file, along with your `DCSM_KEYFILE`, should be `.gitignore`d from your repo.\nThe keyfile must be copied out-of-band between your dev environment and your cluster runtime machine.\n\nYou may set any number of directories with the environment variable prefix `DCSM_TEMPLATE_`.\nIn these directories, `dcsm` will find files ending with `.template` and replace template strings with secrets from your encrypted `DCSM_SECRETS_FILE`.\nFor example, here is that grafana config file:\n\n```ini\n[auth.generic_oauth]\nenabled = true\nclient_id = $DCSM{GRAFANA_OAUTH_CLIENT_ID}\nclient_secret = $DCSM{GRAFANA_OAUTH_CLIENT_SECRET}\nscopes = openid profile email\n```\n\nThis approach enables you to keep your cluster repo consistent.\nYou can easily refer to a secret in multiple places.\nFinally -- if you need to pass secrets as environment variables, you can just template an `env_file`.\nFor instance, your template could be:\n\n```bash\nGF_AUTH_GENERIC_OAUTH_CLIENT_ID=$DCSM{GRAFANA_OAUTH_CLIENT_ID}\nGF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=$DCSM{GRAFANA_OAUTH_CLIENT_SECRET}\n```\n\nIf you store this file in your repo at `config/grafana/oauth.env.template`, then you could use it like so:\n\n```yaml\nservices:\n  dcsm:\n    image: ghcr.io/igor47/dcsm:v0.3.0\n    environment:\n      - DCSM_KEYFILE=/secrets/key.private\n      - DCSM_SECRETS_FILE=/secrets/secrets.encrypted\n      - DCSM_SOURCE_FILE=/secrets/secrets.yaml\n      - DCSM_TEMPLATE_DIR=/config\n    volumes:\n      - ./secrets:/secrets\n      - ./config:/config\n\n  grafana:\n    
image: grafana/grafana-enterprise\n    restart: unless-stopped\n    depends_on:\n      dcsm:\n        condition: service_completed_successfully\n    env_file:\n      - ./config/grafana/oauth.env\n```\n\nYou can see that `grafana` has a `depends_on` condition on the successful completion of `dcsm`.\nThis allows `dcsm` to run first and template your config files with your secrets.\nBy the time the `grafana` service starts, the config files are ready for action!\n\n## That's It\n\nI wrote this tool to meet my own need, but I hope others will find it useful as well.\nI think managing clusters via a configuration-as-code/infrastructure-as-code repo works pretty well.\nSecret management was the missing piece -- but, with [`dcsm`](https://github.com/igor47/dcsm), no longer.\n",
            "url": "https://igor.moomers.org/posts/secrets-in-docker-compose",
            "title": "Docker Compose Secrets Manager",
            "summary": "A service and an approach for managing secrets in docker compose repos.\n",
            "image": "https://igor.moomers.org/images/whales-with-secrets.png",
            "date_modified": "2023-12-15T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/soviet-kgb-stories-pt2",
            "content_html": "\n<small>**Note**: There is a [previous post](/posts/soviet-kgb-stories-pt1), about my uncle.</small>\n\nI never met my grandfather Lev -- he died (of stubbornness) just about a year before I was born.\nBut I heard lots of stories.\nIn the land of [блат (blat)](https://en.wikipedia.org/wiki/Blat_(favors)), the person in charge of the pharmacy containing scarce but potentially life-saving medication is a good person to have on your side.\nAnd my grandfather, by all accounts, knew how to play the game.\nIn my home city of Nikolaev, everyone knew him, and everyone owed him a favor -- therefore, or possibly also, everyone respected him.\n\nMy grandfather could possibly have aspired to a greater status than a two-bedroom apartment in a [хрущёвка](https://en.wikipedia.org/wiki/Khrushchevka) and the once-yearly government-sponsored vacation to the Black Sea.\nIn his later years, he was offered an opportunity to join [The Party](https://en.wikipedia.org/wiki/Communist_Party_of_the_Soviet_Union) -- a key opportunity for advancement in the USSR.\nBut he turned it down, and this is the story of why.\n\n## Some Flavor\n\nBefore we get into the meat of my story, an anecdote.\n\nThere's a common expression among Soviet Jews, which goes \"Бьют не по паспорту, а по морде.\"\nTransliterated -- \"biut-ne-po-pasportu-a-po-morde\".\nTranslated -- \"they punch you, not in the passport, but in the face\".\n\nThis is a reference to the [\"Пятая графа\"](https://ru.wikipedia.org/wiki/%D0%9F%D1%8F%D1%82%D0%B0%D1%8F_%D0%B3%D1%80%D0%B0%D1%84%D0%B0) -- the \"fifth line\" of the Soviet passport, which described the passport-holder's \"Nationality\".\nFor Soviet Jews, this line was filled in, \"Jewish\", and it served to close many doors.\nBut -- they don't punch you in the passport.\nAnti-semites are keenly attuned to stereotypical ethnic features, and it was enough to simply look Jewish to get into trouble.\n\nLev, for what it was worth, didn't look particularly 
Jewish.\nSo, when he showed up at my grandmother's house to ask her father for her hand, he was at first confused for a government inspector coming for a surprise visit to their family pharmacy.\nWhile they figured out what to do with him -- the chaos abated somewhat when my grandmother's precocious kid brother announced, \"That's not an inspector, that's Sofia's boyfriend!\" -- they bustled him into the kitchen.\n\nMy grandmother's grandmother was tasked with feeding Lev, which she went about grudgingly.\nBecause he didn't look Jewish, she felt free to mutter her complaints in Yiddish while she served him food.\n\"They put this goy in my kitchen, and I have to feed him?\nWhat's next, he's gonna ask for an (alcoholic) drink?\"\n\nMy grandfather, however, spoke fluent Yiddish.\nHe learned it at a yeshiva in Kherson, which he attended at the same time as my grandfather Semyon -- they actually knew each other in school, decades before my dad met my mom in Nikolaev.\nSo, hearing the muttering, he responded, \"Actually, a shot of vodka wouldn't go amiss.\"\nWhen the old woman realized that the suitor was actually Jewish after all, her reluctance instantly disappeared.\nMoments later the table was covered with the best in the house.\n\n## The War Years\n\nOn June 22nd, 1941, Germany declared war on the Soviet Union.\nMy grandfather Lev had just finished his first year of medical school, and he was immediately drafted.\nHe ended up serving as a medic on the front lines through the entire war.\nAfter Germany surrendered in 1945, Lev was not immediately released from the military.\nHe ended up serving in the occupation for another year, helping rebuild medical facilities in a Germany that had been reduced to rubble by the Allied campaign.\n\nWhen Lev finally returned to Odessa in October of 1946, his medical school had already finished more than a month of classes.\nHe was told to come back the following year to restart his education.\nLev was already way behind -- he would 
be four or five years older than his classmates -- and he didn't want to put life off for another year.\nWandering around Odessa, he saw an announcement that the pharmacy school was still recruiting students.\nMy grandmother had actually seen the same announcement when she showed up in Odessa to start school, a little late due to illness.\nAnd that's why I'm here to write this story today!\n\n## After School\n\nMy grandparents were married in 1949, and my uncle from [the previous story](/posts/soviet-kgb-stories-pt1) was born in 1950.\nThey moved to Nikolaev, where my grandfather immediately became the head of the pharmacological supply warehouse.\nI tried to figure out why a kid right out of school was immediately in charge of a medical warehouse.\nIt seems to boil down to two reasons.\nThe first is a massive shortage of men in the aftermath of the war.\nWhile Soviet casualties [are disputed](https://en.wikipedia.org/wiki/World_War_II_casualties_of_the_Soviet_Union), they total at least 9 million soldiers, all men.\nThe disputed accounts all hold that the official figures are much too low, and there are estimates as high as 40 million military and civilian deaths -- twice the official figures.\n\nThe second reason is that the USSR had what, by modern US standards, was unreasonably generous medical leave.\nThere was a baby boom in the USSR, just as there was in the US, and women were going into декрет (maternity leave), making them unreliable employees.\n\n## Arrest, Exile, and Release\n\nAs Lev was running the pharmaceutical warehouse, the USSR was in the grips of a paranoid Stalin.\nAs a result of the [Doctors' plot](https://en.wikipedia.org/wiki/Doctors%27_plot), many Jewish doctors and medical workers were \"dismissed from their jobs, arrested, and tortured to produce admissions\".\n\nIn 1951, Lev was using petty cash from his warehouse to purchase newspapers, which he hung up on the walls for public reading.\nBecause of this, he was accused by the local KGB of \"misappropriation of 
public funds\".\nIn a swift trial, he was pronounced guilty.\n\nAt this time, the USSR was in a nuclear arms race with the USA.\nUranium ore was discovered in [Жёлтые Воды (Yellow Waters)](https://en.wikipedia.org/wiki/Zhovti_Vody), and the USSR needed people to mine the ore.\nI cannot tell if the Yellow Waters were actually yellow because of yellowcake uranium, or if the color was a coincidence.\nIn any case, Lev was sent to this region of Ukraine to work in the mines.\n\nBecause of his medical training and experience, Lev was spared hard labor in the mines.\nInstead, he provided medical services to the miners and surrounding community.\nNevertheless, he was not free to leave, and my grandmother could not visit him.\n\nInstead, my grandmother expended much effort trying to free him.\nShe spoke to many lawyers, and visited many government officials.\nShe was finally told by a well-respected advocate in Kiev -- \"Child, you just have to wait.\nThere is no case against him, and they will let him out soon.\"\n\n## Aftermath\n\nStalin died in 1953.\nIt was not until 1956 that his successor, Khrushchev, [denounced Stalin](https://www.britannica.com/event/Khrushchevs-secret-speech) and began the \"de-Stalinization\" of the USSR.\nHowever, almost immediately after Stalin's death, many of the political prisoners arrested during his rule were released.\nAmong them, my grandfather Lev.\n\nAfter his release, Lev was restored to his former rights and returned to his previous job.\nHe went on to make many advances in his field.\nAmong them, building one of the first pharmacies located inside a hospital, an innovation copied from the USA.\n\nFor his service to the USSR in both military and civilian roles, he was awarded several medals.\nMy mom likes to recount how, when he wanted to call someone in Moscow, he would connect to the operator and just say \"This is Lev, connect me with so and so\" -- an ordeal that was much more difficult for other people.\n\nHowever, though he was 
liked and respected, he never forgave the USSR for sending him to prison for more than 2 years.\nHe did not join the party, and refused to get involved in any sort of politics.\n",
            "url": "https://igor.moomers.org/posts/soviet-kgb-stories-pt2",
            "title": "Soviet KGB Stories II",
            "summary": "A story of how my grandfather Lev was exiled to the uranium mines.\n",
            "image": "https://igor.moomers.org/images/uranium.jpg",
            "date_modified": "2023-09-21T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/self-hosted-link-shortener",
            "content_html": "\nI embarked on a yak shave so epic, it resulted in me writing an entire URL shortening service in Rust.\nI'm calling it `smrs` (get it? \"`sm`all and in `r`u`s`t\"!), and [here is the github repo for it](https://github.com/igor47/smrs).\nIt's hosted publicly, but I don't want to share the link because, from a few minutes of casual reading online, it looks like URL shorteners are frequently abused (see below).\n\n## For the love of God, why?\n\nThe yak shave is that I want to write some microcontroller code, and as I don't really remember any `C`, I figured I'd better just learn embedded `Rust`.\nI cracked open [the embedded Rust book](https://docs.rust-embedded.org/book/), and immediately came across:\n\n> You are comfortable using the Rust Programming Language, and have written, run, and debugged Rust applications on a desktop environment. \n\nI was like, \"Nope, I am not\", and proceeded to the [general-purpose Rust book](https://rust-book.cs.brown.edu/) (I've been reading the Brown-hosted version because I want to take the little embedded quizzes).\nThe book is *really* good, but there's too much reading in between the practical sections, and [I learn by doing](https://psycnet.apa.org/fulltext/2014-55719-001.pdf).\nSo I resuscitated a previous idea of hosting my own link shortener, and here we are.\n\n## Choices\n\nThis project has a really odd mix of new and old technology.\nThe modern approach is to run the project as a single binary which includes an embedded web server.\nThe binary is responsible for serving any static files, and also for generating the dynamic responses.\n\nThis seemed like too much work, and using an existing web server crate seemed like **too little** work.\nSo, I am running the service behind Apache and using good-old [CGI](https://en.wikipedia.org/wiki/Common_Gateway_Interface) for the dynamic content.\nRemember CGI?\nInstead of your application code running as a long-lived process, it's invoked by the web 
server for each request.\nThe details of the request are placed in the environment, and whatever is written to `stdout` is sent back to the client as the HTTP response.\nIt's probably slower than having the process already running (`fork` and `exec` are slow!), but it lets the program be really simple.\n\nFor storage, I looked at a bunch of options (like `sled` or `leveldb`) but ended up with good-old `sqlite`.\n\nAlso, I'm probably doing a non-conventional thing with sessions.\nI didn't want to implement logins, plus, for lots of use cases, you probably don't even care about user-level persistence.\nBut I figured persistence might be nice, so instead I just exposed the user's session ID for them to see.\nIf they want to return to the site later and see their short links, they can just \"log in\" with their session ID.\n\nFor the front-end, I wanted it to be as lightweight as possible.\nI immediately discovered that I needed to either have multiple pages, or write an SPA.\nBut for multiple pages, how do I share common elements like a header/footer?\n\nThis is where I learned about Apache's [SSI](https://httpd.apache.org/docs/2.4/howto/ssi.html) (server-side includes).\nI factored my header out into a separate file, and was able to include it in each page with a simple `<!--#include virtual=\"/header.html\" -->`.\n\nI still had some javascript to write, and I attempted to just use plain JS with no libraries.\nBut then I discovered [Alpine.js](https://alpinejs.dev/), which is just **so** lightweight and easy to use.\nFor CSS, I used [Skeleton](http://getskeleton.com/) but I'm not super-excited about it.\nThis is because it doesn't have a responsive grid system -- once something is, say, 6 columns, it's **always** 6 columns.\nI'm probably going to migrate this to [Bulma](https://bulma.io/) at some point to fix appearance on intermediate-size screens.\n\n## Deploying\n\nI run my self-hosted services via `docker-compose` on my server, and it was really easy to add this one.\nI set up 
[my `Dockerfile`](https://github.com/igor47/smrs/blob/master/Dockerfile) for multi-stage builds -- my `dev` target just contains a configured Apache.\nI bind-mount my `htdocs` into the container, and iterating on the Rust code means building it locally and copying it into the container filesystem.\n\nFor production, my final stage builds the release version of the code and generates a container with the `htdocs` baked in.\nI have this container built and tagged via [GitHub Actions](https://github.com/igor47/smrs/blob/master/.github/workflows/publish.yaml).\nThen, in my `docker-compose`, I can just pull the image from `ghcr.io`.\n\nI ran into trouble because I'm hosting this behind a Cloudflare proxy, and so `traefik` couldn't get an SSL certificate for it.\nI had to add a custom `traefik` config for validating SSL via modifying Cloudflare DNS:\n\n```yaml\n  # for zones behind cloudflare proxied DNS, we use the cloudflare dns provider\n  # see:\n  #   https://www.techaddressed.com/tutorials/certbot-cloudflare-reverse-proxy/\n  # this relies on credentials in a file on purr. 
see the `env_file` directive in\n  # docker-compose for traefik\n  cf:\n    acme:\n      email: admins@example.org\n      storage: /acme/acme-cf.json\n      dnsChallenge:\n        provider: \"cloudflare\"\n```\n\nI then had to generate a Cloudflare API token with the `Zone.DNS` permission.\nI stored that in a file on my server, and then added it to my `docker-compose` via the `env_file` directive:\n\n```yaml\n    env_file:\n      - ${STORAGE}/traefik/cloudflare.env\n```\n\n## Link Shortener Criticism\n\nThere's a whole section on [Awesome Self-Hosted](https://awesome-selfhosted.net/index.html) for [URL shorteners](https://awesome-selfhosted.net/tags/url-shorteners.html).\nThis is where I came across [this Wikipedia section](https://en.wikipedia.org/wiki/URL_shortening#Shortcomings) which lists a bunch of criticisms of URL shorteners.\nIn fact, a bunch of the shorteners in the Awesome list have a publicly-hosted instance, but every one of those seems dead.\nOthers explicitly say they had to put their public version behind a password or login because of abuse -- for instance, [liteshort](https://git.ikl.sh/132ikl/liteshort) says about their live demo:\n\n> (Unfortunately, I had to restrict creation of shortlinks due to malicious use by bad actors, but you can still see the interface.)\n\nI put mine behind Cloudflare, but I'm still seeing a bunch of nonsense (like WordPress hack attempts) in the request logs.\nSo, I'm keeping the link a \"secret\" for now.\nI might add a password system to create new links if I see a bunch of abuse.\n\n",
            "url": "https://igor.moomers.org/posts/self-hosted-link-shortener",
            "title": "A self-hosted URL shortener in Rust",
            "summary": "I wrote a self-hosted link shortener!\n",
            "image": "https://igor.moomers.org/images/rusty-scissors.jpg",
            "date_modified": "2023-08-30T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/my-rivian-r1s-initial-thoughts",
            "content_html": "\nI confirmed my pre-order for my Rivian R1S in February 2019.\nIn February 2023, I finally got a delivery window.\nAfter some hassle, I drove away in the vehicle about a month ago.\nAlmost immediately, I went on a 2,500-mile road trip to Colorado through Utah and Nevada.\nI'm at almost 4,000 miles on my odometer.\nSo... how's it going?\n\n## How It Drives\n\nThe Rivian is pretty fun to drive.\nIt's really nice being able to just take on any road.\nWe went to a little hot springs in Colorado that had a really rough, rocky final stretch of road.\nI would have hesitated to drive down that in my Subaru Outback, but with the Rivian there was no concern.\nWe also did several rounds of offroading in Nevada and Utah, and both were **really** fun.\n\nHighway and city driving are also pretty good.\nOne nice thing is that EVs are *really* quiet, which translates into a much more pleasant highway experience.\n\n## Charging\n\nI had some battery and charger anxiety when considering buying an EV.\nThose anxieties are totally gone now.\nThe 300-mile range means that it's actually kind of difficult to be too far from a DC fast charger.\n[PlugShare](https://www.plugshare.com/) helps me be confident that the chargers on the map will actually work.\nI haven't had to wait for a charging station yet.\n\nThe recommended route to Boulder from California would have taken us through Wyoming, which has few EV chargers.\nWe opted to take a slightly longer route through Green River instead.\nKnowing what I know now, I think the Wyoming route would also be okay.\n\nWe were worried that we'd be waiting for the car to charge at our stops.\nIn fact, it's mostly been the other way, with the car being fully charged while S and I are still dithering somewhere.\nOn the drive back, because we were more comfortable with the charging infrastructure, we would stop for a quick bio break while the car gained 20% to 30% of charge, and then we'd head to the next charger.\nIt's 
much nicer to hang out at most chargers than at gas stations (though we **really** miss windshield cleaning supplies at most charging stations).\n\nMy body is much happier with more, longer stops and opportunities to walk around and stretch while the car is charging.\n\n## Camping\n\nCamping in the Rivian is much nicer than in my Subaru.\nIt's much roomier, and has fewer hard edges with the seats down.\nThe glass roof is also quite lovely.\n\n![My bed in the rivian](/images/rivian-bed.jpg \"Rivian Bed\")\n\nThere's more storage space (the frunk!), so we didn't need to bring a cargo rack.\nThe door opens in two halves, and I really enjoy sleeping with the bottom half closed.\nIt feels like my bed is less likely to slide out of the car, and we can still keep the top half open for a nice breeze.\n\nI really like all the camping settings in software.\nFor instance, self-leveling is better than having to look for correct-sized rocks to put under the car.\nOn the other hand, I've found the software quite buggy (see below).\nOur first attempt to use self-leveling in Nevada resulted in the car constantly adjusting its height until the compressor overheated, and then we just had to wait for it to cool down and try again.\nOn another occasion in Utah, I reset self-leveling while packing the car, and then when I got in it wasn't reset and I had to reset it again.\n\nWhile I appreciate all the attempts to control lighting while camping, they, too, are very buggy.\nFor instance, there's a \"keep screens off\" setting, which claims you'll need to press the brake to turn the screens back on.\nIn fact, the screens come back on when you touch them.\nThe light controls are sometimes broken and sometimes buggy.\nFor instance, the tailgate light just keeps turning on whenever I open a door, despite me being in camp courtesy mode and also explicitly turning that light off several times.\nTo avoid that bright light coming on whenever I exit the car to pee, I've resorted to just 
setting the car to \"do not use energy\" mode.\nIt's nice that this exists -- but it also means that all the other lights don't work.\n\nFinally, it seems like even in \"do not use energy\" mode, the windows still work.\nThis is great -- in my Subie, I would have to insert the key and turn the car on to get fresh air or keep out bugs/the cold.\nHowever, at least once, the windows *did not* work in that mode -- another bug?\n\n## Driver+\n\nI'm used to the adaptive cruise control (ACC) from my Subaru.\nThe Rivian's Driver+ adds more effective lane-keeping.\nThis is great when it works, mostly on straight interstate highways.\nIt does not work on any roads that are not the interstate.\nIt turns off when you change lanes.\nIt turns off in less-than-ideal visibility, like when it's raining (\"camera blurry\").\nIt turns off for some, but not most, tunnels.\nIt also sometimes turns off for no reason, at the oddest times, with a loud beep, and then you swerve and catch the car before it smashes into something.\n\nPart of the reason we didn't bring the cargo rack is that using a \"rear accessory\" turns off all self-driving features, including good-old adaptive cruise control.\nThe Subaru still allowed me to use ACC with my cargo rack plugged in (it has brake and turn lights), so on a longer road trip with more outdoor/camping time, I would be pretty annoyed about that.\n\nA common complaint on forums is about how Driver+ mutes your music for a moment whenever anything changes.\nThis is indeed pretty annoying.\nFinally, I tried their lane assist, which is not full lane keeping, but does nudge you back into the lane if you depart it.\nThis was pretty aggressive and I turned it off.\nThey claim to have made it better in the recent update, but I haven't tested it yet.\n\n## The Nav\n\nThe screens in this car are trying to kill me.\nThey're so big and shiny and full of buttons that change position.\nAs I navigate this 4-ton monstrosity down the public roadways, the screens are 
telling me, \"Look at us, not the road!\"\n\nThe in-car nav system is basic.\nYou **have** to use it if you want to keep track of your state of charge, or have the car pre-condition the battery.\nBut it barely knows about traffic.\nIt sometimes doesn't work (like, it won't actually give you directions, or it will just sit there trying to compute the route).\nYou can't just tap a destination and have it route you there.\nYou can't add side trips (any new destination replaces the previous one).\nIt's always routing me to the back of any business I try to go to.\nIt includes some non-existent chargers, and excludes some existing ones.\nIt doesn't know about the state of most chargers (e.g., Electrify America ones, which are the most common on road trips).\nIt doesn't know about speed traps or construction.\n\nI installed a phone holder on my dash, but honestly having 3 screens to look at is too many, and so I end up just using the car nav system while disliking it.\nMuch has been said about Android Auto and how Rivian is not going to add it.\nI think this is a customer-hostile move -- I just don't see how it benefits the customers, only the company's own dreams of world domination through software.\n\n## Music\n\nI have a version of the car with the Meridian-branded sound system, and it's ... 
okay.\nI mean, it sounds fine, somewhat better than the Subaru, but mostly I think because the car is pretty quiet.\nThe Bluetooth in the car seems fairly reliable, better than the Subaru.\nAudio books work well in the `voice` EQ preset, and I tend to use the `rock` preset for all music.\n\nI also use the in-car version of Spotify, which is barely passable.\nBasic things -- for example, searching for a song, and then adding it to the current playlist -- are not possible.\nBut it's nice that there's music without having to connect my phone to the car.\n\n## Camp Speaker\n\nThis thing is maddening, and I honestly wish it didn't exist, rather than existing in its current, broken state.\nIt sometimes works okay, and sometimes it refuses to connect to my phone.\nIt's so finicky and frustrating that S had to prevent me from just flinging it into the forest.\n\nOne time, I was having some people over, and I wanted to put some music on the camp speaker.\nI proceeded to spend 20 minutes trying to get the speaker to stay connected to my phone before finally admitting defeat.\nIt did this thing where it would *almost* connect, just for a moment, and then disconnect again -- close enough to working that I Just. Kept. Messing. With. 
It.\nOn another occasion, I finally got connected and was playing music, and then it just stopped working for no reason and I couldn't get it to work again.\nOn a few more occasions, it powered on and made its loud boot-up noises while we were sleeping in the car and the speaker was inserted into its speaker-slot.\nThis was alarming and embarrassing (because there were some people sleeping in the car next to us).\n\nIn short, I hate the camp speaker.\n\n## Access\n\nI like having the app as a backup way to get into the car.\nBut I would prefer to use the key fob, which unfortunately is the worst.\nFirst, it has four identically-shaped, black-on-black buttons.\nAfter a month, I still don't know which one does what.\nGood luck trying to figure out which side of the fob is even the one with the buttons in the dark.\n\nOnce you figure out what button to use -- good luck getting the car to respond.\nI just love standing in the rain with my friend and her 5-year-old, all of us getting wet while I repeatedly hit the unlock button while the car *thinks about something*.\nS also pointed out that this makes her feel unsafe.\nImagine a lone woman walking to her car on a dark street -- she wants that car to freakin' unlock!\n\nI don't understand why this could work instantly in my Subaru, but sometimes takes a minute on my Rivian.\nI bet it has something to do with the software.\nSpeaking of which...\n\n## The Software\n\nOkay, so, I'm a software guy.\nThis car is chock full of software.\nAnd -- it's pretty bad.\nIt's buggy as all hell.\n\nI gave some examples already, such as the tailgate light or the display not staying off, or the nav being unreliable.\nOther examples:\n* all of the lights will just come on for no reason\n* the lights button on the rear screen and the front screen are not aware of each other\n* the car forgets about its leveling state\n* the Bluetooth will just randomly turn back on, even after it's been turned off\n* sometimes battery 
pre-conditioning doesn't activate automatically; no way to do it manually\n* sometimes the car will refuse to charge, and you have to hard-reboot it\n\nOh, Rivian support recommends a hard-reboot for most troubleshooting.\nI was worried enough about the update [bricking my car](https://www.reddit.com/r/Rivian/comments/136y8gd/r1s_bricked_after_accepting_update_last_night/) that I avoided doing it until we returned from our road trip.\n\nI'm honestly worried and annoyed about the state of the software.\nProgress seems slow -- for instance, a major revision between January and May in Rivian software-land was hiding the trip odometer deeper in a settings sub-menu, despite people on forums asking for the trip odometer to be *more* accessible.\nIt's not clear how to report software issues.\nI have been using the `Service` requests feature in the app, but clearly software issues are not service items.\nAre these reports going anywhere?\nI don't know.\n\nThere is a Rivian employee occasionally on Reddit, and people leave comments like \"I really hope this person sees this comment\".\nPeople repeatedly report the same software issue over and over, and there's no way to know if Rivian already knows about it or is planning to fix it or what.\n\nI guess I bought this very expensive beta piece of software, and I was kind of resigned to the early user experience and helping to improve the product.\nBut actually, with the other half-finished software projects I use, there are public issue trackers and road maps.\nRivian has none of that.\nThey won't tell me whether anyone is even hearing me, much less whether they plan to fix things.\nThe end result is that I'm much more frustrated than I anticipated.\nOn the basis of the software bugs alone, I would NOT recommend the Rivian to most people.\n\n## Support\n\nWhen we picked up the car, we noticed a problem with the headliner.\nThere was no availability for an appointment at the service center for several weeks.\nSo, the car is finally 
going in to get that fixed the week after next.\nThis might be a blessing in disguise, since I've found a bunch of other issues that can be fixed in the same visit (dead USB-C port, broken floor mat pin, inoperable lighting, etc.).\nThough, I guess it would be better if the car *hadn't* come with a bunch of issues?\n\nHowever, I'm also scared.\nPeople are saying that [the service centers are chaotic and overwhelmed](https://www.reddit.com/r/Rivian/comments/13l4nci/rivian_please_work_on_your_service_centers/).\nAlso, it seems like even minor accidents can result in [huge repair bills](https://www.theautopian.com/heres-why-that-rivian-r1t-repair-cost-42000-after-just-a-minor-fender-bender/).\nI'm worried that, if I need immediate maintenance, I might be out of luck.\nThankfully, this is an adventure mobile and I don't critically need it, but it would suck to miss a trip.\n\n## Minor Peeves\n\nThe car goes out of its way to hide kilowatts from you -- range is measured in `%` or miles.\nIt does show efficiency in miles per kWh, but not how many kWh are left in the battery.\nI would really like to connect the speed of the charger I'm pulling up to -- invariably in kW -- to the capacity of the battery, which is a mystery and nowhere in the UI.\nI would also *love* to see a live indication of how much power the car is using while camping.\n\nThere is no cruise control resume.\nIf you brake because a car swerved into your lane, or because you stopped at a sign on a long stretch of highway, you'll have to re-set your previous cruise control speed manually.\n\nThe camera button is hidden in an overflow menu.\nIf you need help navigating into a tight parking spot, you'll have to pause and click around mid-park.\n\nThe windshield wipers are kind of hard to use manually.\nThe control is small, and the first tap only shows you the current setting, so you always need at least two taps.\nI would love to just leave it on auto, but it's often way too fast in low-rain 
conditions.\n\nThe AC controls decide on their own whether the car is heating or cooling.\nI really miss just having a fan blow outside air.\nThis would be especially nice while camping, where I don't want to use power for heat or AC, but a little extra airflow would be lovely (and help keep bugs off me!).\n\n## Summing Up\n\nI am definitely really enjoying having an EV.\nIt's also really fun to have a very capable off-road vehicle.\nOn the other hand, I'm finding it frustrating to be a Rivian early adopter.\nThe car is buggy, and there's not really a good communication channel to the company to help resolve the bugs.\nI fear that I'll be stuck living with these bugs forever.\n\nHopefully, in the next year, Rivian will improve the quality of the software (though I think some things, like Android Auto, are never going to happen).\nThey might also add features I'm really hoping for, such as V2H/V2G.\nI'm also excited about all of the other electric SUVs and adventure vehicles coming onto the market in the next few years.\nHopefully, there will be less-buggy, more-polished alternatives to the Rivian by then, and those might be worth waiting for.\n\nFinally, I'm also just pretty scared of the high-software closed-source proprietary-nonsense vehicle future that we're entering.\nWhat happens when Rivian burns through its cash and runs out of money?\nWill I be left with an expensive brick?\nWhy does my car need always-on internet connectivity?\nWho are they selling my location data to, or sharing it with?\nI'm sad that the EV transition is also being used as an opportunity to take control away from vehicle owners and transfer it to car companies and their big-business allies (in Rivian's case, Amazon).\n\nI really wish there were a car company that was transparent and consumer-friendly.\nAlas, I don't think that's Rivian.\n",
            "url": "https://igor.moomers.org/posts/my-rivian-r1s-initial-thoughts",
            "title": "My Rivian R1S: Initial Thoughts",
            "summary": "I picked up my Rivian R1S just over a month ago; how is it going?\n",
            "image": "https://igor.moomers.org/images/rivian-in-the-wild.jpg",
            "date_modified": "2023-05-19T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/purchase-allocation-via-graph",
            "content_html": "\n<small>**Note**: This is cross-posted to the [Recoolit blog](https://www.recoolit.com/post/purchase-allocations-eng).</small>\n\nWhen you [buy carbon credits from Recoolit](https://www.recoolit.com/buy), you are buying a specific amount of prevented atmospheric warming.\nWe prevent warming by preventing the release of refrigerants into the atmosphere.\n(Note: if you want to learn more about how refrigerants cause warming, check out [our post on refrigerants](/posts/refrigerants-what-are-they).)\nOur purchase receipts are designed to be transparent -- you see exactly how much warming you are preventing, and exactly how.\nThis post explains the technical details of how we create this transparency.\n\n## Brief Overview of Recoolit\n\nWhen refrigerators, air conditioners, or other kinds of heat pumps reach end-of-life or need maintenance, the refrigerants inside need to be removed from the system.\nThis is because refrigerant is at high pressure inside the system, and needs to be depressurized before the system can be serviced safely.\nIt is common for refrigerants to be vented into the atmosphere during this process.\nRecoolit provides technicians with the tools, know-how, and incentives to capture these waste gasses instead of releasing them.\nThen, we destroy the captured gasses in a high-temperature incinerator, permanently preventing their release into the atmosphere.\n\nOur main mechanism for incentivizing refrigerant capture is to pay technicians for the refrigerants they capture.\nWe also have to pay for the cost of destroying the refrigerants, including getting the destruction facility inspected and certified to international standards.\nFinally, we have to cover our operational expenses.\nThis includes buying and maintaining the pumps and cylinders that we use to capture and store refrigerants.\nWe maintain several depots, where technicians can borrow the equipment needed for recovery.\nFinally, we pay for a warehouse where we 
store refrigerants before they are destroyed.\n\nWe cover all of these costs by selling carbon credits to individuals and organizations that want to offset their carbon emissions.\n\n## Why Transparency?\n\nThe world has almost no way to pay for pollution remediation.\nSometimes, governments will force polluters to pay for the cost of cleaning up their pollution.\nOther times, governments will directly pay for cleanup using tax dollars, especially when the pollution is a public health hazard and there is no obvious responsible party who can be forced to tackle the cost.\n\nSo, even with the growing awareness that climate change is a big, looming problem, there are too few ways to get money to organizations that are tackling the issue.\nThe voluntary carbon credit markets -- where individuals and companies pay for carbon credits out of a sense of social responsibility, and not for any regulatory reason -- are one of the few ways to pay for climate change remediation.\n\nHowever, the voluntary carbon credit markets have been a bit of a mess.\nThere are no international agreements or standards on how to quantify the impact of carbon credits.\nA few private organizations have stepped in to fill this gap, but their standards are not universally accepted.\nFinally, some of the largest private organizations -- known as registries -- have faced controversy.\nFor instance, Verra, one of the largest registries, [has been criticized](https://www.theguardian.com/environment/2023/jan/18/revealed-forest-carbon-offsets-biggest-provider-worthless-verra-aoe) for allowing carbon credits to be sold for projects that were already underway, and for projects that were not actually preventing warming.\n\nFor these reasons, Recoolit's founder Louis was determined for the company to be as transparent as possible from day one.\nWe collect detailed data on every step of our operational process.\nWhen you make a purchase, we show you all of the data we've collected that pertains to the 
molecules that went into your purchase.\nFinally, we show as much data as possible on [our public registry](https://registry.recoolit.com/registry), so that anyone can see exactly what we are doing.\n\n## What Is Our Data?\n\nEvery Recoolit carbon credit begins with a recovery.\nThis is where a technician captures refrigerants from a refrigerator, air conditioner, or other heat pump.\nAt this step, we collect photos of the equipment that's being serviced, the reason for the refrigerant recovery, the amount of gas recovered, and the type of gas recovered (though we don't always know the exact type of gas at this point).\n\nNext, technicians return the cylinder used for the recovery to one of our depots.\nWe verify the amount of gas recovered, and we also test the gas using a gas analyzer.\nWe pay the technician for the amount of gas they collected.\nThis payment compensates the technician for the time and effort they spent performing the recovery.\n\nBecause we want to return the smaller recovery cylinders to the field, we will often consolidate the gas from multiple recovery cylinders into a larger storage cylinder.\nWe cannot mix different types of refrigerants, so we have to keep track of the type of gas in each cylinder.\nEvery time gas is transferred, we keep detailed records on the source and destination cylinders and weights.\nSome small amount of gas is always lost during transfers, and the losses are not included in the carbon credits we sell.\n\nNext, we transport the gas to our destruction facility.\nEvery time we transport gas, we weigh and test the cylinders at both ends.\nWe maintain a chain of custody for the cylinders at all times, using signed transport manifests.\nSome of the refrigerants we transport are becoming expensive, because they are no longer allowed to be produced under international agreements.\nOur procedures protect against loss or theft of refrigerants during transport.\n\nFinally, our refrigerants go through the destruction 
process.\nFirst, we take a sample of each cylinder that has arrived at the destruction facility.\nThe sample is lab-tested under rigorous standards, to confirm the exact makeup of the contents of the cylinder.\nNext, the cylinder is hooked up to a high-temperature incinerator.\nIn Indonesia, we partner with cement kiln operators, because their facilities reach the high temperatures needed to destroy refrigerants and need only minimal modifications to do so.\n\nAfter destruction, the cylinders are shipped back to us.\nWe vacuum-test the cylinders to make sure they're not leaking, and use them for future recoveries or consolidations.\n\n## Building the transfer graph\n\nWhen you buy a carbon credit from Recoolit, you are buying a specific quantity of a specific gas that was destroyed.\nIn order to show you all of the data that went into your destruction, we need to trace the path of the gas from the recovery to the destruction.\nWe call this data structure the \"transfer graph\", and it is the core of our transparency system.\n\nThe transfer graph is a directed acyclic graph (DAG).\nIn this graph, the edges are transfers from a source to a destination node.\nEach edge has a weight, which is the amount of gas transferred, as well as a gas type.\n\nBecause cylinders are reused, the nodes are actually a specific cylinder ID during a specific time interval.\nA node is created when gas is first transferred into it, and \"closes\" when all of the gas is transferred out and we vacuum out the cylinder.\nA subsequent transfer into the same cylinder ID would create a new node.\n\nEach node can have multiple incoming and also outgoing edges.\nThis is because we sometimes do partial transfers.\nFinally, each node can have a series of \"events\" associated with it.\nEvents include things like \"the gas was tested\", \"the cylinder was transported\", or \"the cylinder was weighed\".\n\n## An example\n\nThis might be easier with an example, so let's use some real data from our 
public registry.\nI bought some credits from Recoolit, and [here is my receipt for that purchase](https://registry.recoolit.com/purchases/f436f5ca-a6fe-49c4-b10c-f4e46cafcb8d).\nThis is a view of the same data in our internal system:\n\n![Igor's purchase graph](/images/igor-purchase-graph.png)\n\nTo allocate this purchase, we first look through all of our destructions to find one that has enough gas to cover the purchase.\nWe might need to combine multiple destructions to cover the purchase.\nIn this case, we find that we destroyed about 50kg of R-22 from cylinder `berkabut-panas-bebek`.\nR-22 is such a high-GWP refrigerant that only 511 grams of it were needed to cover my purchase of 1 tonne of CO2e.\n\nNext, we do a depth-first search through the graph, starting at the destruction node, and looking for a path that has enough gas to cover the purchase.\nWe see that about 10 kg of R-22 arrived in `berkabut-panas-bebek` from `dingin-kering-harimau`, so we allocate 511 grams of that 10kg.\nFinally, we see that 5.8 kg of R-22 was recovered directly into `dingin-kering-harimau`, so we allocate 511 grams of that 5.8kg.\n\nThis was a fairly easy case, as it only involved a single trajectory through the graph.\nHere's an example of a more complicated sale, which involved multiple kinds of gas allocated through multiple destructions:\n\n![A more complicated purchase graph](/images/complicated-purchase-graph.png)\n\nIn this case, the purchase of 30 tonnes of CO2e was covered by 3 different destructions.\nWe destroyed 13235 grams of R-410a and 193 grams of R-32 to cover this purchase, and these gasses were recovered by two technicians on 3 different occasions.\nThree of the five recoveries were on the same day into three different cylinders, indicating a large recovery job that filled up multiple cylinders!\n\n## Allocating purchases\n\nWe allocate purchases using a depth-first search through the graph, starting at the destruction node.\nEvery time we find a node that has 
enough gas to cover the purchase, we recursively look through the source nodes.\nThe recursion terminates at a recovery node.\nWe then propagate the list back up through the stack, allocating the purchase to each node in the path.\n\nAllocating the purchase means creating edges.\nFor each purchase, we create a `sale` node in the graph.\nFor each node that contributes gas to the purchase, we create a `sale` edge, from the source node to the `sale` node.\nTo find all the nodes involved in allocating a sale, we look for all nodes connected to the `sale` node through a `sale` edge.\n\nEach `sale` edge includes the amount of gas that was allocated to the sale.\nTo figure out if a node still has enough gas to cover the sale, we subtract the sum of all `sale` edges from the weight of the node's outgoing transfer edge.\n\n## Formatting for display\n\nWe've already discussed all of the data we collect during operations to enable this kind of transparency.\nYou now also know how our system allocates your purchase to destructions and recoveries.\nThe final piece is presenting this data in a way that is easy to understand.\n\nIn your receipt, we show you a path-decomposed version of the transfer graph.\nA path is a linked list of edges and nodes, starting at a recovery and ending at a destruction.\nHowever, in your purchase subgraph, a single node or edge might be involved in multiple paths.\nWhen we do the decomposition, we clone the shared nodes and edges, so that each path has its own copy.\nHere's an example of a non-decomposed graph:\n\n![Non-path decomposed graph](/images/transfer-path.png)\n\nYou can see that this includes two recoveries -- one in the blue box, and one in the red box.\nThe green box includes the consolidation and destruction nodes that are shared between the two paths.\nWhen we display it, it would look more like this:\n\n![Decomposed paths](/images/transfer-path-decomposed.png)\n\nThere are two paths in this graph -- the one on the left, and the one 
on the right -- and the nodes in green are duplicated between the two paths.\nHere's how the same data might look in your purchase receipt:\n\n![Paths in the purchase receipts](/images/transfer-path-final.png)\n\nDoing this is surprisingly non-trivial because it's not clear, just from the subgraph, how many paths through a particular node there are.\nTo make it easier, we actually keep track of the paths when we construct the graph.\nEach time we begin trying to allocate gas from a destruction, we generate a path identifier.\nWhen we find a path that works, we store the path identifier in the `sale` edges that track the allocated purchase.\nThis means that a node might actually have multiple edges connecting it to a `sale` node, each with a different path identifier.\n\n## Wrapping up\n\nAs you can see, we've put a lot of thought into how to make our data as transparent as possible.\nOur goal is to create the highest-quality carbon credits, with the greatest possible assurance to buyers that their purchase is actually making a difference.\nIf you like what we're doing here, and want to support us, we urge you again to [buy some carbon credits](https://registry.recoolit.com/buy)!\n",
            "url": "https://igor.moomers.org/posts/purchase-allocation-via-graph",
            "title": "Purchase Allocation via Graph",
            "summary": "Recoolit is focused on transparency, and every purchase is allocated to a specific molecules recovered in the field. This post describes how we do that.\n",
            "image": "https://igor.moomers.org/images/transfer-graph-2.png",
            "date_modified": "2023-05-11T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/refrigerants-what-are-they",
            "content_html": "\n<small>**Note**: This is cross-posted to the [Recoolit blog](https://www.recoolit.com/post/refrigerants-what-are-they).</small>\n\nAt [Recoolit](https://www.recoolit.com), our mission is to reduce climate impact by collecting and destroying waste refrigerants.\nThis mission is often unclear to people who aren't familiar with the refrigeration industry.\nWe often hear questions like, \"What are refrigerants?\" or \"Why are refrigerants a problem?\"\nWe've skipped over such questions on our [explainer page](https://www.recoolit.com/our-work) because they're quite technical, and we don't want to overwhelm our customers.\nThis post is for those special nerdy few who want to know all the gory details!\n\n## Heat pumps -- How do they work?\n\nRefrigerators and air conditioners are all a type of heat pump -- a device that moves heat from one location to another.\nThey do this by means of a fluid that is pumped through a closed loop.\n\nFirst, the fluid is compressed, which causes it to heat up, turn into a gas, and become much warmer than the surrounding air.\nNext, the gas is pumped through a heat exchanger -- like a radiator on a car -- which results in the gas becoming cooler while the surrounding air becomes warmer.\nIn a heat pump on \"warming\" mode, this heat exchanger is located inside a building or room, and this warming is the desired effect.\n\nThe gas is then pumped to a second heat exchanger, where it is allowed to expand, turning it back into a liquid.\nThe expansion cools the fluid.\nIt's then pumped through a second heat exchanger, where it becomes warmer, while the air around it becomes cooler.\nIn a heat pump on \"cooling\" mode, this second heat exchanger is inside the refrigerator, freezer, or too-warm room or building.\n\nThe fluid is then pumped back to the compressor, and the cycle begins again.\nMechanically, all heat pumps rely on just these few components -- a compressor, pumps, heat exchangers, and the fluid -- 
sometimes a gas, sometimes a liquid -- that runs between them.\nPhysically, this process relies on just a few basic principles of nature -- the ideal gas law, plus the laws of thermodynamics.\n\n## What are refrigerants?\n\nRefrigerants are the fluid used inside heat pumps to move heat from one place to another.\nThere are many considerations when picking a substance to use as a refrigerant.\nFor instance, a common early refrigerant was ammonia (R-717).\nAmmonia boils at a low temperature, which makes it easy to turn into a gas.\nIt does not freeze until a very low temperature, so there's no risk of it freezing inside the heat pump during the expansion phase.\nAnd it has a fairly high specific heat, which means that it can absorb and release a lot of heat.\n\nUnfortunately, ammonia is highly toxic and corrosive.\nOther early refrigerants, such as sulfur dioxide (R-764) and methyl chloride (R-40), were also toxic.\nTo avoid the safety issues associated with early refrigerants, scientists began experimenting with other substances.\nIn 1928, Thomas Midgley, Jr. 
invented the first chlorofluorocarbon (CFC) refrigerant, which was marketed by DuPont as Freon-12 (R-12).\nR-12 was cheap to produce, non-toxic, non-flammable, non-corrosive, and it had a low boiling point and a high specific heat.\nThese properties led to R-12 becoming the most popular refrigerant in the world, with production peaking at over 1 billion tons per year in the 1980s.\n\n## The Montreal Protocol\n\nWhen searching for safer replacements to early refrigerants, scientists looked for substances which were highly stable, since stable substances pose less of a health risk to humans.\nThis made R-12 an attractive choice, since it was highly stable and non-toxic.\nHowever, this stability also proved to be a problem.\nIn the 1970s, scientists studying the ultimate fate of CFCs like R-12 discovered that these gasses could make their way up to the stratosphere.\nThere, they could be broken down by ultraviolet radiation, releasing chlorine atoms (the first C in CFC and HCFC).\nThe chlorine would then react with, and destroy, atmospheric ozone.\n\nIn response to these findings, in 1987 the international community adopted the Montreal Protocol on Substances that Deplete the Ozone Layer.\nThe agreement was initially signed by 46 countries, and has since been ratified by 198 parties, including all member states of the United Nations.\nIt sets out a schedule for the phase-out of the production and consumption of ozone-depleting substances (ODSs), including CFCs like R-12 and HCFCs like R-22.\n\nThe phase-out schedule differed for different substances and in different places.\nFor instance, R-12 was phased out in developed countries in 1996, and in developing countries in 2010.\nMeanwhile, R-22, an HCFC common in home and commercial heat pumps, was phased out in developed countries in 2010, but it continues to be in-use in developing countries until 2030.\n\n## GWP and the Kigali Amendment\n\nIf you recall, one of the properties of a good refrigerant is its ability to 
absorb and release heat.\nRefrigerants continue to play this role even after they've been released into the atmosphere.\nIn particular, they absorb and release heat in the form of infrared radiation, which is the same type of radiation that is absorbed and released by greenhouse gasses like carbon dioxide and methane.\n\nThe heat-trapping ability of a greenhouse gas is measured by its global warming potential (GWP).\nThe standard basis of comparison is the most common greenhouse gas, carbon dioxide (CO2), which has a GWP of 1.\nThe GWP of Freon, or R-12, is 10,900.\nThis means that, released into the atmosphere, R-12 traps 10,900 times as much heat as an equivalent amount of CO2.\n\nCFCs and HCFCs phased out under the Montreal protocol were often replaced by hydrofluorocarbons, which have many of the desirable properties of a refrigerant, but without the ozone-destroying chlorine.\nFor example, an HFC called R-134a is a common replacement for R-12 in automotive air conditioners.\nHowever, R-134a has a GWP of 1,430 -- much less than R-12, but still substantially heat-trapping.\n\nWhen the Montreal protocol was adopted in 1987, the focus was on the ozone-depleting properties of CFCs and HCFCs.\nHowever, as the world began to phase out these substances, it became clear that their global warming potential was also a problem.\nIn response, in 2016 the international community adopted the Kigali Amendment to the Montreal Protocol.\nThe Kigali amendment is meant to prevent additional global warming from refrigeration by phasing out the production and consumption of hydrofluorocarbons (HFCs), which are the most common replacements for CFCs and HCFCs.\nThe amendment has, so far, been ratified by [148 parties](https://ozone.unep.org/all-ratifications). 
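The GWP figures quoted above translate directly into CO2-equivalent impact: the warming from a release is just the mass of gas times its GWP. Here's a minimal sketch of that arithmetic (the function and variable names are mine, purely for illustration; the GWP values are the ones quoted in this post):

```python
# GWP (global warming potential) values quoted above; CO2 is the baseline.
GWP_CO2 = 1
GWP_R12 = 10_900    # Freon / R-12
GWP_R134A = 1_430   # the common automotive replacement for R-12

def co2e_kg(mass_kg, gwp):
    # Heat trapped by a released gas, expressed as kg of CO2-equivalent.
    return mass_kg * gwp

# Venting just 1 kg of R-12 is like emitting 10.9 tonnes of CO2:
print(co2e_kg(1.0, GWP_R12))  # 10900.0
```

Swapping in GWP_R134A shows why HFCs, while a big improvement, are still a problem: a single kilogram is still 1.43 tonnes of CO2-equivalent.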
\n\nLike the Montreal Protocol, the Kigali Amendment sets out a schedule for the phase-out of HFCs, but this schedule differs from place to place.\nFor instance, in the developed world, the phase-out began in 2019, and is scheduled to complete by 2036.\nIn the developing world, however, the phase-out is not scheduled to end until 2045, with some countries receiving additional extensions to 2047 or later.\n\n## How Recoolit Fits In\n\nRefrigerant pollution is a major global problem, and the world is working hard to solve it.\nHowever, as you can see, even the phase-out of ozone-depleting substances like R-22 will not be complete until 2030.\nWith high-GWP HFCs like R-134a, the process has barely begun, and will last for decades.\n\nWhat do we do in the meantime?\nThis is where Recoolit comes in.\nWhile the world works to phase out harmful refrigerants, we can work to reduce the amount of refrigerant pollution that is released into the atmosphere.\n\nCertainly, some of this release is accidental or the result of equipment malfunction.\nHowever, a significant portion of refrigerant pollution is the result of intentional venting.\nThis is because, without Recoolit, technicians simply have no other way to dispose of refrigerant properly.\nEven well-intentioned technicians, who are concerned about pollution and environmental impacts, have no other choice when a system must be drained for maintenance or repair.\nWe've seen technicians vent refrigerant through a hose inserted into a bucket of water, in the (mistaken) belief that this will somehow \"scrub\" the refrigerant.\n\nAdditionally, the process of recovering refrigerant is time-consuming and expensive.\nThe international protocols do not provide any funding for the transition to cleaner refrigerants, or to mitigate the damage from refrigerant pollution.\nThis is where <em>you</em> come in.\nNow that you know about the problem, you can help us solve it.\nFund our work by [buying our carbon 
credits](https://registry.recoolit.com/buy).\nTell your friends and colleagues about the problem, and about our solution.\n",
            "url": "https://igor.moomers.org/posts/refrigerants-what-are-they",
            "title": "Refrigerants: What Are They?",
            "summary": "Recoolit prevents global warming by collecting and destroying refrigerants. But when even are refrigerants? Read to find out what refrigerants do, the different kinds that exist, why they can be so concerning, and how the world has managed the risk.\n",
            "image": "https://igor.moomers.org/images/refrigerants.jpg",
            "date_modified": "2023-05-08T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/open-letter-gpe",
            "content_html": "\nIn Cory Doctorow's book [Down and Out in the Magic Kingdom](https://bookshop.org/books/down-and-out-in-the-magic-kingdom/9781250196385), the main character (Jules) has this to say about himself:\n\n> [my compulsion] was Beating The Crowd, finding the path of least resistance, filling the gaps, guessing the short queue, dodging the traffic, changing lanes with a whisper to spare -- moving with precision and grace and, above all, _expedience_.\n> I spied a queue ... that was slightly longer than the others, but I joined it and ticced nervously as I watched my progress relative to the other spots I could've chosen.\n> I was borne out, a positive omen for a wait-free World, and I was sauntering down Main Street, USA long before my ferrymates.\n\nReading this passage, I realized that I was not alone in my weird obsession.\nI hate lines, the waiting in of, and like Jules I will obsessively attempt to pick the shortest queue to optimize my wait time.\n\nAs you can imagine, Burning Man, with it's notorious entry and exodus lines, is a kind of special torture for me.\nI struggle constantly to remain patient and silence my overeager mind as it attempts to compute my rate of progress relative to my line-mates and my destination.\nI become short and distracted with my car mates, and annoyed at the perceived inefficiencies in the system.\n\nIn order to channel my frustration into a productive direction, back in 2012 I joined the Gate, Perimeter, and Exodus (GPE) department.\nIn the decade since, I've worked just about every position in the department -- apex, airport, lanes, perimeter, and this year I did a stint in the Traffic Operations Center (TOC).\nI worked enough shifts in 2019 for a staff-priced ticket, and enough this year for a staff credential for my next burn.\n\nTo some extent, joining the department (as well as simply becoming a decade older and a little bit more patient) has helped me deal entry/exit crawl.\nWhen the line totally 
stops moving at 6 pm, I know that there's a shift change, and we'll get moving soon, and I can reassure people around me about this.\nWhat I hoped for, however, was a sense that there's some giant master plan that's helping to move me out of a 10-hour traffic jam as quickly as possible.\nThis sense continues to elude me, and is the reason I'm writing this open letter.\n\nI do not mean to throw shade on any people in the department.\nI understand first-hand how difficult these shifts are, standing out in a white-out in 100+ degree heat, breathing dust and exhaust, dealing with cranky and cantankerous burners while trying not to get crushed by traffic.\nI also understand the difficulty that the leaders of the department have, doing this thankless job in exchange for meal pogs and t-shirts, trying to take care of their volunteers while balancing the demands of the org, various state agencies, and the forces of pure chaos.\nBut I cannot help but get the sense that something is not working quite right.\nI am 200% open to the possibility that, actually, everything is going as well as possible, and it's just that my spidey sense is off here.\nI know that when other department members express concerns on the gate list, they sometimes get (vehemently) dismissed as \"armchair quarterbacks\" who, because they weren't working that specific shift or that specific role, have no right to an opinion.\nBut in this case, I think it's a communication problem.\nIf even long-time department volunteers (including this one) are frustrated and confused by the operation of the department, what of the participants?\nThese are the people who, after being stuck in a 10-hour traffic jam for inscrutable reasons, unload their anger on the front-line Exodus volunteers.\nEveryone here deserves better -- and even if this is *just* a communication problem, that's still a real problem that, I believe, demands attention.\n\n## Entry\n\nMy entry into Burning Man was more than 3 hours long -- on 
Wednesday pre-event!\nAs others have noted on the gate list, the second and third lines from the left on the way in merged right before apex, meaning those two lines moved half as fast as the other lines.\nI attempted to reassure my car mate for almost an hour that this must be an illusion, and that no line moves faster through Apex than any other line, but my poor choice of lanes cost me a lot of extra wait time, and anxiety that we would miss our camp's placers for the night and would have to sleep in our truck instead of setting up camp.\nThe department doesn't want participants changing lanes, because it messes up the cone placement.\nThe best way to prevent lane-jumping is to make sure the lanes are actually fair!\n\nNext, I had to pick up my staff credential from the box office.\nOn the way out of the Will-Call lot, we again had a massive line stretching back into the lot.\nTwo volunteers at the front were letting cars out of Will-Call, only into the two rightmost lanes, which Apex was also using.\nThis resulted in an additional 30 to 45 minutes of wait out of the lot.\nIt was incredibly frustrating to have waited at the Apex line already, only to have cars that arrived hours after us hit the lanes ahead of us.\nApex can hold traffic and empty out the Will-Call line.\nWhy not do this?\n\n## TOC\n\nI worked a TOC shift on Sunday of Temple Burn.\nFor 6 hours, I had control of [this twitter account](https://twitter.com/bmantraffic) -- the first time I've ever posted anything on Twitter!\nThis was my chance to understand exodus, and to help tens of thousands of people to leave the Burn in the easiest way possible.\n\nI do not believe that, through any of my tweets, I helped anyone make better decisions.\nSome of my tweets were of the inane variety, like \"don't break down in the lanes\".\nI had a campmate whose car broke down on the way in, and he was just going to leave when it was time to leave -- the tweets had no bearing on his decision.\n\nThe most useful 
tweets could have been wait time estimates, and I believe that some of those were purely made up.\nI tweeted that the wait was [6 hours](https://twitter.com/bmantraffic/status/1566535630493913089), and tweeted again that the wait was [1 hour](https://twitter.com/bmantraffic/status/1566539015972474881) just 20 minutes later!\nThe amount of information coming to the TOC from folks on the ground is minuscule -- reporting wait times is not a huge priority, even though this is the one number everyone in the city wants to know.\nWe communicated this number to BMIR *maybe* twice during my six-hour shift, and once it was only because we had [totally stopped all traffic](https://twitter.com/bmantraffic/status/1566571695158153216) and asked them to report it.\nI didn't have a radio in my truck, but campmates reported that BMIR was more interested in their programming than in keeping folks informed about exodus -- just as well, since I don't believe they have any special information.\nAt least on Sunday afternoon, *nobody* seemed to have any accurate information about the wait time in Exodus.\n\nMy TOC shift lead was warm, personable, efficient on the radio, and a great person.\nHowever, I was sometimes shocked at their claims and decisions.\nAt the beginning of my shift, someone radioed into TOC, asking, \"BMIR is reporting a 3.5 hour wait time, do you know where they got that number?\"\nMy shift lead said, \"Yeah, that makes sense, because gate road is 4 miles long and the speed limit is 5 miles an hour.\"\nAfter some back-and-forth, the person on the radio signed off, unconvinced by this obviously faulty math, but clearly unwilling to continue taking up airtime.\nMy shift lead then told me to [tweet that number](https://twitter.com/bmantraffic/status/1566522222767812608)!\n\n## Exodus\n\nI left on Sunday night, shortly after my TOC shift ended.\nBecause I sent [this tweet](https://twitter.com/bmantraffic/status/1566539015972474881) about using all the lanes, I was able to 
satisfy my compulsion and skip a ton of traffic by using the empty lanes.\nThe pulses themselves, however, drove me nuts.\nSpecifically, in all my pulses, I observed the final portion of gate road completely draining of all cars -- nobody else merging onto the highway.\nExodus would then continue to hold traffic for anywhere from 20 to 40 minutes, for no reason I could discern.\nI sort of assumed this was because traffic was getting stuck on 447, but my brothers in black running Exodus that night assured me this was not happening.\n\nIt was sad to watch participants get into their vehicles and start them as they watched the road drain, then shut them down again 5 minutes later when the line showed no signs of movement.\nKeeping these people informed would help everyone.\n\n## Wrap-Up\n\nMy exodus took 9 hours (though it would've been 7 if my janky borrowed truck hadn't refused to start at the beginning of my last pulse!)\nTo everyone who helped out, including volunteers who brought me a jump box -- THANK YOU!\nI know people who spent 11 hours in the full heat of Monday afternoon.\nThese anecdotes are, sadly, the best data we have!\nWhere are the charts of wait time by departure time over the years of exodus, the charts that would help level out traffic flows out of the city?\nIf your answer is \"leave Tuesday morning\", then you clearly don't understand the reality of participants who have to get back to their lives and jobs, but first must drive a theme camp home, unload it, and de-dust it.\n\nI can anticipate the reaction to this essay among the GPE crew.\nAt worst, I'll be dismissed as ignorant, not someone who truly understands the process and the constraints.\nGuilty!\nTo these people, I would say that, if a 10-year department veteran is confused and frustrated by gate operations, then there's a real problem here.\nAt best, some GPE folks will tell me that, if I think I can do better, I should sign up -- as though there's a slot in the shift system for \"Run 
everything as you see fit\".\n\nMy goal is not to blindly criticize, but to contribute constructively.\nUltimately, I would love to just relax and enjoy the Gate process.\nI think a little bit of information would help me and the other participants to do just that.\n",
            "url": "https://igor.moomers.org/posts/open-letter-gpe",
            "title": "An Open Letter to GPE",
            "date_modified": "2022-09-09T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/peter-eckersley",
            "content_html": "\nI was at Burning Man 2022 when I heard that my dear friend [Peter](https://en.wikipedia.org/wiki/Peter_Eckersley_\\(computer_scientist\\)) was, initially, in critical condition.\nI learned later that evening that he had passed away.\nIn the midst of a difficult exodus full of logistics and dust storms, I kept finding myself with tears in my eyes, in a state of shock and disbelief that such a light in the world as Peter had gone out.\nI was grateful to learn the news while surrounded by friends and people who loved Peter.\nOur impromptu vigil at the Temple on Friday night was heartwarming and healing.\n\nI first met Peter back in 2009, and we ended up living together at the 1355 shortly after that year's Burn.\nIn the decade+ that I've known him, Peter has been much more than a friend.\nHe has been a collaborator, a mentor, and a source of inspiration.\nHe introduced me to good coffee.\nI have learned about the Bay area's best bike rides by riding them with Peter.\nHe showed me all the best dumpling restaurants.\nI learned about [giving what you can](https://www.givingwhatwecan.org/) from him, and made a commitment to donate a percentage of my income thanks to his example.\nPeter helped to bring me and my last long-term partner together -- a relationship that lasted 4 years.\nHe taught me about jasmine green pearls, and toast with a generous amount of butter.\nI cannot count the number of evenings we spent talking over the problems of the world, and how I would always learn something from him, shift my perspective just a little bit.\nHe taught me that it's possible to reason about the world.\n\nI last saw Peter about two weeks ago, when he came over to hang out while we prepared for Burning Man.\nInspired by one of our projects, he tried, as usual, to wrangle us into making another one on the spot.\nThat evening, in my dining room over Vietnamese food, I complained about the difficulty of fundraising for a climate startup I've been 
involved with.\nThough Peter had already introduced me to three potential investors over the last few months, he said to me, \"I will work harder for you.\"\nThis may be one of the last things he said to me, and perfectly sums up his generosity and his unwavering enthusiasm.\n\nAs I reflect on my relationship with Peter, I can't help but feel that I took his presence for granted.\nHe was just such a pillar, a constant of the world, always available for dinner or a chat.\nI don't think I've ever told him just how much I appreciated him.\nHis passing is a reminder to me, to take every chance I get to share my appreciation with the people in my life.\n\nPeter's passing also leaves a hole in the world.\nWho is going to make sure AI is on our side?\nWho will take a stand for privacy and security?\nWho will bring people together to solve seemingly intractable problems?\nWhere will all the quirky ideas come from?\n\nI think the best way that I can honor Peter's memory is to take on some of the work he has left behind.\nI want to throw one extra dinner party.\nI want to go on one more extravagant bike ride, which is nothing more than an excuse to stop for coffee and banana bread.\nMost of all, I want to remain engaged and enthusiastic in the world.\nIf I've learned anything from Peter, it's that any problem is solvable through reason and cooperation.\n\nPeter, I will miss you always.\n",
            "url": "https://igor.moomers.org/posts/peter-eckersley",
            "title": "Peter Eckersley",
            "date_modified": "2022-09-08T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/war-in-ukraine",
            "content_html": "\nI was born in [Mykolaiv, Ukrain](https://en.wikipedia.org/wiki/Mykolaiv).\nI lived there until age 9, when my family emigrated to Los Angeles.\nOver the years, much of my immediate family has likewise emigrated to the US.\nHowever, I still have many extended family members there.\n\nI haven't been back to Ukraine since I left as a child.\nMy parents, however, have returned several times to visit friends and family.\nI feel strong personal connections both to the city itself, and to the well-being of my many family members living there.\nWatching the [Battle of Mikolaiv](https://en.wikipedia.org/wiki/Battle_of_Mykolaiv) take place, both through media and through family updates, has therefore been stressful, distracting, and unsettling.\n\n## Family Update ##\n\nAs of today, I have three separate groups of family members who've been affected by the war in Mykolaiv.\nMy closest relative, an uncle, was living in the city but is an Autralian citizen.\nAround February 25th, he took himself and much of his Ukranian family to Poland.\nWe had confirmation that he arrived in Warsaw on March 4th.\n\nMore family members have fled to western Ukraine.\nAdult males are currently prohibited from leaving Ukraine, so the family is staying in the country to stay together.\nAnother big group has a house with a basement, which they've been using as a bomb shelter.\nThey've been descending there whenever the air raid sirens go off.\n\n## Predictions ##\n\nI planned a family gathering for the past weekend, and naturally, Ukraine was all we could talk about.\nAfter much kitchen-table debate where people were making informal/implicit predictions, I tried to make things more concrete by formalizing them and writing them down.\nSince so many folks have been asking me what I think about this conflict, I figured I would share my predictions publicly.\n\n### The End of the Conflict ###\n\nA common question I get is about the end-game of the war.\nI think Russia 
will, eventually, seize Kiev and install a puppet regime.\nThis regime will be highly unpopular, but it will remain in power through Russian military and intelligence support.\nThe closest analogs for this, for me, are Syria and Belarus -- both places where an unpopular leader has resisted all attempts at regime change.\nIn Syria in particular, the cost was the destruction of a large part of civilian infrastructure.\nUkraine is already there after two weeks of bombing, and will continue to get worse.\n\n[This article](https://carnegieendowment.org/2022/03/03/how-does-this-end-pub-86570) confirmed my existing biases about the end-game.\nI agree that the task, for the West, is to avoid escalation at all costs, even though the outcome for Ukrainians seems pretty dire.\nThe alternative -- a possible nuclear confrontation -- is worse, for everyone.\n\n### Putin and His Circle ###\n\nMany of my family members were convinced that this is the end game for Putin personally.\nI disagree.\nPutin is powerful, paranoid, and ruthless.\nIf he's able to prop up extremely unpopular dictators in other countries, he'll definitely be able to do it at home.\nI predict that he will remain in power until he dies, and his death will likely be of natural causes.\n\nMy family members were wondering about the oligarchs, who are having assets seized and being sanctioned by foreign governments.\nWhat's the point of being a billionaire if you can't jet around the world on your private plane to your private yacht?\nThere was a \"surely, they'll kick him out now that he no longer benefits them\" thread.\nAgain, I disagree.\nThe oligarchs serve at Putin's mercy, as he showed with [Khodorkovsky](https://en.wikipedia.org/wiki/Mikhail_Khodorkovsky).\nI also recommend the book [Nothing Is True and Everything Is Possible](https://bookshop.org/books/nothing-is-true-and-everything-is-possible-the-surreal-heart-of-the-new-russia/9781610396004) for an in-depth look at how Putin plays his allies against each 
other, all the while securing even more power for himself.\n\n### Energy/Climate ###\n\nRussia can balance its budget with an oil price of [$45 per barrel](https://www.bloomberg.com/news/articles/2019-08-22/putin-s-budget-has-lowest-break-even-oil-price-in-over-a-decade) (paywall).\nThe price is north of $100 at the moment.\n\nThere are serious people in the EU who are thinking about [a future without Russian gas](https://www.bruegel.org/2022/02/preparing-for-the-first-winter-without-russian-gas/).\nGiven the politics in the West, this might actually happen, but the impact on Russia will be minimal.\n\n> crude oil accounted for $110.2 billion, oil products for $68.7 billion, pipeline natural gas for $54.2 billion and liquefied natural gas $7.6 billion\n\n([source](https://www.reuters.com/markets/europe/russias-oil-gas-revenue-windfall-2022-01-21/)).\nSo, even if we assume all pipelines stop flowing, Russia will still find a ready export market for the remaining 80% of its fossil fuels.\nWhile the conflict might accelerate the renewables transition in the EU, it also has a bunch of negative knock-on effects.\nMost especially, it makes burning coal a better deal -- something [the EU is already struggling with](https://www.newyorker.com/magazine/2022/02/07/can-germany-show-us-how-to-leave-coal-behind).\nIn addition, fracking becomes more profitable again, as does refining poor-quality petroleum such as tar sands.\n\nThe US administration was already walking a fine line between advocating a renewables shift and protecting low gas prices at all costs.\nThe calculus there has become harder, and the administration will face tough choices.\nApprove more drilling and face the anger of the base?\nDeny new oil and gas leases, and face the political consequences?\n\n## Political Opportunity ##\n\nThis war is very prominent in public consciousness, and people want a frame in which to think about it.\nI recommend using the following frame whenever talking to anyone about 
this conflict.\n\n__This War Is a Climate War__.\nPutin is just one more character in the cast of fossil-fuel-powered dictators.\nHe's been wreaking havoc on the world stage for 20 years, and the world rolled over and took it because he supplied the gas.\nEvery bullet shot at a Ukrainian citizen, every missile hitting a residential building, every tank and warplane was paid for when Westerners pumped their gas and turned up their heat.\n\n__Ukrainian refugees are climate refugees__.\nThey were displaced because of fossil fuels.\nThey join migrants from the Syrian civil war, and the people coming to the US from South and Central America -- all people displaced by our addiction to fossil fuels.\n\n__The climate crisis is a refugee crisis__.\nMillions of Ukrainians fleeing to the EU is a foreshadowing of what's to come.\nWe are reacting today in the best possible climate -- a very unpopular war, a very definite and very unpopular aggressor, very sympathetic (because white, Christian) victims.\nHow we treat these folks is the best-case scenario, and it [could be better](https://www.reuters.com/world/europe/britain-may-ease-immigration-rules-ukrainian-refugees-sun-2022-03-07/?taid=62260ffc18c5730001d4f729).\n",
            "url": "https://igor.moomers.org/posts/war-in-ukraine",
            "title": "The War in Ukraine",
            "date_modified": "2022-03-07T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/words-i-avoid",
            "content_html": "\nLanguage shapes thought -- a.k.a, [the Sapir-Whorf hypothesis](https://www.sciencedirect.com/topics/psychology/sapir-whorf-hypothesis).\nThere's lots of good evidence for this -- for instance, a very popular study on how Russian's multiple words for shades of blue allow speakers to [more rapidly distinguish between those shades](https://www.newscientist.com/article/dn11759-russian-speakers-get-the-blues/).\n\nDifferences in thought patterns might arise not just between speakers of different languages, but between individuals speaking the same language.\nI'm specifically interested, here, in [cognitive distortions](https://www.theschoolofmomentum.com/post/cognitive-distortions), -- a kind of dialect spoken by people who are clustered together, not through physical geography but on a memetic landscape.\nCognitive distortions [are correlated with depression](https://www.nature.com/articles/s41562-021-01050-7).\nThere are even claims that [we might be getting more depressed as a society, based on increasing prevalance of cognitive distortions in public text](https://www.pnas.org/content/118/30/e2102061118).\n\nThis is interesting, but is a post-hoc justification for a practice that I noticed myself independently adopting.\nLanguage forces a frame of thought, sometimes those frames are not helpful, and language can be a clue that a specific frame is arising.\nWhen I notice myself using these words, this is a clue to myself to pay attention and possibly to re-frame my thinking.\nWithout further ado, my listsicle.\n\n## SHOULD ##\n\nThis is the most common one I notice in myself, and in many of the people in my life.\nOften used like \"I *should* stop being on the computer so much\" or \"I *should* exercise more\".\nAn old housemate used to react to *should*s by saying, \"Don't *should* all over yourself\".\n\nI dislike the way *should* imposes an obligation.\nIt's not that I *should* go brush my teeth -- it's that I *want* to have healthy 
teeth and good breath.\nReminding myself of my motivations is often helpful in getting myself to actually do the task.\n\nI also don't like imposing *should*s on others.\nInstead of saying \"You *should* read X\", I could say \"I think you would like X\".\nWhile reframing in this way, I sometimes realize that I'm not even sure the person would enjoy X -- I end up simply telling a story, like \"I enjoyed X, because...\" -- allowing me to share excitement and enthusiasm without adding a TODO to my interlocutor's list.\n\n## MAKE[s] [ME/YOU] FEEL ##\n\nI suffer from the influence of what a wise person I know once called \"the control virus\" -- the mistaken desire, and belief in the ability to, control outcomes in the world.\nFor instance, I cannot control the world's climate, or the way the world responds to climate change.\nWhat I can control is how I engage with that problem, how I show up in relationship to it.\nDo I avoid thinking about it?\nDo I let it dominate my life and take away my joy?\n\nA global catastrophe is a more obvious example.\nBut the control virus shows up more frequently in interpersonal interactions.\nI can't control whether or not my housemate washes their dishes, for instance.\nThis is insidious because it seems like, if I just ask them \"correctly\" -- if I say just the right words that would inspire just the right feelings of solidarity and guilt -- then maybe the dishes will get washed after all.\nIn other words, I would get the outcome that I wanted.\n\nThinking about life in terms of desired outcomes is a recipe for dissatisfaction.\nThere is gratitude when my housemate chooses to wash their dishes, while if I somehow controlled their actions via the right words, all I get is smug self-satisfaction.\nAnd what of the alternate scenario?\nIf they didn't wash their dishes after our conversation, I can be mad at myself for not saying those magic correct words, and mad at the housemate for depriving me of my desired outcome.\n\nI find it 
helpful, through life, to remind myself that I am fundamentally not in control of outcomes.\nAll that I can choose -- and even that, with great difficulty -- is how I feel in a given moment.\nI can feel drained, avoidant, anxious, and withdrawn from the problem of climate change -- or I can be motivated to engage through feelings of love, the joy of building and creation.\nI can be frustrated at my housemate, or I can be curious and supportive of the challenges in their life, and inspired to build a more harmonious household (including by being less frustrated in general).\n\nI think the freedom to choose my own feelings in the moment is the greatest possible freedom.\nIt often seems non-existent -- always, the temptation to react in a way that I might later regret, not in alignment with the kind of person I would like to cultivate.\nSo, it is difficult enough to obtain this freedom without constantly, voluntarily surrendering it to others.\n\nThis is why *makes me feel* is such an agency-robbing expression.\nMy usual retort to this, inside my own head, is \"nobody can *make* you feel anything!\"\nPeople show up in the way that they choose, and then I feel about it the way that I feel.\nSometimes, my reframing of this phrase takes an [NVC](https://www.cnvc.org/learn-nvc/what-is-nvc) turn -- \"when you leave your dishes unwashed, I feel...\".\nWhen talking of others, I can pivot \"I'm sorry I made you feel X\" into an opportunity to think about exactly what I said or did that provoked a reaction.\nFor instance, seeing that a person is angry, I can take a moment to think about what I said which provoked anger.\nSaying \"I'm sorry I said X\", instead of \"I'm sorry I made you angry\" acknowledges the person's feelings and my role in them, while leaving intact their agency in the situation.\n\n## JUST ##\n\nI got this from one of my housemates, who pointed out that the word *just* is often doing a lot of heavy lifting in a sentence.\nHaving trouble with your 
boss/spouse/child/pet/inanimate object?\nWhy not *just* ...?\nThis can often be quite patronizing.\nThere's an impression that the solution to a problem is trivial, deadening curiosity through a mistaken belief that you already know the right answer.\n\nThere's no catch-all reframing for *just*, but many paths depending on the conversation and relationship.\nIs your manager being a jerk in meetings?\nA common script devolves into naive solution-ism -- \"You could *just* complain to **their** manager!\"\nWhen you notice that *just*, that's the cue.\nOne option is sympathy -- \"Damn, that sounds terrible, I would be so mad in your situation.\"\nAnother option is collaborative problem solving, but with curiosity -- \"Does **their** manager know about this behavior? How would she react if she knew?\"\n\n## Putting it all together ##\n\nAn Igor of a few years ago might say something like \"My manager makes me feel so angry in meetings! I should just go complain to the VP.\"\nA hopefully-more-self-aware Igor of today might notice several opportunities for deeper reflection here.\nWhy am I feeling angry about the situation?\nHow can I work, both internally and externally, towards a different reaction in that situation?\nDo I want to bring external parties to help mediate the conflict?\nWould that help the situation, or escalate it?\n\nThe goal is not inaction or analysis paralysis.\nInstead, I want to give myself the chance to avoid reacting to situations, because my reactions often have the opposite effect from the one I desire.\nIf I avoid reacting and create spaciousness even in difficult situations, I can use the spaciousness to choose a path most likely to help me feel best in the future.\n",
            "url": "https://igor.moomers.org/posts/words-i-avoid",
            "title": "Words I Avoid",
            "date_modified": "2021-09-24T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/kitchen-table-decisions",
            "content_html": "\nI've been spending a bunch of time, both personally, and professionally, thinking about the clean energy transition and \"kitchen table decisions\".\nSome examples of kitchen-table decisions:\n\n* where to live\n* how to heat the home/heat water for home use\n* whether to get solar panels\n* whether to use an electric car\n* whether to get battery storage\n* how to insulate the home\n* how to do laundry and drying\n* how to cook food in the home\n* what kind of food to eat\n* what kinds of vacations to take\n* how and when to shop for consumer goods\n\nThese are in (very rough) order of importance in terms of climate impact.\nMy professonal thoughts on this have mostly come from my involvement in the [Rewiring America](https://www.rewiringamerica.org/) project.\nI've been working on data mining and analysis to quantify the impact of some of these decisions.\nFor instance, I helped put together [a map](https://map.rewiringamerica.org/) which shows which households would most benefit from switching to [heat pump](https://www.carrier.com/residential/en/us/products/heat-pumps/what-is-a-heat-pump-how-does-it-work/)-based heating, and also to quantify the [CO2e](https://coolerfuture.com/en/blog/co2e) savings from such a transition.\n\nPersonally, I am thinking of these projects as a co-owner at [Chrysalis](https://chrysalis.community/).\nBecause we are co-housing, we get a multiplier effect from many of these decisions.\nFor instance, getting a single heat pump space or water heater for our one household is the equivalent of as many as three or four \"normal\" nuclear-family households.\nOur group is pretty committed to making long-term investments that benefit the environment, and we're in the process of making decisions on many of them.\n\nAs we do the research and make big investments, I would also like to document what we are learning.\nI'm envisioning this post being the entry-point for more in-depth discussions.\nPlease get in touch 
if you have valuable experiences to share, or would like to know more about some of these subjects.\n",
            "url": "https://igor.moomers.org/posts/kitchen-table-decisions",
            "title": "Kitchen Table Decisions",
            "date_modified": "2021-08-09T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/navigating-arch-on-osx",
            "content_html": "\nI started a new job, and this job came with a new computer -- an Apple M1 MacBook.\nI had to do quite a bit of work to get my usual dev environment set up, mostly having to do with the transition to the `amd64` architecture from `x86_64`.\nSome tips and tricks are documented here.\n\n## Two `homebrew`s\n\nI found [this stack overflow question](https://stackoverflow.com/a/64951025/153995) invaluable.\nOn my computer, I had already run the normal homebrew install command:\n\n```bash\n/bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"\n```\n\nThis installed homebrow into `/opt/homebrew`.\nTo also add an x86_64 version of homebrew, I did:\n\n```bash\ncd /usr/local\nsudo mkdir homebrew\nsudo chown :staff homebrew\nsudo chmod g+w homebrew\ncurl -L https://github.com/Homebrew/brew/tarball/master | tar xz --strip 1 -C homebrew\n```\n\nI then followed the stack overflow question, and added an alias to my `.bash_profile`:\n\n```bash\nalias brow='arch --x86_64 /usr/local/homebrew/bin/brew'\n```\n\n## Python with `asdf` / `pyenv`\n\nApparently, Python newer than 3.9.1 [natively supports arm64](https://github.com/pyenv/pyenv/issues/1768).\nAlas, I needed Python 3.8.9, which is what my co-workers use:\n\n```bash\n$ asdf install python 3.8.9\npython-build 3.8.9 /Users/igor47/.asdf/installs/python/3.8.9\npython-build: use openssl@1.1 from homebrew\npython-build: use readline from homebrew\n\n<-- snip -->\n\nBUILD FAILED (OS X 11.2.3 using python-build 1.2.27-29-gfd3c891d)\n\n<-- snip -->\n\nconfigure: error: Unexpected output of 'arch' on OSX\n```\n\nThis still does not work if you explicitly specify an arch:\n\n```bash\n$ arch -x86_64 asdf install python 3.8.9\npython-build 3.8.9 /Users/igor47/.asdf/installs/python/3.8.9\npython-build: use openssl@1.1 from homebrew\npython-build: use readline from homebrew\n\n<-- snip -->\n\nInstalling Python-3.8.9...\npython-build: use readline from 
homebrew\npython-build: use zlib from xcode sdk\nWARNING: The Python readline extension was not compiled. Missing the GNU readline lib?\nERROR: The Python ssl extension was not compiled. Missing the OpenSSL lib?\n\nBUILD FAILED (OS X 11.2.3 using python-build 1.2.27-29-gfd3c891d)\n```\n\nYou'll need to install the `x86_64` versions of readline and openssl to make progress!\nThe second install of brew to the rescue (notice, I'm using my alias, `brow` and not `brew`, below):\n\n```\n$ brow install readline\n$ brow install openssl\n$ brow install xz\n```\n\nNow, you need those libraries in your linker and compiler.\nI figured I might need to do this again, so I am using my [direnv](https://github.com/asdf-community/asdf-direnv) setup to make this easier.\nI created a directory -- `mkdir ~/x86_64` -- and added a `.envrc` that looks like this:\n\n```bash\nuse asdf\nexport BROW=\"/usr/local/homebrew\"\nexport LDFLAGS=\"-L${BROW}/opt/openssl@1.1/lib -L${BROW}/opt/readline/lib -L${BROW}/opt/xz/lib\"\nexport CPPFLAGS=\"-I${BROW}/opt/openssl@1.1/include -I${BROW}/opt/readline/include -I${BROW}/opt/xz/include\"\nexport PATH=\"${BROW}/bin:$PATH\"\nexport ARCHPREFERENCE=\"x86_64\"\n```\n\nA quick `direnv allow`, and now, when I `cd ~/x86_64`, I am in a good place to do Rosetta-type things:\n\n```bash\n$ uname -a\nDarwin planetarium.local 20.3.0 Darwin Kernel Version 20.3.0: Thu Jan 21 00:06:51 PST 2021; root:xnu-7195.81.3~1/RELEASE_ARM64_T8101 arm64\n$ cd ~/x86_64\ndirenv: loading ~/x86_64/.envrc\ndirenv: using asdf\ndirenv: loading ~/.asdf/installs/direnv/2.28.0/env/3601927912-4079855243-2014015008-304321844\ndirenv: using asdf rust 1.52.1\ndirenv: using asdf python 3.8.9\ndirenv: using asdf direnv 2.28.0\ndirenv: export +ARCHPREFERENCE +CARGO_HOME +CPPFLAGS +LDFLAGS +RUSTUP_HOME ~PATH\n$ arch uname -a\nDarwin planetarium.local 20.3.0 Darwin Kernel Version 20.3.0: Thu Jan 21 00:06:51 PST 2021; root:xnu-7195.81.3~1/RELEASE_ARM64_T8101 x86_64\n```\n\nNow, from this directory, I 
can easily install my Python.\nI didn't have to specify `-x86_64` to `arch`, because this is already set by my `ARCHPREFERENCE`.\n\n```bash\n$ arch asdf install python 3.8.9\n<-- snip -->\nInstalled Python-3.8.9 to /Users/igor47/.asdf/installs/python/3.8.9\n```\n",
            "url": "https://igor.moomers.org/posts/navigating-arch-on-osx",
            "title": "x86_64 on an Apple M1 MacBook",
            "date_modified": "2021-05-11T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/arch-linux-config",
            "content_html": "\nAfter encountering some issues with Ubuntu on my Thinkpad X1 Carbon 6th-gen, I went back to Arch linux.\nSo far, I'm loving it, but I did have to figure out a few things.\nThis is meant to document a few things I did to customize the machine to my liking.\n\n## Re-mapping CapsLock to Ctrl \n\nMy brain is very used to the caps lock key being the control key.\nWhen this is not remapped, I end up constantly switching into all-caps mode and being very confused when things don't work correctly.\nI used [interception](https://gitlab.com/interception/linux/tools) and the [caps2esc](https://gitlab.com/interception/linux/plugins/caps2esc) plugin to make the re-mapping work in a virtual console as well as in X.\n\nFirst, install `caps2esc` (I use the [yay](https://github.com/Jguer/yay#installation) package manager):\nI picked the `community/interception-caps2esc` package, which was option `1` in the `yay` listing.\n\n```bash\n$ yay caps2esc\n```\n\nNext, configure the re-mapping.\nI use `-m 1` to disable the mode where pressing `esc` turns on caps lock, since I like the escape key to just remain the escape key and don't really need caps lock.\nThis configuration goes into the file `/etc/interception/udevmon.yaml`:\n\n```yaml\n- JOB: intercept -g $DEVNODE | caps2esc -m 1 | uinput -d $DEVNODE\n  DEVICE:\n    EVENTS:\n      EV_KEY: [KEY_CAPSLOCK]\n```\n\nFinally, enable and activate the `udevmon` service, and enjoy no-more caps lock:\n\n```bash\n$ sudo systemctl enable udevmon\n$ sudo systemctl start udevmon\n```\n\n## Auto-suspend on low battery\n\nOccasionally, I leave my laptop with the lid open for a reason.\nOther times, I just forget about it.\nI always feel bad coming back to a dead computer with a battery at 0%, since this can [reduce the lifetime of the battery](https://electronics.stackexchange.com/questions/164103/if-li-ion-battery-is-deeply-discharged-is-it-harmful-for-it-to-remain-in-this-s).\n\nTo prevent this, I have a script which 
will auto-suspend my computer if the battery level drops too low.\nI use this script (which I put into my `~/bin/auto_suspend.sh` and made executable with a `chmod u+x`):\n\n```bash\n#!/bin/bash\n\nbattery_level=`cat /sys/class/power_supply/BAT0/capacity`\n\nif [ \"$battery_level\" -le 5 ]\nthen\n  notify-send \"Battery critical. Battery level is ${battery_level}%! Suspending...\"\n  sleep 5\n  systemctl suspend\nelif [ \"$battery_level\" -le 8 ]\nthen\n  notify-send \"Battery low. Battery level is ${battery_level}%!\"\nfi\n```\n\nTo run this script periodically, I used [systemd timers](https://wiki.archlinux.org/index.php/Systemd/Timers) (since Arch does not come with a cron daemon installed in the base system).\nFirst, I created a unit file for my auto-suspend service, in `~/.config/systemd/user/auto_suspend.service`:\n\n```\n[Unit]\nDescription=Checks battery and suspends if low\n\n[Service]\nType=oneshot\nExecStart=/home/igor47/bin/auto_suspend.sh\n```\n\nNext, I created a timer which will periodically activate this service (in `~/.config/systemd/user/auto_suspend.timer`):\n\n```\n[Unit]\nDescription=Check battery level and auto-suspend\n\n[Timer]\nOnBootSec=1m\nOnUnitActiveSec=1m\n\n[Install]\nWantedBy=timers.target\n```\n\nMy config will activate the timer 1 minute after boot-up, and also 1 minute after every activation.\nI then enable the timer:\n\n```bash\n$ systemctl --user daemon-reload\n$ systemctl --user enable auto_suspend.timer\n$ systemctl --user start auto_suspend.timer\n```\n\nYou can check the status of the timer like so:\n\n```bash\n$ systemctl --user list-timers\n```\n",
            "url": "https://igor.moomers.org/posts/arch-linux-config",
            "title": "Arch Linux Configuration",
            "date_modified": "2021-01-27T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/minimal-viable-air-quality",
            "content_html": "\nIf you, like me, live in the Bay Area, you may have woken up to something like this last week:\n\n![Outdoor Air Quality](/images/minimal-aq-outside.jpg)\n\nThis is my back yard, but things looked just as dire in the house:\n\n![Indoor Air Quality](/images/minimal-aq-inside.jpg)\n\nIf you checked a website, like [PurpleAir](https://www.purpleair.com/map?opt=1/mAQI/a10/cC0#8/38.138/-121.702) or [AirNow](https://fire.airnow.gov/?lat=37.86988000000008&lng=-122.27053999999998&zoom=12), you would see scary numbers and dire warnings about staying indoors and avoiding the air outside.\nHowever, how *is* the air inside your house?\nAre you actually much safer?\n\nThankfully, answering this question has gotten cheaper and easier in recent years.\nLow-cost air quality sensors have been built into reasonably inexpensive hardware.\nThe [PurpleAir indoor sensor](https://www2.purpleair.com/products/purpleair-pa-i-indoor), for instance, is only $200.\n\nThe heart of that device is a plantower laser dust sensor which can be [had on AliExpress for about $12](https://www.aliexpress.com/item/32639894148.html).\nWith PurpleAir, you're paying for more than just the sensor.\nThat device includes a [BME280](https://learn.adafruit.com/adafruit-bme280-humidity-barometric-pressure-temperature-sensor-breakout) temperature/pressure/humidity sensor, a microcontroller that can access a WiFi network, GPS that helps localize the device on a map, all  conveniently packed into a nice-looking device.\nYou pay for integration of all that hardware, firmware to make it work together, and software that integrates the data on the back-end and gives you a nice map to view it on.\nYou're also paying for hosting to allow PurpleAir to store your data and display it to others.\nSo, by buying one, you're not only becoming better-informed, but you're also helping folks in your neighborhood be better-informed too -- a *positive* externality, for once!\n\nHowever, if you're brave 
enough, there are advantages to building the hardware yourself.\nGetting several sensors inside and outside your house can help you understand more about the air you're breathing.\nIt can help quantify interventions, like installing an air purifier or running [a box fan](https://www.texairfilters.com/how-a-merv-13-air-filter-and-a-box-fan-can-help-fight-covid-19/).\nIf you have a large house, you can see how quality differs throughout the house.\n\nAlso -- while for most readers of this blog, $200 is probably not a lot of money, it *is* a lot of money for *most* people.\nThe folks who are most regularly affected by poor air quality are exactly those who cannot just blow $200 to satisfy their curiosity.\nMaking air quality monitoring much, much cheaper can potentially make life much better for those people.\n\nThis post will help you build a minimum viable sensor that works with your laptop.\nBy using a pre-existing computer, you can avoid having to pay for additional hardware like a microcontroller with WiFi.\nYour computer is probably already on WiFi, so configuration is also minimized.\n\nYou can also avoid sending data to a cloud.\nThis is a double-edged sword.\nOn the one hand, cloud hosting costs are eliminated.\nOn the other hand, you lose the positive benefits of open data.\nAlso, if you want visualizations, you have to make them yourself.\n\nAnyway, on to the construction!\n\n## Hardware \n\nThe bill of materials for this project includes two things:\n* Plantower PMS7003 -- maybe [this one](https://www.aliexpress.com/item/32784279004.html)\n* A USB-to-TTL cable like [this one](https://amzn.to/2GYAYAD)\n\nThis should run you about $20 all-told.\nI also needed the following tools and supplies:\n* soldering iron and solder\n* heat shrink tubing and a lighter\n* wire clippers/strippers\n\nThe PMS7003 sensors I had did not come with the little breakout board, and this device has *very* small pins.\nTo connect to it, I used some 30AWG wire-wrap wire and a 
wire-wrapping tool.\nIf you get a PMS7003 with a breakout and cable, you won't need this.\n\nMy first task was connecting to the PMS7003.\nHere's the pinout, from the [data sheet](https://download.kamami.com/p564008-p564008-PMS7003%20series%20data%20manua_English_V2.5.pdf):\n\n![PMS7003 pinout](/images/minimal-aq-pms7003-pinout.gif)\n\nYou'll only need 4 wires -- for power, ground, serial Tx, and Rx.\nI stripped the tiniest bit of insulation on my wire wrap wire, tinned both the wire and the pin, and touched them together with the soldering iron (very carefully, to avoid bridging the pins).\n\n![soldering wires](/images/minimal-aq-wires.jpg)\n\nHere's a (blurry) photo with all 4 wires connected:\n\n![soldering wires](/images/minimal-aq-all-wires.jpg)\n\nNext, I stripped my USB TTL cable and tinned the exposed multi-strand wires.\nTinning just means touching the soldering iron to the wire and allowing some solder to flow onto the wire.\nThis makes the wire stiff, so I can wire-wrap onto it using my wire-wrap tool.\n\n![USB TTL](/images/minimal-aq-usb-ttl-tinned.jpg)\n\nNext, I wire-wrapped the exposed, tinned wires.\nBlack and Red are for ground and power, and get connected together.\nOn my TTL cables, `green` is for `Tx` (transmit) and `white` is for `Rx` (receive).\nYou need to connect the `Tx` of the PMS7003 to the `Rx` of the TTL cable, and vice versa.\nI made it easier for myself by making my `white` wire on the PMS7003 be the `Tx` pin (Pin9), so I could just do white-to-white:\n\n![Completed Wiring](/images/minimal-aq-wired-up.jpg)\n\nAfter wire-wrapping, I soldered the wires together (belt *and* suspenders!)\nFinally, I put some heat-shrink tubing over the wires and shrunk it using a lighter.\nI then taped the device right to the USB plug with electrical tape.\nBe sure to avoid covering up the air intake and expel ports on the PMS7003.\nAlso, be mindful of USB polarity.\nThe orientation that I have in the photo is probably the way your USB port is aligned on 
your computer, so the PMS7003 ends up on top of the USB plug.\n\n![Completed device](/images/minimal-aq-complete.jpg)\n\nHere it is, plugged into my computer and ready for software integration:\n\n![Plugged into computer](/images/minimal-aq-in-computer.jpg)\n\n## Software\n\nNext, you'll need some software to integrate with this hardware.\nThe cables that I got use the Prolific Technology PL2303 chipset.\nThis is already supported on linux; if you plug it in and run `dmesg | tail` you'll see something like this:\n\n```\n[1457958.125315] usb 1-2: new full-speed USB device number 38 using xhci_hcd\n[1457958.281810] usb 1-2: New USB device found, idVendor=067b, idProduct=2303, bcdDevice= 4.00\n[1457958.281816] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0\n[1457958.281820] usb 1-2: Product: USB-Serial Controller\n[1457958.281823] usb 1-2: Manufacturer: Prolific Technology Inc.\n[1457958.283810] pl2303 1-2:1.0: pl2303 converter detected\n[1457958.285306] usb 1-2: pl2303 converter now attached to ttyUSB0\n```\n\nLooks like it got recognized correctly, and is available at `/dev/ttyUSB0`.\nOn OSX, you will probably have to [install a driver](http://www.prolific.com.tw/US/ShowProduct.aspx?p_id=229&pcid=41), but you can ignore the warning about restarting your computer.\nOn these machines, the device will probably show up at `/dev/tty.usbserial`.\n\nWhile it has power, the PMS7003 will be outputting a continuous stream of binary data containing the particulate readings onto that TTY device.\nThe [data sheet](https://download.kamami.com/p564008-p564008-PMS7003%20series%20data%20manua_English_V2.5.pdf) specifies the protocol:\n\n![PMS7003 protocol](/images/minimal-aq-pms7003-protocol.png)\n\nI recommend using my [mini-aqm repo](https://github.com/igor47/mini-aqm), which includes a Python implementation of this protocol.\nYou will need a recent python (I tested with `3.8.3`; anything above `3.6` should probably work).\nRun these commands to grab the repo, 
install the code, and begin reading data:\n\n```bash\ngit clone https://github.com/igor47/mini-aqm.git\ncd mini-aqm/\npip install poetry\npoetry install\npoetry run ./main.py\n```\n\nAnd you should see the output:\n\n```\nbeginning to read data from /dev/ttyUSB0...\nPM 1.0: 32  PM 2.5: 54  PM 10: 73  AQI: Unhealthy for Certain Groups\nPM 1.0: 31  PM 2.5: 54  PM 10: 73  AQI: Unhealthy for Certain Groups\n```\n\nHere's a screenshot:\n\n![Runtime Screenshot](/images/minimal-aq-screenshot.png)\n\n`mini-aqm` tries to print informative error messages.\nIf `main.py` exits immediately without printing any air quality measurements, read the error message and try to resolve it.\nIf you suspect a hardware issue, use a multi-meter to check for a short between pins.\n\n## Visualizations\n\nI am running [telegraf](https://www.influxdata.com/time-series-platform/telegraf/) on my laptop.\nI've configured telegraf to read data from the device and store it in [influxdb](https://www.influxdata.com/products/influxdb-overview/), for graphing with [grafana](https://grafana.com/).\nUsing this stack, here's a visualization of the past few months of PM data in my workshop:\n\n![Last Few Months in Particulates](/images/minimal-aq-last-3-months.png)\n\nUsing this setup, it's convenient to perform experiments and look at results.\nFor instance, here is what happened with indoor air quality when we turned on our central fan system, which pushes air through two MERV13 filters:\n\n![Central Air](/images/minimal-aq-central-fan.png)\n\nHere's where we take a box fan with a MERV13 filter and run it near the sensor:\n\n![Box Fan](/images/minimal-aq-box-fan.png)\n\nHere's what happens when we just point a normal fan at the device, with no filter:\n\n![Normal Fan](/images/minimal-aq-normal-fan.png)\n\nI noticed that the air quality in my bedroom was not great.\nThe basement door is right outside my bedroom door, and is pretty leaky, so I started keeping my bedroom door closed.\nI also taped over my 
old, leaky windows with masking tape:\n\n![Taped-over windows](/images/minimal-aq-tape-on-windows.jpg?cache=no)\n\nThese interventions had a real effect!\n\n![Window tape effect](/images/minimal-aq-window-tape-effect.png)\n\nFinally, it *is* possible to have *good* air quality.\nI'm currently sitting in my taped-up room, with doors closed, right next to a box-fan-with-filter:\n\n![Good Setup](/images/minimal-aq-getting-to-good.jpg)\n\nThe results, with other rooms in the house and outside the house on my Grafana dashboard:\n\n![All rooms together](/images/minimal-aq-all-together.png)\n\nSetting up `telegraf`, `influxdb`, and `grafana` is beyond the scope of this post.\nIf you do go this route, however, the `mini-aqm` code is already writing a log of collected data into a `measurements.log` file.\nYou can run the collector while you're working on the visualization setup, and then import the \"historical\" data you've collected when you're done.\n\n## What's Next?\n\nI'd like to help you breathe better air.\nReach out if you need help building these devices, or have any questions at all.\nAlso, if you're in the Bay Area, I have a few spare PMS7003 devices.\nIf you'd like one without waiting for them to be shipped from China, please reach out, and I can leave one on my porch for you to come grab.\n",
            "url": "https://igor.moomers.org/posts/minimal-viable-air-quality",
            "title": "Minimum Viable Air Quality Monitoring",
            "date_modified": "2020-09-13T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/building-etl-kubernetes",
            "content_html": "\nFrom December 2018 until May of 2020, I worked as a software engineer at [Aclima](https://aclima.io/).\nWhile I was there, I ended up building an in-house [ETL](https://en.wikipedia.org/wiki/Extract,_transform,_load) system written entirely in Python on Kubernetes.\nThough fairly generic, this system is, and likely will remain, closed-source.\nHowever, I still learned a lot -- about ETL, Kubernetes, data science workflows -- and this post is an attempt to summarize those learnings.\n\nI would love to reflect generally on my time at Aclima, similar to my [reflections on leaving Airbnb](/thoughts-on-leaving-airbnb).\nI found it difficult to approach this as a single post, so I'm going to do it in pieces, of which this is one.\nStay tuned for more posts about other things I learned in the last 18 months.\n\n## What's Aclima ##\n\nWhen I joined, Aclima had just completed its Series A, and was focused on building and scaling a new product.\nThe product's customers were regulators, such as the Bay Area's own [BAAQMD](https://www.baaqmd.gov/) (or \"the district\").\nThese folks are in charge of making the air that citizens in their districts breathe as clean as possible.\nTo do this, they need *data* -- how good is the air, what are some problem areas or pollutants, what is the effect of various interventions.\n\nTraditionally, that data has primarily come from permanent EPA monitoring sites.\nThese contain very expensive regulatory- or lab-grade equipment that has been strenuously vetted to give accurate readings.\nThis equipment has to be secured and properly maintained to retain accuracy.\nAs a result, there are not that many such regulatory sites -- here's a map of the Bay Area ones from the [airnow.gov](https://fire.airnow.gov/?lat=37.86988000000008&lng=-122.27053999999998&zoom=12#) website:\n\n![Regulatory sites in the Bay Area](/images/epa-sites-bay-area.jpg)\n\nThe problem is that, while the reference stations give very accurate data, that 
data is only very accurate for the small area immediately adjacent to the regulatory site.\nAir quality, on the other hand, can differ dramatically, even block-by-block.\nIt's strongly affected by the presence of [major roadways](https://www.epa.gov/air-research/research-near-roadway-and-other-near-source-air-pollution), [toll booths](https://www.macfound.org/media/files/HHM_Research_Brief_-_Living_Along_a_Busy_Highway.pdf), [restaurants](https://www.theguardian.com/environment/2019/oct/10/restaurants-contribution-to-air-pollution-revealed), features of the landscape, and many other factors.\n\nAclima's goal was to quantify this local variability by collecting \"hyperlocal\" air quality measurements.\nEven with the advent of [low-cost sensors](https://www.alibaba.com/product-detail/PLANTOWER-Laser-PM2-5-DUST-SENSOR_62480702957.html) and [cheap connected devices](https://en.wikipedia.org/wiki/ESP32), installing and maintaining thousands of devices on every street would be a challenge.\nInstead, Aclima builds out vehicles equipped with a suite of sensors, and then drives those vehicles down every block -- many times, to collect a representative number of samples.\n\n## Overall Architecture ##\n\nWhat are the technical details around implementing such a data collection system?\nThere are lots of moving -- sometimes, literally moving -- pieces here.\nFirst, a bunch of hardware must be spec'ed, sourced, integrated, and tested.\nThe hardware needs software which can collect data from a suite of sensors, and then the software needs to report this data back to a centralized backend.\nThe data must be collected, stored, and processed.\nFinally, there's a presentation or product layer, which allows customers to gain insights from the pile of data points.\n\nOver my time at Aclima, I worked on all of these components.\nFor instance, I built a tool which generated [STM32](https://www.st.com/en/microcontrollers-microprocessors/stm32-32-bit-arm-cortex-mcus.html) firmware and 
then flashed it onto boards using [dfu-util](http://dfu-util.sourceforge.net/).\nI experimented with rapidly prototyping low-cost hardware (post coming soon).\nI also worked on fleet-level IoT device management and data collection.\n\nHowever, most of my time at Aclima was spent on the pipeline for processing the data that our vehicles and sensors collected.\nI'd like to focus this post on the technical details behind this component.\n\n## Why Kubernetes? ##\n\nMy immediate project was taking some code written by a data scientist, which had only ever been running on her laptop, and making it run regularly and on some other system.\nThis involved configuring an environment for this code -- the correct versions of python, libraries, and system dependencies.\nI would need to keep the environment in-sync between the OSX laptops of other engineers and data scientists, and the cloud environment where the code ran in production.\nI wanted the ability to have representative local tests, but also a rapid-iteration development environment.\nFinally, I didn't want to introduce too many workflow changes -- only the barest minimum necessary.\n\nThe choice of tooling was already somewhat constrained.\nAclima was already running in [Google Cloud](https://cloud.google.com/), and many of their existing services ran in pods on Google's [Kubernetes Engine](https://cloud.google.com/kubernetes-engine/).\nMost of my experience was with raw EC2 instances configured with [chef](https://medium.com/airbnb-engineering/making-breakfast-chef-at-airbnb-8e74efff4707), and I was unfamiliar with the Docker ecosystem.\nHowever, I was also unfamiliar with cloud ETL tools like [Dataflow](https://cloud.google.com/dataflow/) and was reluctant to jump into an ecosystem new to both myself and my colleagues.\nThe other obvious choice would have been [Airflow](https://airflow.apache.org/); Google even has a [hosted version](https://cloud.google.com/composer/).\nHowever, at the time (early 2019), 
Airflow did not have good support for Kubernetes, and I would have had to configure instances for code to execute on.\n\nIn any case, my first task was fairly simple -- take a Python program which is already running locally, and make it run in the cloud.\nI wrote a Dockerfile to correctly configure the environment, copied the code into the docker image, and uploaded it to [GCR](https://cloud.google.com/container-registry/).\nI used the [Kubernetes CronJob controller](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) to get it to run daily, and [K8s secrets](https://kubernetes.io/docs/concepts/configuration/secret/) to manage credentials for the process.\nBy configuring the [cluster autoscaler](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler), we could avoid paying for cluster resources when we didn't need them.\nWhenever the job ran, if the cluster didn't have enough resources, GKE would automatically add them, and then clean up after the job completed.\n\nThe local setup for users involved installing docker and the [gcloud](https://cloud.google.com/sdk/gcloud/) tool, and getting `gcloud` authenticated.\n`gcloud` takes care of managing permissions for the K8s cluster/the [`kubectl` command](https://kubernetes.io/docs/reference/kubectl/overview/), which is invoked to deploy the `CronJob` and `Secret` manifests.\nTo shield users (and myself!) 
from the raw `docker` and `kubectl` commands, I immediately added [`invoke`](https://www.pyinvoke.org/) to the data science repo.\nAfter getting the ETL job running locally, the workflow to deploy it to production was a simple `inv build` and `inv deploy`.\nI also created convenience tooling, like `inv jobs.schedule`, to do a one-time run of the job in the cloud, and `inv jobs.follow` to tail its output.\n\nThese initial steps were simple and easy enough that we decided to continue with Kubernetes for a while, and reconsider when we hit snags.\n\n## Scaling Up to Multiple Jobs ##\n\nOne job does not an ETL pipeline make.\nA few changes were required to add a second job to the pipeline.\nFirst, we wrote the second job in the same repo as the first.\nWe standardized on a job format -- a Python class with a signature like so:\n\n```python\nfrom typing import Any, Dict, Protocol\n\nimport pendulum\n\nclass PerformType(Protocol):\n    def __call__(\n      self, start: pendulum.DateTime, end: pendulum.DateTime, config: Dict[str, Any]\n    ) -> None:\n        pass\n\nclass JobType(Protocol):\n    perform: PerformType\n```\n\nWe then created a standard `main.py` entrypoint which would accept parameters like the job name and options, and dispatch them to the correct job class.\n\nThe Kubernetes work is a little more complicated.\nThe jobs look basically the same, but their arguments -- in K8s-speak, the pod spec container args -- are different.\nThe solution is to template the K8s manifests, but templating a data structure (in this case, `yaml`) like a string (e.g., with [jinja](https://jinja.palletsprojects.com/en/2.11.x/)) is a recipe for disaster, and popular tools like [jsonnet](https://jsonnet.org/) seem quite heavyweight.\n\nWe managed to find [json-e](https://json-e.js.org/), which hit just the right note.\nThis allows you to template and render a pod manifest:\n\n```yaml\nspec:\n  containers:\n    - name: {$eval: 'container_name'}\n      image: {$eval: 'image'}\n      args: {$eval: 'args'}\n```\n\nwith something 
like:\n\n```python\npod_manifest = jsone.render(\n  YAML(typ=\"safe\").load(open('pod_manifest.yaml')),\n  {\n    'container_name': job_name,\n    'image': 'latest',\n    'args': [job_name, start_time, end_time, job_config]\n  }\n)\n```\n\nYou end up with a valid pod manifest as a data structure.\nYou can then render it into your `CronJob` manifest:\n\n```yaml\napiVersion: batch/v1beta1\nkind: CronJob\nspec:\n  schedule: {$eval: schedule}\n  jobTemplate:\n    spec:\n      template: {$eval: pod_manifest}\n```\n\nin the same way:\n\n```python\ncron_manifest = jsone.render(\n  YAML(typ=\"safe\").load(open('cron_manifest.yaml')),\n  {\n    'schedule': '0 2 * * *',\n    'pod_manifest': pod_manifest,\n  }\n)\n```\n\nThe resulting `cron_manifest` can be serialized to YAML again, and passed to `kubectl apply` to update your resources:\n\n```python\n# ruamel's YAML.dump() writes to a stream, so open the temp file in text mode\nwith NamedTemporaryFile(mode=\"w\") as ntf:\n  YAML(typ=\"safe\").dump(cron_manifest, ntf)\n  ntf.flush()\n  \n  os.system(f\"kubectl apply -f {ntf.name}\")\n```\n\nOf course, we updated our `inv` tooling to support gathering input from users about which jobs they wanted to deploy, and with which arguments.\nWe also provided helpers for the `main.py` dispatcher, so you could specify arguments like `yesterday` to a job and have it run over the previous day.\n\n## Inter-job Dependencies ##\n\nSoon enough, we had dozens of jobs, and they had inter-job dependencies.\nYou can only go so far by saying \"job A runs at 2 am, and usually finishes in an hour, so we'll run job B at 3:30 am\".\n\nIt was time, again, to re-examine existing popular open-source tooling, and we seriously considered [Argo](https://argoproj.github.io/argo/).\nBut again, some functionality was missing -- specifically, we got bit by [this bug](https://github.com/argoproj/argo/issues/703#issuecomment-494183536) preventing the templating of resources, which was a step back from being able to template any part of the manifest using `json-e`.\n\nInstead, we ended up creating two new 
concepts.\nThe first, a `workflow`, listed all the jobs that depended on each other, along with their dependency relationships, encoding a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph).\nIt's easy to parse a YAML list, and verify that it is indeed a DAG, with [networkx](https://networkx.github.io/documentation/stable/reference/algorithms/dag.html).\nCreating a workflow specification also gave us a place to keep track of standard job parameters, such as command-line options or resource requirements for a job.\n(As an aside, resource management for jobs was a constant chore, especially as the product was scaled up and data volumes increased.)\n\nThe second concept was a `dispatcher` job.\nInstead of actually doing ETL work, this type of job, when dispatched using a `CronJob` resource, would accept a workflow as an argument, and then dispatch all the jobs in that workflow definition.\nA job would not get dispatched until the jobs it depended on completed successfully.\nThis allowed us to schedule a workflow to run daily instead of scheduling a pile of jobs individually.\nMost of the code was re-cycled from the code already used by users locally to schedule their jobs -- the `dispatcher` was doing the same thing, just from *inside* Kubernetes.\n\n## Monitoring via Web UI ##\n\nAs the complexity of the ETL pipeline grew, we began encountering workflow failures.\nWe needed an audit log of which jobs ran for which days, so we could confirm that we were delivering the data we processed.\nWe also needed tooling to reliably re-process certain days of data, and keep track of those re-processing runs.\n\nBy this point, the ETL system we were building had run for a year without requiring any UI beyond the one provided by GKE and `kubectl`.\nHowever, to manage the complexity, it seemed like a UI was needed.\nI ended up building one using [firestore](https://cloud.google.com/firestore/), React, and [ag-grid](https://www.ag-grid.com/).\n\nPreviously, the `dispatcher` 
job that ran workflows was end-to-end responsible for all the jobs in the workflow.\nIt ran for as long as any job in the workflow was running.\nIf any of the jobs failed, the `dispatcher` would exit, and the subsequent jobs would not run.\nLikewise, if the `dispatcher` itself failed, remaining workflow jobs would be left orphaned.\n\nInstead, we turned the workflow `dispatcher` into something that merely manipulated the state in `firestore`.\nThe job would parse the workflow `.yaml` file into a DAG, and then create `firestore` entries for each job in the graph.\nThe `firestore` entry would include job parameters like command-line arguments and resource requests, as well as job dependency information.\nJobs at the head of the DAG would be placed into a `SCHEDULED` state, while jobs that were dependent on other jobs were placed into a `BLOCKED` state.\n\nAn always-running K8s service called `scheduler` would subscribe to `firestore` updates and take action when jobs were created or changed state.\nFor instance, if jobs were in the `SCHEDULED` status, the `scheduler` would create pods for those jobs via the K8s API, and then mark them as `RUNNING`.\nIf a task finished (by marking itself as `COMPLETED`), the `scheduler` would notice, and clean up the completed pods.\nIt would also check if any jobs were `BLOCKED` waiting on the completed job, make sure all their dependencies had completed, and place them into the `SCHEDULED` state.\nIf a job marked itself as `FAILED` (via a catch-all exception handler), we had the option to track retries and re-schedule the job.\n\nBecause the `scheduler` became the only thing that interacted with the K8s API, it made building user-facing tooling easier.\nThose tools merely had to manipulate the DB state in `firestore`.\nThis enabled less technical users, without `gcloud` or `kubectl` permissions, to create, terminate, and restart jobs and workflows, and to monitor their progress.\n\nFrom what I can tell, components like the `scheduler` are often built 
using K8s [operators](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) and [custom resources](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).\nHowever, we found that just running a service with permissions to manage pods is sufficient.\nThis avoids having to dive too deep into K8s internals, beyond the basic API calls necessary to create and remove pods and check on their status.\n\n## Parallelism ##\n\nSome jobs in our system were trivially parallelizable, e.g. because they processed a single sensor's data in isolation.\nThe system of using the `scheduler` to run additional jobs unlocked infinite parallelism inside jobs.\nFor instance, we could schedule a job like `ProcessAllSensors`.\nThis job would first list all sensors active during the `start`/`end` interval, and then could create a child job in `firestore` for each sensor.\nCreating child jobs was as simple as writing a job entry into `firestore`, with the ID of the sensor to process.\nParallelism was limited only by the auto-scaling constraints on the K8s cluster.\n\nI created an abstraction called `TaskPoolExecutor`, based on the existing [`ThreadPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor).\nEach job submitted to the `executor` would run in a different K8s pod.\nNot only did this make data processing much faster, it made it more resilient, too.\nPreviously, if a particular sensor failed in its processing, restarting that processing was complicated.\nIn the new system, the existing retry system baked into the `scheduler` could retry individual sensors, without the parent job even knowing about it.\n\n## Takeaway ##\n\nMy focus at Aclima was on business objectives -- making sure data is delivered, data scientists and other technicians are productive, and we can manage our fleets of vehicles and devices.\nIn 18 months, I was able to build an infinitely-auto-scalable, reliable parallel job scheduling 
system and UI accessible to non-technical users.\nI was able to build this one step at a time, from \"how do I regularly run one cron job\" all the way to \"how do I parallelize and monitor a run of a job graph over a year of high-precision sensor data\".\n\nI think this is a tribute to the power of the abstractions that Kubernetes provides.\nIt's an infinite pile of compute, and it's pretty easy to utilize it.\nThis experience has sold me on the premise, and I would definitely use Kubernetes for other projects again.\n\n## Do Differentlys ##\n\nThe scheduler and all the code that interacts with K8s was written in Python, mostly because the data processing code was also written in Python.\nHowever, as soon as I wanted to add a UI, I had to begin writing JavaScript.\nThis meant I had to share at least some data structures -- for instance, the structure of a `job` in firestore -- between the two languages.\nIn the future, if I'm writing a UI, or even a CLI, I will strongly consider whether I should just write it in JS to begin with.\n",
            "url": "https://igor.moomers.org/posts/building-etl-kubernetes",
            "title": "Kubernetes for ETL",
            "date_modified": "2020-09-10T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/aws-mfa-cli-direnv",
            "content_html": "\nYou might already be using multi-factor authentication (MFA) for logins to your AWS account.\nThis will cause AWS to prompt you for your MFA token when you log in via the web console.\nHowever, if you use AWS via command-line tools (e.g., `terraform` or `aws s3`), you might have issued yourself access keys.\nThose are single-factor, and if they leak, anyone on the internet can use them to do horrible things to your account.\n\nWe can make your admin AWS accounts safer by requiring MFA, even for API requests.\nFirst, put your account, and the account of all other admins in your AWS account, in a group like `AdminMFA`.\nThis group should have a policy that looks like this:\n\n```json\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"*\"\n      ],\n      \"Resource\": \"*\",\n      \"Condition\": {\n        \"Bool\": {\n          \"aws:MultiFactorAuthPresent\": \"true\"\n        }\n      }\n    }\n  ]\n}\n```\n\n## Direnv Config ##\n\nNow, you'll need a mechanism to authenticate via your MFA token, get a session token, and put that session token into your environment.\nI do this using [`direnv`](https://direnv.net/).\nI've only recently learned about `direnv`, and I'm already using it, in combination with [`asdf`](https://asdf-vm.com/#/core-manage-asdf-vm), to replace [`rbenv`](https://github.com/rbenv/rbenv), [`pyenv`](https://github.com/pyenv/pyenv), and [`nodenv`](https://github.com/nodenv/nodenv).\n`direnv` and `asdf` setup is beyond the scope of this post, but you can check out [my dotfiles repo](https://github.com/igor47/dotfiles) to get an idea of how I have it configured.\n\nHere's my configuration in a `.envrc` file of a [terraform](https://www.terraform.io/) repo for a project hosted on AWS:\n\n```bash\nuse asdf\n\nexport MASTER_AWS_ACCESS_KEY_ID=AKIA<redacted>\nexport MASTER_AWS_SECRET_ACCESS_KEY=<redacted>\nexport 
AWS_MFA_ARN=arn:aws:iam::<redacted>:mfa/igor\nexport AWS_SESSION_FILE=\"${HOME}/.config/aws/session-${MASTER_AWS_ACCESS_KEY_ID}\"\n\nwatch_file $AWS_SESSION_FILE\ndirenv_load ~/bin/aws_load_session\n```\n\nThis file causes 4 environment variables to be exported into my environment whenever I `cd` into this repo's directory.\nThe `MASTER_AWS_ACCESS_KEY_ID` and `MASTER_AWS_SECRET_ACCESS_KEY` are just the access key ID and key that I created for my account via IAM.\nI've prefixed their usual environment variable names with `MASTER` to distinguish them from the session-specific keys created by authenticating with MFA.\nThe `AWS_MFA_ARN` variable contains the ID of my MFA token.\nYou can get this from [your security credentials page](https://console.aws.amazon.com/iam/home#/security_credentials), under the `Multi-factor authentication (MFA)` section.\nFinally, the `AWS_SESSION_FILE` variable will keep track of where my MFA session is stored in my filesystem.\n\nThe next two lines handle reloading the MFA session.\nI've told `direnv` to reload my local environment whenever the contents of the file at `$AWS_SESSION_FILE` change.\nNext, we use `direnv_load` (from the [direnv stdlib](https://direnv.net/man/direnv-stdlib.1.html)) to load the environment exported by my `aws_load_session` script.\n\n## Session-management Scripts ##\n\nI have two custom scripts to manage the MFA session.\nThe first is `aws_get_session`, and it's responsible for prompting me for my MFA token, creating an MFA session, and storing it into the `AWS_SESSION_FILE`.\nI run this script whenever my MFA session expires.\nHere's the script:\n\n```bash\n#!/bin/bash\n\nTOKEN=$1\nshift\n\nif [[ -z $TOKEN ]]; then\n    echo \"Usage: aws_get_session <mfa token value>\"\n    exit 1\nfi\n\nset -u\n\nmkdir -p `dirname ${AWS_SESSION_FILE}`\nunset AWS_SESSION_TOKEN\n\nAWS_ACCESS_KEY_ID=${MASTER_AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY=${MASTER_AWS_SECRET_ACCESS_KEY} aws sts get-session-token --serial-number 
$AWS_MFA_ARN --token-code ${TOKEN} > ${AWS_SESSION_FILE}\necho \"saved session info to ${AWS_SESSION_FILE}\"\n```\n\nThe other script, `aws_load_session`, loads the MFA session into my environment.\nIt's run by `direnv`, whenever the `AWS_SESSION_FILE` changes.\nHere's the script:\n\n```bash\n#!/bin/bash\n\nset -u\n\nif [[ ! -f ${AWS_SESSION_FILE} ]]; then\n  echo 'No session found; did you run `aws_get_session <mfa token>` ?'\n  exit 1\nfi\n\nexport AWS_ACCESS_KEY_ID=`cat ${AWS_SESSION_FILE} | jq --raw-output .Credentials.AccessKeyId`\nexport AWS_SECRET_ACCESS_KEY=`cat ${AWS_SESSION_FILE} | jq --raw-output .Credentials.SecretAccessKey`\nexport AWS_SESSION_TOKEN=`cat ${AWS_SESSION_FILE} | jq --raw-output .Credentials.SessionToken`\ndirenv dump\n```\n\nBoth of these scripts depend on having `aws` and `jq` installed and in your `PATH`.\n\n## Example Session ##\n\nHere's how this looks in real use, with a terraform repo that stores its state in AWS S3.\n\n```bash\nigor47@fortress:~/repos/terraform/roots/prod {master} $ terraform plan\n\nError: error using credentials to get account ID: error calling sts:GetCallerIdentity: ExpiredToken: The security token included in the request is expired\n\tstatus code: 403, request id: abfd729b-4dad-41f4-857f-2539170f68a9\n\n\nigor47@fortress:~/repos/terraform/roots/prod {master} $ aws_get_session 123456\nsaved session info to /home/igor47/.config/aws/session-AKIA<redacted>\ndirenv: loading ~/repos/terraform/.envrc\ndirenv: using asdf\ndirenv: loading ~/.asdf/installs/direnv/2.21.2/env/733966593-20565860-1008169379-2914714444\ndirenv: using asdf python 2.7.18\ndirenv: using asdf python 3.8.3\ndirenv: using asdf nodejs 12.13.1\ndirenv: using asdf ruby 2.7.1\ndirenv: using asdf direnv 2.21.2\ndirenv: using asdf terraform 0.12.29\ndirenv: export +AWS_ACCESS_KEY_ID +AWS_MFA_ARN +AWS_SECRET_ACCESS_KEY +AWS_SESSION_FILE +DD_API_KEY +DD_APP_KEY +MASTER_AWS_ACCESS_KEY_ID 
+MASTER_AWS_SECRET_ACCESS_KEY +NPM_CONFIG_PREFIX +RUBYLIB ~AWS_SESSION_TOKEN ~PATH\n\nigor47@fortress:~/repos/terraform/roots/prod {master} $ terraform plan\nRefreshing Terraform state in-memory prior to plan...\nThe refreshed state will be used to calculate this plan, but will not be\npersisted to local or remote state storage.\n```\n\nHere, `terraform plan` fails because my MFA session has expired.\nI re-run `aws_get_session` to update my `AWS_SESSION_FILE`.\n`direnv` notices that the file has been updated, and reloads the environment.\nI can then continue using `terraform` as normal.\nAs a bonus, in any other shell, the session will also be re-loaded automatically whenever I get a new prompt.\n",
            "url": "https://igor.moomers.org/posts/aws-mfa-cli-direnv",
            "title": "AWS MFA on the CLI with `direnv`",
            "date_modified": "2020-08-28T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/website-scalability",
            "content_html": "\nWhen I was [at Airbnb](https://igor.moomers.org/thoughts-on-leaving-airbnb), I learned a lot about how to scale a website.\nI've recently been working in that space again, helping other websites to scale.\nI figured it might be useful to write down some of what I've learned in this space.\nThis is useful for me, to clarify my thinking, but is also useful for collaboration.\nIf my colleagues understand how I think about scaling, it would help set the context for what I'm working on, and why.\n\n## A Basic Web App ##\n\nTo think through various scaling scenarios, let's imagine a basic hello-world web app.\nWhen you come to the site, it renders an HTML page containing something like `<h1>Hello, World!</h1>` and sends it back to your browser, which displays it.\nHow does this app scale?\n\n### Caching ###\n\nThis initial version of the app is quite static, and you could utilize caching to help you scale.\nCaching here would involve routing requests to your app through a CDN, like Akamai or Cloudflare or Cloudfront.\nThe CDN would keep a copy of your “Hello World”, and when visitors ask for it, it would come from the CDN’s servers, not your server.\n\nSuppose your app, instead of printing “Hello World”, printed `“Hello World, today is <day of week>”`.\nThis is still almost entirely static – the text only changes once a day.\nYou will need to carefully configure cache-control headers, so that the CDN mostly serves the file from its servers, but will occasionally come back to your servers to retrieve an updated version of the page.\n\n### Horizontal Scaling ###\n\nSuppose your app prints out `“Hello, World! 
It's <hour>:<minute>:<second> where I’m at.”`\nAlso, assume you have to render this server-side (as a client-side app, you could do the rendering in JS, and then this app just becomes the perfectly-cached one from the section above).\n\nThis app would be perfectly horizontally scalable.\nYou do this by putting a load balancer in front of your server, and then adding more, identical web servers as needed.\n\nIf you run a single-threaded web server, it can only serve one visitor at a time.\nWhen a second visitor shows up while your server is busy rendering “Hello, World” for the first visitor, the second visitor has to wait – to get into a queue.\nIf you have a lot of visitors showing up at the same time, the queue will get quite long – and then some of those visitors will give up before seeing “Hello, World”.\nSome of them give up subjectively, because the page isn’t doing anything, and for the really patient ones their browser will give up for them, eventually.\n\nIf you were running your single-threaded web server on a 16-CPU machine, or if the reason it took 200ms to serve “Hello, World” is because you called `sleep(0.196)` in your web server code, your server would not *look* overloaded.\nYou wouldn’t see excessive CPU or memory or IO usage.\nThe only way to tell that visitors are giving up is by looking at metrics, such as queue size, timed-out connections, number of requests at the load balancer vs. 
your web app, etc.\n\nIf you looked at these metrics, and discovered that you were in fact failing to greet a bunch of visitors, then you might engage in performance engineering.\nYou would say, “Why does it take 200ms to serve this page?”\nIf it took less time, we'd be able to serve more visitors total.\nThis might lead you to finding the `sleep(0.196)`, or switching to a more performant programming language or architecture.\nAlternatively, you might realize that you have a bunch of under-utilized resources on your server – say, 15 additional CPUs – and then you might use a threaded or forked web server to serve more visitors concurrently.\n\nOR – you might not do any of that.\nInstead, you might just decide, since your app is so horizontally scalable, to launch a bunch more web servers behind your load balancer.\nThe end result would be the same – more satisfied visitors to your site.\nBut you might end up paying more money for all those extra servers.\nHowever, you would save all the time and money you spend combing through the code looking for the `sleep`, or tuning your server software for concurrency.\nThis is a relevant trade-off, and should be considered when you want to scale your web app.\n\n### State and Vertical Scaling ###\n\nSuppose that when a visitor came to your page, you would log their IP address and the time of their visit.\nThen, you would display either “Hello, visitor, for the first time!” or “Hello, visitor, welcome back!”, depending on whether you had or had not previously seen their IP address.\n\nThis version of the web app is stateful – it retains state between requests -- and thus is no longer purely horizontally scalable.\nTo realize this, think about where you would store the state.\nYou could store it directly on the server which renders “Hello”.\nBut, if you scale by launching additional servers, then a visitor might get different servers on different requests, and would get the wrong “Hello” message and be sad.\n\nTypically, looking up some 
data in a database is much faster than rendering a complicated web page, and so you end up with a two-tier web infrastructure.\nThe first tier is a bunch of horizontally scalable, stateless web servers.\nThese may have been optimized to some extent, or else just scaled as needed, according to the necessary trade-off (discussed above).\nThe second tier is the database, which is heavily optimized to be as fast as possible.\nWeb optimizations are harder, because each website is different, while “retrieve some data” is pretty generic and can be iteratively optimized over time.\n\nHowever, the database server can no longer be easily scaled horizontally.\nOnce you are using all the memory and all the CPUs on your database, you would be up against a wall.\nIn this case, you might decide to scale vertically – by getting a bigger, badder, beefier database server.\n\nIn this scenario, your goal is to squeeze the most possible out of the resources on your database server.\nYou’ll want to watch for high CPU usage, running out of memory, or saturating your disk IO or network buses.\nThese would alert you that you need to either scale your database server, or reduce the load from your application.\n\n### Scaling through load shedding ###\n\nSuppose that your simple site now has two pages.\nThe first page is that same one that says “Hello, visitor, for the first time!” or “Hello, visitor, welcome back!”, while the second page shows a cute random kitten.\nThe kitten page scales horizontally (each server has its own kitten repository), but the “Hello” page still requires a DB read/write before you can render it.\n\nSuppose that your DB server is having trouble keeping up with all the demand for the “Hello” page.\nAs the DB server slows to a crawl, the “Hello” page takes longer and longer to render.\nThe kitten page would keep working quickly but, alas, visitors to the kitten-page are stuck in line behind the “Hello”-page visitors, and nobody is getting either “Hello”ed or 
kittened.\n\nKind of like how an escalator that fails becomes stairs, your kitten-and-hello site, when it fails, should become just a kitten-showing site, instead of no site at all.\nYou can do that if you quickly turn away the “Hello” visitors when you realize that the DB server is having problems.\nThis can be accomplished through automation, called “circuit breaking”, in the DB layer; as a bonus, it might even help the DB layer automatically recover from transient spikes.\n\nThis pattern is often useful with dependencies that scale neither horizontally nor vertically.\nFor instance, suppose that a page on your site will offer visitors the ability to enter their phone number, and then receive a “Hello, World” as a text message through Twilio.\nTwilio becomes a dependency, but you can neither add more horizontal Twilio capacity, nor increase the size of Twilio’s machines when they’re overloaded.\nIn fact, the only way you know that Twilio is overloaded is when your own site is totally down – visitors can’t get a “Hello” page and they can’t get a kitten, because there are too many people waiting for Twilio API calls that never complete.\nIf you used a circuit breaker around calls to Twilio, then you might be able to quickly show those visitors an error message, while the other visitors continue getting “Hello” and kittens.\n\nComing full circle, caching can also be a form of load shedding.\nYou may be able to serve visitors a cached version of the page which is not strictly correct (for instance, the time on the \"Hello\" page is stale), but still more useful than an error page.\n\n### Dependencies as bottlenecks ###\n\nLife was easy for your awesome “Hello, World” site, so long as you could just horizontally scale it.\nBut, as soon as you began introducing dependencies – DB servers, Twilio APIs – things got a whole lot more annoying.\nWhat’s clear is that any dependency ought to be treated with suspicion – these are what’ll getcha on big launch night.\n\nIn 
fact, only three things will break your site:\n\n* You yourself, by accidentally breaking it. This usually happens through a deploy.\n* Some malicious actors, by breaking it on purpose.\n* A non-horizontally-scalable dependency of your site, which breaks suddenly in response to increased traffic.\n\nIf you want your site to stay up, you have to engineer around all three failure causes.\n",
            "url": "https://igor.moomers.org/posts/website-scalability",
            "title": "Website Scalability",
            "date_modified": "2020-08-04T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/my-syncthing-setup",
            "content_html": "\n[Syncthing](https://syncthing.net/) is a file synchronization tool.\nI decided to try it after seeing [this post](https://tonsky.me/blog/syncthing/) on [hacker news](https://news.ycombinator.com/item?id=23537243).\nMany posts have been written about how awesome it is, and this is another one of those -- I'm really having fun with it.\n\n## Setup ##\n\nI mainly run `syncthing` on three devices -- my Android phone, my server, and my laptop.\nI ended up switching to it because I upgraded my laptop to Ubuntu 20.04, and this [broke Unison](https://unix.stackexchange.com/questions/583058/unison-and-version-compiler-conflicts/583377#583377), which I had previously been using to synchronize a few folders between my server and laptop.\nAfter several hours wasted didn't solve the issue, I gave up on it altogether, and I'm glad I did.\n\nSetup was straightforward on my Ubuntu laptop.\n\nOn the server, I had to do a few manual steps.\nAfter adding the apt rep and installation, I copied a systemd unit file from [here](https://computingforgeeks.com/how-to-install-and-use-syncthing-on-ubuntu-18-04/).\nI wasn't familiar with the `@.server` and the `User=%i` syntax of unit files, and I still can't find it documented anywhere.\nThis confused me for a bit, but eventually I got the file named properly, reloaded unit files with `systemctl daemon-reload`, and got the service running with `systemctl enable` and `systemctl start syncthing@igor47.service`.\n\nNext, I wanted to get into the configuration web GUI.\nI used a local tunnel:\n\n```bash\n$ ssh -L 4567:localhost:8384\n```\n\nI was then able to visit localhost:4567 in my local browser and configure `syncthing` on the server.\nI picked a custom port for the `syncthing` protocol, and punched a firewall hole for incoming connections on that port.\nAlso, I picked a custom port for the web GUI server, so I wouldn't conflict with other users who might want to enable their own `syncthing`.\nI set the web GUI 
to only listen on localhost, and then added a reverse proxy to this port from my web server config.\n\n```apacheconf\n  ProxyPass /syncthing/ http://localhost:12345/\n  ProxyPassReverse /syncthing/ http://localhost:12345/\n```\n\nOn my phone, I wanted `syncthing` to be able to write stuff onto the SD card.\nApparently, this is [not currently possible](https://github.com/syncthing/syncthing-android/wiki/Frequently-Asked-Questions#what-about-sd-card-support).\nI worked around it by granting Syncthing root permissions, which works for me on my rooted Lineage Android build.\nYMMV.\n\n## What do I use `syncthing` for? ##\n\n* removed google photos from my phone and allowed syncthing to sync photos\n  directly to my laptop. This is especially handy when I use my phone as a\n  scanner (to take photos of documents for archival), since I then immediately\n  have them available for email. It's nice to be off Google photos -- one step\n  closer to a google-free life!\n* syncing my documents folder between laptop and server\n* local cache of music. I previously used\n  [dsub](https://f-droid.org/en/packages/github.daneren2005.dsub/) to play my\n  music collection, and occasionally had to fight its cache system to convince\n  it that I really wanted it to cache my entire music library. Now, I just\n  `syncthing` my music collection onto the SD card in my phone, and then play\n  it with\n  [Pulsar](https://play.google.com/store/apps/details?id=com.rhmsoft.pulsar&hl=en)\n* I use a text-based email reader ([mutt](http://www.mutt.org/)) which I access\n  while SSHed into my server. Dealing with attachments can be annoying.\n  Previously, I would save them to a web scratch folder and open them in a\n  browser. Now, I simply keep a `syncthing`ed scratch folder and throw them\n  into there -- they're immediately accessible on my laptop.\n",
            "url": "https://igor.moomers.org/posts/my-syncthing-setup",
            "title": "Syncthing",
            "date_modified": "2020-07-25T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/embedded-system-ssh-tunnel",
            "content_html": "\nYou install [raspbian](https://www.raspberrypi.org/downloads/raspbian/) on a brand-new [Raspberry Pi](https://amzn.to/2GMdnjA).\nWhen you plug it into power and the ethernet jack, it's online, but how do YOU get into it?\n\nOver the years I've resorted to:\n* giving my Pi a static IP -- which breaks when I put it on a different network\n* scanning the network with `nmap`\n* running a DHCP server with `netmasq` on my laptop's ethernet port (probably a USB one) and then sharing my wireless connection with the Pi to get it online\n\nRecently, I decided I'd like to just get my Pi online and have it open a reverse-tunnel to itself.\nI found a few guides to do this, but none quite put all the pieces together.\nThere is even a [paid service](https://www.pitunnel.com/) to do this!\n\nHowever, this is actually quite easy.\nI put the script necessary to do this, plus the instructions, in [this repo](https://github.com/igor47/pitunnel).\nHope it helps!\n",
            "url": "https://igor.moomers.org/posts/embedded-system-ssh-tunnel",
            "title": "Reliable SSH Tunnel for Raspberry Pi",
            "date_modified": "2019-08-03T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/differences-in-environmentalism",
            "content_html": "\nI've been on a climate-change book-reading spree lately.\nSince March, I've read:\n* [The Weather Makers](https://amzn.to/2Q4dla5)\n* [An Inconvenient Sequel](https://amzn.to/2W3uJBs)\n* [Drawdown](https://amzn.to/2Hi7L1i)\n* [The Uninhabitable Earth](https://amzn.to/2Q5uiB1)\n* [Climate: A New Story](https://amzn.to/2HfKT2e)\n* [Falter](https://amzn.to/2JDLnRo)\n\nThat's not even counting books that are a bit about climate change.\nFor example:\n\n* [Braiding Sweetgrass](https://amzn.to/2HgGYlK)\n* [Parable of the Sower](https://amzn.to/2HjUqWh) (particularly scary and prescient)\n\nMy goal has been to find a book I can recommend as a book-club book for [Spaceship Earth](https://spaceshipearth.org), but I haven't succeeded yet.\nSome books, like *Uninhabitable Earth*, are too gloomy and [disempowering](https://igor.moomers.org/individual-action-and-climate).\nSome, like *Drawdown* or *The Weather Makers*, are too dry.\nSome, like *Inconvenient Sequel*, are too rah-rah and sound like an infomercial (there are several section of *Inconvenient Sequel* that I would like to excerpt and redistribute, though).\n*Falter* starts out strong but manages to be both too unfocused, too gloomy, and too cheery at the same time.\n\nA book that I really struggled with, and one that I've been cautiously recommending to a few select folks, is *Climate: A New Story*.\nFor most people in my community, it's a little too... umm... 
maybe [woo](https://rationalwiki.org/wiki/Woo)?\nOr anyway I can imagine that it would create cognitive dissonance.\nIt definitely did for me, which I enjoyed but I recognize that others might not enjoy.\n\nListening to *Falter* today, I came across a section that's almost diametrically opposed to the *Eisenstein* book.\nIn this section, Bill McKibben is gushing about solar panels, and how they are going to transform Africa.\nHe follows a salesperson from a solar company as he goes to remote villages to try to sell the solar panels:\n\n> Fossouo was born in Cameroon and went to school in Paris, but his real education seems to have come in the seven summers he spent in the United States selling books for Southwestern Publishing, a Nashville-based titan of door-to-door marketing. (Rick Perry is another alum; ditto Ken Starr.) “I did Los Angeles for years,” he said. “‘Hi, my name is Max. I’m a crazy college student from France, and I’m helping families with their kids’ education. I’ve been talking to your neighbors A, B, and C, and I’d like to talk to you. Do you have a place where I can come in and sit down?’”\n>\n> All selling, he insists, is the same: “It starts with a person understanding they have a problem. Someone might live in the dark but not understand it’s a problem. So, you have to show them. And then you have to create a sense of urgency to spend the money to solve the problem now.”\n>\n> …\n> This prospect is a farmer and a schoolteacher, and we settle down in his classroom, which has a few low desks with slates—literal shards of slate—resting on top. Max quickly figures out that the man has two wives, and he starts sprinkling their names liberally through the conversation. “There’s no pressure. It’s okay. I don’t want to sell you anything,” he says, as they move through the steps familiar to anyone who’s seen an infomercial.\n> \n> … \n> The customer is resistant, but Max tries angle after angle. “You have to think big here. 
When I talked to your chief, he said, ‘Don’t think small.’ If your kid could see the news on TV, he might say, ‘I, too, could be president.’”\n> “This is great,” the man says. “I know you’re trying to help us. I just don’t have the money. Life is hard, things are expensive, sometimes we’re hungry.”\n> Max nods, helpful. “What if I gave you a way to pay for it, so the dollar wouldn’t even come from your pocket. If you get a system, people will pay you to charge their phones. Or, if you had a TV, you could charge people to come watch the football games.”\n> “I couldn’t charge a person for coming in to watch a game,” the man says. “We’re all one big family. If someone is wealthy enough to have a TV, everyone is welcome to it.”\n\nIt was super-interesting to read this section after the following section, from *Climate: A New Story*.\nI think comparing them will give you a sense for the different perspectives:\n\n> Economic growth means the growth in goods and services exchanged for money. Therefore, a remote village in India or a traditional tribal area in Brazil presents a big growth opportunity, because the people there barely pay for anything. They grow or forage their own food. They build their own houses. They use traditional healing methods to treat their sick. They make their own music and drama. Imagine the development expert goes there and says, “What a tremendous market opportunity! These backward people grow their own food—they could buy it instead. They cook their own food too—restaurants and supermarket delis could do it for them much more efficiently. The air is full of song—they could buy entertainment instead. The children play with each other for free—they could enroll in day care. They accompany adults learning traditional skills—this society could pay for schooling. When a house burns down, the community gets together to rebuild it—if we can unravel those ties of mutual aid, there’s a big market for insurance. 
Everyone has a strong sense of social identity, a strong sense of belonging—they could buy brand name products instead. Everyone is joyful and content—they could be buying a semblance of that through legal and illegal drugs and other forms of consumption.”\n>\n> Okay, I’m getting a bit dizzy with visions of riches, but you get the idea. The question is: how are these people going to pay for all that? Easy. They earn money by converting local natural resources and their own labor into commodities. The rainforest becomes a palm oil plantation. The mountain becomes a strip mine. The river becomes a hydroelectric plant. The population abandons their traditional ways and goes to work in the money economy. A few become doctors, lawyers, and engineers. The rest migrate to the slums.\n>\n> In a nutshell, this is the process called “development.” It is what development loans have funded for more than half a century. It accompanies an ideology that says that money equates with well-being, that development along the model of the West is a good thing (or an inevitable thing), that a high-tech life is superior to a life close to nature. These assumptions are difficult to refute using logical arguments. Usually, shedding them requires spending time in less developed cultures, witnessing the joy and depth of aliveness there, and seeing their beauty erode as they modernize.\n\nThere's definitely more to say about this, but I wanted to note this down while it's in my mind.\n",
            "url": "https://igor.moomers.org/posts/differences-in-environmentalism",
            "title": "Different Approaches to Environmentalism",
            "date_modified": "2019-05-13T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/individual-action-and-climate",
            "content_html": "\nCan your individual action make a difference for climate change?\nOften, there's little room for such action in the climate change discourse.\nTake Project Drawdown, which aims to rank the best climate-friendly interventions available.\nSome (such as their #1 intervention, \"Refrigerant Management\"), are targeted at small groups of specialist in a specialized industry.\nOthers, such as #3 -- Reduce Food Waste -- seem more approachable for individuals looking to make a difference.\nBut even there, the recommendations are not actionable for most people:\n\n> There are numerous and varied ways to address key waste points.\n> In lower-income countries, improving infrastructure for storage, processing, and transportation is essential.\n> In higher-income regions, major interventions are needed at the retail and consumer levels.\n> National food-waste targets and policies can encourage widespread change.\n\nIt seems that to help, you should either be a grocery magnate or a politician.\nFailing that, you might try to lobby or influence your local politicians or [greengrocers](http://openproduce.org/#contact).\nThis view is well-expressed by David Wallace-Wells in his best seller \"Uninhabitable Earth\":\n\n> …the climate calculus is such that individual lifestyle choices do not add up to much, unless they are scaled by politics.\n\nLater on:\n\n> accusations of individual irresponsibility were a kind of weaponized red herring, as they often are in communities reckoning with the onset of climate pain.\n> We frequently choose to obsess over personal consumption, in part because it is within our control and in part as a very contemporary form of virtue signaling.\n> But ultimately those choices are, in almost all cases, trivial contributors, ones that blind us to the more important forces.\n\nWhat are the \"more important forces\"?\nPolitics:\n\n> Eating organic is nice, in other words, but if your goal is to save the climate your vote is much 
more important.\n\nWallace-Wells is particularly dismissive of individual action, even calling it \"virtue signaling\".\nBill McKibben, founder of 350.org, expressed a more charitable version of this view on a [recent episode of KQED Forum](https://www.kqed.org/forum/2010101870787/bill-mckibben-warns-of-dire-consequences-of-unchecked-climate-change), right here in the Bay Area.\nSeveral locals called in to the show to ask how they can help.\nA typical caller is Jenny from Petaluma (around 31:30 in the show):\n\n> I'm willing to upend my life to be a part of really solving this problem in my own small way.\n> I'd love to hear how to do that sensibly.\n\nMcKibben responds in [a typical way](https://www.youtube.com/watch?v=DYLWZPFEWTw):\n\n> There are a number of individual actions that are useful.\n> Eating lower on the food chain.\n> Putting solar panels all over your own roof.\n> Figuring out how to get around on public transit.\n> Not jumping on an airplane just because you want to get to some place that's a little warmer than the place you are now.\n> But, let me add this caution.\n> Climate change is a math problem, and at this late stage in the game you can't make the math work anymore one Tesla at a time, one vegan meal at a time.\n> My house is covered with solar panels, I'm proud of them, I don't try to fool myself that this is how we're going to stop climate change.\n> The most important thing an individual can do is be a little less of an individual and join together with others in movements of a size enough to make a difference.\n\nIf you take McKibben's advice and visit the [Bay Area 350.org website](https://350bayarea.org/), their actions -- writing letters, organizing rallies, and convincing local governments to declare a state of emergency -- are all oriented around politics.\nThe biggest call to action on their page is the Donate button.\n\n## What About Money? 
##\n\nI live in the Bay Area, where [one out of 11,000 people is a billionaire](https://www.vox.com/recode/2019/5/9/18537122/billionaire-study-wealthx-san-francisco) and [everyone is really busy all the time](https://thebolditalic.com/why-are-san-franciscans-so-goddamn-busy-all-the-time-the-bold-italic-san-francisco-2e15a498d750).\nMaybe one way that these rich, busy people can help is by throwing a few bucks towards a cause?\n\nI saw an extreme version of this approach [articulated on the SSC subreddit recently](https://www.reddit.com/r/slatestarcodex/comments/bm3j6x/how_to_effectively_buy_carbon_offsets/emtimk7?utm_source=share&utm_medium=web2x):\nThe OP is asking about carbon offsets, and another poster expresses skepticism that those are effective.\nThe OP replies:\n\n> I agree it's not the systemic solution, but for now I'm just looking for a personal moral offset.\n\nIn today's world (`/me shakes fist at the kids on their escooters`), where most life concerns are outsourced, this kind of thinking makes sense.\nWhat if I just go about my life as normal, but I pay someone else to clean up the mess?\nThis is the diametric opposite of Jenny from Petaluma, who was willing to \"upend [her] life\" -- u/BistanderEffect is unwilling to change in almost any way, but is willing to spend a little money.\n\nIn the Bay Area, an even more pragmatic approach is available.\nFor instance, Malcolm Handley, founder of [Strong Atomics](https://strong-atomics.com/), once told me that he realized he personally knows several of the Bay's many billionaires.\nPerhaps the most effective way to help with climate change, he suggested, is to convince some of them to spend a bit of their money to fund, say, fusion reactors.\n\nIn a more cliche version of Bay Area thinking -- maybe we don't even have to spend any money at all?\nAfter all, brilliant people like Elon Musk are working on climate solutions, like a fancy electric car, that actually *make* money.\nAll we have to do is wait a 
little while, and The Market and technology will be our salvation.\n\n## I Disagree ##\n\nI've outlined some typical memes in our culture around climate change which I find fundamentally unsatisfying.\nPolitics is, of course, necessary, but it's also exhausting and it's very difficult to stay focused and motivated.\nI could write a separate blog post specifically on my views around politics as an infinite game of tug-of-war, but everyone pulling as hard as they can in their own direction has given us the current stalemate.\n\nThe several versions of \"someone else will do something\" -- be it greengrocers, farmers, refrigerant technicians, or Elon Musk -- are quite disempowering.\nSurely, if you believe that climate change is a big deal, just sitting back and doing nothing won't be the right course of action for you.\nThe most complicated argument to counter is the one around outsourcing the problem.\nShouldn't it be enough to just donate a few bucks to some organizations, and maybe buy some carbon offsets?\n\n## Living the Difference ##\n\nTwo facts:\n\n1. To stop the worst effects of climate change, we have to leave fossil fuels in the ground.\n2. 
There is no electric passenger air service.\n\nTaken together, these two facts imply that passenger air service is impossible to do in an ecologically friendly way right now.\nWe need to both plant a bunch of trees AND stop flying -- one does not excuse the other.\nIn the same way that you cannot \"pay\" for flights with trees planted in the ground, you cannot \"pay\" for climate change with money.\nWe definitely have to spend money to ameliorate climate change, for instance by building solar and wind farms.\nBut we cannot just pay the universe to take out our CO2 for us, the way we pay a plumber to fix a leak.\n\nThere are too many complicated ideas here for me to unpack all of them.\nFor instance, humans compare themselves to others, and evaluate their own status based on their relative status in their social circle.\nWhen we argue for carbon taxes, we might suspect that they would make certain activities -- like flying -- more expensive and therefore less commonly practiced.\nBut maybe if *everyone* had to fly less, it wouldn't be as big a deal?\nYou might still fly more than your friend Bob, and that's good enough.\n\nThis then starts involving complicated ideas of climate justice.\nWhat if we pass [high enough carbon taxes](https://www.jbs.cam.ac.uk/fileadmin/user_upload/research/workingpapers/wp1109.pdf) to double the cost of flights?\nDo the rich people still get to fly as much as they want, while the poor and middle class folks get even less access to flights?\nCarbon taxes would also raise the price of food -- do the rich still get to eat as much as they want while the poor starve in greater numbers?\nAdvocating for solutions like carbon taxes -- the means -- lets you avoid thinking about their consequences -- the actual ends we're pursuing, and what those look like.\n\nI think we should reverse our thinking and start with the end.\nWhat does a world where climate catastrophe has been averted look like?\nMaybe people fly less.\nMaybe we eat less meat and 
more plants.\nMaybe our energy is produced through renewable sources.\n\nWe don't have to wait for governments to force us to make these changes.\nWe can just start living as though the future were already here.\n\n## Enter the Spaceship ##\n\nI don't want you to keep doing whatever you want in the hopes that someone else will solve climate change.\nBut I also don't want you to decide that, so long as you're pure and holy, you've done your part.\nBill McKibben is not wrong to say that we need collective action, but there might be kinds of collective action that he hasn't anticipated.\n\nThis is where the idea of [Spaceship Earth](https://spaceshipearth.org) comes from.\nYou should be living as though we've already decided to stop pumping petroleum from the ground and instituted carbon taxes.\nBut you need to get everyone you know to start living like that, too.\nCarbon taxes are one way to get them to start living like carbon taxes already exist.\nBut Spaceship Earth is another way, and unlike passing carbon taxes, you can start playing Spaceship Earth *right now* (well, once we're done building it).\n\n",
            "url": "https://igor.moomers.org/posts/individual-action-and-climate",
            "title": "Individual Action and Climate Change",
            "date_modified": "2019-05-11T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/issue-journalism-platform",
            "content_html": "\nThe cover story on NYT today is headlined \"Profitable Giants like Amazon Pay $0 in Taxes, and Voters are Sick of It\".\nOkay, cool, that sounds interesting.\nI would like for corporations to pay their fair share of taxes.\nI read the article, and there are many stories of laid-off workers, of political strategy, Democrat vs. Republican priorities, and how to defeat Trump in 2020.\nBut what do *I* do about this issue?\nThe main takeaway seems to be to vote for Sanders in 2020, or perhaps more immediately, to go donate to his campaign.\nBut this is left as an exercise for the reader.\n\nI think this is a problem.\nIt seems like the goal of this article to is to get you to be vaguely informed, but we are in an information abundance age.\nIs this information *useful* to me?\nDoes it connect me to my peers?\nHow does it *make me feel*?\n\nAlso, suppose you really care about the unfairness of corporate taxation, or the plight of the Rohinga.\nHow do you actually follow the story?\nProbably, the Gray Lady would like me to just read the paper cover to cover every day in the hopes of spotting stories I care about, but ain't nobody got time for that.\n\nStories about horrible tragedies, like the genocide of the Rohinga in Myanmar, are are more extreme example of this problem.\nIf you read such a story, the main takeaway seems to be \"I feel terrible and the world is a terrible place.\"\nThere is rarely anything actionable to do after reading it, and if you really care there's no good way to follow up or stay informed.\n\n## An Idea\n\nWhat if all the news you read was oriented around action?\nFor instance, if you care about corporate taxes, you can subscribe to the corporate taxes \"issue\".\nThe issue would *only* ever get updated if there's a concrete action you can take on that issue, right at that moment.\nThe issue would have moderators, and they would do the work of screening out potential actions.\nIf they were convinced that an action 
is likely to make a dent in the issue, they would post it, and then you could participate right away.\n\nThis is a sort of news organization, but one with a very strong and clear editorial voice.\nHowever, I think this is fine -- I think most people would prefer a strong and clear editorial voice to feigned objectivity.\nIt's also a little like a subreddit, except without random distractions, and maybe a more limited role for discussion.\n\nMy current project, [Spaceship Earth](https://spaceshipearth.org), came out of this idea.\nI took a stab at trying to build the overall platform, but it was too big a project for me to tackle.\nPlus, when I reflected, I realized I would want to focus on the climate change issue anyway.\nFinally, Stacey and I had the additional insight that we don't need to wait for new actions you could take on the issue -- climate change already has a ton of actions you could take right now.\nAlthough, of course, for Spaceship Earth, we do plan to create \"one-off\" missions whenever something pops up, like a vote in congress, or in your state legislature.\n(We will need local moderators who can be on top of such developments).\n\nI think someone should build this.\nIt might well be me, if we manage to get Spaceship Earth up and running.\nBut if you're gonna build it and you want help, let me know!\n",
            "url": "https://igor.moomers.org/posts/issue-journalism-platform",
            "title": "Platform for Issue Journalism",
            "date_modified": "2019-05-01T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/mailman-behind-https",
            "content_html": "\nWhen [Let's Encrypt](https://letsencrypt.org/) became available, I moved most of the vHosts I run on this web server behind HTTPS.\nThis included my [Mailman](http://www.list.org/) web interface.\nHowever, this broke the admin interface for my mailing lists.\nEven though I changed the `DEFAULT_URL_PATTERN` in `/etc/mailman/mm_cfg.py` to `'https://%s/'`, the submit button on the admin interface still took me to `http://mailman.moomers.org`.\n\nI spent way too long debugging this, which is why I'm writing this post.\nIt turns out that the `InitVars()` function in `MailList.py` is only called once, when the list is created, and the resulting information is stored in the `config.pck` file for the list.\nBecause I created the lists over HTTP, back in the day, the url in `config.pck` was still `http://mailman.moomers.org` for all my old lists.\n\nTo fix this problem, I first wrote the correct url to a file:\n\n```bash\necho \"web_page_url = 'https://mailman.moomers.org/'\" > /tmp/newurl\n```\n\nI then ran the following little bash script to fix all my lists:\n\n```bash\nfor i in $(list_lists -b); do config_list -i /tmp/newurl -v $i; done\n```\n\nHopefully, if you're having the same problem as me (mailman's admin page still submits to HTTP instead of HTTPS), you might come across this page and save yourself some trouble!\n",
            "url": "https://igor.moomers.org/posts/mailman-behind-https",
            "title": "Mailman Behind HTTPS",
            "date_modified": "2018-02-23T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/my-projects-belden-spa",
            "content_html": "\nThe Spa was a project for [False Profit's](http://www.false-profit.com/) [Priceless](http://priceless.false-profit.com/) event in 2017.\nAs in past years, we (meaning myself, [Jered](https://github.com/jeredw) and [Abi](https://www.linkedin.com/in/abi-kelly-8698ab36/)) wanted to create a space where participants could interact in a quiet setting (no loud music) that wasn't someone's camp (so, there's opportunity for a chance encounter).\n\nThe Spa was a two-part project -- a Finish-style dry sauna, and an accompanying welcome space.\nThe concept for the welcome space was a fancy spa-type lobby -- think soft white lighting, comfy furniture, and cucumber water dispenser.\nWe ended up spending most of the time on the sauna, and the spa itself ended up being a last-minute cobbled-together afterthought.\n\n# The Sauna #\n\n![All set up](/images/sauna-complete.jpg)\n\nThis is a photo of the completed sauna, currently set up in my back yard.\nIt's a small wooden building, with a floor plan of 8' by 8'.\nThe construction is fairly standard, with the added constraint that the building is modular and can be disassembled, transported, and re-assembled on-site.\nIt is also meant to be installed in uneven terrain, and so should be somewhat-level-able.\n\nI spent much time agonizing over the floor.\nI wanted it to be unfinished wood, but it was going to get a lot of water and sweat spilled on it, and I wanted to be able to hose it off to clean it.\nI eventually settled on building a little patch of deck, and I chose untreated redwood for the material.\nRedwood is relatively inexpensive, rot and mildew-resistant, and looks and smells good.\n\n![Sauna deck](/images/sauna-deck.jpg)\n\nThis is a photo of the deck somewhat-finished (the floor boards are not yet nailed down here).\nI used 2x6s for the structural outside components.\nI raised them off the ground on short segments of 4x4, to allow for leveling and to create airflow underneath that would 
allow the boards to shed water and to dry.\n\n![Sauna deck feet](/images/sauna-deck-feet.jpg)\n\nI read a lot of advice on the internet to not use 1x4s for decking, since it's too flexy and feels unstable.\nI decided to use them anyway, because it's much less expensive and I was worried about weight (remember, this is meant to be moved to different parties).\nI used a fairly dense grid of 2x3s to give a rigid support for the deck:\n\n![Sauna deck support](/images/sauna-deck-support.jpg)\n\nThe walls are fairly conventionally framed out of 2x4s:\n\n![Sauna walls framed out](/images/sauna-walls-framed.jpg)\n\nI used [aluminum foil vapor barrier](https://superiorsaunas.com/collections/foil-vapor-barrier/products/aluminum-foil-vapor-barrier) on the inside face of the walls.\nOn top of the vapor barrier, I used [this cheap, thin cedar planking](https://www.homedepot.com/p/1-4-in-x-3-5-in-14-sq-ft-Western-Cedar-Planks-6-Pack-8203015/202106509).\nThis was the most expensive part of the sauna, and I agnozed for a long time over the material choice.\nIn the end, it works fairly well, however you can definitely tell when you're leaning against a part of the wall that's just cedar on top of vapor barrier -- there's a lot of give.\nIt's much nicer to lean against the wall in a place where there's a stud, and it's pretty easy to find the studs with your back.\n\nInside, the walls are insulated with denium insulation.\nOn the outside face, i used tyvek sheeting, and then the exterior walls are made of the thinest plywood I could find, painted with exterior-grade paint:\n\n![Sauna walls insulated](/images/sauna-walls-insulated.jpg)\n\nHere you can see the walls laid out on the ground, with the plywood, tyvek, and insulation.\nIn the next shot, you can see the walls set up with vapor barrier, before the cedar is installed.\n\n![Sauna walls before cedar](/images/sauna-walls-precedar.jpg)\n\nThe walls themselves just sit on the deck, like so:\n\n![Sauna walls on the 
deck](/images/sauna-walls-on-deck.jpg)\n\nTo keep them aligned, we drilled holes through the bottom plate of each wall and into the 2x6 redwood of the deck.\nWe then installed bolts, serving as pegs, through the wall bottom plates.\nTo seat the wall correctly, you have to maneuver the wall until the bolt pegs fall into their holes.\nIn some of the photos you can see the cabinet handles we added to the outside walls, to make this maneuvering possible.\nThe walls are heavy enough that this is an incredibly annoying, difficult, and dangerous task -- everyone always wants to hold the wall from underneath, but if you successfully align the pegs with the holes then the wall falls on your hand -- the worst part of the assembly process.\n\nThe structure that is formed when the walls sit with their pegs in the holes on the deck is already fairly rigid.\nHowever, to avoid torque on the 2x6s of the deck, we wanted to bolt the walls together, too.\nTo do that, we installed [tee nuts](http://amzn.to/2DYnRKc) inside the wall.\nAfter putting two walls onto the deck, we would bolt the corner together; you can see that if you look carefully at the corners in the next photo.\n\n![Sauna walls bolted together](/images/sauna-walls-bolted.jpg)\n![Sauna wall bolts](/images/sauna-wall-bolt-closeup.jpg)\n\nThe ceiling was made from 2\" structural foam.\nWe taped two 4'x8' pieces together and then sandwiched them -- cedar planking on the inside, and two 2x3s on top.\nWe trimmed the foam until it fit snugly on top of the walls, inside the exterior plywood.\nOver the summer, the sauna had no roof (our contingency for rain during an install was to throw a tarp over the whole thing).\nI installed some galvanized steel panels over the building just in time for the first real rain of the season.\n\n## Layout and Benches ##\n\nI drafted several potential layouts of the sauna on paper before building benches.\nThis is the layout we settled on:\n\n![Sauna layout](/images/sauna-layout.jpg)\n\nIn many 
saunas, the benches hang from the walls, but we needed flat-pack walls free of hardware and flat-pack benches.\nWe made the bench tops out of western red cedar 1x4s -- quite expensive.\n\n![Sauna bench tops](/images/sauna-bench-tops.jpg)\n\nThe structure beneath the cedar is just normal 2x4s.\nWeight is borne by rectangles made of 2x4s, which are bolted to the bench tops.\n\n![Sauna benches installed](/images/sauna-benches-installed.jpg)\n\n## The Stove ##\n\nI spent a lot of time deliberating over how to heat the sauna.\nOther saunas I've experienced in remote places (like at Burning Man) have been wet -- steam is manufactured using a giant propane burner and a pot of water, and piped indoors to heat the room.\nMost dry saunas are heated with electric stoves, but I wouldn't have the 220V hookup that's usually necessary.\nI thought about using propane indoors, but didn't want to risk carbon monoxide poisoning.\nI went through several fancy design iterations.\nOne particularly insane idea involved heating a metal plate inside the sauna from the outside with a propane torch.\n\nEventually, I decided to fabricate a wood-burning stove.\nI modeled mine on the principles of rocket mass heaters to ensure good draft through the room.\n\n![Sauna stove riser](/images/sauna-stove-riser.jpg)\n\nHere, you can see the J-shaped riser where the combustion happens.\nThis was fabricated from scrap 5x5 mild steel.\nThe riser was installed in a larger box, which is the surface that actually heats the sauna:\n\n![Sauna stove bottom](/images/sauna-stove-bottom.jpg)\n![Sauna stove heat exchanger](/images/sauna-stove-exchanger.jpg)\n\nThe inside of the barrel has some 1x1 tubing for structure, and the outside is made of 16ga plate.\nThe top plate of the box, which receives the brunt of the heat coming out of the exchanger, was made from a thick piece of plate.\nI initially tried to insulate the riser with a mix of Portland cement and perlite, but I wasn't seeing the rocketing I 
wanted.\nUsing [2\" ceramic fire blanket](http://amzn.to/2BjMWfB) created a much better effect.\n\n![Ceramic insulation around the riser](/images/sauna-stove-insulation.jpg)\n\nI created a 6\" diameter exhaust port by rolling the same 16ga plate.\nI used 3 90-degree elbows, two 4' long pieces, and a single 1' piece of single-wall stainless steel chimney pipe to vent to the outside.\nI used single-wall because I wanted the portion of the chimney inside the sauna to contribute to heating the room.\nAs an aside, finding single-wall chimney pipe in the Bay Area is quite difficult -- and it's quite slow and expensive to ship.\nI eventually got lucky with [London Fireplace](https://londonchimney.com/) in Mill Valley.\nHere's the stove undergoing testing in the back yard.\nYou can see it got quite warm (although we've since learned to get it much hotter with proper feeding).\n\n![The stove, assembled in the back yard](/images/sauna-stove-assembled.jpg)\n![Taking the stove's temperature](/images/sauna-stove-temperature.jpg)\n\nRunning a rocket stove is pretty different from running a conventional stove.\nIt eats fuel very quickly, so it must be constantly fed.\nThis is actually nice for an install at a party; if the operator wanders off or becomes incapacitated, the stove soon shuts itself off.\nNow that this sauna lives in my back yard, it's a bit of a chore to constantly run outside for more fuel.\n\nIt's also been a struggle finding the right fuel to burn.\nRocket stove communities talk about burning thin branches, but I don't have access to those in the city.\nAt Priceless, I mostly burned scraps from construction and [these fatwood firestarter sticks](http://amzn.to/2F04cIQ) (we went through the whole 25 pounds in a weekend).\nI tried burning fuel pellets, but those clump up on the bottom and there's not enough air flow for combustion.\nCurrently, I burn hardwood kindling I split with [a hatchet](http://amzn.to/2G5BM1M), and add a stick or two of fatwood to keep things 
burning hot enough for a good sweat.\n\n## Sauna Design Verdict ##\n\nOverall, the sauna works very well.\nWe use it several times a week in my back yard.\nThere are a few gotchas with the design that I would iterate on further.\n\nFirst, the stove.\nIt works very well and is fun to use, but it doesn't really have enough thermal mass to heat the room.\nAdding a bunch of river rocks on top helps a lot, but the top of the stove is flat, meaning I can't fit that many rocks.\nIn a v2, I would add a hopper on top to contain several layers of rocks.\n\nThe top of the stove gets so hot that water I pour onto it beads up and sputters off the surface.\nI would also extend the walls up a bit, to keep those water beads contained until they fully evaporate.\nAlso, the stove doesn't radiate very well -- if I didn't pour water onto the stove, the room would never get hot enough.\nFor v2, I would create a more heatsink-like surface, with more surface area.\nFinally, I'm not sure how long the stove will last.\n[Rocket stove forums](https://permies.com/f/260/rocket-mass-heaters) claim that the [high heat of the riser causes rapid oxidation and failure of the steel](https://permies.com/t/52544/metal-burn-tunnel-heat-riser).\nI regularly see the portion of the burn tunnel that extends past the barrel glow red, [meaning a temperature of at least 1200°F](https://en.wikipedia.org/wiki/Red_heat).\nI haven't yet seen any spalling inside the stove from these temperatures -- possibly because it's only in use about 4 to 6 hours a week -- but I'm not optimistic about the lifespan of the stove.\n\nThe decking in the room works well as a floor -- it's easy to clean -- but the gaps between floorboards create most of the draft inside the room.\nThere is a very strong temperature gradient between the floor and the upper bench.\nCovering the floor with lock-together foam squares would significantly improve insulation, but since the room gets hot enough I haven't bothered.\nHowever, better insulating the 
floor might reduce the intensity at which I run the stove, prolonging its lifespan.\n\nThe walls of the sauna are the biggest design failure of sauna v1.\nIt's easy to underestimate just how heavy a framed 2x4 and plywood wall is, especially when you have to load it onto a truck or carry it through a forest.\nAt 8' x 7', the walls are also not particularly convenient to transport.\nFor instance, box trucks are usually neither 7' tall nor 7' wide.\n(Aside: you'd think I'd have learned my lesson here after building [a boat that, at 16' wide, was wider than any boat launch in Chicagoland](http://boat.moomers.org)).\n\nAlso, the fasteners between the walls and the deck, and between the walls in the corners, didn't work all that well.\nThe wall pegs were extremely difficult to line up with their slots on the deck -- it's hard to adjust a 200# wall 1/2 inch to the left when you have no good way to hold it upright.\nI used a lot of hidden fasteners because I worried too much about the exterior appearance of the sauna.\nIn hindsight, nobody cares about that.\nIn a v2, I would make the walls entirely out of foam sheets -- two 4'x8' sheets per wall, just as we made the ceiling of the existing sauna.\nTransporting the ceiling was always a welcome reprieve for the build crew after transporting the walls.\nTo assemble a structure from the foam-and-cedar modules, I would use some kind of easily-visible exterior fasteners, maybe even just clamps.\n\n# Belden Spa #\n\nFor the structure of the spa, I originally envisioned an organic dome in tension, made from pencil rod.\n\n![Spa dome concept](/images/spa-dome-sketch.jpg)\n\nHowever, when Jered and I prototyped the design, it was unclear how to keep the tension from deforming the entire structure.\nIn the image below, you can see that the tensioned overhead X wants to turn the floor plan into an oval rather than a circle.\n\n![Spa dome prototype](/images/spa-dome-prototype.jpg)\n\nI'd still love to create a dome of organic shapes 
(in contrast to the traditional regular geodesic dome shape), but it might be easier at a place like Burning Man, where rebar in the ground can counteract the tension and create a rigid base for the rest of the structure.\n\nNext, we tried to build [a stardome](http://stardome.jp/index-en.html), but we had trouble sourcing the appropriate building materials.\nMaybe 6\" diameter bamboo is easy to come by in Japan, but not in the Bay Area.\nI did buy a bunch of thinner bamboo poles, and because we had them, we attempted to use them.\nWe prototyped a space made of several free-standing structures, arranged so as to enclose an area.\n\n![Bamboo pyramids](/images/spa-pyramids.jpg)\n\nIn the end, we decided that bamboo is not a good structural material.\nIt was too light, broke too easily, and bent too easily.\nIt would not support fabric in tension without deforming.\n\nAt this point, we were running out of time and the sauna was consuming too much design energy.\nWe decided to bring a bunch of fabric, some spools of paracord, a pile of 7' 2x3s (super-cheap at Home Depot), and a nail gun, and just improvise on the spot.\nWe ended up with a pretty nice structure, but as per usual I have no photos of my actual install on-site.\nIf anyone has photos of the Belden Spa, please contact me so I can put them on this page!\n",
            "url": "https://igor.moomers.org/posts/my-projects-belden-spa",
            "title": "The Belden Spa Summer 2017",
            "date_modified": "2018-01-20T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/thoughts-on-leaving-airbnb",
            "content_html": "\nApril 3rd, 2017 was my last day at Airbnb.\nThese last 4 ½ years were an intense, wild adventure, and a very important part of both my career and my life.\nAs I move on, I want to reflect on this experience while it is still fresh in my mind.\nSome of the things I want to focus on: what I was able to accomplish while at the company, what I think I could have done better, and my reasons for leaving.\n\n## So.... what would you say you did around here? ##\n\nI joined Airbnb in September of 2012.\nAt that time, the company was maybe 500 people, the product team around 70 people, and the engineering team around 40.\nI was recruited by [a very good friend](https://www.linkedin.com/in/raphaeltlee) who wanted someone to replace the [previous one-and-only infrastructure engineer](https://www.linkedin.com/in/jasondusek/).\n\nWhen I started, I joined the data infrastructure team run by [Flo Leibert](https://www.crunchbase.com/person/florian-leibert).\nI was interested in the Hadoop ecosystem, and felt that this technology was going to become increasingly important going forward.\nHowever, shortly after joining, I went through the first Airbnb Sysops training, which was recruiting people to join the volunteer on-call rotation.\nDuring this training, it became apparent that the production infrastructure had some serious unsolved issues, and was in need of a lot of attention.\nBy December of 2012, I moved from the data side of the infra to the production side -- specifically, the SRE team -- where I directed my attention for the remainder of my time at Airbnb.\n\n### Configuration Management ###\n\nWe began with configuration management.\nAt that time, Airbnb had [a mechanism for launching new instances](http://airbnb.io/cloud-maker/), but it was unclear how those instances would be configured.\nAlso, it was unclear how to make new instances and previously-existing instances the same, and no audit trail of configuration changes.\nWith [Martin 
Rhoads](https://www.linkedin.com/in/martin-rhoads-a63a0027/), we decided to introduce [Chef](https://www.chef.io/) to solve some of these problems.\n\nWe first tried a standard Chef-Server approach, but ran into annoying versioning and clobbering issues.\nShortly, we decided to convert to a Chef-Solo approach based around a monorepo.\nThe approach we settled on is documented in [this blog post on Chef at Airbnb](https://medium.com/airbnb-engineering/making-breakfast-chef-at-airbnb-8e74efff4707).\nIt remains in use at Airbnb today, and has also been adopted by several other companies.\n\nSince we had dumped Chef-Server, we now needed an inventory system -- ideally, one that supported custom metadata.\nI wrote a very simple proof-of-concept one called [optica](https://github.com/airbnb/optica), which (surprisingly) remains in-use today.\n(In fact, because optica is now queried from many places, it has become quite embedded, and any replacement would have to re-implement it's rudimentary API.)\n\nAlso, since we were no longer using `knife ec2`, we needed a tool for launching and bootstrapping instances.\nMartin wrote another proof-of-concept, called [stemcell](https://github.com/airbnb/stemcell).\nThis has been refactored several times to support more advanced features, but also continues to be in-use at Airbnb today.\nWhile it remains possible to use stemcell on an engineer's laptop (and indeed, this would be required to re-bootstrap the infrastructure in case of catastrophic failure), most engineers probably shouldn't have the AWS credentials to launch instances.\nInstead, engineers at Airbnb use stemcell though a web service UI.\nThe web interface helps avoid tedius command-line invocations, is responsible for authorization, and eases other common cluster management tasks (AZ balancing, scaling up/down, cluster-wide chef runs).\n\n### Service Discovery ###\n\nAround the same time as we were introducing configuration management, in the spring of 2013, we had an 
additional problem.\nThere was growing consensus that we couldn't (and shouldn't) write all of our code inside our Rails monolith.\nHowever, we didn't have the tooling to build an SOA.\nConfiguration management (how to configure the instances running individual services) was certainly part of the problem, but another part was connecting the services together.\n\nWe had already written some services in Java, using the Twitter Commons framework, which included a service discovery component.\nHowever, this service discovery had to be implemented in every service, and required ZK and Thrift bindings inside that service.\nA team was working on a NodeJS service for the mobile web version of the site, and NodeJS had neither of these available at the time.\n\nWe decided that we would abstract this problem away -- first, with [Synapse](https://github.com/airbnb/synapse) as a service discovery component, and then with [Nerve](https://github.com/airbnb/nerve) for service registration.\nThe entire system is called SmartStack, and the design is more comprehensively justified in the [SmartStack blog post](https://medium.com/airbnb-engineering/smartstack-service-discovery-in-the-cloud-4b8a080de619#.m0x2ks9ja).\n\nSmartStack was very easy to deploy incrementally via our new configuration management system.\nRegistering a service via `nerve` and making it available through `synapse`/`haproxy` required only configuration changes in the Chef monorepo.\nActual deployment of SmartStack involved merely changing where a service finds its dependent services, and also killing any retry or load balancing mechanisms (since these would now be handled by `haproxy`).\nBy the end of the summer of 2013, all of our services were communicating via HAProxy, and we were able to kill lots of Zookeeper and server-set-management code in the Rails monolith and other services.\nSmartStack also remains in use at Airbnb today.\n\n### Load Balancing ###\n\nOnce our services were talking internally through 
HAProxy, we wanted to bring the same approach to our upstream load balancing.\nAt the time, all traffic inbound to Airbnb went directly from an [ELB](https://aws.amazon.com/elasticloadbalancing/) to a Rails monolith instance, and managing the set of instances registered with the ELB was a manual process.\nInitially, we planned an ambitious service that would accept all incoming traffic and would be able to mutate it -- for instance, handling authentication and session management and setting authoritative headers for downstream services.\nHowever, we quickly learned that writing a proxy service that could handle Airbnb traffic, even in 2013, was nontrivial.\nAfter several attempts to deploy a Java version, we punted and instead deployed [Nginx](https://www.nginx.com/resources/wiki/).\n\nThe Nginx instances, collectively known as Charon, became our front-end load balancer.\nThe Charon instances were discovered by Akamai through DNS, where they were entered manually.\nAfter inbound traffic arrived at a Charon instance, it would be routed to the correct service based on request parameters -- most often the hostname in the request headers.\n`HAProxy` took traffic for a specific service (by port number on `localhost`) and load balanced it to the actual instances providing that service.\nOnce this system was deployed, I was able to kill our ELBs.\nIt was very convenient to have service routing be consistent throughout the stack -- an instance would receive traffic if and only if `nerve` was active on that instance, whether it was a \"backend\" or \"frontend\" service.\n\nThis system worked well enough until Spring 2016.\nAt that time, several problems arose.\nThe biggest was that all traffic bound for the Charon boxes was coming from Akamai, and Akamai was not doing a good job load-balancing between the Charon instances.\nSince some of these instances were receiving the lion's share of the traffic, and since `haproxy` is single-threaded, we were seeing traffic 
queueing due to high CPU usage on those instances.\nScaling the Charon cluster wasn't helping, since we would still end up with individual hot instances.\n\nAkamai claimed that [Route53](https://aws.amazon.com/route53/) [weighted resource sets](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted) were to blame, since they return only a single IP address every time a name is resolved.\nTo avoid Akamai internally caching a single IP, we switched to vanilla `A`-record sets, which return all the IP addresses for a name with each request.\nWe hoped that this would result in Akamai traffic being balanced between Charon nodes, but the approach did not work.\nEventually, we resorted to re-introducing ELB into our stack, this time as a way to load-balance between Charon instances.\nInsert yo-dawg joke about load balancers here.\n\nAs part of this project, we spent a lot of time manually managing DNS or ELB registration for Charon instances.\nTo ease this burden, I wrote a service called `Themis`, which read `nerve` entries from Zookeeper and then took action when the set of these entries changed.\nI wrote actions to manage ELB registration, or to create Route53 entries for either multi-IP `A` records or weighted record sets.\nAs a bonus, this made our stack fully consistent.\nNow, even a load balancer instance would receive traffic if and only if `nerve` was up on that instance.\nThis system remains in use at Airbnb today.\nAlas, I did not have a chance to open-source Themis before I left Airbnb; hopefully, someone at the company takes that on as a project.\n\n### Internal Load Balancing ###\n\nWhile Charon was handling our load balancing needs for production, we were starting to deploy a lot of internal services, too.\nTo make writing internal services easier, [Pierre Carrier](https://github.com/pcarrier) and I launched Dyno in the fall of 2013.\nThis was Nginx, configured just like Charon.\nIf an engineer marked a 
service as an internal web service, then `<service name>.dyno` took you to an instance of that service.\n\nDyno eventually added an authentication mechanism, so internal services didn't have to write their own authentication code.\nWhile the `.dyno` instances were initially manually entered into DNS, once Themis became available we allowed it to handle DNS registration for those boxes as well.\nToday, Airbnb engineers regularly interact with dozens of dyno services.\n\n### Monitoring ###\n\nIn the beginning of 2014, our systems monitoring was pretty spotty.\nAt that time, we were using [Scout](http://server-monitor.pingdom.com/) for instance monitoring, but monitoring was inconsistently available.\nAlso, Scout was a dead end for metrics -- it only supported the system-level stats that its agent was able to collect.\nAt the time, several cool monitoring SaaS companies were getting started, and I embarked on a project to evaluate our options.\n\nIn the end, I chose [DataDog](https://www.datadoghq.com/).\nA strong reason was DataDog's very good `haproxy` integration.\nThis integration allowed us to have metrics for how much traffic each service was getting, where this traffic was coming from, and the distribution of response sizes, result codes, and other interesting statistics.\nAnother reason was that the [DataDog agent](https://github.com/DataDog/dd-agent) accepted [StatsD metrics](http://docs.datadoghq.com/guides/dogstatsd/), so we could monitor instance statistics like CPU, memory, and other resource utilization alongside our own custom metrics.\nFurthermore, DataDog had [server-side CloudWatch scraping](http://docs.datadoghq.com/integrations/aws/), which meant we could see CloudWatch-specific information like RDS utilization stats alongside all other metrics (and avoid asking engineers to log into the AWS console just to see CloudWatch metrics).\nFinally, DataDog had a [very comprehensive API](http://docs.datadoghq.com/api/), so it seemed possible to do 
more automation as time went on.\n\nIn early 2014, I rolled out DataDog and removed Scout from all of our systems.\nI would be a primary point of contact for managing Airbnb's relationship with DataDog until my departure from the company.\nAs I was leaving Airbnb, there were teams contemplating what it would look like for us to run at least some of our own monitoring tools.\nHowever, on the whole the decision to use DataDog worked out.\nDespite some bumps, the product scaled well with Airbnb, and the company was very responsive in managing problems and rolling out new features.\nI strongly recommend DataDog.\n\nI also became a primary point of contact for anything monitoring-related at Airbnb.\nAfter rolling out the `dd-agent` to systems, I added [DataDog's StatsD client](https://github.com/DataDog/dogstatsd-ruby) to many of our applications.\nI encouraged other developers to liberally instrument their code with `statsd` calls.\nAdditionally, I ended up writing lots of internal documentation on monitoring best practices.\nI also developed a monitoring curriculum, originally for the SysOps group but later as a bootcamp class for all new hires.\nI encouraged engineers to formulate hypotheses about what could be causing issues, and then to use the available monitoring tools to test these hypotheses.\nSince it is impossible to formulate such hypotheses without at least some understanding of the overall infrastructure, my bootcamp class became a primer on both the Airbnb infrastructure as a whole and the monitoring tools that illuminate that infrastructure.\n\n### Alerting ###\n\nAfter migrating metric monitoring to DataDog, I was able to tackle alerting.\nI had several strong requirements.\nI wanted alerts to be defined automatically, so new hosts get alerts as they're spun up and alerts are cleaned up when hosts go away.\nI wanted the notifications to be automatically routed to the right people.\nFinally, I wanted alerts to be configuration-as-code, not created 
via manual clicking in the UI.\n\nFirst, I had to take a stab at the ownership issue.\nThis was a constant problem during my tenure at Airbnb, and we never really solved it.\nAs people moved around and teams formed and dissolved, systems would become orphaned, and the maintenance burden would fall on \"whoever cares most\".\nHowever, I at least made an initial system for assigning ownership, even if the data in that system was not always consistent.\nI stored the information in the Chef repo, inside the role files which defined instances, and made it available for querying via `optica`.\n\nNext, I created [Interferon](https://github.com/airbnb/interferon).\nThis project uses a Ruby DSL to programmatically create alerts.\nI chose a DSL because I wanted complicated filtering logic, and had found myself inventing mini-programming languages inside pure-data formats like JSON or YAML.\n\nAs an example of how Interferon was used, let's take CPU utilization.\nI was able to write a single alert file which specified a CPU utilization threshold.\nWhenever Interferon ran, it would pull a list of all hosts from inventory and make sure that each host had a CPU alert, cleaning up stale alerts for any terminated hosts.\nThe alerts would be routed to the owners for each host, and the alert file could be modified to explicitly filter out any systems where high CPU usage is expected.\nBecause all changes are code changes, the usual review process applies, so there are no surprises for new alerts or alerts suddenly going missing.\nAlso, writing alerts in a file encourages developers to write longer, more informative alert messages, and the DSL allows information about hosts to be encoded in the alert, making alerts more actionable.\n\nI gave [a talk about this work](https://www.usenix.org/conference/srecon15/program/presentation/serebryany) at SREcon15, which includes more details if you're interested.\nThere's also a [blog post about Interferon/the Alerts 
Framework](https://medium.com/airbnb-engineering/alerting-framework-at-airbnb-35ba48df894f) on the Airbnb blog.  \n\n### Product Work ###\n\nIn the middle of 2014, I was becoming burned out on SRE work.\nAlso, although I had been running systems at Airbnb, I hadn't done any work on the product -- I didn't even know how to work with Rails.\nTo change things up, I transitioned to a product team which was building an experimental cleaning integration for Airbnb.\nI worked with a front-end engineer for six months to build this product, and we launched it in several markets.\nHowever, in the end it wasn't viable and was shut down.\n\n### Developer Happiness ###\n\nIn early 2015, it was clear that the cleaning product I had been working on wasn't going to ship, and the team would dissolve.\nI was looking around for new problems to focus on, and they weren't hard to spot.\n\nShipping code to the Airbnb Rails monolith was becoming increasingly difficult.\nBuild and test times were increasing rapidly.\nSpurious test failure was a constant problem and required regular re-builds, further delaying shipping.\nThe build-test-deploy pipeline was unowned, meaning that whenever it broke it was up to whoever was most frustrated to fix it.\nOverall, the experience of being a product engineer was quite frustrating because of gaps in tooling, documentation, ownership, and communication.\n\nSo, when my product was finally terminated, [Topher Lin](https://github.com/clizzin) and I started the Airbnb Developer Happiness team.\nOur broad mandate was to work on whatever was causing the most frustration.\nOur initial top target was build times, but we envisioned tackling a wide range of issues around internal communication and tooling.\nTo understand our problem space and to get buy-in for our projects, we began conducting the Airbnb developer survey.\nI collected and analyzed the data, which showed widespread frustration with our tooling and infrastructure.\n\nI spent most of my remaining 
time at Airbnb working in this problem space.\nThe team Topher and I started ended up expanding to more than 20 engineers on at least 4 sub-teams.\nAlthough we never got time to work on many of the broader problems we initially envisioned tackling (like internal communication practices), the intersection of people and tooling is the area I remain most passionate about.\n\n### Build System ###\n\nThe Developer Happiness Team's initial target was slow build times, which were creeping into 30-minute-plus territory.\nAt the time, we were using [Solano](https://www.solanolabs.com/), a third-party Ruby testing platform, to run all commit-time tasks, including builds.\nWe had hacked building an artifact into this system as a fake test.\nWe were also using Solano to build non-Ruby projects, including all fat JARs from our Java monorepo.\nSolano was running on AMIs provided by the company, and we didn't understand the build environment, how to debug any problems or build failures, or how to control system dependencies for builds.\n\nWe decided that we would start by moving builds to our own hardware, where we could optimize the environment.\nSince we would end up with multiple systems performing build and test tasks, we decided to create a unified UI where all such tasks could be collected and visualized.\nI also began evaluating multiple build systems to replace Solano, with an eye towards one that supported arbitrary pipelines, enabling optimizations to the Java builds as well as the Ruby ones.\n\nA build system is just an executor which performs tasks in response to events, usually commit events.\nWe already had a system that fed all [webhook events](https://developer.github.com/webhooks/) from Github Enterprise into RabbitMQ, providing a convenient trigger.\nWe were already very familiar, too, with [Resque](https://github.com/resque/resque), a Ruby task executor for delayed or long-running tasks which we used throughout our production infrastructure.\nFinally, we 
were tired of writing build tasks as shell scripts, which couldn't be tested and which integrated with the build system by making API calls via `curl`.\nWe envisioned instead a small library of common tasks, written as Ruby functions with good test coverage, which could report their status, progress, and results directly into log systems and databases.\n\nThese design considerations led us to decide to roll our own build system.\nWe built it into Deployboard, the tool we were already using to deploy the builds.\nInstead of learning about new deployable builds via API calls, Deployboard would now generate them using Ruby executed in response to RabbitMQ events.\nIt would display any progress and error logs.\nThe end result was the Deployboard Build System -- built in less than 4 months by just three engineers, who were also supporting frequently-failing CI for an engineering team of 400+.\n\nWe migrated the Airbnb Rails monolith to this system in the summer of 2015.\nThis system immediately improved the speed and reliability of builds by an order of magnitude.\nIn November of 2015, I wrote a Ruby test splitter which allowed arbitrary parallelism on our monolith's RSpec suite, and migrated the tests from Solano to Deployboard as well.\nThis improved test times from 30+ minutes to around 10 minutes, reduced spurious failures, and made test result tracking easier.\nBy March of 2016, we had completely terminated Solano, migrated all builds to Deployboard, and introduced and integrated Travis CI for testing most projects except the Rails monolith and the Java monorepo (which were tested in Deployboard for performance reasons).\nThe combination of the Deployboard Build System and Travis CI remains in use for all projects at Airbnb today.\n\nThere are a few talks about Deployboard online.\nOne is [a talk Topher and I gave at Github Universe 2015](https://www.youtube.com/watch?v=4etQ8s74aHg).\nThere is also [a talk I gave at FutureStack 
2015](https://blog.newrelic.com/2015/12/15/airbnb-democratic-deploys-futurestack15-video/).\nHowever, Deployboard was unfortunately never open-sourced, mostly due to lots of Airbnb-specific code and its use of O2, an internal Bootstrap-style CSS framework.\n\n### Ruby Migration ###\n\nIn mid-2016, after a brief break to focus on load balancing and Themis, I embarked on what became my final project at Airbnb.\nAt the time, we were still using Ruby 1.9.3 on Ubuntu 12.04 for all of our projects.\nIn general, we had no story around how to upgrade system dependencies of any kind for our projects.\nMy goal was to create such a mechanism, and then to use it to upgrade the Ruby version for our Rails monolith.\n\nOur build artifacts were generated directly on build workers, using system versions of any dependencies.\nThey were deployed as tarballs to instances which were required to have matching versions of these system dependencies.\nUpgrading such dependencies had to happen in concert between the build and production systems -- a difficult operation that would be more difficult still to roll back in case of trouble.\nWe had no way of even tracking what system dependencies a given artifact required.\n\nI began by tagging builds with the system dependencies used to create the build -- things like the Ruby version, NodeJS version, and Ubuntu version (as a shorthand for any dynamic library dependencies).\nNext, I rebuilt our deploy system UI, which previously asked engineers to pick a specific build artifact to deploy.\nThe new UI asked engineers to pick a specific version (SHA) of the code, which might be associated with any number of build artifacts.\nFinally, I modified the system which actually performed deploys on instances.\nPreviously, that system would receive a specific artifact (e.g. 
a tarball of Ruby code along with its `bundle install`ed dependencies) and then go through the steps (untar, link, restart) to deploy that artifact.\nIn the new version, the system would receive a list of possible artifacts, along with their tags.\nIt would then compare local dependency versions with the tags on artifacts, and pick an artifact that matched the system (or error out if, for instance, the system Ruby version didn't match the Ruby version tag on any of the available artifacts).\n\nThis system allowed me to build the Rails monolith concurrently for Ruby 1.9.3 and Ruby 2.1.10.\nIt also allowed me to have web workers for both versions of Ruby in production -- each worker, upon receiving a deploy, would pick the correct artifact.\nI also began running tests for the monolith under both versions of Ruby, fixing any spec failures that were version-dependent.\n\nBy February of 2017, this preliminary work was completed.\nI began running some upgraded Ruby workers to watch for unexpected errors, and also to compare performance between the populations.\nThe vanilla Ruby 2.1.10 build actually had *worse* performance than the [Brightbox PPA](https://www.brightbox.com/docs/ruby/ubuntu/) build of Ruby 1.9.3 we had been using.\nIn the end, I created a custom build of Ruby 2.1.10 with several performance patches.\n\nIn March 2017, I performed the Ruby upgrade for the monolith.\nThe dependency upgrade mechanism I created was also used to upgrade our Ubuntu version from 12.04 to 14.04.\nOther engineers were beginning to use the system to upgrade other system dependencies, including NodeJS versions for Node projects and Ruby versions of other services in our SOA.\nAfter completing the migration and documenting the work, I announced my departure.\n\n### Non-Technical Projects ###\n\nBesides the big chunks of code I wrote while at Airbnb, I was also involved in lots of non-technical (or at least, non-coding) projects.\nIn hindsight, some of those projects were arguably 
more important than any of the strictly technical work that I did; see the section below on a post-mortem around those thoughts.\nIt seems worthwhile to document those here, too, while I still remember them.\n\nOne big area of focus was SysOps.\nI spent a lot of time on-call, especially during the hectic years of 2013 and 2014 when we were growing rapidly and our infrastructure was in flux.\nI eventually transitioned into a leadership role in the SysOps group.\nThis involved organizing training for new members, planning the on-call schedule, and running the weekly postmortem meetings.\nThe SysOps group was incredibly successful, and I frequently hear astonishment from my peers when I tell them that Airbnb has a strictly volunteer on-call rotation.\nThe group was so successful that we eventually had more people who wanted to be in the on-call rotation than we could fit into slots during a 6-month period.\nWe ended up reducing on-call shift duration from a week to just two days.\nIn 2015, several long-tenured members, including me, began stepping back from the group to allow newer engineers to take the lead.\n\nAnother big focus was our overall technical vision.\nI was a member of Tech Leads, an initial stab at such a vision, in 2013.\nHowever, when [Mike Curtis](https://www.linkedin.com/in/curtismike/) became VP of Engineering, he dissolved the group as part of his efforts to abolish any hierarchy among engineers.\nStill, we needed a way to collectively decide how to evolve our infrastructure.\nI took the lead on an initial stab at such a system, called Tech Decisions, in late 2013, but that system was too bureaucratic and never had much adoption.\n\nIn 2014, a crisis around whether and how we ran an SOA precipitated another attempt, called the Infrastructure Working Group.\nWe held a series of meetings to come up with a shared set of principles for our infrastructure, which formed the basis of any future decision-making.\nI drafted several of the principles we 
eventually settled on.\nThis group worked to influence individual teams and engineers to make technical decisions in accordance with those principles.\nI was heavily involved with the group, at least until the Developer Happiness Team began taking all my time.\n\nI was very involved with our Bootcamp efforts for new hires.\nI participated in the meetings that created the Airbnb Bootcamp.\nAfterwards, I ended up regularly teaching two of the sessions.\nThe first was on monitoring our infrastructure.\nThe second was titled \"Contributing and Deploying\", and covered the developer workflow from committing code (including how to write good commit messages -- a personal quest) to getting that code successfully out in production.\nThis was the only mandatory session of the bootcamp.\n\nAs a technical leader and senior engineer, I also spent a lot of time on mentorship and code review.\nDuring our Chef roll-out, I ended up reviewing almost every pull request to our Chef repo in an effort to broadly seed Chef best practices.\nLater, as I transitioned to working primarily on Deployboard, I reviewed most PRs to that repo, trying to ensure consistent architecture, style, and test coverage.\nAs the Developer Happiness/Infrastructure group of teams grew and hired many new engineers, I worked to get them up to speed on the codebase and to become productive and self-sufficient contributors.\n\nFinally, I spent a large amount of time maintaining what I call \"situational awareness\".\nThis meant engaging with the firehose of stuff that the Airbnb engineering team was doing, from project proposals and infrastructure decisions down to individual pull requests, email threads, and even Slack conversations.\nI attempted to inject vision and guidance wherever I could, connect the dots between disparate projects, and in general to be helpful.\nFor instance, I could tell an engineer that a project they were trying to 
accomplish would become easier when another engineer on a different team completed a different project.\nI could catch PRs that were likely to cause problems, or connect outages to specific changes.\nThis connector role was performed by several people in the engineering organization, and these people never got the credit they deserved for this thankless and never-ending task.\n\n## What Didn't Work? ##\n\nWhile I knew that I had been incredibly productive at Airbnb, writing down everything that I worked on really created some perspective.\nLooking back over what I worked on, I see that my technical projects -- Chef, SmartStack, Deployboard, the work on Monitoring -- were very successful.\nThey made life easier for other engineers, and have survived the test of time to continue providing value.\n\nHowever, I always had grander ambitions than to just accomplish a specific project.\nI had a vision for how I wanted our infrastructure and our engineering team to function, and I did not succeed, in most cases, in making that vision a reality.\n\nA great example of this is our original vision for the Developer Happiness team.\nWe did not set out to become the CI team, although that's where we eventually ended up.\nWe wanted to improve documentation and communication, and to make being an Airbnb engineer easier and more fun.\nI ran the engineering team survey and collected pain points, but never had enough bandwidth to address more than a few of them.\n\nWhy didn't I have enough bandwidth?\nI think it's because I failed to navigate the transition from individual contributor to technical leader.\nThe easiest way for me to get things done at Airbnb was to just do the things I thought needed doing.\nWhen our builds were slow, I jumped in with both feet and made them faster -- by writing a build system and a test splitter and a JavaScript UI for result visualization, etc.\n\nIn the meantime, I let things fall to the floor.\nI ignored the structural problems that led to 
the situation, or merely kvetched about them and left others to try to solve them.\nI spent too much time writing code, and not enough time influencing or guiding other contributors -- which would have allowed me to focus on a broader range of problems.\nI focused on my ability, as an IC, to definitively solve smaller (though usually still very important) problems.\nThis took away from my ability to address the larger ones.\n\nThis wasn't *entirely* my fault.\nAirbnb could have done better to support my transition.\nThe engineering team is completely flat -- even though we have engineering levels, they're supposed to be secret.\nExpectations around what engineers should be focusing on at each level are vague at best, and there's no consensus among managers about what makes a more senior engineer.\nMike Curtis once told me, in a one-on-one, that it should be possible to reach the highest engineering levels by focusing on deep technical work, but I think that assertion would come as a surprise to most of his engineering managers.\n\nI had always wanted to make technical leadership an explicit role, not something a few engineers did in their spare time.\nI didn't succeed in making this vision a reality.\nI don't think I really even tried.\n\nIf I could change anything about my time at Airbnb, it would be a change in focus.\nI wish I had been more focused on building relationships, and less on accomplishing objectives.\nIn the end, when I was leaving, it was the relationships that I did manage to build that persisted in my life -- the technical stuff is all someone else's problem now.\n\n## Why I Quit ##\n\nIt took about six months for the Developer Happiness team to go from three to four engineers.\nThis was an incredibly stressful time.\nBetween broken builds, broken tests, and bugs in the code, it could take an entire day to ship a single changeset.\nWe were concerned about a code backlog so deep we couldn't clear it in one day.\nWe were performing heroics to keep the system 
running, at the same time trying to build and roll out a replacement, all with minimal headcount.\nYet, we didn't seem to get the recognition and support we deserved from engineering management.\n\nFrom the founding of the team, our manager was part-time, also managing a separate product team.\nIn the summer of 2015, he approached me about potentially taking over as the full-time manager.\nHowever, I was in the middle of several deep technical projects, which I felt couldn't lose me as an IC.\nAlso, I felt I was a role model for the parallel IC career path at Airbnb.\nI wanted to continue being an example of a successful IC who got things done without transitioning to management.\n\nAs a result, I declined the offer.\nInstead, another recently-hired team member took on the management role, and we also added a newly-hired project manager.\nTogether, the two of them came up with a broad roadmap for the team.\nThey organized a process to plan specific projects to meet that roadmap.\n\nThis process turned out to be incredibly frustrating for me.\nI was spending all my time putting out fires and shipping high-impact code, but the new roadmap had no room for several of the projects I thought were most important, including some that I was in the middle of working on.\n\nMoreover, this situation epitomized the disconnect between the Airbnb engineering team's vision for technical leadership and the reality.\nIt seemed to me that senior engineers should have a lot of input into our planning process.\nInstead, I faced what I now recognize as the structural disadvantage of remaining an IC.\nThe new manager and PM had all their time to influence, plan, and decide.\nAs an IC, I had to split my time between those activities and actually participating in the tech process -- coding, code review, maintenance, mentoring.\nTo live up to our goals, the management team would need to put a lot of effort into active engagement and deference.\nThis just didn't seem likely.\n\nThe combination of high-stress 
firefighting, loss of control in planning, and disillusionment with our ideals led me to utterly burn out.\nI ended up taking a two-month leave of absence in March 2016.\nBy the time I returned from leave, I was even further sidelined in my own team.\nInstead of continuing to fight for influence, I decided to step aside.\n\nThe Observability team I joined consisted of some of the most senior engineers working on some of the deepest technical problems of the infrastructure.\nI focused on a single project, the Ruby upgrade, and led it through to completion almost single-handedly.\nHowever, it was clear to me that this wasn't the kind of work I wanted to be doing.\nI enjoyed the social aspects of my job as much as or more than the deep technical ones.\nI am most passionate about the interface of people and technology, and I wanted to be a technical leader.\nIt felt like the right moment to move on from Airbnb.\nI announced the completion of the Ruby upgrade, and my departure, in the same all-hands meeting.\n",
            "url": "https://igor.moomers.org/posts/thoughts-on-leaving-airbnb",
            "title": "Reflections on Leaving Airbnb",
            "date_modified": "2017-04-26T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/setting-up-imap",
            "content_html": "\nRecently, I wanted the ability to more reliably check email while on my phone.\nSo far, I had gotten by with [VX ConnectBot](http://connectbot.vx.sk/), which I would use to SSH from my phone and connect to my `tmux` session running `mutt`.\nBut I wanted to be able to check personal email on the go more often, without a laptop, and I wanted the ability to access attachments without having to forward them to my Gmail address.\n\nIt turns out that setting up IMAP is easier than I thought, because the software is so good.\nWhat's difficult is navigating the maze of confusing, overlapping email standards and options.\nHere's how I did my setup, but YMMV.\n\n## Migrate existing mail\n\nI decided to run [Dovecot](https://www.dovecot.org/) as an IMAP server.\nDovecot has excellent documentation.\nFor instance, on their page about [MailLocation](http://wiki.dovecot.org/MailLocation), I learned that it is \"not possible to mix maildir and mbox formats\".\n\nThis was going to be a problem because I use [Maildir](http://www.qmail.org/man/man5/maildir.html) for my personal email folders, but [Postfix](http://www.postfix.org/), my SMTP server, delivers to an [mbox](https://en.wikipedia.org/wiki/Mbox) file as its inbox.\nI was going to need to standardize on one or the other.\n\nI chose Maildir, IMHO a superior format that is more reliable without complicated locking.\nMaildir seemed like the correct choice if I wanted to continue using `mutt` locally while also accessing those same emails via Dovecot remotely.\nSo, I began by using [mb2md](http://batleth.sapienti-sat.org/projects/mb2md/) to migrate all of my existing messages to a local Maildir.\n\nI installed this program into `/usr/local/bin` by downloading it from the link above.\n\n```bash\n$ cd /usr/local/bin\n$ wget http://batleth.sapienti-sat.org/projects/mb2md/mb2md-3.20.pl.gz\n$ gunzip mb2md-3.20.pl.gz\n$ chmod a+rx mb2md-3.20.pl\n$ ln -s mb2md-3.20.pl mb2md.pl\n```\n\nI ran it like 
so:\n\n```bash\nigor47@purr:~/procmail $ mb2md.pl -s /var/mail/igor47 \nConverting /var/mail/igor47 to maildir: /home/igor47/Maildir\nSource Mbox is /var/mail/igor47\nTarget Maildir is /home/igor47/Maildir \n2766 messages.\n```\n\nAfterwards, I emptied my pre-existing `mbox` inbox to prevent confusion:\n\n```bash\n$ echo > /var/mail/igor47\n```\n\n## Procmail to local Maildir\n\nNext, I wanted to ensure that new mail would continue to be delivered to my Maildir folder.\nI was already using [Procmail](https://wiki.archlinux.org/index.php/Procmail) to filter spam and other kinds of messages, but I didn't have a final fallback rule.\nThis meant that any mail not delivered to a specific location by Procmail would come back to Postfix, which delivered it to the `mbox` inbox.\nTo resolve the situation, I appended a new catch-all rule to my `procmailrc`:\n\n```bash\n$ echo INCLUDERC=${PMDIR}/rc.final >> ~/procmail/procmailrc\n```\n\n`rc.final` looks like so (my `procmailrc` sets `$MAILDIR` to `~/Maildir`):\n\n```\n:0:\n$MAILDIR/new\n```\n\nAs always, [this reference](http://www.zer0.org/procmail/quickref.html) is inestimably helpful when writing these obscure Procmail filter rules.\n\n## `mutt` uses new inbox\n\nNow, `mutt` needed to be told where my mail would be arriving.\nI set the following variable in my `.muttrc`:\n\n```\nmailboxes ~/Maildir\n```\n\nNote that I am continuing to get an error (`/var/mail/igor47 is not a mailbox`) when I first open mutt, but it seems to cause no trouble after I open the correct inbox.\n\n## SSL certs for mail\n\nI used [Let's Encrypt](https://letsencrypt.org/) to get SSL certs for my mail server.\nBecause Let's Encrypt uses HTTP to authenticate that you really own the domain, I first needed my mail server (`mail.moomers.org`) to be accessible on an HTTP port.\nI did this in Apache by making `mail.moomers.org` a [`ServerAlias`](https://httpd.apache.org/docs/2.4/mod/core.html#serveralias) for the [www.moomers.org](https://www.moomers.org) virtual 
host.\n\nThat done, I invoked Let's Encrypt like so:\n\n```bash\n$ letsencrypt certonly -a webroot -d mail.moomers.org -w /var/www/moomers.org/htdocs\n```\n\nOnce the cert was acquired, I double-checked that automatic renewal works, too:\n\n```bash\n$ letsencrypt renew --dry-run\n```\n\n[This article was very helpful for configuring Dovecot/Postfix for SSL](https://ubuntu101.co.za/ssl/postfix-and-dovecot-on-ubuntu-with-a-lets-encrypt-ssl-certificate/).\n\n## Configure Dovecot\n\nI was ready to [install Dovecot](https://help.ubuntu.com/community/Dovecot):\n\n```bash\n$ aptitude install dovecot-imapd\n```\n\nI wanted system users to also be Dovecot users, but I didn't want passwords to be transmitted unencrypted over the web.\nI modified `10-auth.conf` (all of the config files here are relative to `/etc/dovecot/conf.d`) like so:\n\n```\ndisable_plaintext_auth = yes\nauth_mechanisms = plain login\n```\n\nTo enable SSL, I set these in `10-ssl.conf`:\n\n```\nssl = yes\nssl_cert = </etc/letsencrypt/live/mail.moomers.org/fullchain.pem\nssl_key = </etc/letsencrypt/live/mail.moomers.org/privkey.pem\n```\n\nI wanted Postfix to SASL-auth against Dovecot (so, Dovecot users, who are system users, are also Postfix users).\nI set this in `10-master.conf`:\n\n```\n  # Postfix smtp-auth\n  unix_listener /var/spool/postfix/private/auth {\n    mode = 0666\n    user = postfix\n    group = postfix\n  }\n```\n\nFinally, I wanted Dovecot to read my Maildir inbox.\nI set this in `10-mail.conf`:\n\n```\nmail_location = maildir:~/Maildir\n```\n\nI was ready to start Dovecot:\n\n```bash\n$ service dovecot restart\n```\n\n## Configure Postfix\n\nWe use [SASL](http://www.postfix.org/SASL_README.html) to allow Postfix to authenticate users.\nGiven that we've already configured Dovecot, above, we can skip straight to [here](http://www.postfix.org/SASL_README.html#server_sasl_enable) in the Postfix documentation.\nWe also need to [enable TLS on 
postfix](http://www.postfix.org/TLS_README.html).\n\nIn the end, my config (the relevant parts) looks like this:\n\n```\n# Enable TLS using Let's Encrypt certs:\nsmtpd_use_tls=yes\nsmtpd_tls_cert_file=/etc/letsencrypt/live/mail.moomers.org/fullchain.pem\nsmtpd_tls_key_file=/etc/letsencrypt/live/mail.moomers.org/privkey.pem\nsmtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache\nsmtp_tls_session_cache_database = btree:${data_directory}/smtp_scache\n\n# Disable Poodle\nsmtp_tls_security_level = may\nsmtpd_tls_security_level = may\nsmtp_tls_mandatory_protocols=!SSLv2,!SSLv3\nsmtpd_tls_mandatory_protocols=!SSLv2,!SSLv3\nsmtp_tls_protocols=!SSLv2,!SSLv3\nsmtpd_tls_protocols=!SSLv2,!SSLv3\n\n# Changes to SSL Ciphers\ntls_preempt_cipherlist = yes\nsmtpd_tls_mandatory_ciphers = high\ntls_high_cipherlist = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA256:ADH-AES256-GCM-SHA384:ADH-AES256-SHA256:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-RSA-AES256-SHA384:ECDH-ECDSA-AES256-SHA384:AES256-GCM-SHA384:AES256-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-DSS-AES128-SHA256:ADH-AES128-GCM-SHA256:ADH-AES128-SHA256:ECDH-RSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-RSA-AES128-SHA256:ECDH-ECDSA-AES128-SHA256:AES128-GCM-SHA256:AES128-SHA256:NULL-SHA256\n\n# Enable SASL\nsmtpd_sasl_auth_enable = yes\nsmtpd_sasl_type = dovecot\nsmtpd_sasl_path = private/auth\nsmtpd_sasl_security_options = noanonymous, noplaintext\nsmtpd_sasl_tls_security_options = 
noanonymous\nsmtpd_tls_auth_only = yes\n\n# Permit SASL-authenticated users to relay mail\nsmtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination\n```\n\n## Mail Client\n\nFirst, I made sure that the Dovecot IMAP port (143) was accessible from the internet through the firewall on my server.\nI didn't need to punch a hole for port 25, because it was already open to allow SMTP traffic from the internet.\nI did notice, while testing, that my ISP (AT&T) blocks outbound port 25 from my local network.\nI had to VPN out from my devices to test things, and will continue to need to do that to send email while my phone is on my local network.\n\nTo configure my mail client, I selected IMAP.\nMy user name is my local system unix account, and my password is my normal unix password.\nIncoming mail comes in via IMAP on port 143, using `STARTTLS`.\nOutbound mail goes through port 25, also via `STARTTLS`.\n",
            "url": "https://igor.moomers.org/posts/setting-up-imap",
            "title": "Setting up IMAP",
            "date_modified": "2017-03-20T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/backups-to-another-server",
            "content_html": "\nI've been running a personal Linux server for just about ten years.\nThis makes me a [Gladwellian expert](http://gladwell.com/outliers/the-10000-hour-rule/) at the task!\nOne of the chores of running a personal server is backups.\nRecently, I got access to a second machine with enough disk space to back up at least the most important files.\nThis documents how I set up [duplicity](http://duplicity.nongnu.org/) to regularly back up the data to the remote machine.\n\nI was inspired by other guides, primarily [this one by Marc Gallet at zertrin](https://zertrin.org/how-to/installation-and-configuration-of-duplicity-for-encrypted-sftp-remote-backup/).\nMarc offers a script which makes using `duplicity` easier, especially if you're backing up to S3.\nI also found the [duplicity man(1) page](http://duplicity.nongnu.org/duplicity.1.html) a useful reference.\n[This guide by Justin Ellingwood](https://www.digitalocean.com/community/tutorials/how-to-use-duplicity-with-gpg-to-securely-automate-backups-on-ubuntu) is also substantively similar.\n\nMy focus was on security on both ends.\nI wanted to make sure that\n1. Since the remote machine is untrusted, it cannot be used to read the content of the backups\n2. 
The backup user cannot be used to get access to anything on the remote machine\n\n(1) is ensured by using duplicity and PGP-encrypting the backups.\n(2) is ensured by creating a very limited remote user which can only access the backup data.\nI also create a local user dedicated to running backups, to encapsulate the configuration.\nYou can probably use `root` to run the backups, in which case you can skip the user and `sudo` configuration below.\nBut I don't like to keep too much configuration under the root user.\n\n## Setting up users for secure communication\n\nOn the local machine, let's set up a user that will own the backup process.\nSince I'm backing up the `moomers.org` server, I named the user `moobacker`.\n\n    igor47@local:~$ sudo useradd -N -m -s /bin/false moobacker\n\nI skip creating a group for the user (`-N`), have `useradd` create a home directory in `/home/moobacker` (via `-m`), and set the shell to `/bin/false` (via `-s /bin/false`) to limit what this user can do.\nNote that setting the login shell to `/bin/false` is not foolproof -- someone can still run `/bin/bash` as the user if they manage to log in, for instance.\nHowever, we prevent any logins as this user by not setting a password (never running `sudo passwd moobacker`) and by not setting up an `.ssh/authorized_keys` file.\nThe only way to run commands as that user is via `sudo` or from cron.\n\nWe will need `moobacker` to have an ssh key to get into our remote server.\n\n    igor47@local:~$ sudo -u moobacker ssh-keygen -t rsa\n    igor47@local:~$ sudo cat /home/moobacker/.ssh/id_rsa.pub\n\nDon't pick a passphrase for the key -- we won't be using it interactively.\nI originally used the newer Ed25519 key type (via `-t ed25519`), but switched back to RSA after the Paramiko SSH implementation in `duplicity` had trouble with this key type.\nWe `cat` out the public component of the key we just created; we're going to need it shortly.\n\nNext, we should set up a user on 
the remote machine who will own the backups.\nI named the user `purrbackups` because I'm backing up a server called `purr`.\nOn the remote server, do the following:\n\n    igor47@remote:~$ sudo useradd -N -m -s /bin/false purrbackups\n    igor47@remote:~$ sudo -u purrbackups mkdir /home/purrbackups/.ssh\n    igor47@remote:~$ echo 'from=\"<local-server-ip>\" <public key>' | sudo -u purrbackups tee /home/purrbackups/.ssh/authorized_keys\n\nThis will allow the `moobacker` user from the local server to log into the `purrbackups` account on the remote server.\nCopy-pasta the actual SSH public key in place of `<public key>`.\nAlso, replace `<local-server-ip>` with the actual IP address of the source server.\nThe `from=` option in `authorized_keys` only allows this user to log in from that IP address, for greater security even if the SSH private key leaks out.\n\nAt this point, you should test the setup so far by trying to SSH from the local server to the remote server as the two new users we set up:\n\n    igor47@local:~$ sudo -u moobacker ssh purrbackups@<remote>\n\nAfter accepting the host keys, you should connect and then immediately get a `Connection closed` error.\nCongratulations -- we got the two machines talking to each other!\n\nWe should still lock down the `purrbackups` user so it can only be used for sftp purposes.\nTo do this, let's edit the `/etc/ssh/sshd_config` file.\nFind a line containing `Subsystem sftp` and make sure it is uncommented (no `#` at the beginning) and looks like this:\n\n    Subsystem sftp internal-sftp\n\nThen, at the end of the file, add a section like so:\n\n```\nMatch User purrbackups\n  ForceCommand internal-sftp\n  ChrootDirectory /home/purrbackups\n  AllowAgentForwarding no\n  AllowTcpForwarding no\n  X11Forwarding no\n```\n\nThis will only allow the `purrbackups` user to use SFTP (for file-copying purposes), and will further only allow it access to its own home directory.\nRemember to reload `sshd` after editing the config so the changes take effect.\nFinally, set permissions on the `Chroot` directory, and add 
another directory for backup purposes:\n\n    igor47@remote:~$ sudo chown root:root /home/purrbackups\n    igor47@remote:~$ sudo chmod 755 /home/purrbackups\n    igor47@remote:~$ sudo mkdir /home/purrbackups/backups\n    igor47@remote:~$ sudo chown purrbackups /home/purrbackups/backups\n\nNow, test this config again via `sftp`:\n\n    igor47@local:~$ sudo -u moobacker sftp purrbackups@<remote>\n\nYou should get an SFTP prompt.\nHooray!\n\n## Allow local user to run backups\n\nWe want the backup user to be able to access files owned by other users on the system.\nThis means the backup user will need to run the backup program as root.\nLet's set up a backup script for this purpose.\n\n```bash\n#!/bin/bash\n\nwhoami\n```\n\nSave this as `/home/moobacker/backup.sh`, and then set permissions appropriately:\n\n    igor47@local:~$ sudo chown root:root /home/moobacker/backup.sh\n    igor47@local:~$ sudo chmod 755 /home/moobacker/backup.sh\n\nNext, let's allow the backup user to run this as root.\nUse `visudo` to edit `/etc/sudoers.d/backups`:\n\n    igor47@local:~$ sudo visudo -f /etc/sudoers.d/backups\n\nThe contents should be:\n\n```\nmoobacker ALL = (root) NOPASSWD: /home/moobacker/backup.sh\n```\n\nThis will allow the `moobacker` user to invoke the script as root.\nTest it like so:\n\n    igor47@local:~$ sudo -u moobacker sudo /home/moobacker/backup.sh\n\nThe output should be the word `root`.\n\n## Set up GPG key for `duplicity`\n\nWe will use this key to encrypt the backups.\nSince the key will need to be distributed to other systems (so you can decrypt your backups in case you need them), set a passphrase on the private key.\nYou can pass this passphrase to the backup script when it runs.\n\n    igor47@local:~$ sudo -H -u moobacker gpg --gen-key\n\nAccept all the defaults.\nPick whatever values you like for the name and email address.\nThe command will take a while to generate the entropy required for the keys -- you can run some random tasks in the meantime (like `aptitude 
update`).\nNote the `-H` option to `sudo` -- we need this, otherwise `gpg` won't know where the home directory is and won't save the resulting keys.\n\nNormally, private keys should remain on the system where they were generated.\nIn this case, you'll want to copy the private key to another system so you can decrypt your backups if necessary.\nI used the `ccrypt` program to encrypt the key archive with a symmetric passphrase:\n\n    igor47@local:~$ sudo -u moobacker tar -czf - /home/moobacker/.gnupg | ccencrypt > /tmp/moobacker.gnupg.tgz.cc\n\nYou can then distribute the file freely.\nYou'll need the passphrase to the `.cc` file containing the GPG private key, as well as the GPG key passphrase, to recover your backups.\nIf you don't have the `ccencrypt` binary, it can be had on most distros by installing the `ccrypt` package.\n\n## Set up backups\n\nLet's put all of the pieces together.\nRemember the script we created earlier, in `/home/moobacker/backup.sh`?\nHere's what the final version of mine looks like:\n\n```bash\n#!/bin/sh\n\nset -o errexit\nset -o nounset\n\nencryption_key_id=\"DC3EEE04\"\nduplicity=\"duplicity --verbosity error --no-print-statistics --encrypt-key $encryption_key_id\"\nremote=\"sftp://purrbackups@remote/backups\"\n\n# back up /etc\n$duplicity /etc ${remote}/etc\n\n# back up /var\n$duplicity /var ${remote}/var\n\n# back up /home\n$duplicity /mnt/raid/home ${remote}/home\n```\n\nThe `encryption_key_id` can be found like so:\n\n    igor47@local:~$ sudo -H -u moobacker gpg --list-keys\n\nI picked the ID of the `sub` key, which is typically used for encryption.\nFor initial runs of the script, you might wish to use more verbose output, so you can see any errors:\n\n```bash\nduplicity=\"duplicity --encrypt-key $encryption_key_id\"\n```\n\nPerform an initial run (or two) of the script:\n\n    igor47@local:~$ sudo -H -u moobacker /home/moobacker/backup.sh\n\nIf this succeeds, it's time to add the script to a crontab to run regularly.\n\n    
igor47@local:~$ sudo -H -u moobacker crontab -e\n\nMy `moobacker` user's crontab looks like this:\n\n```cron\nMAILTO=\"admins@example.org\"\n\n# m h  dom mon dow   command\n23 23 * * 0 sudo /home/moobacker/backup.sh\n```\n\nThis will run backups at 23:23 every Sunday.\nI expect the backup script to produce no output -- any output indicates errors.\nThe `MAILTO` setting will send any error output to me via email.\n",
            "url": "https://igor.moomers.org/posts/backups-to-another-server",
            "title": "Backups to another server",
            "date_modified": "2016-12-22T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/what-is-to-be-done",
            "content_html": "\nI'm a technologist -- someone who builds tools and systems.\nI'm also not someone who draws a sharp distinction between work and life.\nI view life as a series of projects I care about, the only distinction being that sometimes people want to pay me money to work on some of these projects.\n\nI am motivated in my work by the premise that what I am doing is somehow helping to improve the world.\nIn the past year, I experienced a crisis of faith, where I stopped believing this premise, and as a result my work (and so also my life) became meaningless to me.\n\nThis post is a somewhat-personal account of how this crisis came about, how I dealt with it, and what I concluded in the end.\nI wrote this for me, so I have a record of the journey.\nBut if you, too, are a technologist who is finding herself depressed and uninspired, then maybe this could help.\nOr maybe you're just trying to figure out where best to apply yourself?\nIf so, [skip to the end](#ok-but-whats-to-be-done).\n\n## Catch-22 ##\n\nI am an environmentalist as well as a technologist.\nI really, really like this awesome planet we're all riding around on; so beautiful, so full of neat things.\nMy environmentalism came into direct conflict with my profession, as it became clear to me that we're using technology to make a mess of things.\n\nFor instance, [we create nuclear fuel which will last for 10,000 years and has to be stored in complicated facilities lest it poison us all](https://www.damninteresting.com/this-place-is-not-a-place-of-honor/).\nWe release chemicals into the environment which are [turning amphibians female](http://www.newsweek.com/female-frogs-estrogen-hermaphrodites-suburban-waste-369553).\n[Our bees are dying](https://en.wikipedia.org/wiki/Colony_collapse_disorder), probably [thanks to our pesticides](https://en.wikipedia.org/wiki/Neonicotinoid#Bees), but [our pine beetles are thriving (and killing all the 
trees)](http://www.tahoedailytribune.com/news/california-tahoe-area-tree-deaths-climb-to-record-levels-thanks-to-bugs-drought/) because of global warming.\n\nTechnology is not just causing environmental degradation, but seems to be eating away at the very fabric of society.\nWe can [build weapons which may destroy us all](https://en.wikipedia.org/wiki/Nuclear_warfare).\nAt the same time, we have isolated ourselves in [filter bubbles](https://www.techopedia.com/definition/28556/filter-bubble), wherein conflict and rhetoric can escalate until [the use of those weapons doesn't seem so bad](http://www.politicususa.com/2016/08/03/trump-asks-if-nuclear-weapons-them.html).\nAt a time when we're faced with huge collective challenges, technology seems to [have taken away even our ability to agree on 'facts'](http://www.newyorker.com/magazine/2016/03/21/the-internet-of-us-and-the-end-of-facts).\n\nSo, here was the catch-22.\nHow could technology be both the cause of *and* the solution to all of our problems?\nIf I continued working on improving technology, wouldn't I be hastening the very outcomes I decried?\nBut if I refused to work on technology any further, then what was I to do instead?\nI could retreat into the woods and hide from the world, but that didn't seem like the solution to *any* problem -- not even my distinctly personal one.\n\n## Descent ##\n\nThis crisis of confidence was the cause of (or caused by, or just correlated with) some major sad times in my life.\nI felt paralysed into inaction.\nI felt as though I should be working to fix the problems I saw in the world, but there was no action to take that wouldn't make things worse.\nDuring this period -- roughly, autumn 2015 to summer 2016 -- I felt like an automaton going through the motions of life, while inside I felt nothing but dread.\nDuring this time, my relationship with my long-term partner disintegrated.\nThe shared house community I had been living in fell apart as well, and I moved into an apartment by 
myself for the first time in 14 years.\nI burned out at work, and went on leave to attempt to recover my sanity.\n\n## Escape ##\n\nToday, I am feeling much more optimistic about the world, and my role in it.\nA few things really helped me to overcome this malaise.\n\n### Spirituality and Inward Focus ###\n\nThere were two disjoint sets of problems.\nOne set included the problems I discussed above -- environmental degradation, political gridlock, the threat of catastrophe.\nBut a totally different set of problems was internal -- my own sad, depressed mental state.\n\nI could do nothing about the former until I addressed the latter.\nThat, itself, was a useful realization.\n\nI had several tools to address my internal problems.\nOne lesson, which I picked up from the various books on communication I had been reading (especially [Non-Violent Communication](http://amzn.to/2dveZlp) and [Crucial Conversations](http://amzn.to/2dmUanB)), was that I choose my reactions.\nA person in an interaction with me could not \"make\" me upset -- they did whatever, and then I made myself upset by reacting.\nLikewise, global warming didn't \"make\" me depressed -- I could react with either ennui or with determination when confronted with such a problem, and it was up to me to choose which.\n\nAnother tool, which I gained from my healing circle, was insight into the nature of control.\nIt's what the shaman I worked with would call the \"control virus\".\nI was upset because I felt like all of these huge world problems were beyond my control.\nBut -- of course they were!\nMost things are beyond our control, even when we think they're not.\nJust about the only thing we *can* control is how we react, which actions we ourselves take.\nIt's not up to me to solve all of the world's problems, but it's up to me to do the best I can, to *be* the best I can.\nThis realization may sound trite.\nBut, as with many important realizations, there's a huge difference between hearing or reading or 
knowing it, and having it really sink into your bones.\n\nI group realizations like this into an overall spiritual journey.\nAny work I do in the external world is secondary to work I do on the inside, on my own consciousness.\nIt is the latter which enables the former.\nWhatever your focus -- environmentalism, or poverty, or health and healthcare -- it must start with inward healing.\n\n### Community ###\n\nAnother amazing source of healing while I dealt with my crisis was my community.\nMy healing circle has provided me with many mental tools and insights.\nI meet regularly with a group called HeartTribe, the members of which support one another in making the positive changes they wish to see in their lives.\nAlso, my general group of close friends is wonderful, and supported me during the dark times.\nWhat was most helpful was feeling like I was not alone in my spiritual journey, but that these wonderful people were there with me.\n\nMany of the people I'm close to are also technologists who have struggled with some form of the same crisis.\nI was able to learn how my peers created meaning in their lives.\nI could learn about the types of projects which were inspiring them, and become infected by their enthusiasm.\nI am very fortunate to be surrounded by so many generous people doing so many cool things.\n\n## Ok, But What's To Be Done? 
##\n\nUltimately, a powerful realization (which came from my friend Other Igor) was that there's no way back.\nWe cannot just shut off our technology -- we're too dependent on it, now.\n\nThus, we must move forward.\nWe have to create technology which preserves the environment, respects human values, and enables us to be the best species we can be.\nIf we simply accept this premise, then there's no time for despondency, because there's so much work to do.\n\nThe problem then becomes (as usual) how to pick the biggest thing to work on.\nBret Victor already has [an amazing list of priorities in climate change](http://worrydream.com/ClimateChange/).\nI've also been thinking of projects in FinTech and in digital communication and privacy which could have a large impact.\nI plan on writing more about my ideas for improving communication, but currently I'm excited about [sandstorm](https://sandstorm.io/).\n\nFinally, an important source of inspiration for me has been a careful re-reading of [Meditations on Moloch](http://slatestarcodex.com/2014/07/30/meditations-on-moloch/).\nThis is a brilliant piece of writing, which really gets at all of the underlying issues I am concerned about -- really, all the same issue, *Moloch* (*whose mind is pure machinery!*).\n\nIf you are looking for inspiration, reading this essay should fill you with a thousand ideas.\nThere is a ton of room for attacking the underlying coordination problems that the author discusses.\nBut his final conclusion suggests that *the most important project* is AI.\nFor a dispirited technologist, this is an incredibly energizing conclusion.\nYou can join the ranks of thousands of other engineers and scientists who have worked on this problem, and there's room for everyone no matter your specific skill set.\nProbably, even a crusty systems guy like me can contribute here.\n",
            "url": "https://igor.moomers.org/posts/what-is-to-be-done",
            "title": "What is to be done?",
            "date_modified": "2016-09-28T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/the-12v-music-manifesto",
            "content_html": "\nParty people, we need to talk.\nEvery time I go to a renegade event -- a party on the beach, or in the forest, or in an abandoned tunnel -- there is always a generator involved.\nThe generator powers the sound system, but it also creates a steady hum of undesirable noise and a plume of smelly smoke.\nThe generator inevitably runs out of gas, causing the party to pause for a minute while it's topped off.\nOh, and topping off the generator is always done using [those annoying CARB cans](http://www.gad.net/Blog/2012/11/22/one-mans-quest-for-gas-cans-that-dont-suck/), in the dark -- and so fuel is always spilled *everywhere*.\n\n## Generator Power Is Inefficient ##\n\nActually, amps run on DC power.\nInside your amp is a [rectifier](https://en.wikipedia.org/wiki/Rectifier) which turns the 120VAC line supply into DC, and then probably a [voltage regulator](https://en.wikipedia.org/wiki/Voltage_regulator) which dissipates some of that power to get a level usable by the [op-amps](https://en.wikipedia.org/wiki/Operational_amplifier) and [transistors](https://learn.sparkfun.com/tutorials/transistors) in the amp.\n\nLet's assume that the efficiency of your small generator is [a generous 20%](https://settysoutham.wordpress.com/2010/05/26/portable-generators-about-half-as-efficient-as-power-plants/), and then [80% for the full-wave rectifier](http://www.brighthubengineering.com/consumer-appliances-electronics/96645-efficiency-of-ac-rectifiers/), and then say another 90% for the voltage regulator.\nThen, only `0.20 * 0.80 * 0.9 ≈ .15` or 15% of the power in the gasoline you're burning is actually used to power your sound system.\n\n\"Okay, okay, we get it -- you hate generators,\" you're saying at this moment.\n\"But like what are we supposed to do?\"\n\n## 12V Sound Systems ##\n\nActually, I love my [Honda EU2000](http://amzn.to/2cE5yk1).\nIt's quiet, lightweight, runs for a long time on very little gas, and has proven extremely dependable.\nIt's 
just that I don't need my generator unless I plan to be running sound for many, many days -- like, a week and a half of Burning Man.\nBut I can run a great-sounding system for ***about 24 hours*** off a single [120 amp-hour deep-cycle battery](http://amzn.to/2dc97Kk).\n\nBecause people often ask me how I manage to run such a sound system without a generator, I am writing this post to break down my whole system.\nRead on to learn about each component.\n\n### The Battery ###\n\nThe 12V batteries that go in your car are ***not correct*** for this task.\nThose batteries are meant to start your car's engine, and then be immediately topped off by the alternator.\nIf you try to draw power off them, they will quickly die, and if they remain dead they will sulfate and will need to be replaced.\n\nThe batteries I use are sold as either deep-cycle or marine batteries.\nThese are meant to be used on boats or in RVs, where you're expected to run appliances off the batteries without the engine running.\nThey're also often used as part of energy storage for a solar system.\n\nThe capacity of deep-cycle batteries is measured in [amp-hours](https://en.wikipedia.org/wiki/Ampere-hour) (or `Ah`).\nA `120Ah` battery will run a device which consumes 120 amps for an hour, 60 amps for two hours, etc...\nI recommend buying the 120 amp-hour ones because they're not much heavier or more expensive than smaller batteries, and give you more capacity.\n\nI buy my deep-cycle batteries at Costco, because they're readily available there, and relatively inexpensive (around $150 per battery).\nAlso, Costco will let you trade them in if they lose their capacity too quickly.\nI've owned maybe fifteen of the Costco-branded or Interstate batteries over the last decade; most of them last for three to four years with no problems, and when they stop holding their charge you can trade them in for new ones.\n\nI charge my batteries using a couple of [these Duracell battery 
chargers](http://amzn.to/2cZGlzX).\nAt $50 each, these are the most economical 12V battery chargers I've found.\nAlso, their maximum charge rate is 15 amps, which means each one can charge a `120Ah` battery from fully empty to totally full in about 8 hours (overnight).\nEven at Burning Man, [Camp Warp Zone](http://igor.moomers.org/warpzone/) never ran any lights or art pieces off the generator.\nWe ran everything off 12V power, but several times during the burn we'd run the generator to recharge all the batteries.\n\n### The Amp ###\n\nThe heart of your 12V sound system is a 12V amp.\nThese amps are made for automotive audiophiles, and so there's a huge range of options to choose from.\nI've been using [this Sound Storm 2000W amp](http://amzn.to/2cLB0KO).\nI like it because it's totally sealed and fairly compact, and it sounds fine.\nOne complaint I have is that it puts out a hiss when no audio is playing or connected.\n\nIf you want to add a subwoofer to your system, you'll need an additional amp for that.\nAmps designed for subwoofers are called \"monoblock\" amplifiers, because they only have one channel -- unlike stereo amps, which have a left and right channel.\nI use [this Audio Pipe 1500W monoblock amp](http://amzn.to/2cDiAeo), which sounds great and comes with nice features like a built-in crossover and a subsonic filter to protect your subwoofer.\nOne potential problem is that the Audio Pipe is actively cooled with fans which suck air from the environment.\nThis will probably cause problems over time in dusty outdoor environments.\n\nIt's hard to find suitable monoblock amps for hi-fi applications because of impedance mismatch.\nMost automotive subwoofers are rated at either 4 Ohm or 2 Ohm.\nAlternatively, because of the tight space requirements inside cars, several smaller 8 Ohm or 4 Ohm subs might be installed in parallel, halving the impedance.\nAs a result, automotive monoblock amps typically put out their rated power at these low impedances.\nOn the 
other hand, most performance subs for clubs are 8 Ohm.\nSo, if you want to match the power supplied by the amp to the power requirements of the sub, you have to buy wildly overpowered monoblock amps.\nFor example, to power a 1500W subwoofer, you might need an automotive amp which claims to be rated at 4000W, because that rating will typically be at 2 Ohms.\nTo get the power rating at 8 Ohms, you halve it once for each doubling of impedance: 4000W at 2 Ohms becomes 2000W at 4 Ohms, then 1000W at 8 Ohms.\n\nKeep in mind that power ratings for both speakers and amps are typically inflated.\nThere's no way your amp can supply 1000W *continuously*, but neither can your sub sink that much power continuously.\nIf it looks like your amp is slightly underpowered, that's okay -- just don't try to compensate by turning up the gain, or you'll [risk blowing your speaker cones](http://www.bcae1.com/2ltlpwr.htm).\n\n### Speakers ###\n\nThis is a contentious issue -- everyone has their preferred speakers.\nI've found that my system sounds okay with just four of [these Behringer B212XLs](http://amzn.to/2cXPMxe).\nI like them because they're very lightweight, made of rugged plastic, and fairly inexpensive.\nI don't worry about them getting beat up in the back of a van, or sitting directly on the dirt at the party.\nThey sound much better at higher volumes, which is perfect for parties.\n\nI don't own my own subwoofer yet, but I had great success driving a friend's [Behringer B1800X](http://amzn.to/2cE7qcx).\nI wouldn't buy this sub for myself, though, because it's a little too large and unwieldy.\nI've been eyeing the [Peavy 118D](http://amzn.to/2cZJdg6) because it weighs a few pounds less, and it can run in both active and passive mode.\nI could run it in passive mode off batteries during parties, and use the built-in amp if I wanted to blast bass in my one-bedroom apartment.\n\nYou'll probably want to make up your own mind on which speakers to buy.\nEither way, to run a 12V system, the important thing is to buy either ***passive*** speakers or ones 
which, like the Peavy I linked, can be run in both active and passive mode.\n\n### Extra Goodies ###\n\nI keep a few more components which I think make the system run smoother.\nOne is a capacitor -- I use [this 2-farad model](http://amzn.to/2cLDd9j).\nI got one of these after I noticed that in our theme camp at BM2014, where I was running all of the lighting as well as the sound system off a single battery, the lights would dim when the bass kicked in the music.\nHaving a large capacitor helps smooth out such problems, and is probably more gentle on the battery.\nI doubt if the capacitor has any impact on sound quality, since the amps themselves should already have large-enough internal capacitors to smooth out their own power demands.\nI like that the capacitor comes with a built-in volt meter, so I can keep an eye on the charge of the battery and avoid draining it too far.\n\nAnother useful component is [this electronic crossover](http://amzn.to/2cyrk3E).\nIt gives better control over the distribution of signal than the built-in crossover in the monoblock amp, and it also allows me to connect an additional amp if I want to run even more speakers.\nI usually configure the sound so that two of the Behringers sit *behind* the DJ but still facing forward, so the DJ gets good monitors but they're still contributing to the party.\nBut the crossover makes it easy to connect dedicated monitors if desired.\n\nFinally, it's helpful to have some power plugs.\nI use [this cigarette lighter block](http://amzn.to/2dc9iVN) to provide USB power, which is nice to charge phones and also to run [little USB lights](http://amzn.to/2dc7WdK) (super-helpful when you need to plug and unplug stuff, or to see the DJ equipment).\nI also like to keep a small inverter on-hand, [like this one](http://amzn.to/2cE4ahA).\nThat's helpful to power any laptops, mixers, or DJ controllers with dedicated power.\nHowever, here, too, I recommend buying devices which natively support 12V.\nFor instance, 
I am planning to get the [Xone:23](http://amzn.to/2dc9sNb) as my next mixer because it runs on 12V, so I can just plug it right into the battery without needing an inverter.\n\n\n## How to Wire Everything Together ##\n\nThere are two separate wiring paths -- one for power, and one for signal.\nIn either case, there's no need to buy into the cable hype.\nFor instance, if you're shopping for automotive audio, you'll often be told that you need to use at least 4GA wire.\nThis might be true inside actual cars, where the cable runs might have to be 20 or even 30 feet long.\nIn your sound system, however, if you keep your power runs to around 6 feet then you can get away with 12GA or 10GA wire, which is much cheaper and much easier to work with than the thick stuff.\nI bought a couple of [spools of primary wire](http://amzn.to/2d6d1Iu) in the correct colors, and those have worked fine for me.\n\nAutomotive amps always have a `remote` terminal, usually next to the 12V `+` and `-` terminals.\nThis is meant to attach to the head unit in the car, so you can control power to the amps without digging around in the trunk.\nI wire the `remote` terminal to the `+` terminal of the amp via a [toggle switch like this one](http://amzn.to/2dc79cN).\nThis allows me to wire up everything, double-check it, and only *then* to try to power up the amps.\n\nFor signal, I use tons of [these composite cables](http://amzn.to/2cNIbiH) in lengths of 3' or 6'.\nTo connect from your amps to your speakers, you'll usually need some random connector.\nFor instance, my speakers use normal XLR connectors, while that Behringer sub used some annoying Neutrik connector.\nIn all cases, the side of the cable which connects to the amp is just bare wire, so you'll need to cut off any connector and wire it straight into your amp.\n\n## Yes, But How Does It Sound? 
##\n\nMy system has been used in several art installations where sound was nice to have but which weren't full-on dance parties.\nIn these cases, I just use the four 12\" speakers and the amp and keep it simple.\nMy setup has also been the primary system for several full-blown outdoor dance parties of approx. 50 people.\nI have no problem filling a forest clearing with beautiful, clear sound.\nI never hit the gain limits on the equipment -- I've always had more volume headroom than I've used.\n\nI can keep the setup running off a single 120-amp-hour battery for two nights, although I like to switch the battery out after one night to avoid draining it down too far and damaging it.\nOf course, with this setup, there's no generator noise -- all you hear is the music!\n\n![All set up](/images/soundsystem.jpg)\n![Rocking out](/images/djing.jpg)\n\n## Updates ##\n\nDr. Niels has built a version of this system.\nHe has [a spreadsheet outlining the components he used](https://docs.google.com/spreadsheets/d/1q0VkUu1GSiJAtObuSfRULU4YRTkQmtfJzZ-F6q1a--8/edit?usp=sharing).\n",
            "url": "https://igor.moomers.org/posts/the-12v-music-manifesto",
            "title": "The 12V Music Manifesto",
            "date_modified": "2016-09-19T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/heart-disease-in-the-ussr",
            "content_html": "\nI recently had heart surgery to fix a [leaky mitral valve](http://www.heart.org/HEARTORG/Conditions/More/HeartValveProblemsandDisease/Problem-Mitral-Valve-Regurgitation_UCM_450612_Article.jsp).\nWhen investigating my condition and the surgery online, I found individual people's accounts of their progression most helpful in understanding what I would be going through.\nThese accounts are unfortunately rare -- they mostly seem to crop up in individual posts in random forums in the dusty corners of the web.\nI plan on documenting my full journey once I've made some additional headway in my recovery.\n\nHowever, this was not my first experience with heart problems.\nAs an infant, I was diagnosed with [coarctation of the aorta](https://en.wikipedia.org/wiki/Coarctation_of_the_aorta) and was the subject of the very first [balloon angioplasty](https://en.wikipedia.org/wiki/Angioplasty) performed on an infant in the USSR.\nWhile I had heard bits and pieces of the story over the years, my parents' coming to stay with me during my recent recovery was a great opportunity to hear the whole thing.\nI took notes!\n\nThis is my parents' account of that adventure.\nI obviously find it autobiographically fascinating, but I think it might be interesting to other readers as well.\nI especially found the insights into the functioning of the Soviet medical system enlightening.\nRead on, if you care!\n\n### Spring 1984 ###\n\nIn 1984, when I am about 6 months old, my mom takes me to the local regional hospital in Nikolayev, Ukraine.\nShe is concerned because she had performed the (I think?) 
[Barlow maneuver](https://en.wikipedia.org/wiki/Barlow_maneuver), but could not get my hips to lie flat as they're supposed to.\nIn the hospital, the doctors get an X-Ray, but see no physiological pathology.\nInstead, they suspect a neurological problem, and so refer her to a neurologist.\n\nMy family was well-known in the medical community of Nikolayev.\nBoth of my mom's parents were pharmacists in a country with chronic shortages of critical medication.\nIf you were sick, it definitely helped to be on my grandparents' good side, and so they were owed favors by many people in town.\nThis fact helped my mom and me to quickly be seen, first by the on-duty physician who did the X-Ray and then by the neurologist.\n\nAdditionally, a decade before this story begins, my cousin Boris was born to my mom's brother and his wife.\nBoris and his mother had [incompatible blood types](http://www.cerebralpalsy.org/about-cerebral-palsy/risk-factors/blood-incompatibility).\nAs a result, Boris was born with jaundice, but the cause was not diagnosed and his condition was allowed to progress until he suffered brain damage.\nBoris remains severely handicapped today.\n\nSo, when my mom and grandma show up at the neurologist's office, they are quickly recognized.\nIt takes only a minute for the neurologist to make her diagnosis.\n\"You understand that I can't write you a prescription for these, but I'll give you a list because I know you can get them,\" she says to my grandmother, handing her a long list of medications.\nHer diagnosis: hydrocephaly.\n\nAt home, after looking up hydrocephaly in the family encyclopedia, my mom has a mild nervous breakdown and stops producing breast milk.\nMy dad's mom Anika, also a doctor (of optometry), is dispatched from nearby Kherson to come provide moral support.\nIt is mid-spring -- a cold, rainy time -- and on top of my hydrocephaly I have picked up a cold.\nAnika tells my mom to chill out about the hydrocephaly, but to be more concerned about my cold 
progressing to pneumonia.\n\nThe on-call physician at the local clinic is summoned for a house call -- typical for the USSR at the time.\n\"Mamochka, why did you call me?\" he asks.\n\"Well, the weather is cold, the child is sniffly, we were worried about pneumonia,\" says Mom.\n\"Nah, he doesn't have pneumonia,\" announces the physician.\n\"He has a congenital heart defect.\"\n\nAt this point, I've been diagnosed by roving bands of doctors with pneumonia, hydrocephaly, and a heart defect.\nEveryone is freaking out.\nTo instill calm, my mom calls the father of her best friend, Issac Issacovich, who is one of the most well-regarded cardiologists in Nikolayev.\nHis opinion is that the on-call physician is a dick.\n\"Yes, he has a murmur, but that's [common in infants](http://kidshealth.org/en/parents/murmurs.html),\" he says.\n\"You need to wait until he's at least a year old. If the murmur persists, then we can investigate further.\"\n\n### Fall 1985\n\nMy heart murmur persists, and so my family is referred to doctors in Kiev who may be able to diagnose the cause.\nKiev is no Nikolayev (a regional backwater), but my family is still well-connected there.\nAntonina Grigorivna, a good friend of my grandfather, is the head of the 4th Municipal Pharmacy, which served the Communist Party apparatus of Ukraine -- the party bosses and their families.\nAs a result, we are able to get an appointment very quickly, and the doctors in Kiev perform an ultrasound -- the first I'd had at that point!\n(Aside: In the US at that point, prenatal screening with ultrasound [was routine](http://www.ob-ultrasound.net/history1.html)).\n\nThe ultrasound allowed the cardiologist in Kiev to definitively diagnose me with coarctation of the aorta.\nHowever, surgeries to fix the condition were not performed anywhere in Ukraine.\nUsing our connections to Antonina Grigorivna, my parents are able to secure a referral from the Ministry of Health to the main Soviet hospital in Moscow.\nIt is late fall, and 
nobody wants to travel to Moscow for the full winter experience, so my family makes plans to go there in the spring.\n\n### Spring 1986\n\nIn the spring of 1986, my mom, along with my dad's mom Anika, flies to Moscow and takes up residence at my aunt's house.\nHer hard-won referral from the Ministry of Health in hand, my mom shows up at the hospital and attempts to make an appointment to see a doctor.\nShe is rebuffed at the reception.\n\"Why are you here?\" asks the woman behind the counter.\n\"You should have mailed us the referral and waited for us to summon you.\"\n\nThe next day, my mom purchases a fancy chocolate bar in a Moscow department store.\nOver the objections of my grandmother and my aunt (\"We have never done such a thing, and never would!\"), she puts a 50-ruble bill inside the wrapper.\nThe median monthly salary in the USSR at this point is around 75 rubles.\nUnsure of herself (because she has always just gotten by on family connections), she shows up back at the hospital, confronts the same receptionist, and slips the chocolate and money into the front pocket of her white coat.\nNow, the answer is \"I'll see what I can do.\"\n\nThe cardiologists in Moscow perform a Doppler echo, and confirm that I definitely need surgery to fix the coarctation.\nThey schedule a date to admit me to the hospital.\nHowever, in the chill of the Moscow spring, I get sick again.\nMy family and the doctors decide that I should come back to Moscow in June, when I am feeling better and the weather is improved.\nWe fly back to Nikolayev from Moscow on the infamous date of [April 26th, 1986](https://en.wikipedia.org/wiki/Chernobyl_disaster).\n\n### June 1986\n\nBack home, my family begins scheming about how to get me the best care during the surgery.\nInvoking the extended family network, they involve Aunt Lora.\nLora is both my grandmother's cousin, and also my grandmother's brother's wife's sister-in-law.\nIt gets better: Lora's childhood friend lives in Moscow, and is 
neighbors with the mother of [Gennady Khazanov](https://en.wikipedia.org/wiki/Gennady_Khazanov), a famed Soviet comedian.\nKhazanov's mother is friends with the family of Professor Falkovsky.\nI was able to find [a reference to Falkovsky](http://articles.dailypress.com/1990-11-13/news/9011140436_1_soviet-health-care-soviet-doctors-soviet-central-asia) as the \"director of the cardiosurgery department at the Soviet Academy of Medical Science.\"\n\nSo, when my mom and I show up in Moscow in June, we have at least some tenuous connection to the people at the top.\nI am immediately admitted to the hospital.\nThere, using some newly-purchased markers from my mom, a girl in my ward and I cover each other in little red dots, creating a chicken pox scare.\nThe nurses demand payback of the half-litre of medical-grade ethanol they used to wash the dots off us (remember, people drink this stuff straight!).\n\nThat night, the Soviet surgeons perform their first balloon angioplasty on an infant.\nRather than using the femoral artery, as is common for adults, they gain access via the brachial artery in my left arm.\nAfter the procedure, they accidentally suture the artery closed, cutting off blood flow to my left arm, and send me to recovery.\n\nThe next morning, Professor Falkovsky and Dr. Leo Bakeria (apparently, [Russia's chief cardiologist](http://rbth.com/articles/2012/08/30/a_walk_in_the_park_with_russias_world-renowned_cardiologist_17813.html)) make the rounds of the hospital.\nI had been complaining of arm pain all night -- my earliest memory may be of standing in my hospital bed with my arm in pain, trying to get the attention of a doctor through the glass door.\nHowever, the on-call doctors don't realize that anything is wrong.\nThankfully, the two chief cardiologists recognize the problem with my arm, and proceed to raise hell.\nBy the time my mom arrives at the hospital in the morning, I am already in surgery again.\nDr. 
Bakeria re-opens my incision, cuts out the dead portion of the brachial artery, and connects the two remaining sections together to save my arm.\n\n### The Recovery\n\nI am transferred to the ICU.\nMom shows up there with Aunt Lora, and they bribe the head of the ICU (Dr. Yuri Buziashvili) with another 200 rubles to ensure quality care.\nMy dad was already planning to come to Moscow, but my mom calls his boss at the ship-building plant, who puts him on the next plane.\nBecause family is not allowed to visit patients in Soviet hospitals, Mom enrolls temporarily as a hospital worker so she can spend time with me while I recover.\n\nGangrene starts in my arm, but the arm slowly recovers.\nI lose a lot of range of motion in it -- for instance, when crawling I keep the left hand balled up in a fist.\nTo help in the recovery, Prof. Falkovsky connects my mom to a woman named Valentina Nikolayevna.\nShe is a neurologist, but she also practices alternative medicine -- probably not legal, but she has cover from her husband, a KGB officer.\nI spend many hours at her apartment in Moscow, getting acupuncture.\nMy reward for good behavior during the treatment is to play with her son's remote-controlled walking robot, a toy light years ahead of anything I had access to.\n\n### The End\n\nIn the end, this is a happy story.\nThanks to my family connections and a bunch of money, my coarctation is fixed, and I am able to grow up normally.\nEven my left arm is not much of an impediment.\n\nThe hero of this story is my mom.\nIt couldn't have been easy to deal with all the doctors, or to travel around all over the place with a child, bribing every other person.\nHer dedication saved my life -- thanks, Mom!\n",
            "url": "https://igor.moomers.org/posts/heart-disease-in-the-ussr",
            "title": "Congenital Heart Disease in the USSR",
            "date_modified": "2016-09-12T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/recovering-openid",
            "content_html": "\nI recently became locked out of [my StackOverflow account](https://stackoverflow.com/users/153995/igor-serebryany).\nThis was because, back in the day when I first created the account, I set it up to authenticate via OpenID.\nHowever, I never ran my own OpenID provider, which seemed like a huge hassle.\nInstead, I started out delegating to an OpenID provider called `MyOpenID`.\nThis worked for a while, but eventually this provider went out of business.\nI then delegated to Google, which actually acted as an OpenID provider for a little while.\nHowever, it seems like in the past few years Google also stopped providing any sort of OpenID services.\n\nI had a long-lived session with StackOverflow, but at some point it expired.\nSo, unable to log in, I found myself using StackOverflow a lot less.\nSeveral times, I had urges to add an answer to a question, or to post my own question/answer pair on some obscure issue, but I would just skip it because I was logged out.\n\nI finally decided to recover access to my account.\nI did it using [local-openid](https://bogomips.org/local-openid/).\nHere's how!\n\n1. Install local-openid -- I just did `gem install local-openid`.\n\n2. Start the local-openid server.\n  I just ran `local-openid`, which booted up a WEBrick server on port 4567.\n\n3. Forward traffic to the local-openid server from Apache.\n  I did this in my `<VirtualHost>` section:\n\n   ```apacheconf\n   ProxyPass / http://localhost:4567/\n   ProxyPassReverse / http://localhost:4567/\n   RewriteEngine on\n   RewriteCond %{HTTP:Authorization} !^$\n   RewriteCond %{QUERY_STRING} openid.mode=authorize\n   RewriteCond %{QUERY_STRING} !auth=\n   RewriteCond %{REQUEST_METHOD} =GET\n   RewriteRule (.*) %{REQUEST_URI}?%{QUERY_STRING}&auth=%{HTTP:Authorization} [L]\n   ```\n  I made sure that this was the only ProxyPass directive that was uncommented, and then ran `apache2ctl restart`.\n\n4. 
Attempt to log in via my OpenID at StackOverflow.\n  This caused the running server to print output like this:\n\n   ```\n   localhost - - [23/Apr/2016:19:59:48 CDT] \"GET /xrds HTTP/1.1\" 200 567\n   - -> /xrds\n   Not allowed: 172.5.245.84\n   You need to put this IP in the 'allowed_ips' array in:\n    /home/igor47/.local-openid/config.yml\n   ```\n\n5. Edit my `~/.local-openid/config.yml` file.\n  This file had an `allowed_ips` section into which I added the IP address that was making requests.\n  I also saw a section like so:\n\n   ```yaml\n   https://stackoverflow.com/users/authenticate/:\n     assoc_handle:\n     updated: 2016-04-24 01:01:39.873137626 Z\n     expires: 1970-01-01 00:00:00.000000000 Z\n     session_id: 1461459588.9306.0.1628395514528984\n     expires1m: 2016-04-24 01:02:39.873137626 Z\n   ```\n  I removed the `expires` key and renamed the `expires1m` key to `expires`.\n\n6. Reload the auth page in the browser. Voila! I was logged in!\n\nHopefully this helps you if you also lost access to some old OpenID account and would like to regain it.\n",
            "url": "https://igor.moomers.org/posts/recovering-openid",
            "title": "Recovering OpenID",
            "date_modified": "2016-04-23T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/smartstack-vs-consul",
            "content_html": "\nI am one of the primary authors of Airbnb's SmartStack, which is composed of two pieces: [nerve](https://github.com/airbnb/nerve) and [synapse](https://github.com/airbnb/synapse).\nWhen we released this software, we documented a lot of the reasoning behind it in a [very comprehensive post](http://nerds.airbnb.com/smartstack-service-discovery-cloud/) on service discovery.\nI recommend reading that post carefully to understand why we made the design decisions we did.\n\nRecently, I've been getting a lot of questions about how SmartStack compares to [Consul](http://www.hashicorp.com/blog/consul.html), which is an alternative take on service discovery from the amazing guys at [HashiCorp](http://www.hashicorp.com/).\nI am excited to see more people taking on this operational challenge.\nIn general, better service discovery will lead to more available SOA infrastructures, which makes for a better web experience for all web users.\nIt will also lead to a better engineering experience for the people maintaining those SOAs.\n\nHashiCorp recently put out [a comparison between Consul and SmartStack](http://www.consul.io/intro/vs/smartstack.html), which gets some things right but also some things wrong.\nThis post aims to complement HashiCorp's comparison from my perspective.\nOf course, I welcome constructive criticism of the opinions expressed here.\n\n## The Gossip Protocol ##\n\n> Consul uses an integrated [gossip protocol](http://www.consul.io/docs/internals/gossip.html) to track all nodes and perform server discovery.\n> This means that server addresses do not need to be hardcoded and updated fleet wide on changes, unlike SmartStack.\n\nThis is a fair criticism of SmartStack -- the addresses of the [Zookeeper](https://zookeeper.apache.org/) machines must be statically configured.\nOf course, [Serf must also be bootstrapped](http://www.serfdom.io/intro/getting-started/join.html) with at least one existing node to join the cluster.\nIf all of 
the bootstrapped nodes you have hard-coded into your configuration management system (like [Chef](http://www.getchef.com/chef/) or [Puppet](http://puppetlabs.com/)) die, new nodes will not be able to join the cluster.\n\nReally, there are two choices here.\nThe first is statically hard-coding a list of Zookeeper instances and relying on Zookeeper.\nThe second is static configuration of bootstrapping information for Serf and relying on Serf's [gossip protocol](http://www.serfdom.io/docs/internals/gossip.html).\n\nThe gossip protocol is a modified version of [SWIM](http://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf).\nConsul uses this not just for bootstrapping but to propagate ALL information, including the availability information you're trying to discover.\nI have many unanswered questions about the gossip protocol.\nFor instance, in the case of a network partition, it seems like a partitioned-off node will be alternately marked suspected-down and then back up by different group members.\nThis may result in a partitioned-off node never leaving the cluster.\n\nIn the end, replacing ZooKeeper with Serf may be a viable option for SmartStack.\nI would welcome pull requests to [synapse](https://github.com/airbnb/synapse) that use Serf, or maybe Consul, as a [service watcher](https://github.com/airbnb/synapse/tree/master/lib/synapse/service_watcher) instead of [Zookeeper](https://github.com/airbnb/synapse/blob/master/lib/synapse/service_watcher/zookeeper.rb).\n\n## Service Discovery ##\n\n> For discovery, SmartStack clients must use HAProxy, requiring that Synapse be configured with all desired endpoints in advance.\n> Consul clients instead use the DNS or HTTP APIs without any configuration needed in advance.\n> Consul also provides a \"tag\" abstraction, allowing services to provide metadata such as versions, primary/secondary designations, or opaque labels that can be used for filtering.\n> Clients can then request only the service providers which have 
matching tags.\n\nThe first sentence here doesn't really even make sense.\nSure, Synapse must be configured to discover the services you are going to want to talk to, but you could just as easily configure it to discover ALL of your services.\nOn the other hand, explicitly specifying which services you are going to want to talk to from which box is extremely useful, because it allows you to build [a dependency graph of your infrastructure](/images/airbnb-infrastructure-oct13.png).\nI view this as a benefit, not a drawback.\n\nAnother benefit is using [HAProxy](http://haproxy.1wt.eu/#desc) to actually route between services.\nWhenever a service inside Airbnb talks to a dependency via SmartStack, that service knows nothing about the underlying implementation.\nThe ability to avoid writing a client (even a simple HTTP client) for service discovery into each application was a fundamental design goal for us.\nIf you want a third-party application you didn't write to run on your network and consume Consul information, you must use DNS.\nHowever, DNS is even worse -- when, how, and for how long will DNS resolutions be cached by your underlying libraries or applications?\n\nInstead of insisting on a simple HTTP API, Consul provides you with the ability to do complex tag-based discovery.\nIt is almost certainly a mistake to utilize these features.\nYour infrastructure should aim to be as simple and flat as possible.\nA service instance is a service instance, and if it's different then it is a different service!\nIf you find yourself 6 months in, only talking to instances of service Y which provide property X from some unknown number of clients which have requirement X hardcoded into an HTTP request buried in their codebase, you are going to wish that you hadn't done that.\n\nFinally, HAProxy is an extremely stable, popular, well-tested, well-utilized, fundamental component of the internet which provides amazing introspection.\nThat we use HAProxy means that synapse and 
zookeeper can just go away, and your service will keep on working (although it won't get updates about new or down instances).\nUsing connectivity checks in HAProxy means that we can survive network partitions -- services which remain registered will be taken out of rotation by HAProxy.\nUsing HAProxy's [built-in load balancing algorithms](http://docs.neo4j.org/chunked/stable/ha-haproxy.html) means that we don't have to write them.\nUsing HAProxy's [built-in status page](http://haproxy.1wt.eu/img/haproxy-stats.png) means we can easily see what's happening on a particular box with that box's service dependencies.\nUsing HAProxy's logging, we can see a detailed history of communications between services.\nAnd using monitoring tools that scrape and aggregate HAProxy's stats, we can get instant insight into what kinds of load services are seeing, from which kinds of other services.\n\nMany of these advantages can again be gained by configuring synapse to use Consul as a discovery source.\nBut I strongly feel that the synapse/HAProxy combo is better in many ways than Consul, and urge you to consider the benefits I've outlined above.\n\n## Health Checking ##\n\n> Consul generally provides a much richer health checking system.\n> Consul supports Nagios style plugins, enabling a vast catalog of checks to be used.\n> It also allows for service and host-level checks.\n\nThe current list of health checks in nerve is minimal at best, although it's been sufficient for our needs here at Airbnb.\nI like the simple model of nerve doing a direct check on a service from the machine it's running on.\nConceptually, it's easier to wrap your head around.\nWhy is this box deregistered?\nBecause it failed its nerve health check?\nOr because Nagios is down or overloaded, or because the application pinged your service to ask to be deregistered and then kept running, or for what other unseen reasons?\n\nAlthough I would discourage the use of complex health checks, I can see the advantages, 
and I would welcome PRs to nerve to add better health checking.\n\n## Multi-DC ##\n\n> While it may be possible to configure SmartStack for multiple datacenters, the central ZooKeeper cluster would be a serious impediment to a fault tolerant deployment.\n\nI am not certain that I would want to run a UDP-based gossip protocol across the public internet.\nRunning a Zookeeper cluster across the public internet is also not an ideal situation.\n\nI think that the correct approach is to provide mostly-local service clusters per datacenter.\nA single, global Zookeeper cluster will contain only the list of services that are truly cross-DC (like the front-end load balancers), while most services only talk to services inside their local DC.\nAssuming a flat cross-DC topology is setting yourself up for much higher than necessary latency.\n\nOf course, with Consul you could probably configure your services to discover only dependencies tagged with your local datacenter.\nBut this reaches into the realm of configuration management, and at that point both Consul and SmartStack become equivalent -- a Chef change is a Chef change.\n\n## Summing up ##\n\nI love HashiCorp, and I think Serf is a great idea, implemented well.\nI think that from an operations perspective, SmartStack has a bit of an edge on Consul.\nI am happy to have the opportunity to engage in dialog like this, and I'm excited about how much easier operating internet infrastructure is getting all the time.\nIf you have comments or corrections, please do get in touch!\n",
            "url": "https://igor.moomers.org/posts/smartstack-vs-consul",
            "title": "SmartStack vs. Consul",
            "date_modified": "2014-05-01T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/third-party-domains",
            "content_html": "\nYour public domain name is the name that your users use to access your site.\nFor instance, if you're Google, your public domain name is [`google.com`](https://www.google.com).\nFor Airbnb, it would be [`airbnb.com`](https://www.airbnb.com).\n\n## Delegation ##\n\nUsually, when a user initiates a request for something under your domain name, one of your own servers will respond.\nHowever, this is not always the case.\nFor instance, you may be using a bulk email provider like [sendgrid](https://www.sendgrid.com) to send email.\nIn that case, you might want to point `email.yourdomain.com` to sendgrid's servers.\nAnother example might be your blog, under `blog.yourdomain.com`, which is actually operated by [wpengine](https://wpengine.com).\n\nThere are typically two ways to do such delegation.\nBy far the most common is [DNS](https://en.wikipedia.org/wiki/Domain_Name_System)-based.\nYou simply edit the DNS entry for `thirdparty.yourdomain.com` to resolve to an [IP address](https://en.wikipedia.org/wiki/IP_address) for a server operated by the third party.\n\nIn the age of proxying web servers like [nginx](http://nginx.org/en/), another way to do delegation has emerged.\nThe domain name in question can point to your server, but requests can then simply be proxied to the third party.\nIf you're reading this blog post at the domain [igor.moomers.org](http://igor.moomers.org), you are actually an end-user of such delegation.\nThe content you're reading actually comes from [igor47.github.io](http://igor47.github.io/).\n\n## Delegation considered harmful ##\n\nThere are many reasons why it is a bad idea to do the kind of delegation mentioned above.\nThese are mostly security reasons, but delegation can also cause usability issues.\n\nLet's talk about each of the issues in turn.\nI will use the example website at `yourdomain.com` with a third-party delegation to a blog provider at `blog.yourdomain.com`.\n\n### Session Leakage ###\n\nIf you provide your users 
with a session cookie, anyone who has this cookie can trivially impersonate the user.\nIt is very common to serve traffic under `www.yourdomain.com` but set a session cookie for `*.yourdomain.com`.\nWhen your users read your blog, at `blog.yourdomain.com`, the blog provider will get a copy of all of your session cookies.\n\nAn attacker that compromises the systems of the blog provider can now steal the identities of all of your users who have visited your blog.\nThe blog provider is an attractive target, because all the sites that use this provider can be simultaneously compromised.\n\nThis attack can be mitigated by setting your cookies to specific subdomains, but this may not be possible if you operate on several subdomains.\nThis can also be mitigated by serving your production traffic on `https` only and setting your cookies' `secure` flag.\nThen, your users' browsers will not send the cookies to your blog provider.\nObviously, in this case you should NEVER give your blog provider an SSL certificate covering that subdomain.\n\n### Script Injection ###\n\nRather than stealing your users' sessions, attackers can force your users to perform actions on your site via the third-party site.\nFor instance, suppose that `blog.yourdomain.com` does not properly sanitize javascript in your blog's comments.\nThis is the equivalent of your own site allowing [javascript injection](https://en.wikipedia.org/wiki/Cross-site_scripting).\nAn attacker putting malicious javascript into the blog comments can force your users to perform actions on your main site.\n\n### Session Clobbering/Cookie Fixation ###\n\nIf your blog provider happens to set the same cookies as you do, you will log out your user or ruin your tracking or analytics.\nFor instance, if both you and your blog provider use Google Analytics, you will both be attempting to set the various `_utm*` cookies.\n\nThe name collision can cause usability problems for your users.\nHowever, this is also a potential security 
vulnerability when the browser itself is vulnerable.\nAn attacker who can set cookies on your domain can set a sensitive cookie to a value he controls and then force you to make that value valid.\n[Here is a description](http://homakov.blogspot.com/2013/03/hacking-github-with-webkit.html) of such an attack on GitHub.\n\n### Personal Data Leakage (PII) ###\n\nYou might be leaking data other than your session cookies via additional cookies, such as tracking cookies.\nFor instance, your [mixpanel](https://mixpanel.com) cookie might contain a bunch of attributes you want to track about your user.\nEven if you're diligent about setting proper settings on your session cookies, your mixpanel cookie is likely clear-text.\nThis means that your blog provider now has a large [PII](https://en.wikipedia.org/wiki/Personally_identifiable_information) dataset on your users.\nThis could get you into trouble if the blog operator experiences a breach of their log data.\n\n### Social Engineering & Reputation ###\n\nUsers assume that content under `yourdomain.com` comes from you.\nBy gaining access to your delegated domains, an attacker can convince users to simply give up their passwords or other information.\nAdditionally, you can gain a poor reputation if your delegated domains are defaced or simply contain negative content.\nThe internet will assume that you endorse that content.\n\n## Workarounds ##\n\nYou could try to be very diligent about your security settings, and \"do it right\" with domain delegation.\nHowever, there are many ways for you to fail here.\nThe best approach is to avoid delegating domains in the first place.\nYou should control all of the content that is served at `yourdomain.com` and its subdomains, and sanitize any user-generated content there.\n\n### Separate top-level domain ###\n\nTypically, for sites operated by third parties, you would use a separate top-level domain.\nFor instance, when [github](https://github.com) first launched [github 
pages](http://pages.github.com/), they were hosted under the main `github.com` domain.\nHowever, GitHub quickly [moved this content to its own top-level domain](https://github.com/blog/1452-new-github-pages-domain-github-io) at `github.io`.\nSimilarly, Google began [hosting user-generated content at \\*.googleusercontent.com](http://googleonlinesecurity.blogspot.com/2012/08/content-hosting-for-modern-web.html).\n\n### Avoiding Migrations ###\n\nIt may be tempting to just use a single domain for all of your content initially.\nHowever, my experience is that it is much more difficult to migrate later than to make the right choice from the beginning.\nYou will not want to move `blog.yourdomain.com` to `blog.yourdomain.io` later on.\nSo, if you're just starting out, don't go down the wrong path -- resist the temptation to delegate!\n",
            "url": "https://igor.moomers.org/posts/third-party-domains",
            "title": "Third-party domain delegation considered harmful",
            "date_modified": "2014-03-26T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/recovering-an-android-phone",
            "content_html": "\nOver the break, I dropped my phone on a tile floor and totally shattered the screen.\nI can still see what's happening, but the digitizer is not working so I cannot unlock it or do anything.\nI ordered a new phone, but in the meantime I would like to access my text messages (a lot of them NYE well-wishes from friends I don't see too often) and get to my Google Auth credentials.\n\nThere are a few people who have managed to get into their phones using adb shell.\n[This post on reddit](http://www.reddit.com/r/Android/comments/1r2zha/) is the best writeup I've found.\nHowever, they omit the steps they followed to enable adb debugging on a phone where this was disabled.\n\n## Enabling ADB via Recovery Mode ##\n\nThis is what I did:\n\n  1. Hold the power button until the phone reboots\n  2. Hold down the \"down volume\" button until you get into the bootloader\n  3. Use the volume and power buttons to boot into recovery mode\n  4. Follow the instructions in step 1 of [the reddit post](http://www.reddit.com/r/Android/comments/1r2zha/) but also run `update global set value = 1 where name = 'adb_enabled';` to enable debug mode\n\nThe debug mode will not persist after the phone reboots.\nTo fix this, I followed instructions [in this Stack Overflow question](http://stackoverflow.com/questions/13326806/enable-usb-debugging-through-clockworkmod-with-adb):\n\n```bash\nadb shell\nmount /system\ncd /system\ncp build.prop build.prop.backup\necho persist.service.adb.enable=1 >> build.prop\n```\n\nIf you edit `build.prop` via `adb pull` and `adb push`, remember to `chmod 644 build.prop` or your phone won't boot.\n\n## Accessing the phone ##\n\nOnce I did this, my phone booted with USB debugging turned on.\nNow, I wanted access to my phone.\nThere are many options for this -- screencasting, VNC server, etc... 
-- but the easiest solution is a bluetooth mouse.\nOnce you've got one paired with your phone, it's as though you've got your touchscreen back.\n\nYou will need to access your settings menu and pair the mouse.\nFrom `adb shell`, you can use the `input tap` command to send screen tap events.\n`input tap X Y` is a tap at the corresponding X and Y coordinate, with `0 0` being the upper left-hand corner.\n\nI had an icon for the settings menu on my home screen, so I got to it by running `input tap 250 800`.\nOnce there, from `adb shell` run:\n\n```bash\ninput tap 600 400  # for bluetooth\ninput tap 100 1100 # search for devices\ninput tap 100 1000 # to pick the first device that you've found\n```\n\nAt the end of this process, you will have a pointer on-screen which works just like your finger.\n\n## Moving to a new phone ##\n\nI used Titanium Backup (I paid for the Pro version).\nI used the mouse to do a backup of all of my apps and their data.\nI also backed up my SMS/MMS and call log.\nTo do that, click `Menu` -> `Backup data to XML` and then pick a local location to save the XML files.\n\nI copied the `TitaniumBackup` dir using `adb pull` and pushed it to the new phone with `adb push`.\nI had to do the restore of the XML data separately, using `Menu` -> `Restore data from XML`.\nI had to enable the advanced view in the file picker to find the location of my XML files on KitKat 4.4.2.\n\nAfterwards, I had Google Play update all of those apps to newer versions.\nThis happened without problems.\n",
            "url": "https://igor.moomers.org/posts/recovering-an-android-phone",
            "title": "Recovering an Android Phone With a Broken Screen",
            "date_modified": "2014-01-04T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/interview-questions",
            "content_html": "\nAs an engineer at [Airbnb](https://www.airbnb.com), I do a LOT of interviewing.\nI talk to at least two or three people a week, but sometimes it's as many as five or six.\nAll of my interview questions involve asking people a technical question that we can work through to generate real, working code.\n\nUsually, I'll whiteboard the question and we'll spend a moment talking about possible approaches.\nBut the goal is to get the candidate to start writing code quickly so we can get to a solution.\n\nWe've all been faced with the terrible, knowledge-based, \"I could look that up in 1 minute but I don't have a computer\" question.\nWorse are the gotcha questions that you wouldn't be able to solve unless you happen upon a moment of brilliant insight.\nThe questions I ask aim to avoid that.\n\nGood questions are fun and engaging for candidates.\nGood questions also always have a path forward.\nIf the candidate is stuck, I should be able to give a hint that allows them to get unstuck but that doesn't give everything away.\n\nI like to arrive at some running code that solves at least a subset of the problem at the end of every interview.\nIf my question just wasn't going well with a candidate, getting something running keeps them from spiraling down a mental failure vortex, and allows them to relax and focus on the next interview.\n\n## Preparing to Ask a Question ##\n\nA lot of work happens before you ever see a particular interview question.\nFirst, I myself have probably solved it in one or two possible ways.\nThe first time I solve it, I try to give myself the same constraints a candidate would have -- limited time, no previous knowledge, no specific preparation.\n\nNext, if I'm the one who came up with the question, I will ask it to a few of my coworkers to get a basic calibration.\nIf someone else came up with the question, then I have probably sat in on (shadowed) a few interviews where the question was asked.\n\nBy the time I ask you, I am 
familiar enough to quickly know the various dead ends and blind alleys that you can fall into.\nI know of a few ways to steer you towards something that would work.\nFinally, I know how people of various experience and skill levels usually perform.\nI know enough to be amazed at your quick and clean approach.\nAlternatively, I've seen how good whiteboarding goes bad and results in spaghetti code that's impossible to debug.\n\n## Don't Leak the Questions! ##\n\nWhen you publicly post a question that you were asked after interviewing, you are undoing weeks of work for your fellow engineers.\nIn its place, you are creating weeks of new work as we develop, test, and calibrate on new questions.\n\nDuring that calibration, we might ask your fellow engineers sub-optimal questions.\nThey will have a bad time in their interviews, and will post rants about the sorry state of interviewing in engineering.\nAlso, because we're not calibrated, they might get undeserved feedback which causes them to miss out on their dream job.\n\nLeaking interview questions is antithetical to what we're trying to do as engineers.\nOur goal should be to spare our peers work -- that's what writing clean code and creating nice architecture is all about.\nOnce software engineering becomes [a real profession](http://michaelochurch.wordpress.com/2012/11/18/programmers-dont-need-a-union-we-need-a-profession/), there will be no need to ask our peers silly little puzzles to evaluate them.\nBut until then, let's be professional and JUST SAY NO.\n",
            "url": "https://igor.moomers.org/posts/interview-questions",
            "title": "Tech Interview Questions",
            "date_modified": "2013-11-09T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/soviet-kgb-stories-pt1",
            "content_html": "\nIn 1986, my uncle Michael got summoned to the offices of the KGB in Nikolaev, Ukraine.\nThey wanted to know about his friend, Tolic Dobrusin.\nDid Tolic have any relatives outside the country?\nMichael pleaded ignorance, and the KGB let him go.\n\nWhen Michael got home, he called his friend Tolic.\nHe told him that the KGB had been asking questions about him, and that they needed to talk.\nTolic came over to my uncle's (actually, most of my family's) apartment for an evening chat.\nOver vodka and pickles, he admitted that he did indeed have a relative in Germany.\nIt was his uncle, whom he had recently met in St. Petersburg (then Leningrad).\nHowever, Tolic did not give his uncle any military secrets, because his uncle was working as a cleaner in a hotel and didn't care about any of that stuff.\n\nThe next workday, when my uncle showed up for work at the shipbuilding plant, the KGB was waiting for him at the gate.\nHe was taken away to their HQ, where he was questioned about his meeting with Tolic.\nMy uncle tried to evade by claiming that he was just trying to confirm the KGB's hunch about Tolic's relatives.\nHowever, the KGB was not happy with having their investigation revealed, especially by a party member (my uncle had joined the communist party at the behest of his father, my grandfather).\n\"We didn't want you revealing our investigation.\nWe're going to put you away.\"\n\nMy mom's good friend, Ira, had a father who was a colonel in the KGB.\nAround that time, Ira came to talk to my mom.\nShe was asking about whether our family was in serious trouble, but my mom was ignorant about the conversations her brother was having with the KGB.\nMom claimed she didn't know what Ira was talking about, which made Ira think that Mom was playing some kind of crazy political game with her.\n\nIn the meantime, my uncle's friend in the police department came to speak with him.\nHe told him that the KGB was sure to arrest him in the near 
future.\n\"Get out of here while you still can\" was the message.\n\nMichael was walking down the street and he saw a sign on a streetpost about workers needed in Deputatsk.\nDeputatsk was a small settlement about two hours' flight north of [Yakutsk](https://www.google.com/maps/preview?hl=en&authuser=0#!q=yakutsk%2C+russia&data=!4m10!1m9!4m8!1m3!1d1951509!2d129.3333433!3d62.7799773!3m2!1i1438!2i802!4f13.1), which was home to one of the only tin mines in the USSR.\nAt incredible expense, tin was mined there.\nThe raw ore was loaded onto planes and shipped to Yakutsk for processing.\n3 tons of ore yielded several kilograms of tin.\nDeputatsk was a totally fabricated Soviet affair, with a single restaurant, a movie theater, block housing, and triple hazard pay for workers.\n\nShortly thereafter, over a near-heart attack from my grandmother, my uncle found himself in Deputatsk.\nHe showed up to talk to the head of the electric plant there, and shortly realized that everyone in the settlement was doomed.\n\"Look,\" the head told him, \"here's the deal.\nThe whole town survives on the electricity from the plant.\nIt produces electricity via diesel turbines manufactured at a plant in Nikolaev.\nHowever, we have no access to maintenance materials or spare parts.\nWe are running on the edge of capacity, and no government plan has room for us to receive the necessary materials.\"\n\nMy uncle claimed to be able to use his personal connections in his home town to get the necessary parts.\nHowever, it was going to take money.\nHe named the first price that came into his head: 10,000 rubles.\nThe plant head claimed that this would be no problem, and also approved any additional travel expenses.\n\nMy uncle moved into the dormitory while he waited for his wife to come join him in Deputatsk.\nHe quickly made friends with his dorm mates, and explained that he was about to go on a trip to the \"mainland\" to procure spare parts for the settlement's electric plant.\nThe housemates 
were desperate for any products from the mainland (top request: apples, vodka).\nThey quickly gathered another 2000 rubles each (which they all had because they were receiving triple hazard pay for living in that frozen hellhole).\nThey assured Michael that if he was unable to procure anything that would be fine, but they would like him to please try.\n\nMy uncle showed up back in his home town with close to 30k rubles in his pocket.\nFor that kind of money, in Nikolaev in 1986, he could buy a house, a car, a dacha and furniture for those.\nHis wife convinced him that he had to do the right thing for the people counting on him.\n\nVia his personal connections at the turbine plant, he was able to procure the necessary supplies (which did not appear on any Soviet production plan).\nHe packed them into suitcases, and set off for Moscow, an intermediate destination on the way to Siberia.\nIn Moscow, he still had several thousand rubles to spend.\nHe bought the produce his dorm mates were asking for.\nHe also managed to buy special checks which granted access to special stores selling rare foreign goods destined for sailors getting off long-haul voyages and for party officials.\nIn these stores (where he saw goods he'd never seen before), he purchased among other things several cases of French perfume.\n\nMy uncle got back to Deputatsk, and began handing out his bounty.\nWord quickly spread around town about his perfume purchase, and girls from all over the settlement begged for a chance to buy some, at a price some 7 or 8 times what he had paid for it in Moscow.\nOf course, this was speculation -- strictly forbidden in the USSR.\n\nThe next day, my uncle found two officers from the local police department visiting him at work.\nThey asked him about the goods he had brought back from Moscow, and he mentioned the perfume (which was insanely popular; almost all of the females in the settlement had bought some from him).\nHe spun the cops a yarn about how he was planning 
to profit from his trade.\nHowever, his beautiful young wife told him that if he sold the perfume for even a ruble over what he bought it for, she would kick him out into the street.\n\nBy some magical chance, the cops bought my uncle's story.\nThey told him to thank his wife for her good advice, and left.\n",
            "url": "https://igor.moomers.org/posts/soviet-kgb-stories-pt1",
            "title": "Soviet KGB Stories Pt. 1",
            "date_modified": "2013-09-14T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/github-pages-proxying-and-redirects",
            "content_html": "\nWhen I wanted to start a github pages blog, github was serving its page requests on `<username>.github.com`.\nThese days, they serve it on `<username>.github.io`, probably for security reasons (session cookies?).\n\nAnyway, I wanted my page to be at `igor.moomers.org`, my own domain name.\nI also wanted the same thing under `igor.monksofcool.org`, which is another domain name that is also my OpenID.\nSo, I configured apache to just proxy my domains to the github page.\nHere is my config:\n\n```apache\n   ProxyPass / http://igor47.github.com/\n   ProxyPassReverse / http://igor47.github.com/\n```\n\nThis change made my openid inaccessible.\nI [had my openid settings in my head](https://github.com/igor47/igor47.github.com/commit/1b7b28605aa5b74e7f150e1a1a5e67f93b8d6138).\nHowever, when I would sign in to my openid, [stackoverflow](http://stackoverflow.com/) would ask me if I wanted to create a new account for `igor47.github.io`.\nIn fact, I realized that even though I was proxying, I would get redirected to `github.io` in my browser.\n\nLooks like the github web server only accepts requests on one hostname.\nYou can tell it's just one because they say so [here](https://help.github.com/articles/my-custom-domain-isn-t-working#multiple-domains-in-cname-file).\nSo, I figured I could fix my problem by setting a [custom cname page](https://help.github.com/articles/setting-up-a-custom-domain-with-pages).\nI tried that, [here](https://github.com/igor47/igor47.github.com/commit/2d3ce308de32dd734d35633f32442db6759cec68).\n\nThis resulted in strange behavior.\nEven going to `igor.monksofcool.org` in my browser resulted in an infinite 301 redirect loop to `igor.monksofcool.org`!\n\nThis finally made me realize what was happening.\nI was proxying all requests to `github.com` and should have been using `github.io`.\nEven requests to `igor.monksofcool.org` would go to `igor47.github.com`, which is NOT the domain name in my CNAME file.\nGithub 
would redirect away, and hitting that domain would again issue an improper sub-request.\n\nTo fix the problem, I removed my CNAME file and fixed my apache config to proxy to `github.io`.\nSuddenly, everything was magically working again!\n",
            "url": "https://igor.moomers.org/posts/github-pages-proxying-and-redirects",
            "title": "Github pages: proxies and redirects",
            "date_modified": "2013-07-14T00:00:00.000Z"
        },
        {
            "id": "https://igor.moomers.org/posts/now-feb-2025",
            "content_html": "\nWelcome to my 2025 life update!\nMy previous now page [is here](/posts/now-march-2024).\nHere is a [permalink to this page](/posts/now-feb-2025).\n\n## Work\n\nI'm still working at [Rock Rabbit](https://rockrabbit.ai).\nWork has been pretty interesting and fun.\nThe core problem at the company is how to model the crazy world of incentives.\n\nIncentives have really complicated eligibility criteria.\nYou have to get only specific equipment, with specific efficiency ratings.\nYou might only be eligible if you're replacing an existing appliance of a specific type.\nThere are different programs for emergency repairs vs. planned replacements vs. new construction.\n\nIt's also pretty complicated to show you how much money you might be eligible for.\nThe total might depend on everything from the number of condensers in your HVAC unit to whether you've programmed the appliance to make use of TOU rates.\nClaiming the money is another exciting adventure.\nPrograms have all sorts of documentation requirements, and they range across the board in terms of systems for submitting and validating your claims.\nSometimes, you send your application to Susan, and she lets you know if she approved it.\nOther times, there's a complicated portal, which requires a CAPTCHA for every interaction.\n\nFinally -- the most interesting wrinkle is that we're not encoding all of these rules in software.\nInstead, we have a team of analysts who are building and maintaining the incentive model.\nThe software has to provide mechanisms for analysts to express the complexity, and then the UI and functionality is driven by the metadata they construct.\nThis is the most interesting component of the system, and is keeping me pretty heads-down trying to figure out how to build meta-UIs and storage systems.\n\n## Philanthropy & Thoughts on Climate\n\nFor the past few years, I've been trying to figure out how to effectively deploy philanthropic (501(c)(3)) funds in the climate 
space.\nI've gone through interesting values-alignment exercises with [Founders Pledge](https://www.founderspledge.com/).\nI joined [SV2](https://sv2.org/) as a partner.\nI did a lot of personal research, and found worthwhile organizations like [Rising Sun](https://risingsunopp.org/), [Rewiring America](https://www.rewiringamerica.org/), or [Climate Cabinet](https://climatecabinet.org/).\n\nAt SF Climate Week 2024, I heard a great talk by [Dan Stein](https://www.linkedin.com/in/daniel-stein-8210a639/) and [Giving Green](https://www.givinggreen.earth/how-it-works).\nI bought their core premise -- that a lot of climate innovation is downstream of policy, and that funding policy advocacy orgs can be a big lever.\nAs a result, a lot of my donations in 2024 were to Giving Green-recommended organizations: [The Good Food Institute](https://gfi.org/), [Project Innerspace](https://projectinnerspace.org/), and [Clean Air Task Force](https://www.catf.us/work/).\n\n## Regime Change & Information Ecosystem\n\nWe had an election.\nI had a pretty strong sense that Trump was going to win, and felt fairly disengaged around my options in the opposition party.\nI remain a strong proponent of the Biden administration -- it accomplished a ton of amazing policy work, and it's pretty sad how little credit those people get for their enduring accomplishments, especially the IRA.\nHowever, neither Biden as candidate, nor his appointed heir Harris inspired much confidence for me, and I felt fairly disengaged throughout the election.\nNow, I, along with everyone else who was similarly disengaged, get to watch in horror as the incoming administration tears apart not only those accomplishments I was so happy about, but the entire federal government, plus the rule of law to boot.\n\nMy focus on climate change over the past decade was two-fold.\nI love the natural world, and feel especially drawn to help protect it.\nBut also -- I've been viewing the climate crisis as both foundational and 
especially time-sensitive.\nIf people are missing health care, we can still pass health care policy next year, and those people will get health care.\nIf we wait a year on climate change, some things might change in irreversible ways.\nMaybe the coral reef never recovers from the next bleaching event, or another species goes extinct.\nClimate change itself becomes harder and more expensive to solve -- even more warming locked in, more stranded assets deployed, more infrastructure lock-in.\nPlus, climate change is a tax on the world, making other problems harder to solve.\nAs we spend more resources on cleaning up from the next hurricane or mega-fire, we have fewer resources for all other problems, including the problem of climate change itself.\n\nWatching the new regime take the reins, not only administratively but in the culture, has caused my thinking to jump another abstraction level.\nWhile a lot of problems are downstream of climate change, climate change itself is downstream of the problem of the mess in our information ecosystem.\nEpistemology is out the window, now.\nWe cannot hope to solve *any* problems without knowledge, consensus, and attention.\nThese resources are now under the control of a few demagogues, who will use them to centralize their own power.\n\nI'm re-thinking my philanthropic commitments, and would love to re-direct my available funds towards projects that aim to build a strong information commons.\nI've already been supporting some projects in this space -- notably, local journalism through [CalMatters](https://calmatters.org/), [CitySide Journalism](https://www.citysidejournalism.org/) (publishes BerkeleySide), or [Mercury News](https://www.mercurynews.com/).\nHowever, now that the Trump administration is [actively](https://www.foxnews.com/media/george-stephanopoulos-abc-apologize-trump-forced-pay-15-million-settle-defamation-suit) 
[attacking](https://apnews.com/article/trump-meta-settlement-zuckerberg-capitol-riot-9939e52679364080c983e0cab739b805) [information sources](https://www.foxnews.com/media/trumps-lawsuit-against-cbs-expands-after-release-60-minutes-transcript-adds-paramount-defendant), I think there might be room for more structured defense of journalism, knowledge, and truth.\n\nI'm open to recommendations, both in terms of orgs to fund, and in terms of places where technologists can contribute to the problem domain.\n\n## Projects\n\nA big project over the last few months has been moving back to Berkeley.\nThis has involved getting settled in a new house, and especially setting up a new garage.\nI'm still working on getting my workshop settled; I got a WallBoard!\n\n![A photo of my in-progress wallboard](/images/wallboard.jpg \"Still in-progress\")\n\nI'm still working a bunch on my self-hosted setup.\nSome big new additions have been [Rallly](https://github.com/lukevella/rallly) for a self-hosted Doodle alternative, and [Vikunja](https://vikunja.io/) as a self-hosted task tracker.\nI tried to deploy a self-hosted temporary file sharing tool, but [ran into issues](https://github.com/eikek/sharry/issues/1597).\n\nFor real-world project ideas, one thing I noticed is that there are some really dark blocks in Berkeley.\nI'm plotting how to install guerrilla light installations, possibly powered by solar panels and batteries.\nI now have some experience in unattended electronics after installing the Peter Bench in the deep playa at Burning Man 2024.\n\n![The Peter Bench](/images/peter-bench-at-home.png \"As usual, I didn't get on-playa photos\")\n\nI'm also plotting a sound-reactive light project for Priceless 2025.\n\n### Shop Talk\n\nNow that I'm living back in Berkeley, I'm really enjoying the chance to connect more with people.\nI'm really interested in connecting with people over their ideas and creative projects.\nI think it's helpful to create spaces like this explicitly -- I 
certainly often feel awkward getting deep into work or politics or philosophy with people, especially when other folks in the conversation might not be down for it, or if the context is not such that we can really get into it.\n\nI'm hoping to spin up a recurring event called \"Shop Talk\".\nI want to focus on one or two people presenting what they're working on, followed by Q&A and discussion.\nLet me know if you're interested!\n\n## Reading\n\nLast year, my big epiphany was that the [Bobiverse](https://en.wikipedia.org/wiki/Dennis_E._Taylor#Bobiverse_%282016%E2%80%932024%29) books are really good, despite having an uninspiring title.\nThis year, I similarly discovered that the [Murderbot  Diaries](https://en.wikipedia.org/wiki/The_Murderbot_Diaries) are really good -- I binged these books very quickly.\nI really enjoyed finally reading [This Is How You Lose the Time War](https://en.wikipedia.org/wiki/This_Is_How_You_Lose_the_Time_War).\nI can also recommend [the new Robin Sloan](https://www.robinsloan.com/moonbound/) and [the new Adrian Tchaikovsky](https://en.wikipedia.org/wiki/Alien_Clay).\nFantasy books haven't been appealing to me for the last few years, so I was surprised to enjoy [the Licanius Trilogy](https://bookshop.org/p/books/the-shadow-of-what-was-lost-james-islington/111298), which I devoured on vacation.\nFinally, I re-listened to [the Delta-V series](https://www.penguinrandomhouse.com/series/NVD/a-delta-v-novel/#) by Daniel Suarez.\nThis was in the aftermath of the election, and provided amazing copium -- maybe we'll solve our terrestrial problems through space industry?\n\nIn non-fiction books, I enjoyed [Dopamine Nation](https://www.annalembke.com/dopamine-nation), which helped me better understand my own addictions (CANDY!) 
and have a language for the tools I use to control them (chronological, physical, and categorical self-binding).\nHaving previously read [Seeing Like A State](https://en.wikipedia.org/wiki/Seeing_Like_a_State), I picked up [Against the Grain](https://en.wikipedia.org/wiki/Against_the_Grain:_A_Deep_History_of_the_Earliest_States), which is another grand-narrative type deep-history book in the style of Yuval Noah Harari or Brad DeLong.\nSpeaking of Harari, I'm still working through [Nexus](https://www.ynharari.com/book/nexus/), which is full of amazing ideas and provides a way to conceptualize our current slide into autocracy in information-network terms.\nI think it's one of those books that I could focus on better when reading, vs. when listening.\n\nFinally, I spent a few months this past year working through [The Power Broker](https://en.wikipedia.org/wiki/The_Power_Broker).\nI have a lot of thoughts on this book, which I want to write up as a separate blog post.\n\n## Future\n\nI'm feeling done with climate tech as a field, and feel ready to move on to something else, even while recognizing that I'm currently in the midst of solving interesting and useful problems in the space.\nI'm still feeling a little over sitting at a computer all day, too, though I'm also feeling trapped by my comparative advantage in the space.\nI'm really not sure what happens next!\nOpen to possibilities and suggestions!\n",
            "url": "https://igor.moomers.org/posts/now-feb-2025",
            "title": "Now: February 2025",
            "summary": "Now page for Feb 2025\n",
            "image": "https://igor.moomers.org/images/city-burns.jpg",
            "date_modified": "2025-02-20T00:00:00.000Z"
        }
    ]
}