Here's what I got done this week:
* Hired a marketing agency
* Published my April retrospective
* Published new TinyPilot landing page
whatgotdone.com/michael/2022-0

If I'm making $58k/month in revenue, how can I be losing $3k/month overall? Come with me to my April retrospective, where I explore what's holding TinyPilot back from profitability. mtlynch.io/retrospectives/2022

The Morning Show is exactly how I imagine it would be if Aaron Sorkin wrote a series exploring life behind the scenes of a struggling TV show.

I'm so excited about this!

For the past year I've been building all of my apps using Fly and Litestream, so it's awesome to see them come together as a single company.
---
RT @flydotio
Breaking: Litestream is now part of Fly.io!

Do you like SQLite? Do you like streaming replication? No? Read @benbjohnson’s new post on the Fly.io blog anyway: fly.io/blog/all-in-on-sqlite-l
twitter.com/flydotio/status/15

RT @__agwa
If your website's SSL certificate was issued in 2020, it may have stopped working in Chrome today (with the error NET::ERR_CERTIFICATE_TRANSPARENCY_REQUIRED). Fix is to get a new certificate from your CA.

Use this tool to check if your site is affected: sslmate.com/labs/ct_policy_ana

RT @allison_seboldt
The results are in for April:

My programmatic SEO experiment is killing it 6 months in! Over 2k visitors in April alone. That's +226% from last month!

All the nitty gritty details in my April retrospective: allisonseboldt.com/april-2022/

Creating the index is a one-line fix, and it doesn't have the drawback my original PR did of storing a redundant copy of the file size. github.com/mtlynch/picoshare/p
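To illustrate the idea, here's a minimal sketch of an expression index. The table and column names (`entries_data`, `id`, `chunk`) are my guesses at a chunked-storage schema, not PicoShare's actual schema. SQLite supports indexes on expressions, so an index covering `LENGTH(chunk)` lets the size query be answered from the index alone, without reading any blob data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical chunked-storage schema: each file is split into blob chunks.
conn.execute("CREATE TABLE entries_data (id TEXT, chunk_index INTEGER, chunk BLOB)")

# The one-line fix: an index on the blob's length (expression indexes
# require SQLite 3.9+). SUM(LENGTH(chunk)) can now be satisfied by the
# index without touching the blobs themselves.
conn.execute("CREATE INDEX idx_entries_data_size ON entries_data (id, LENGTH(chunk))")

conn.executemany(
    "INSERT INTO entries_data VALUES (?, ?, ?)",
    [("file1", i, b"\x00" * 256) for i in range(4)],
)

size = conn.execute(
    "SELECT SUM(LENGTH(chunk)) FROM entries_data WHERE id = ?", ("file1",)
).fetchone()[0]
print(size)  # 1024
```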

Show thread

Update! @wmertens and @dholth showed me a simpler way to get the same performance without storing the file size redundantly: creating an index achieves the same thing.

Show thread

Enjoyed this post about disagreeing constructively. I like Cory's distinction between "range of opinions I personally hold," and "range of opinions I find it reasonable for others to hold," and how they shouldn't be identical most of the time.
---
RT @czue
After a heated disagreement with a friend went way better than expected, I figured out what saved it, and packaged it up into a long-ass essay.

Maybe it will help everyone on here be just …
twitter.com/czue/status/152004

The last tricky part was writing a SQL migration to populate the sizes of files that were uploaded and stored in the DB before this change. I've never written an update that derives from other data in the DB before, but it wasn't too hard.
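A migration like that can use a correlated subquery so each row's new value is derived from related rows elsewhere in the DB. This is a minimal sketch with made-up table and column names (`entries`, `entries_data`, `file_size`, `chunk`), not PicoShare's actual migration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema: metadata in `entries`, blob chunks in `entries_data`.
conn.execute("CREATE TABLE entries (id TEXT PRIMARY KEY, file_size INTEGER)")
conn.execute("CREATE TABLE entries_data (id TEXT, chunk BLOB)")
conn.execute("INSERT INTO entries (id) VALUES ('file1')")
conn.executemany(
    "INSERT INTO entries_data VALUES ('file1', ?)",
    [(b"\x00" * 100,) for _ in range(5)],
)

# Backfill migration: derive each file's size from its existing chunks.
conn.execute("""
    UPDATE entries
    SET file_size = (
        SELECT SUM(LENGTH(chunk))
        FROM entries_data
        WHERE entries_data.id = entries.id
    )
""")

size = conn.execute("SELECT file_size FROM entries WHERE id = 'file1'").fetchone()[0]
print(size)  # 500
```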

Show thread

I still didn't love the idea of storing the redundant file size, so I considered creating a virtual SQLite table instead.

But we'd still have to populate the virtual table at app load time, which would take ~10s per GB of data
sqlite.org/vtab.html

Show thread

And we have a winner! For the same 1.1 GB file, latency dropped from 9s to 9ms, a 1,000x speedup.

Show thread

Next, I tried storing the file size along with the file metadata.
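In sketch form, that means a size column on the metadata table, written once at upload time, so reading the size becomes a single-row lookup. Table and column names here (`entries`, `file_size`) are assumptions, not PicoShare's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical metadata table: one row per file, size stored once at upload.
conn.execute(
    "CREATE TABLE entries (id TEXT PRIMARY KEY, filename TEXT, file_size INTEGER)"
)
conn.execute(
    "INSERT INTO entries VALUES (?, ?, ?)", ("file1", "demo.bin", 1_100_000_000)
)

# Fetching the size never touches blob data at all.
size = conn.execute(
    "SELECT file_size FROM entries WHERE id = ?", ("file1",)
).fetchone()[0]
print(size)  # 1100000000
```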

Show thread

This surprised me, and I still don't have a good explanation for it. It's 3,708 rows, so it doesn't seem like it should take SQLite *that* long to calculate the SUM of 3,708 values.

I'm guessing the large blob in each row slows down the query even though we don't read it.

Show thread

Storing the chunk size worked, and it brought the latency down from 9s to 839ms, a 10x performance boost.

But 839ms to calculate the size of a single file was still pretty slow...

Show thread

But based on the 9s latency, calculating sizes on the fly wasn't going to work.

My first thought was to store the chunk size alongside the blob in the table containing the file data. That had the advantage of keeping the size close to the data it described.
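A minimal sketch of that approach, with a guessed-at schema (`entries_data`, `chunk`, `chunk_size` are my names, not PicoShare's): summing a precomputed integer column sidesteps calling LENGTH() over the blobs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema: each chunk's size lives right next to the blob.
conn.execute(
    "CREATE TABLE entries_data "
    "(id TEXT, chunk_index INTEGER, chunk BLOB, chunk_size INTEGER)"
)
chunk = b"\xff" * 512
conn.executemany(
    "INSERT INTO entries_data VALUES (?, ?, ?, ?)",
    [("file1", i, chunk, len(chunk)) for i in range(3)],
)

# Summing the precomputed integers instead of LENGTH(chunk).
size = conn.execute(
    "SELECT SUM(chunk_size) FROM entries_data WHERE id = ?", ("file1",)
).fetchone()[0]
print(size)  # 1536
```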

Show thread

I had specifically avoided storing the file size in my SQLite DB because it was redundant. The raw data is there, so we can always derive the size. It shouldn't be something we store independently.

Show thread

I checked the SQLite docs. They don't explicitly say that LENGTH reads the full blob data, but they do say that for strings, it calculates the length on the fly by scanning for the first null byte. I'm assuming that for BLOB values, SQLite similarly iterates through the full contents of the column.
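For context, this is the naive size query I started with, sketched against a hypothetical chunked schema (`entries_data`, `chunk` are my names, not PicoShare's). This pattern is what took ~9s on a 1.1 GB file:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical chunked-storage schema: a file is a series of blob chunks.
conn.execute("CREATE TABLE entries_data (id TEXT, chunk_index INTEGER, chunk BLOB)")
conn.executemany(
    "INSERT INTO entries_data VALUES (?, ?, ?)",
    [("file1", i, b"\x00" * 1024) for i in range(4)],
)

# Naive size computation: LENGTH() over every blob row. Correct, but
# slow on real data because SQLite has to visit each large row.
size = conn.execute(
    "SELECT SUM(LENGTH(chunk)) FROM entries_data WHERE id = ?", ("file1",)
).fetchone()[0]
print(size)  # 4096
```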

Show thread
Michael Lynch's Mastodon

Michael Lynch's personal Mastodon instance