Meshcore

Cute, no?

That’s the repeater. More about it later.

I have a complicated history with ham radio. Years ago, a friend/co-worker and I got our one-day ‘Technician’ licenses, and for years after that I tried to make first contact. Like, talk to anybody. I learned about bands and power and radios and repeaters and programming software and hackable firmware and only managed walkie-talkie on FRS and GMRS. Suuuper frustrating.

I was recently emailing a former co-worker (Hi, Jon!) who’s taken up ham radio as a hobby, and somewhere in there we started talking about Meshtastic. Then in December it made the front page of either Tildes or Lobsters and I was intrigued. Especially this story about using it on sailboats. And this followup post. I also sometimes watch Jeff Geerling’s videos, so this post of his was also valuable.

Key bits:

  1. It’s inexpensive – radios are $50 to $150, repeaters $100 to $300. Software, protocol and firmware are all open source.
  2. It’s in the no-license-required ISM spectrum, meaning that you don’t need a ham or GMRS license.
  3. It’s low power, so a radio can run for days, a repeater can be compact and solar-powered and you don’t need to plug stuff into the wall.
  4. No need to drill holes in my house for coax or power, thanks to #3.
  5. It’s super limited: Line of sight, text messages. No pictures, no web pages.
  6. It looks like low-stakes fun to learn and use.
  7. Radio and repeater programming is via a web page – no programs to install! Yay Web Serial.

Based on some research, I ordered

  1. Two Seeed SenseCAP T1000-E cards (waterproof, 2 day battery, GPS included) via Amazon for $51 each.
  2. Lilygo T-Echo (e-ink) via eBay for $110
  3. Lilygo T-Deck Plus with GPS via Amazon for $102. This is the Blackberry-looking radio seen above.
  4. Heltec v3 with e-paper display via eBay for $55.
  5. RAKwireless Wismesh Repeater Mini plus wall and pole mount from Rokland for $130
  6. High-gain 8dBi antenna from Amazon for $44.

Note that I’m simplifying here a bit – the repeater came later, and the antenna was a recommendation from the ‘San Diego Mesh’ Discord.

The Meshtastic site planner predicted that the repeater + antenna would provide incredible coverage. Remember that this is based on a tenth of a watt! Like, less than a single display LED:
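For scale, here’s the arithmetic (my back-of-envelope numbers, not from the site planner): 0.1 W is +20 dBm, and the 8dBi antenna raises the effective radiated power to 28 dBm EIRP – only about 0.63 W.

```shell
# dBm = 10*log10(mW); EIRP(dBm) = TX power(dBm) + antenna gain(dBi)
awk 'BEGIN {
  tx_dbm = 10 * log(100) / log(10)   # 100 mW (0.1 W) -> 20 dBm
  eirp   = tx_dbm + 8                # +8 dBi antenna -> 28 dBm
  printf "%.0f dBm TX, %.0f dBm EIRP (%.2f W)\n", tx_dbm, eirp, 10^((eirp-30)/10)
}'
# prints: 20 dBm TX, 28 dBm EIRP (0.63 W)
```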

Cue detours in the aforementioned Discord group, learning about device roles and the tyranny of Long-Fast, and I was back to not being able to talk to anyone.

In searching for help, I found that many folks had gotten frustrated with the design tradeoffs of Meshtastic and had started a newer project called Meshcore to address them:

  1. Designed to scale to more nodes and distance. (Claimed. I’m not competent to judge)
  2. Works on the same devices and frequencies as Meshtastic, so you can try one or both just by re-flashing devices. This in particular was compelling.
  3. Simpler configuration

More about Meshcore:

  1. Main site
  2. Web-based flasher/programmer (which works notably better than the Meshtastic one)
  3. Map of radios and repeaters and this community map and this West Coast Mesh map.

So I spent half a day building and installing the repeater in the back yard, only to a) see no traffic and get no more than a single response, and b) find that the estimated coverage was really shit:

One of the folks on the Discord sent me the key information: as seen on the West Coast Mesh page, local radios do not use the USA presets for frequency, bandwidth, spread factor and coding rate. I had been talking to no one! Here are the correct settings:1

That made the difference! Now I can see and talk to people. Here’s a chat from me, using the iOS app to a T1000-E card to the repeater, going 10 hops to get north of Los Angeles:

That trace is from the very useful tool here, by the way. Just click on a message to see its trace.

Key things to know

  1. One of the two T1000-E cards is basically bad – after a DFU reset, it’ll program and then not work. The WisMesh Tag from RAKwireless is supposed to be better and more reliable. Others have also had issues with the T1000-E.
  2. The Lilygo T-Echo sorta works.
  3. The Heltec V3s all murder batteries – like an hour. Get V4 or something else.
  4. The Lilygo T-Deck is pretty awesome but you have to select the ‘use SD card for storage’ version of the firmware or else you are limited to 6-character channels.
  5. The repeater seems to be good, but you gotta get that antenna way up high. I will try to do so – roof? PVC pipe as mast? Tree? Dunno yet but right now I can’t reach the friend I’ve convinced to try this with me.
  6. It’s not super useful. The real uses are stuff like cellular outages, off-grid camping/hunting/hiking, remote no-infrastructure land use and nerding out.
  7. My repeater is now showing up on the WCMesh Map after I uploaded it. I am unreasonably pleased with this. Repeater public key is 4450A0B0C86FC09D7193196DE7C29078AACFB2999E75F06FE2C41C7E35861434
  8. There’s a ‘room server’ feature where you program a radio to basically be a BBS / IRC server. This looks really cool – I see quite a few on the network and it looks fun to try out.
  9. Hashtag channels are public and ad-hoc. Try , as well as the Public channel. Locally we’ve also got .
  10. Check this for a local group and their Discord. That got me unblocked several times, as well as advice on antennas and more.

Hardware I’d recommend to try it out based on a week or so:

  1. Lilygo T-Deck+. I got mine from Amazon but it took weeks to arrive. Maybe look for in-stock.
  2. WisMesh Tag from RAKwireless – about the same $50 as the T1000-E card but reportedly better.
  3. If you need a repeater, my Wismesh Repeater Mini seems good. I like that it’s install-and-forget due to the solar+battery, plus I can check and administer it locally with Bluetooth or over the Meshcore network using the app. The consensus on Discord was that the 8dBi antenna is essential once you get past basic local use.

If you want to spend the least in order to see if anyone’s using it locally, I’d suggest checking the maps above first and then trying the WisMesh Tag or T1000-E cards.

I’m having a lot of fun. You might too.

Melanzana hoodie makes the list

A week or so ago, I read in the Scope of Work newsletter about a Colorado brand of hoodie called Melanzana:

  • Here’s a workshop and showroom tour of the Melanzana cut-and-sew shop in Leadville, Colorado. Here’s an interview with Melanzana’s founder, Fritz Howard. Melanzana produces all of their clothing in Leadville, a town of about 2600 people which, at over ten thousand feet above sea level, is the highest incorporated city in the US. This makes it challenging for them to expand production, and ultimately constrains their growth as a company. They handle this partly by selling the vast majority of their product in-person at their showroom, which actually requires appointments for most purchases and has a per-customer, per-visit limit on the quantity of clothing one can purchase. 

    I find Melanzana’s business strategy counterintuitive and unexpected, and I suppose it helps to explain the fact that I was totally unaware of Melanzana until a couple of months ago, when I was given one of their hoodies more or less by accident. What has remained a mystery, though, is the honestly shocking amount of attention this hoodie has received since then; it’s easily the most commented-upon garment I’ve ever owned, with both friends and strangers calling the brand out by name and complimenting the sweatshirt’s frumpy yet somehow athletic drape. I guess this is all to say that Melanzana seems to have forged their own idiosyncratic moral framework, and has somehow managed to convince a large number of New Yorkers that that framework is worthy of their attention.

I did a bit of looking and the video by MyLifeOutdoors was convincing – I managed to time a release on the site and bought the micro grid hoodie. It’s gonna get a lot of wear, and I figured it for something worth sharing. Melanzana reminds me of the Patagonia ethos – made for use, treats the employees well, does repairs and isn’t trying to grow endlessly. Good stuff.

Added to the YouShouldBuy tag, my highest praise.

(Oh yeah, Scope of Work is also a recommend.)

ICE protest surveillance

As I have written about before, taking your phone to a protest is 100% going to get you written into various government data stores. This story today confirms that ICE is using Stingrays.

So. As explained in Domestic surveillance and police riots, you can get a cheap Android device to communicate and photograph; since then there’s a new EFF project called Rayhunter that I’d also highly recommend. It’s inexpensive and quite simple:

  1. Go to Amazon and spend 31 bucks on an Orbic LTE router.
  2. Go to the GitHub page and get the Rayhunter firmware for it
  3. Install it
  4. Take the Rayhunter with you – even without connecting it to a computer, it will display if it detects a Stingray or other cell-site simulator.
  5. Consider a donation to the EFF for work like this.

A picture, just to show what it looks like. There are other supported devices and many places to buy them; this was easiest at the time.

P.S. – on Mac, you may need to run this to clear the quarantine attribute that causes the app-signing error:

xattr -c installer

New watch-search site. And my new site.

As Reddit on iOS minus ads explained, Reddit still has a lot of value and today I read this post about a new watch search engine. To quote his post:

Sorry for the image link, WordPress mangled the quote. The link is https://chronoscout.co/en/watches/

Give it a try!

Oh yeah, Margaret and I have some ideas; for now there’s just a single sad index.html at https://dialedin.watch/ but stay tuned!

WatchOtaku rides… err limps … again

Backstory – for a few years I had a Confluence-based site at WatchOtaku.com where I wrote about and reviewed watches. I shut it down in 2019 and since then the files have been available at http://www.phfactor.net/swr

However, that URL is gross and the raw HTML files were hard to read, so this morning I moved them to a new hostname and added the lovely 4KB water.css. Now the URL is semi-clean and the pages are readable, light/dark compatible and responsive on mobile. Still 100% free, no JS, no tracking. Enjoy!

https://watchotaku.phfactor.net/

Wristwatch macro photography

Some links for a burgeoning student I know:

James Stacey’s tutorial – part 1 and part 2.

SteveG, the OG god of macro, documented his setup here, in case money is no object.

Anyone have an iPhone app that they like that automates macro focus stacking?

Here’s a SteveG shot, from his Credor writeup:

More recently, on the TGN podcast, James recommended the free Adobe app ‘Project Indigo’ for wrist shots. It doesn’t do focus stacking but I’m gonna try it out. Their writeup says that they plan to add stacking as a feature and show this example of manually stacking 41 images:

UNIX Magic poster

I went looking today for mounting options for my new! signed! Effin’ Birds poster and my first thought was how much I like the aluminum dibond on my UNIX Magic poster:

then I realized that I had never blogged it! So back in 2021, I read this story via Hacker News about a legendary 1970s conference poster full of inside Unix jokes and references, was instantly smitten and ordered one:

Anyway, the Unix poster is all that and continues to occupy a place of honor, right in the corner of my eye, because frankly my entire career has been Unix in various forms: Linux, NetBSD and OpenBSD, Irix, SunOS, Solaris, AIX, and others that I can’t recall any more. I’m counting macOS since yeah, it’s Unix with a professional makeup, and so are Android and iOS.

(The time spent on OS/2, BeOS, Windows flavors, Desqview, GeoWorks, DOS… well, ahh well)

Now it’s time to add my signed EB poster – my better half bought this for me at ComiCon 2025:

but you can’t buy the dibond sans a print, so I’ll probably cheap out and get this locally from Blicks.

So this is a mixed post – yeah, you should totally get a Unix poster, and you can DIY it for much less than the fancy print. Secondly, Effin’ Birds is awesome – I also have his wall calendar and some conference tchotchkes.

Set up a private LLM server

Why do this?

The scenario: You want to play around with the fast-moving large language models, but don’t want every random query / thought / personal health question to be logged or trained on.

Requirements

You need a machine to run the LLMs, and it needs lots of memory. How much depends on the model, of course, but larger models are much more capable. Figure 16GB as an absolute minimum for a 7b-parameter model; 27b models want 64GB or more. I’m using an M4 Max Mac Studio with 128GB and that seems plenty.

The software is simple – ollama to manage and serve models, and the llm cli to run on your laptop. That way, the server does the heavy lifting and your laptop doesn’t. They don’t, however, work together out of the box, so I wanted to share some necessary steps.

Server setup with Ollama

On the server:

brew install ollama
brew services start ollama

Now we need some models. With lots of memory, I got the biggest models, but you should read the list to choose a few. This step pulls gigabytes, so beware on a metered connection.

ollama pull llama4:latest
ollama pull deepseek-r1:70b
ollama pull gemma3:27b

Now we need to tell Ollama to listen to all interfaces – this makes it accessible, since by default it binds only to localhost. There’s a GitHub issue about this if you want more details. We have to edit the plist:

open /opt/homebrew/opt/ollama/homebrew.mxcl.ollama.plist

As you can see, we have to add the OLLAMA_HOST environment variable, set to 0.0.0.0:11434
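For reference, here’s the fragment to add inside the plist’s top-level `<dict>` – `EnvironmentVariables` is launchd’s standard key for this, and the rest of the file stays as Homebrew generated it:

```xml
<key>EnvironmentVariables</key>
<dict>
	<key>OLLAMA_HOST</key>
	<string>0.0.0.0:11434</string>
</dict>
```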

After that, save the plist and run

brew services restart ollama

To test the change, open the new page in a browser. (I’m using hostname axiom, FYI.)

open http://axiom:11434/

and you should see “Ollama is running”

That’s the server sorted – on to the client!

LLM client

The LLM cli by Simon Willison is just neat. It can do all sorts of things for you. We want to do the following:

  1. Install LLM
  2. Add our models using Ollama’s OpenAI compatible REST API to extra-openai-models.yaml
  3. Set the default model
  4. Set the sorta-not-needed API key (it’ll ask for it; give it the key ‘ollama’)

brew install llm
llm keys set ollama
llm keys path

Save that path – mine is

/Users/pfh/Library/Application Support/io.datasette.llm/keys.json

We need to create and populate
/Users/pfh/Library/Application Support/io.datasette.llm/extra-openai-models.yaml

Here’s a ChatGPT-generated shell script to query Ollama over HTTP and populate the extra-openai-models.yaml:

#!/bin/bash

pushd /Users/pfh/Library/Application\ Support/io.datasette.llm || exit 1

# Configuration
HOST="axiom.phfactor.net"
PORT="11434"
OUTPUT="extra-openai-models.yaml"
API_BASE="http://${HOST}:${PORT}/v1"
API_KEY_NAME="ollama"

# Fetch models from Ollama
echo "Fetching models from Ollama at ${HOST}:${PORT}..."
MODEL_LIST=$(curl -s "http://${HOST}:${PORT}/api/tags" | jq -r '.models[].name')

# Begin YAML
echo "# Auto-generated OpenAI-compatible model config for LLM CLI" > "$OUTPUT"
echo "# Source: http://${HOST}:${PORT}/api/tags" >> "$OUTPUT"
echo "" >> "$OUTPUT"

# Write each model as a separate YAML block
while IFS= read -r MODEL; do
  MODEL_ID="${MODEL//:/-}"  # replace colon with dash
  cat <<EOF >> "$OUTPUT"
- model_id: $MODEL_ID
  model_name: $MODEL
  api_base: "$API_BASE"
  api_key_name: $API_KEY_NAME
  can_stream: true
  supports_schema: true
  vision: true

EOF
done <<< "$MODEL_LIST"

echo "Wrote $OUTPUT with $(echo "$MODEL_LIST" | wc -l) models."
popd
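One subtle bit in that script: Ollama tags models as `name:tag`, and the `${MODEL//:/-}` expansion flattens that into a dash-separated model_id. The `//` form of bash parameter expansion replaces every match, not just the first:

```shell
# Bash global substitution: ${var//pattern/replacement} replaces all matches
MODEL="deepseek-r1:70b"
MODEL_ID="${MODEL//:/-}"
echo "$MODEL_ID"   # prints deepseek-r1-70b
```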

Here’s my models file:

# Auto-generated OpenAI-compatible model config for LLM CLI
# Source: http://axiom.phfactor.net:11434/api/tags

- model_id: qwen3-latest
  model_name: qwen3:latest
  api_base: "http://axiom.phfactor.net:11434/v1"
  api_key_name: ollama
  can_stream: true
  supports_schema: true
  vision: true

- model_id: deepseek-r1-70b
  model_name: deepseek-r1:70b
  api_base: "http://axiom.phfactor.net:11434/v1"
  api_key_name: ollama
  can_stream: true
  supports_schema: true
  vision: true

- model_id: llama4-latest
  model_name: llama4:latest
  api_base: "http://axiom.phfactor.net:11434/v1"
  api_key_name: ollama
  can_stream: true
  supports_schema: true
  vision: true

- model_id: gemma3-27b
  model_name: gemma3:27b
  api_base: "http://axiom.phfactor.net:11434/v1"
  api_key_name: ollama
  can_stream: true
  supports_schema: true
  vision: true

Set the default model – I’m still experimenting, but for now it’s

llm models default llama4-latest

Important!

The api_base there is not the one in the docs – I had to use ChatGPT (sauce for the gander) to figure out that you have to add the /v1 suffix or the llm cli will return 404s.
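Concretely, using my hostname from above – Ollama serves its OpenAI-compatible endpoints under the /v1 path:

```yaml
# 404s with the llm cli – the docs-style base URL, no suffix:
# api_base: "http://axiom.phfactor.net:11434"
# works:
api_base: "http://axiom.phfactor.net:11434/v1"
```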

What do you get?

From my laptop, I can run random queries and it works great – no limits, ask anything (lots of stories about this!) – and I feel oddly better about asking the basic, stupid tech questions that I “should” know or remember. For example:

  • What’s the makefile syntax for this?
  • Tell me how to glob the largest non-zero-file of episode.mp3 in a tree of files?
  • What’s the jq invocation to pull this?

llm 'nmap syntax to look for open tcp on a single host'
To use `nmap` to scan for open TCP ports on a single host, you can use the following basic syntax:

```bash
nmap -sT <hostname or IP address>
```

Here's a breakdown of the options used:
- `-sT` specifies a TCP connect scan, which is a type of scan that tries to establish a full TCP connection to the target ports. This is one of the most common scan types and works well for finding open TCP ports.

If you want to scan all 65,535 TCP ports (which can take a long time), you can use:

```bash
nmap -sT -p- <hostname or IP address>
```

However, for a more efficient and commonly used approach, scanning the top 1000 most common ports can be sufficient for many cases:

```bash
nmap -sT -p 1-1000 <hostname or IP address>
```

- `-p 1-1000` specifies the range of ports to scan. Adjust this range according to your needs.

### Additional Options

- **OS Detection:** `-O` (capital O, not zero) can help identify the operating system of the target, though this might not always be accurate or useful.
- **Version Detection:** `-sV` can provide information about the versions of services running on open ports.
- **Script Scanning:** `--script` can run NSE (Nmap Scripting Engine) scripts against the target.

### Full Command Example

For a more detailed scan:

```bash
nmap -sT -p 1-1000 -sV -O <hostname or IP address>
```

Make sure to replace `<hostname or IP address>` with the actual hostname or IP address of the target you're scanning.

### Permissions

Keep in mind that running `nmap` might require administrative or root privileges, especially for certain types of scans. If you run into issues, consider using `sudo` (on Unix-like systems) before your `nmap` command.

### Legal and Ethical Considerations

Always ensure you have permission to scan a target. Unauthorized scanning can be considered malicious and may lead to legal consequences.

Postscript

I opened an issue on the LLM cli, so their docs may be updated to document these steps.