Here’s a workshop and showroom tour of the Melanzana cut-and-sew shop in Leadville, Colorado. Here’s an interview with Melanzana’s founder, Fritz Howard. Melanzana produces all of their clothing in Leadville, a town of about 2600 people which, at over ten thousand feet above sea level, is the highest incorporated city in the US. This makes it challenging for them to expand production, and ultimately constrains their growth as a company. They handle this partly by selling the vast majority of their product in-person at their showroom, which actually requires appointments for most purchases and has a per-customer, per-visit limit on the quantity of clothing one can purchase.
I find Melanzana’s business strategy counterintuitive and unexpected, and I suppose it helps to explain the fact that I was totally unaware of Melanzana until a couple of months ago, when I was given one of their hoodies more or less by accident. What has remained a mystery, though, is the honestly shocking amount of attention this hoodie has received since then; it’s easily the most commented-upon garment I’ve ever owned, with both friends and strangers calling the brand out by name and complimenting the sweatshirt’s frumpy yet somehow athletic drape. I guess this is all to say that Melanzana seems to have forged their own idiosyncratic moral framework, and has somehow managed to convince a large number of New Yorkers that that framework is worthy of their attention.
I did a bit of looking and the video by MyLifeOutdoors was convincing – I managed to time a release on the site and bought the micro grid hoodie. It’s gonna get a lot of wear, and I figured it for something worth sharing. Melanzana reminds me of the Patagonia ethos – made for use, treats the employees well, does repairs and isn’t trying to grow endlessly. Good stuff.
I was using the SX-6 coaster and realized how many I have that are mementos of things past.
Leaded plastic from my time in the X-ray lab. LabVIEW, ditto. The NSA one is from when Chris interviewed there. Not sure where I got the SX-6 and National Data coasters.
Proust had madeleines, I guess, and I have … these. What do you have?
As I have written about before, taking your phone to a protest is 100% going to get you written into various government data stores. This story today confirms that ICE is using Stingrays.
Backstory – for a few years I had a Confluence-based site at WatchOtaku.com where I wrote about and reviewed watches. I shut it down in 2019, and since then the files have been available at http://www.phfactor.net/swr
However, that URL is gross and the raw HTML files were hard to read, so this morning I moved them to a new hostname and added the lovely 4KB water.css. Now the URL is semi-clean and the pages are readable, light/dark compatible, and responsive on mobile. Still 100% free, no JS, no tracking. Enjoy!
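For the curious, wiring water.css into raw HTML is about as minimal as styling gets – one link tag in each page's head, something like this (the exact href depends on where you put the file):

```html
<link rel="stylesheet" href="/water.css">
```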
More recently, on the TGN podcast, James recommended the free Adobe app ‘Project Indigo’ for wrist shots. It doesn’t do focus stacking but I’m gonna try it out. Their writeup says they plan to add stacking as a feature and shows this example of manually stacking 41 images:
I went looking today for mounting options for my new! signed! Effin’ Birds poster and my first thought was how much I like the aluminum dibond on my UNIX Magic poster:
then I realized that I had never blogged it! So back in 2021, I read this story via Hacker News about a legendary 1970s conference poster full of inside Unix jokes and references, was instantly smitten and ordered one:
Anyway, the Unix poster is all that and continues to occupy a place of honor, right in the corner of my eye, because frankly my entire career has been Unix in various forms: Linux, NetBSD and OpenBSD, IRIX, SunOS, Solaris, AIX, and others that I can’t recall any more. I’m counting macOS since yeah, it’s Unix with a professional makeup, and so are Android and iOS.
(The time spent on OS/2, BeOS, Windows flavors, Desqview, GeoWorks, DOS… well, ahh well)
Now it’s time to add my signed EB poster – my better half bought this for me at ComiCon 2025:
So this is a mixed post. First: yeah, you should totally get a Unix poster – you can DIY it for much less than the fancy print. Second: Effin’ Birds is awesome; I also have his wall calendar and some conference tchotchkes.
The scenario: you want to play around with the fast-moving large language models, but don’t want every random query / thought / personal health question to be logged / trained on.
Requirements
You need a machine to run the LLMs, and it needs lots of memory. How much depends on the model, of course, but larger models are much more capable and useful: 16GB is the absolute minimum for a 7B-parameter model, and 27B models need 64GB or more. I’m using an M4 Max Mac Studio with 128GB and that seems plenty.
The software is simple – ollama to manage and serve the models, and the llm CLI to run on your laptop. That way, the server does the heavy lifting and your laptop doesn’t. They don’t work together out of the box, though, so I wanted to share the necessary steps.
Server setup with Ollama
On the server:
brew install ollama
brew services start ollama
Now we need some models. Since I have lots of memory I grabbed the biggest ones, but you should read through the model library and choose a few that fit your machine. This step pulls gigabytes, so beware on a metered connection.
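For example – these are just the ones I grabbed; substitute whatever fits your RAM:

```bash
ollama pull llama4      # what I later set as the default model
ollama pull gemma3:27b  # an example of a bigger model; wants a 64GB-class machine
ollama list             # confirm what's on disk
```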
Now we need to tell Ollama to listen on all interfaces so it’s reachable from other machines – by default it binds only to localhost. There’s a GitHub issue about this if you want more details. We have to edit the plist:
open /opt/homebrew/opt/ollama/homebrew.mxcl.ollama.plist
As you can see, we have to add the OLLAMA_HOST environment variable, set to 0.0.0.0:11434
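The change is adding an EnvironmentVariables dictionary inside the plist’s top-level dict (or extending it if one is already there) – roughly this:

```xml
<key>EnvironmentVariables</key>
<dict>
  <key>OLLAMA_HOST</key>
  <string>0.0.0.0:11434</string>
</dict>
```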
After that, save the plist and run
brew services restart ollama
To test the change, open the new page in a browser. (I’m using hostname axiom, FYI.)
open http://axiom:11434/
and you should see the “Ollama is running” message.
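You can also check from the command line – the /api/tags endpoint (the same one the script below uses) lists whatever models the server has pulled:

```bash
curl -s http://axiom:11434/api/tags | jq -r '.models[].name'
```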
That’s the server sorted – on to the client!
LLM client
The llm CLI by Simon Willison is just neat; it can do all sorts of things for you. Here we install it, set the sorta-not-needed API key (it’ll prompt for one – give it the key ‘ollama’), and find the keys path:
brew install llm
llm keys set ollama
llm keys path
Save that path – mine is
/Users/pfh/Library/Application Support/io.datasette.llm/keys.json
We need to create and populate
/Users/pfh/Library/Application Support/io.datasette.llm/extra-openai-models.yaml
Here’s a ChatGPT-generated shell script that queries the Ollama API over HTTP and populates extra-openai-models.yaml:
#!/bin/bash
pushd /Users/pfh/Library/Application\ Support/io.datasette.llm
# Configuration
HOST="axiom.phfactor.net"
PORT="11434"
OUTPUT="extra-openai-models.yaml"
API_BASE="http://${HOST}:${PORT}/v1"
API_KEY_NAME="ollama"
# Fetch models from Ollama
echo "Fetching models from Ollama at ${HOST}:${PORT}..."
MODEL_LIST=$(curl -s "http://${HOST}:${PORT}/api/tags" | jq -r '.models[].name')
# Begin YAML
echo "# Auto-generated OpenAI-compatible model config for LLM CLI" > "$OUTPUT"
echo "# Source: http://${HOST}:${PORT}/api/tags" >> "$OUTPUT"
echo "" >> "$OUTPUT"
# Write each model as a separate YAML block
while IFS= read -r MODEL; do
MODEL_ID="${MODEL//:/-}" # replace colon with dash
cat <<EOF >> "$OUTPUT"
- model_id: $MODEL_ID
  model_name: $MODEL
  api_base: "$API_BASE"
  api_key_name: $API_KEY_NAME
  can_stream: true
  supports_schema: true
  vision: true
EOF
done <<< "$MODEL_LIST"
echo "Wrote $OUTPUT with $(echo "$MODEL_LIST" | wc -l) models."
Set the default model – I’m still experimenting, but for now it’s
llm models default llama4-latest
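A quick sanity check that the new entries registered – the Ollama models should show up alongside any others the CLI knows about:

```bash
llm models
```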
Important!
The api_base there is not the one in the docs – I had to use ChatGPT (sauce for the gander) to figure out that you have to add the /v1 suffix or the llm cli will return 404s.
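For reference, a single generated entry ends up looking like this, with the /v1 suffix in place (the model name is just whichever one you pulled):

```yaml
- model_id: llama4-latest
  model_name: llama4:latest
  api_base: "http://axiom.phfactor.net:11434/v1"
  api_key_name: ollama
  can_stream: true
  supports_schema: true
  vision: true
```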
What do you get?
From my laptop, I can run random queries and it works great – no limits, ask anything (lots of stories about this!), and I feel oddly better about asking basic, stupid tech questions that I “should” know or remember. For example:
What’s the makefile syntax for this?
Tell me how to glob the largest non-zero-file of episode.mp3 in a tree of files?
What’s the jq invocation to pull this?
llm 'nmap syntax to look for open tcp on a single host'
To use `nmap` to scan for open TCP ports on a single host, you can use the following basic syntax:
```bash
nmap -sT <hostname or IP address>
```
Here's a breakdown of the options used:
- `-sT` specifies a TCP connect scan, which is a type of scan that tries to establish a full TCP connection to the target ports. This is one of the most common scan types and works well for finding open TCP ports.
If you want to scan all 65,535 TCP ports (which can take a long time), you can use:
```bash
nmap -sT -p- <hostname or IP address>
```
However, for a more efficient and commonly used approach, scanning the top 1000 most common ports can be sufficient for many cases:
```bash
nmap -sT -p 1-1000 <hostname or IP address>
```
- `-p 1-1000` specifies the range of ports to scan. Adjust this range according to your needs.
### Additional Options
- **OS Detection:** `-O` (capital O, not zero) can help identify the operating system of the target, though this might not always be accurate or useful.
- **Version Detection:** `-sV` can provide information about the versions of services running on open ports.
- **Script Scanning:** `--script` can run NSE (Nmap Scripting Engine) scripts against the target.
### Full Command Example
For a more detailed scan:
```bash
nmap -sT -p 1-1000 -sV -O <hostname or IP address>
```
Make sure to replace `<hostname or IP address>` with the actual hostname or IP address of the target you're scanning.
### Permissions
Keep in mind that running `nmap` might require administrative or root privileges, especially for certain types of scans. If you run into issues, consider using `sudo` (on Unix-like systems) before your `nmap` command.
### Legal and Ethical Considerations
Always ensure you have permission to scan a target. Unauthorized scanning can be considered malicious and may lead to legal consequences.
As I’ve written about before, LLMs are an amazing tool for programming. In addition to IDE plugins / autocomplete integrations (GitHub Copilot, Qodo), I’m finding utility in a pattern of keeping a running conversation going in the dedicated ChatGPT app. I can ask anything there – snippets, one-offs, and what-ifs. The full o1 model is really good.
Anyway, this is the current state – my side project (open source here) is an iOS app in SwiftUI that automatically captures context using Bluetooth iBeacons, with the goal of generating a log that can become contract-engineering billing records. The idea:
Enter region OFFICE at 8:00AM.
Leave region OFFICE at 6:00PM.
Enter region CAR at 2:15PM at GPS location LAT, LONG.
The GPS idea is for possibly trying to compute miles driven; may work or not. The regions will be marked using iBeacons. Cheap, supported by the operating system, easy.
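For flavor, the core of the beacon side is just CoreLocation region monitoring – a minimal sketch, with a placeholder UUID and region name rather than the app’s real values, and print() standing in for real log storage:

```swift
import CoreLocation

final class BeaconLogger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.requestAlwaysAuthorization()  // needs the usual location keys in Info.plist

        // Placeholder UUID/identifier – real values come from the physical iBeacons.
        let uuid = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!
        let office = CLBeaconRegion(uuid: uuid, identifier: "OFFICE")
        office.notifyOnEntry = true
        office.notifyOnExit = true
        manager.startMonitoring(for: office)
    }

    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        print("Enter region \(region.identifier) at \(Date())")
    }

    func locationManager(_ manager: CLLocationManager, didExitRegion region: CLRegion) {
        print("Leave region \(region.identifier) at \(Date())")
    }
}
```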
I don’t really know SwiftUI, so I just asked ChatGPT for code and started hacking. It works and continues to work! LLMs are amazing for learning this way. I can ask for random changes, explanations, error fixes; anything! Here’s today’s version of the app:
Current state showing real data.
Damn.
If you’ve not tried coding with an LLM, go try it. Treat it like a freshly graduated student – you can ask it anything, and it’ll know the most amazing ways to write code and solve problems. It’ll be wrong yet confident sometimes, but as a way of getting unstuck and having fun hacking, I’m having a ball.