Tuesday, May 12, 2026

NetBSD 11 RC3 upgrade foibles

 I like this peaky pattern, particularly since it stopped. A looping test case, repeated twice, maybe?



Then closer scrutiny shows the major test sections, as cron restarts the suite daily. On faster systems I'd schedule more runs per day; this might squeak in 2 a day on the Pi 0. Maybe.
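The daily restart is plain cron. A sketch of the crontab entry, assuming the stock ATF driver (atf-run piped into atf-report); the 02:00 start time and log path are made up for illustration:

```
# root crontab sketch: one full test pass per day
0 2 * * *  cd /usr/tests && atf-run | atf-report > /var/log/atf-daily.log 2>&1
```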

48 hours - 1 CPU, 2 test cycles

The upgrade from 11.0 RC2 to RC3 was more complicated on the Pi 0W than on the 3 or 4, where the kernel resides in the root filesystem and can be replaced during sysupgrade or by other means. On the 0W, the kernel and supporting files live in a separate boot filesystem and aren't replaced by the upgrade. I have used different ways to put the newer files into place; this time I installed a fresh image onto another SD card, then carted the necessary files over to /boot.

$ ls -l /boot/
total 19320
-rwxr-xr-x  1 root  wheel     1594 Apr  4 10:08 LICENCE.broadcom
drwxr-xr-x  1 root  wheel     1024 Mar 10 15:48 System Volume Information
-rwxr-xr-x  1 root  wheel    52476 Apr  4 10:08 bootcode.bin
-rwxr-xr-x  1 root  wheel      115 Apr  4 10:08 cmdline.txt
-rwxr-xr-x  1 root  wheel      382 Apr  4 10:08 config.txt
drwxr-xr-x  1 root  wheel     2048 Mar  4 21:02 dtb
-rwxr-xr-x  1 root  wheel     7269 Apr  4 10:08 fixup.dat
-rwxr-xr-x  1 root  wheel     3180 Apr  4 10:08 fixup_cd.dat
-rwxr-xr-x  1 root  wheel  8055184 Apr  4 10:08 kernel.img
-rwxr-xr-x  1 root  wheel  7865424 Apr  4 10:08 kernel7.img
-rwxr-xr-x  1 root  wheel  2979264 Apr  4 10:08 start.elf
-rwxr-xr-x  1 root  wheel   808060 Apr  4 10:08 start_cd.elf

I could have "cherry picked" which dtb driver files to replace, but chose the slower copy-them-all. In prior upgrades, space was at a premium on the boot device. This iteration is not tight.

$ df /boot

Filesystem      1K-blocks         Used        Avail %Cap Mounted on
/dev/ld0e           81269        19655        61614  25% /boot

$ uname -a
NetBSD rpi 11.0_RC3 NetBSD 11.0_RC3 (RPI) #0: Sat Apr  4 06:08:56 UTC 2026  mkrepro@mkrepro.NetBSD.org:/usr/src/sys/arch/evbarm/compile/RPI evbarm

An issue from the RC2 tests hasn't reappeared on RC3: a run triggered a runaway core or some such, shown in the first image above. I have an open PR, though it seems the report wasn't a surprise.

The Pi 3 and amd64 upgrades to RC3 went well, and the former has the lowest test failure rate of the architectures I have available. The i386 port also has few failures, a couple of them caused by my using the CD image for the upgrade instead of the sysupgrade package, mainly because I wanted to test a different mode.

To fit the install image into 700MB or less, the NetBSD developers seem to have left out a couple of the test sets. I noticed the messages while running the upgrade, as they were unusual in my experience.




As I half-expected issues, I went ahead with the install without the 2 distribution sets. Eventually the test suite results flagged the glitch.

Failed test cases:

dev/audio/t_audio:AUDIO_SETINFO_pause_WRONLY_2, dev/audio/t_audio:open_audioctl_RDWR, lib/libutil/t_snprintb:snprintb, net/carp/t_basic:carp_handover_ipv6_halt_nocarpdevip, net/if_wg/t_misc:wg_rekey, net/npf/t_npf:npf_guid, usr.bin/mtree/t_sets:set_base, usr.bin/mtree/t_sets:set_xbase, fs/vfs/t_renamerace:ext2fs_renamerace_cycle


The list [] to be installed is determined by the optional arguments
passed to the command or, if none, from the value of the SETS
configuration variable.

< SETS=AUTO  # Guess from /etc/mtree/set.* files.
> SETS="tests xbase"

Finally, I downloaded the entire set (apparently ${SETS} applies to the install, *not* the fetch).
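For the record, the fix amounted to this sysupgrade.conf change, using the values from the diff above (pkgsrc normally puts the file under /usr/pkg/etc; treat the path as an assumption):

```
# sysupgrade.conf sketch: install just the two sets the CD image skipped
SETS="tests xbase"
```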

After effects (success in reinstalling missing sets):

Failed test cases:

net/carp/t_basic:carp_handover_ipv6_halt_nocarpdevip, net/carp/t_basic:carp_handover_ipv6_ifdown_nocarpdevip, net/if_wg/t_misc:wg_rekey, net/npf/t_npf:npf_guid, crypto/opencrypto/t_opencrypto:ioctl

RC3 info:

$ uname -a
NetBSD neti386 11.0_RC3 NetBSD 11.0_RC3 (GENERIC) #0: Sat Apr  4 06:08:56 UTC 2026  mkrepro@mkrepro.NetBSD.org:/usr/src/sys/arch/i386/compile/GENERIC i386 i386 Intel 686-class NetBSD

I should summarize the test suite results across the several machine types I've installed 11.0 RC3 on, and analyze failure frequency, given that many tests do not pass or fail 100% of the time. I had coined the term Heisenbergars for those maybe/maybe-not cases.


Monday, April 27, 2026

Weather rock uncovering

 After putting off finishing the rest of the environmental (weather) readings from the nearest airport into a Zabbix suite long enough, I standardized on the newest release level, 7.4, running on FreeBSD.


You can't feed the Zabbix sender tool random text without it landing somewhere structured, so I first set up the item keys, naming with Ambient and tagging with Environment. A typical METAR weather file pull looks like:


Baltimore / Martin, MD, United States (KMTN) 39-20N 076-25W
Jun 11, 2022 - 06:54 PM EDT / 2022.06.11 2254 UTC
Wind: Calm:0
Visibility: 10 mile(s):0
Sky conditions: partly cloudy
Temperature: 69 F (21 C)
Dew Point: 59 F (15 C)
Relative Humidity: 68%
Pressure (altimeter): 29.96 in. Hg (1014 hPa)
ob: KMTN 112254Z 00000KT 10SM SCT180 21/15 A2996
cycle: 23

In hindsight, I erred in not adding a separator to the pressure column names. Visibility is reported only in miles; kilometers could be a derived field in Zabbix.
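That derived field could be a Zabbix calculated item along these lines (a sketch: the Visibility.Km key name is made up, and the formula uses the modern last(//key) same-host syntax):

```
key:     enviro[Visibility.Km]
formula: last(//enviro[Visibility]) * 1.60934
```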

The command line I used is, ish:

export ZABBIX_SEND="zabbix_sender -vv -z "${ZABBIX_SERV}" -p 10051 -s "${ZABBIX_HOST}" -k "

Each key comes from a row and column in the weather dump, neatly prefixed line by line, with at most 2 or 3 datapoints per row. I have the wind rose in text style, but not "sky conditions".

Wind: from the S (190 degrees) at 7 MPH (6 KT):0



The early-2026 cold snap is quite evident across the days, looking back at a 7.0-level Zabbix server.

The shell script is a bunch of grep/sed/awk where Perl or whatever would be slicker; it just grew piece by piece. We want it to emit, for instance:

  zabbix_sender -vv -z 127.0.0.1 -p 10051 -s place -k  "enviro[Temperature.Celsius]" -o  8



# echo C
grep  "^Temperature: "           $DATAFILE  | sed -e "s/(//" -e "s/)//"   | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[Temperature.Celsius\]\"    -o ", $4}'
# echo F
grep  "^Temperature: "           $DATAFILE  | sed -e "s/(//" -e "s/)//"   | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[Temperature.Fahrenheit\]\" -o ", $2}'

# echo dew
grep  "^Dew Point: "             $DATAFILE  | sed -e "s/(//" -e "s/)//"   | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[DewPoint.Fahrenheit\]\"    -o " $3}'
grep  "^Dew Point: "             $DATAFILE  | sed -e "s/(//" -e "s/)//"   | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[DewPoint.Celsius\]\"    -o " $5}'

# echo hum
grep  "^Relative Humidity: "     $DATAFILE  | sed -e "s/%//"              | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[Humidity\]\"               -o ", $3}'
grep  "^Pressure (altimeter): "  $DATAFILE  | sed -e "s/(//" -e "s/)//"   | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[PressureHG\]\"             -o ", $3}'
grep  "^Pressure (altimeter): "  $DATAFILE  | sed -e "s/(//g" -e "s/)//g" | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[PressurePA\]\"  -o ", $6}'

grep  "^Visibility: " $DATAFILE  | sed -e "s/(//g" -e "s/)//g" | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[Visibility]\" -o ", $2}'
grep  "^Wind: " $DATAFILE  | sed -e "s/(//g" -e "s/)//g" | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[Wind.Rose\]\"        -o ", $4}'
grep  "^Wind: " $DATAFILE  | sed -e "s/(//g" -e "s/)//g" | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[Wind.Direction\]\"   -o ", $5}'
grep  "^Wind: " $DATAFILE  | sed -e "s/(//g" -e "s/)//g" | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[Wind.Speed\]\"    -o ", $8}'

# eof
grep  "^ob" $DATAFILE  | sed -e "s/(//g" -e "s/)//g" | awk -v zs="$ZABBIX_SEND" '{print zs, "\"enviro\[Metar\]\" -o \"" substr($0,5,99) "\""  }'

 
After collecting wind direction for a few hours, the results look as expected.

Timestamp                Value
2026-04-27 05:21:01 PM   S
2026-04-27 04:21:01 PM   S
2026-04-27 03:21:01 PM   S
2026-04-27 02:21:01 PM   S
2026-04-27 01:21:01 PM   S
2026-04-27 12:21:01 PM   S
2026-04-27 11:21:01 AM   S
2026-04-27 03:21:01 AM   NNW
2026-04-26 10:21:01 PM   SE
2026-04-26 07:21:01 PM   ESE
2026-04-26 06:21:01 PM   ESE

Saturday, April 4, 2026

Key flipping from sheet to form

 After I wrangled 2 spreadsheets (and 1 Google sheet) into database tables, my first picks for primary keys needed to change, based on how the data were and how I wanted things to be.

Whoever did the manual entry into rows and columns used month and year to divide the records, since the events were generally produced every month (for literal decades). So it made sense that the first attempt used these as keys. I don't like to use a serial-number key for tasks like this, as part of the outcome should be normalizing the data; an error message about duplicate months in a year needs to be investigated.

Among other things, I found complete cast/crew duplicates for 1 show that ran for 2 months (others had only ditto marks, making them easier to spot and tidy up). I needed to switch from a monthly key to a sequence value because, after 2000, shows were scheduled for multi-week runs that didn't stay within a month. And there were weekend shows, plus multi-play productions under a single ticket.

 References at the end: one LibreOffice and two DBA Stack Exchange.

I started with this key: 

   primary key (year_str, month_str, production, director, person, role)

 The year and month are strings instead of integers because there were originally a variety of formats, which I fixed after the initial load.

The proposed change:

ALTER TABLE cast_cooked DROP CONSTRAINT cast_cooked_pkey;
ALTER TABLE cast_cooked ADD PRIMARY KEY (season, sequence, production, person, role, band_cast_crew);

 

Using the example questions, I deleted duplicate rows as I found them, mainly by trial and error, because the first statement above always works and the second only when all the dupes are cleared.
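Trial and error can be shortened by listing the colliding key values up front; a sketch, grouping on the columns of the new key:

```sql
-- which key values would collide under the new primary key?
SELECT season, sequence, production, person, role, band_cast_crew, count(*)
  FROM cast_cooked
 GROUP BY season, sequence, production, person, role, band_cast_crew
HAVING count(*) > 1;
```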

I was overzealous in thinking I should include what turned out to be a non-distinct column:

psql:alter_key.sql:3: ERROR:  constraint "cast_cooked_pkey" of relation "cast_cooked" does not exist

psql:alter_key.sql:4: ERROR:  column "sub_seq" of relation "cast_cooked" contains null values

 

The sub_seq values were all null, which is neither unique nor allowed in a primary key. Turns out this column also messed with LibreOffice forms, as mismatched tables showed no rows in one (scary!). It had to be removed from the primary key, or filled somehow; I chose the former as faster.

 

ALTER TABLE
psql:alter_key.sql:4: ERROR:  could not create unique index "cast_cooked_pkey"
DETAIL:  Key (season, sequence, sub_seq, person, role)=(4, 11, C, First Last, Director) is duplicated.


These were usually easy, except for the 44 rows of cast+crew dupes, and then one complete duplicate (don't ask).

db=> delete  from cast_cooked where season = 12 and sequence = 7 and month_str = '5' ;

DELETE 44

 
A simple statement, but it should be sanity-claused.

Drop the table constraint and retry:

psql:alter_key.sql:3: ERROR:  constraint "cast_cooked_pkey" of relation "cast_cooked" does not exist

psql:alter_key.sql:4: ERROR:  could not create unique index "cast_cooked_pkey"

DETAIL:  Key (season, sequence, production, person, role, band_cast_crew)=(26, 3, Scrooge (shocking), First Last, Director, crew) is duplicated.


 db=>  select * from cast_cooked where season = 26 and sequence = 3 and role = 'Director' order by month_str, person;
  year_str | month_str |     production     |  director  |   person   |   role   | band_cast_crew | season | sequence | opening | closing | sub_seq
 ----------+-----------+--------------------+------------+------------+----------+----------------+--------+----------+---------+---------+---------
  87       | 12        | Scrooge (shocking) | First Last | First Last | Director | crew           |     26 |        3 |         |         |
  87       | 12        | Scrooge (shocking) | Firss Last | First Last | Director | crew           |     26 |        3 |         |         |

 There was another typo lurking behind the redaction ("Firss"); since director was not a key column, the rows still counted as duplicates.

To remove one of a pair of exact duplicates you may need a "row ID", which in PostgreSQL is the system column ctid:

db => select ctid from cast_cooked where season = 26 and sequence = 3 and role = 'Director' order by month_str, person;
  ctid
--------
 (5,14)
 (5,30)
(2 rows)

db=> delete  from cast_cooked where season = 26 and sequence = 3 and role = 'Director' and ctid = '(5,30)';
DELETE 1
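The ctid trick generalizes to all duplicated groups at once; a sketch using a self-join that keeps the lowest ctid per key (run the matching SELECT first to see what would go):

```sql
-- delete every row that has a lower-ctid twin with the same key values
DELETE FROM cast_cooked a
 USING cast_cooked b
 WHERE a.ctid > b.ctid
   AND a.season = b.season AND a.sequence = b.sequence
   AND a.production = b.production AND a.person = b.person
   AND a.role = b.role AND a.band_cast_crew = b.band_cast_crew;
```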
 

Then the anticlimactic "done" success message (along with the "you already did that" warning).

 db=> \i alter_key.sql
psql:alter_key.sql:3: ERROR:  constraint "cast_cooked_pkey" of relation "cast_cooked" does not exist
ALTER TABLE

db=>

 

And now, better queries!


 

Two-step also:


 

~ ~ ~

References:

https://books.libreoffice.org/en/BG72/BG7204-Forms.html#toc41
https://dba.stackexchange.com/questions/138320/duplicate-rows-how-to-remove-one
https://dba.stackexchange.com/questions/159793/how-to-set-a-not-null-column-into-null-in-postgresql

 

Monday, February 2, 2026

Shrinking and wrapping the copyrights to own imagery

The new web site is "responsive", or so they say. Large images are sliced and diced into smaller versions for display in a small device window, saving load time. The smarty-pants part is that it figures out your screen across the board.

Checking what was viewable and what the image tags looked like, I grabbed a bunch, with a tip of the sombrero to:

https://stackoverflow.com/questions/4602153/how-do-i-use-wget-to-download-all-images-into-a-single-folder-from-a-url

$ ls -l *rocky*

spot lighters   77360 Jun 20  2025 6855cb[*]_rocky horror_9235210_orig-p-500.jpg

spot lighters  211763 Jun 20  2025 6855cb[*]_rocky horror_9235210_orig.jpg

I used the wget -A (accept) list "ico, SVG, svg, jpeg, jpg, bmp, gif, png, heic", but no SVG came through. And PDF failed in a robotic way.

The p-500 means the file got downsized by the automat to fit a 500-pixel-wide window.

$ exiftool *rocky* | egrep "^File Size|^Image Size"
File Size                       : 76 KiB
Image Size                      : 500x752

File Size                       : 207 KiB
Image Size                      : 532x800

Cut by about 2/3 in file size, for 500 pixels wide versus 532.

Only one retains the tags I embossed: the largest chunk.

  31698 Dec 21 01:00 69473f12[*]0c_D48a-p-500.jpg
  68603 Dec 21 01:00 69473f12[*]0c_D48a-p-800.jpg
  81388 Dec 21 01:00 69473f12[*]0c_D48a.jpg

PDF tag? sure, why not.

$ strings *D48* | grep -i spath
  <pdf:Author>Jim Spath</pdf:Author>
$ strings *D48*500* | grep -i spath
$ strings *D48*800* | grep -i spath

On the other site, images over 2000 pixels wide get de-tagged.

Spotlighters-Purchase-Tickets-Icon_Ticket-2.jpg
Your image was automatically resized to 2000x1085 pixels. 

Spotlighters-Logo.jpg
Your image was automatically resized to 2000x882 pixels. 

Once that happens, the tell-tale is a different processor signature. I'm guessing GD underneath, maybe with Python middleware?

$ jhead  Spotlighters_Purchase_Tickets_Icon.jpg
File name    : Spotlighters_Purchase_Tickets_Icon.jpg
File size    : 126559 bytes
File date    : 2026:01:31 22:13:11
Resolution   : 2000 x 1085
JPEG Quality : 75
Comment      : CREATOR: gd-jpeg v1.0 (using IJG JPEG v62), default quality

Fixer upper: 

$ time exiftool   -author="Art Ist"   -copyright="CC BY-NC-SA 4.0 Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International This work is licensed under CC BY-NC-SA 4.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/4.0/" -subject="Art"    -title="For Spotlighters"  -datetime="2026-01-31 00:00:00"  Logo-2000x882.jpg
    1 image files updated

What else?
 Oh, yah. Pictures or it didn't happen.



Still looking for where I snipped off the orientation tag.


Sunday, January 11, 2026

Note to Self, Mainly, on Shrinking Map Colors

 I am helping a theatre redo their website, from a post-Y2K relic to one that promises responsiveness (their phrase). It makes a big deal about putting warning flags on images over 1MB, or over a certain pixel size (like 800x1340). On the "directions and parking" section the old site had text directions, probably copied from Mapquest in 2002 or so. I kept the narrative and added static maps, since I didn't want to try a "live" Google map image, figuring most folks would put the address in their own GPS/map app.

  The OSMAND app works great, so I figured I'd do screenshots and show the suggested routes, whether they lined up with the old text or not. They looked good: full-screen PNG files with nice colors. Only they ended up way too big for a quick page load.

-rw-rw-rw-  1 user  users  1586661 Dec 14 22:19 map_from_westminster.png

When I saved them as JPEG, they didn't shrink much, just got a little fuzzier, and smaller-dimension versions didn't look great either. So I turned to the old standby library of graphics fixer-uppers, NetPBM [ netpbm.sourceforge.net/doc/ ].

The classic set of 216 colors (AKA "web safe") made sense as a target palette, as long as the altered image looked decent; I recall doing color reduction in the past and making some pictures that didn't. The docs say to create a palette file, like so:

pamseq 3 5 >websafe.pam

Now I had a 3-band palette file with maxval 5, i.e. six values (0 through 5) per channel, 6^3 = 216 colors, just like Netscape likes. Applying it to a large image took almost no time:

pnmremap -map=websafe.pam map_from_westminster.pnm >websafe_west.ppm

Note this skips the PNG-to-PNM and PPM-to-PNG steps, which I added in a shell script that rotates through the 8 compass points of map directions I wanted (N, NE, E, etc.):

#!/bin/sh

for img in \
  img1 \
  img2
do
  pngtopnm $img.png >$img.pnm
  pnmremap -map=websafe.pam $img.pnm | pnmtopng > ${img}_c216.png
done


Even on a slow drive, this is quick.

$ time pngtopnm map_from_westminster.png | pnmremap -map=websafe.pam - | pnmtopng >west_c26.png
pnmremap: 216 colors found in colormap

real    0m0.246s
user    0m0.260s
sys     0m0.017s

$ ls -l west_c26.png
-rw-r--r--  1 user  users  282243 Jan 12 00:01 west_c26.png


From 1.5 MB to 276 KB: good shrink!


$ perl $EXIFTOOL *west* | egrep "File Size|Image Size|Color Type"
File Size                       : 1549 KiB
Color Type                      : RGB
Image Size                      : 800x1340
File Size                       : 276 KiB
Color Type                      : Palette
Image Size                      : 800x1340

Afterward, I reapplied metadata such as Author and Copyright to the PNG file with the awesome exiftool.

Yet to do: add an overlay showing the OpenStreetMap sources used.




Tuesday, November 4, 2025

Scout Bot What What?

 Different question, same answer:

= =

Scoutie

Hello I am Scoutly, an AI-based chatbot that can help answer general questions about scouting and help you with common tasks. We value your privacy and will keep your information secure. How can I help you today?

User: what is a variable?

Scoutly:

A variable is a container in programming that can store different values or information. It's like a labeled box where you can put data that might change, such as numbers, text, or other types of information. For example, a variable could store your age, a game score, or a person's name.

User: Can a variable be a constant?

Scoutly:

A variable is a container in programming that can store different values or information. It's like a labeled box where you can put data that might change, such as numbers, text, or other types of information. For example, a variable could store your age, a game score, or a person's name. 

= =


Wednesday, October 22, 2025

Zabbix database lift + shift PostgreSQL migration

 I've been running Zabbix for a few years now, and typically put new versions on different hardware as I explore how much power and network are needed even for a relatively small set of hosts. Underpowered components show up as dropped connections and lost data cycles.

I decided to move the underlying databases on 2 instances, from 14 and 15, up to both run on PostgreSQL 16. As I have the databases on NetBSD, I could not easily use the pg_upgrade tool; that needs the source and target versions to both run on one host (as far as I can see), and the standard pkgsrc only permits one version. I didn't have the time or skills to install outside the package source.

So I went with a database export, a reinstall of the database server software, then an import. I didn't rush things, and had less than 2 hours of downtime, on a fairly old hardware box (2014 / AMD Family 16h (686-class), 1996.25 MHz).

Much of the time was spent upgrading or installing about 700 packages that had gone stale in the several years since I'd touched that box for anything beyond small package installs.

20 minutes for the first try:

---Oct 14 00:20:32: [1/780] upgrading freeglut-3.6.0...

---Oct 14 00:40:31: [780/780] upgrading tex-latex-20241101...

Then, after clearing the deadwood and trying again a bit later, the database and its PostGIS extension went in (times in GMT):

---Oct 14 01:43:15: [1/14] installing tiff-4.7.0nb3...
---Oct 14 01:44:36: [14/14] installing postgresql16-postgis-3.6.0...

Times in EST5EDT:



DB dump


Under a minute, with SSD.

$ time pg_dump -U zabbixadmin -d zabbix -h localhost > zabbix_dump.dmp 2>zabbix.err
real    0m52.698s
user    0m12.638s
sys     0m3.256s

A medium size home net maybe? Around 1GB.

-rw-r--r--  1 jim  wheel  1131407116 Oct 13 23:04 zabbix_dump.dmp

Move the older version aside, so that if things go sideways, dropping back to the prior version is the "save point." We hope not to test this.

# mv pgsql pgsql_15
$ ls -l /usr/pkg/pg*
ls: /usr/pkg/pgsql_15: Permission denied

780 packages
pkg_install warnings: 2, errors: 329
pkg_install error log can be found in /var/db/pkgin/pkg_install-err.log
reading local summary...
processing local summary...
(time passes)

NetBSD pkgsrc database boot

bash-5.2# /etc/rc.d/pgsql initdb
Initializing PostgreSQL databases.

(then create the role for zabbixadmin)

Zabbix specific database steps

unix:
createdb -O zabbix -E Unicode -T template0 zabbix

sql:
GRANT ALL ON SCHEMA public TO                   zabbixadmin;
GRANT ALL PRIVILEGES on database  zabbix    to  zabbixadmin;
GRANT CONNECT on database         zabbix    to  zabbixadmin;
ALTER ROLE                                      zabbixadmin WITH LOGIN;

Restore


$ time psql -d zabbix < zabbix_dump.dmp >~/zabbix.log 2>~/zabbix.err

real    2m59.684s
user    0m12.635s
sys     0m2.364s

As the saying goes, 2 minutes 59.
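After an import like this, a couple of quick sanity probes are cheap (table and column names from the stock Zabbix schema; a sketch, not a full verification):

```sql
-- did the hosts come across, and how fresh is the newest datapoint?
SELECT count(*) FROM hosts;
SELECT to_timestamp(max(clock)) FROM history;
```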

The image above illustrates the somewhat minimal downtime, where the 90-minute gap between the arrows amounted to the loss of just one data element.