Saturday, April 4, 2026

Key flipping from sheet to form

 After I wrangled two spreadsheets (and one Google sheet) into database tables, my first picks for primary keys needed to change, based on how the data actually looked and how I wanted things to end up.

Whoever did the manual entry into rows and columns used month and year as the key fields, since the events were generally produced every month (for literal decades). So it made sense that the first attempt used these as keys. I don't like using a serial-number key for tasks like this, since part of the outcome should be normalized data; an error message about duplicate months in a year is something to investigate, not paper over.

Among other things, I found complete cast/crew duplicates for one show that ran for two months (others had only ditto marks, making them easier to spot and tidy up). I needed to switch from a monthly key to a sequence value because, after 2000, shows were scheduled for multi-week runs that didn't stay within a single month. And there were weekend shows, plus multi-play productions sold under a single ticket.

 References at the end: one LibreOffice, two DBA Stack Exchange.

I started with this key: 

   primary key (year_str, month_str, production, director, person, role)

 The year and month are strings rather than integers, because the source originally held a variety of formats, which I fixed after the initial load.

The proposed change: 

ALTER TABLE cast_cooked DROP CONSTRAINT cast_cooked_pkey;
ALTER TABLE cast_cooked ADD PRIMARY KEY (season, sequence, production, person, role, band_cast_crew);

 

Using the example questions from the references, I deleted duplicate rows as I found them, mainly by trial and error: the first statement above always works, while the second succeeds only once all the dupes are cleared.
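The retry loop (attempt the ADD PRIMARY KEY, read which duplicate it trips on, delete, repeat) can be shortened by asking for every offending group up front with GROUP BY ... HAVING. A minimal sketch using Python's sqlite3 so it runs anywhere; the column names mirror the cast_cooked table above, but the rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE cast_cooked
               (season INT, sequence INT, production TEXT,
                person TEXT, role TEXT, band_cast_crew TEXT)""")
rows = [
    (26, 3, "Scrooge", "First Last", "Director", "crew"),
    (26, 3, "Scrooge", "First Last", "Director", "crew"),   # exact duplicate
    (26, 3, "Scrooge", "Other Person", "Actor", "cast"),
]
con.executemany("INSERT INTO cast_cooked VALUES (?,?,?,?,?,?)", rows)

# Every group that would block the new primary key, with its count:
dupes = con.execute("""
    SELECT season, sequence, production, person, role, band_cast_crew,
           COUNT(*) AS n
    FROM cast_cooked
    GROUP BY season, sequence, production, person, role, band_cast_crew
    HAVING COUNT(*) > 1
""").fetchall()
print(dupes)
```

The same SELECT pasted into psql against the real table lists all the duplicate key groups in one pass instead of one failure at a time.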

I was overzealous in thinking I should include a column that turned out not to be distinct:

psql:alter_key.sql:3: ERROR:  constraint "cast_cooked_pkey" of relation "cast_cooked" does not exist

psql:alter_key.sql:4: ERROR:  column "sub_seq" of relation "cast_cooked" contains null values

 

Primary key columns must be NOT NULL, so the mostly-empty sub_seq column couldn't stay in the key as-is. Turns out this key also messed with LibreOffice forms, as mismatched tables showed no rows in one (scary!). It had to be removed from the primary key, or filled somehow; I chose the former as faster.
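A tiny demonstration of why a half-empty column can't join a primary key, sketched with Python's sqlite3 (the relevant behavior matches PostgreSQL here): a UNIQUE constraint happily accepts many NULLs, since NULLs are all distinct from one another, but a key column declared NOT NULL, as a primary key requires, rejects them outright. Table and column names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (sub_seq TEXT UNIQUE)")

# Two NULLs do not collide under a UNIQUE constraint:
con.execute("INSERT INTO t VALUES (NULL)")
con.execute("INSERT INTO t VALUES (NULL)")
null_rows = con.execute(
    "SELECT COUNT(*) FROM t WHERE sub_seq IS NULL").fetchone()[0]
print(null_rows)  # -> 2

# But a column declared NOT NULL (as a primary key requires) rejects them:
con.execute("CREATE TABLE t2 (sub_seq TEXT NOT NULL)")
try:
    con.execute("INSERT INTO t2 VALUES (NULL)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # -> True
```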

 

ALTER TABLE
psql:alter_key.sql:4: ERROR:  could not create unique index "cast_cooked_pkey"
DETAIL:  Key (season, sequence, sub_seq, person, role)=(4, 11, C, First Last, Director) is duplicated.


These were usually easy, except for the 44 rows of cast+crew dupes, and then one complete duplicate (don't ask).

db=> delete  from cast_cooked where season = 12 and sequence = 7 and month_str = '5' ;

DELETE 44

 
A simple statement, but one that should be sanity-claused.
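One way to sanity-clause a delete like that: run it inside a transaction, compare the affected row count against what you expect, and roll back on surprise. Sketched with Python's sqlite3 (a psycopg connection against Postgres works the same way; in plain psql the equivalent is BEGIN; DELETE ...; eyeball the count; then COMMIT or ROLLBACK). The rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cast_cooked (season INT, sequence INT, month_str TEXT)")
con.executemany("INSERT INTO cast_cooked VALUES (?,?,?)",
                [(12, 7, "5")] * 44 + [(12, 7, "6"), (13, 1, "9")])
con.commit()

expected = 44
cur = con.execute(
    "DELETE FROM cast_cooked WHERE season = 12 AND sequence = 7 AND month_str = '5'")
if cur.rowcount == expected:
    con.commit()      # the sanity clause was satisfied
else:
    con.rollback()    # surprise! put everything back

remaining = con.execute("SELECT COUNT(*) FROM cast_cooked").fetchone()[0]
print(remaining)  # -> 2
```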

Drop the table constraint and retry:

psql:alter_key.sql:3: ERROR:  constraint "cast_cooked_pkey" of relation "cast_cooked" does not exist

psql:alter_key.sql:4: ERROR:  could not create unique index "cast_cooked_pkey"

DETAIL:  Key (season, sequence, production, person, role, band_cast_crew)=(26, 3, Scrooge (shocking), First Last, Director, crew) is duplicated.


 db=>  select * from cast_cooked where season = 26 and sequence = 3 and role = 'Director' order by month_str, person;
  year_str | month_str |     production     |  director  |   person   |   role   | band_cast_crew | season | sequence | opening | closing | sub_seq
 ----------+-----------+--------------------+------------+------------+----------+----------------+--------+----------+---------+---------+---------
  87       | 12        | Scrooge (shocking) | First Last | First Last | Director | crew           |     26 |        3 |         |         |
  87       | 12        | Scrooge (shocking) | Firss Last | First Last | Director | crew           |     26 |        3 |         |         |

 The director's name held another typo (redacted here), which is what produced the duplicate: director was no longer a key value, so the two rows collided.

To remove one of two identical rows you might need a "row ID", a role the system column ctid plays in PostgreSQL:

db => select ctid from cast_cooked where season = 26 and sequence = 3 and role = 'Director' order by month_str, person;
  ctid
--------
 (5,14)
 (5,30)
(2 rows)

db=> delete  from cast_cooked where season = 26 and sequence = 3 and role = 'Director' and ctid = '(5,30)';
DELETE 1
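When the rows are byte-for-byte identical, a generic "keep one per group" delete works too. In the sketch below, sqlite3's rowid plays the part ctid played above; in Postgres itself ctid isn't always aggregate-friendly, so treat this as the shape of the idea rather than the exact psql incantation. Data is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cast_cooked (season INT, sequence INT, person TEXT, role TEXT)")
con.executemany("INSERT INTO cast_cooked VALUES (?,?,?,?)", [
    (26, 3, "First Last", "Director"),
    (26, 3, "First Last", "Director"),   # exact duplicate
    (26, 3, "Someone Else", "Stage Manager"),
])

# Keep the lowest rowid in each identical group, delete the rest:
con.execute("""
    DELETE FROM cast_cooked
    WHERE rowid NOT IN (
        SELECT MIN(rowid)
        FROM cast_cooked
        GROUP BY season, sequence, person, role
    )
""")
count = con.execute("SELECT COUNT(*) FROM cast_cooked").fetchone()[0]
print(count)  # -> 2
```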
 

Then the anticlimactic "done" success message (along with the "you already did that" error, since the old constraint was already gone).

 db=> \i alter_key.sql
psql:alter_key.sql:3: ERROR:  constraint "cast_cooked_pkey" of relation "cast_cooked" does not exist
ALTER TABLE

db=>

 

And now, better queries!


 

Two-step also:


 

~ ~ ~

References:

https://books.libreoffice.org/en/BG72/BG7204-Forms.html#toc41
https://dba.stackexchange.com/questions/138320/duplicate-rows-how-to-remove-one
https://dba.stackexchange.com/questions/159793/how-to-set-a-not-null-column-into-null-in-postgresql

 

Monday, February 2, 2026

Shrinking and wrapping the copyrights to own imagery

The new web site is "responsive", or so they say. Large images are sliced and diced into smaller versions for display in a small device window, saving load time. The smarty-pants part is that it figures out your screen size across the board.

Checking what was viewable and what the image tags looked like, I grabbed a bunch with wget, with a tip of the sombrero to this online tip:

https://stackoverflow.com/questions/4602153/how-do-i-use-wget-to-download-all-images-into-a-single-folder-from-a-url
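Before (or alongside) wget, you can eyeball what the responsive markup actually serves. A short sketch using Python's standard html.parser, fed an invented snippet in the shape the site uses (the -p-500 suffix convention is real; the tag text here is made up):

```python
from html.parser import HTMLParser

class ImgCollector(HTMLParser):
    """Collect src and every srcset candidate URL from <img> tags."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        if a.get("src"):
            self.urls.append(a["src"])
        # srcset looks like "url1 500w, url2 800w"; keep just the URLs
        for cand in a.get("srcset", "").split(","):
            cand = cand.strip()
            if cand:
                self.urls.append(cand.split()[0])

html = """<img src="rocky_orig.jpg"
               srcset="rocky_orig-p-500.jpg 500w, rocky_orig.jpg 532w">"""
p = ImgCollector()
p.feed(html)
print(p.urls)  # -> ['rocky_orig.jpg', 'rocky_orig-p-500.jpg', 'rocky_orig.jpg']
```

Point it at a saved page and the URL list doubles as a wget download list.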

$ ls -l *rocky*

spot lighters   77360 Jun 20  2025 6855cb[*]_rocky horror_9235210_orig-p-500.jpg

spot lighters  211763 Jun 20  2025 6855cb[*]_rocky horror_9235210_orig.jpg

I used the wget accept list (-A) "ico,SVG,svg,jpeg,jpg,bmp,gif,png,heic" but no SVG came through. And PDF failed in a robotic way.

The p-500 means the file got downsized by the automat to fit in a 500-pixel holey window.

$ exiftool *rocky* | egrep "^File Size|^Image Size"
File Size                       : 76 KiB
Image Size                      : 500x752

File Size                       : 207 KiB
Image Size                      : 532x800

Cut by about two-thirds in file size, for only 500 pixels wide versus 532.

Only one still has the tags I embossed: the largest chunk.

  31698 Dec 21 01:00 69473f12[*]0c_D48a-p-500.jpg
  68603 Dec 21 01:00 69473f12[*]0c_D48a-p-800.jpg
  81388 Dec 21 01:00 69473f12[*]0c_D48a.jpg

PDF tag? sure, why not.

$ strings *D48* | grep -i spath
  <pdf:Author>Jim Spath</pdf:Author>
$ strings *D48*500* | grep -i spath
$ strings *D48*800* | grep -i spath

On the other site, images over 2000 pixels wide get de-tagged.

Spotlighters-Purchase-Tickets-Icon_Ticket-2.jpg
Your image was automatically resized to 2000x1085 pixels. 

Spotlighters-Logo.jpg
Your image was automatically resized to 2000x882 pixels. 

Once that happens, the tell-tale is a different processor. I'm guessing GD beneath, maybe Python middleware?

$ jhead  Spotlighters_Purchase_Tickets_Icon.jpg
File name    : Spotlighters_Purchase_Tickets_Icon.jpg
File size    : 126559 bytes
File date    : 2026:01:31 22:13:11
Resolution   : 2000 x 1085
JPEG Quality : 75
Comment      : CREATOR: gd-jpeg v1.0 (using IJG JPEG v62), default quality

Fixer upper: 

$ time exiftool   -author="Art Ist"   -copyright="CC BY-NC-SA 4.0 Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International This work is licensed under CC BY-NC-SA 4.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/4.0/" -subject="Art"    -title="For Spotlighters"  -datetime="2026-01-31 00:00:00"  Logo-2000x882.jpg
    1 image files updated

What else?
 Oh, yah. Pictures or it didn't happen.



Still looking for where I snipped off the orientation.


Sunday, January 11, 2026

Note to Self, Mainly, on Shrinking Map Colors

 I am helping a theatre redo their website, from a post-Y2K relic to one that promises responsiveness (their phrase). They make a big deal about putting warning flags on images over 1MB, or over a certain pixel size (like 800x1340). On the "directions and parking" section, the old site had text directions, probably copied from Mapquest in 2002 or so. I kept the narrative and added static maps, since I didn't want to try a "live" Google map image, figuring most folks would put the address in their own GPS/map app.

  The OSMAND app works great, so I figured I'd do screenshots and show the suggested routes, whether they lined up with the old text or not. They looked good, full screen PNG files with nice colors. Only they ended up way too big for a quick page load.

-rw-rw-rw-  1 user  users  1586661 Dec 14 22:19 map_from_westminster.png

When I saved them as JPEG, they didn't shrink much, just got a little fuzzier. Smaller size images didn't look great either. So I turned to the old standby library of graphics fixer-uppers, NetPBM [ netpbm.sourceforge.net/doc/ ]

The classic set of 216 colors (AKA "Web Safe") made sense as a target palette, as long as the altered image looked decent. I recall doing color reduction in the past and making some pictures that didn't. The docs said create a palette file, like so:

pamseq 3 5 >websafe.pam

Now I had a three-channel palette file at maxval 5 (six levels per channel, for 6^3 = 216 colors), just like Netscape likes. Applying that to a large image took almost no time:

pnmremap -map=websafe.pam map_from_westminster.pnm >websafe_west.ppm
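The 216-color count, and what the remap actually does per pixel, can be sanity-checked in a few lines of Python. Since the web-safe palette is a full grid of channel levels, mapping a pixel to its nearest palette entry reduces to rounding each channel to its nearest level (my restatement of what pnmremap computes for this particular palette):

```python
from itertools import product

MAXVAL = 5  # pamseq 3 5: three channels, sample values 0..5
palette = list(product(range(MAXVAL + 1), repeat=3))
print(len(palette))  # -> 216, the classic web-safe count

def to_websafe(r, g, b, maxval=255):
    """Map an 8-bit RGB pixel to the nearest web-safe entry, returned
    on the same 0..maxval scale (levels 0, 51, 102, 153, 204, 255)."""
    step = maxval / MAXVAL  # 51.0 for 8-bit input
    return tuple(round(round(c / step) * step) for c in (r, g, b))

print(to_websafe(200, 30, 130))  # -> (204, 51, 153)
```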

Note this skips the PNG to PNM and PPM to PNG steps, which I added in a shell script that rotated through the 8 compass points of map/directions I wanted (N, NE, E, etc.)

#!/bin/sh

for img in \
  img1 \
  img2
do
  pngtopnm $img.png >$img.pnm
  pnmremap -map=websafe.pam $img.pnm | pnmtopng > ${img}_c216.png
done


Even on a slow drive, this is quick.

$ time pngtopnm map_from_westminster.png | pnmremap -map=websafe.pam - | pnmtopng >west_c26.png
pnmremap: 216 colors found in colormap

real    0m0.246s
user    0m0.260s
sys     0m0.017s

$ ls -l west_c26.png
-rw-r--r--  1 user  users  282243 Jan 12 00:01 west_c26.png


From 1.5 MB to 276 KB: good shrink!


$ perl $EXIFTOOL *west* | egrep "File Size|Image Size|Color Type"
File Size                       : 1549 KiB
Color Type                      : RGB
Image Size                      : 800x1340
File Size                       : 276 KiB
Color Type                      : Palette
Image Size                      : 800x1340

Afterward, I reapplied metadata such as Author and Copyright to the PNG file with the awesome exiftool.

Yet to do: add an overlay showing the OpenStreetMap sources used.




Tuesday, November 4, 2025

Scout Bot What What?

 Different question, same answer:

= =

Scoutie

Hello I am Scoutly, an AI-based chatbot that can help answer general questions about scouting and help you with common tasks. We value your privacy and will keep your information secure. How can I help you today?

User: what is a variable?

Scoutly:

A variable is a container in programming that can store different values or information. It's like a labeled box where you can put data that might change, such as numbers, text, or other types of information. For example, a variable could store your age, a game score, or a person's name.

User: Can a variable be a constant?

Scoutly:

A variable is a container in programming that can store different values or information. It's like a labeled box where you can put data that might change, such as numbers, text, or other types of information. For example, a variable could store your age, a game score, or a person's name. 

= =


Wednesday, October 22, 2025

Zabbix database lift + shift PostgreSQL migration

 I've been running Zabbix a few years now, and typically put new versions on different hardware as I explore how much power and network are needed even for a relatively small set of hosts. Underpowered components show up as dropped connections and lost data cycles.

I decided to move the underlying databases on two instances, from 14 and 15, to both run on PostgreSQL 16. As I have the databases on NetBSD, I could not easily use the pg_upgrade tool. That needs the source and target versions to both run on one host (as far as I can see), and the standard NetBSD pkgsrc only permits one installed version. I didn't have the time or skills to install outside the package source.

So I went with a database export, a reinstall of the database server software, then an import. I didn't rush things, and had less than 2 hours of down-time, on a fairly old hardware box (2014 / AMD Family 16h (686-class), 1996.25 MHz).

Much of the time went to upgrading or installing about 700 packages that had gone stale in the several years since I'd touched that box for anything beyond small package installs.

20 minutes for the first try:

---Oct 14 00:20:32: [1/780] upgrading freeglut-3.6.0...

---Oct 14 00:40:31: [780/780] upgrading tex-latex-20241101...

Then, after clearing the deadwood and trying again a bit later, the database plus the PostGIS extension (times in GMT):

---Oct 14 01:43:15: [1/14] installing tiff-4.7.0nb3...
---Oct 14 01:44:36: [14/14] installing postgresql16-postgis-3.6.0...

Times in EST5EDT:



DB dump


Under a minute, with SSD.

$ time pg_dump -U zabbixadmin -d zabbix -h localhost > zabbix_dump.dmp 2>zabbix.err
real    0m52.698s
user    0m12.638s
sys     0m3.256s

A medium size home net maybe? Around 1GB.

-rw-r--r--  1 jim  wheel  1131407116 Oct 13 23:04 zabbix_dump.dmp

Move the older version aside, so if things go sideways, dropping back to the prior version is the "save point." We hope not to test this.

# mv pgsql pgsql_15
$ ls -l /usr/pkg/pg*
ls: /usr/pkg/pgsql_15: Permission denied

780 packages
pkg_install warnings: 2, errors: 329
pkg_install error log can be found in /var/db/pkgin/pkg_install-err.log
reading local summary...
processing local summary...
(time passes)

NetBSD pkgsrc database boot

bash-5.2# /etc/rc.d/pgsql initdb
Initializing PostgreSQL databases.

(then create the role for zabbixadmin)

Zabbix specific database steps

unix:
createdb -O zabbix -E Unicode -T template0 zabbix

sql:
GRANT ALL ON SCHEMA public TO                   zabbixadmin;
GRANT ALL PRIVILEGES on database  zabbix    to  zabbixadmin;
GRANT CONNECT on database         zabbix    to  zabbixadmin;
ALTER ROLE                                      zabbixadmin WITH LOGIN;

Restore


$ time psql -d zabbix < zabbix_dump.dmp >~/zabbix.log 2>~/zabbix.err

real    2m59.684s
user    0m12.635s
sys     0m2.364s

As the saying goes, 2 minutes 59.

The image above illustrates the somewhat minimal downtime; the 90-minute gap between the arrows represents the entire loss: one data element.



Monday, October 6, 2025

First Contact Scoutly Automaton

 To teach a badge I'd better know a bit more than the student, or how to adjust to learning on the fly.

So I asked the Scouting bot cutely named Scoutly, how to connect a Scout to a counselor.

How does a unit leader approve a Scout to start a Merit badge?

The answer imagined only a blue-card universe with signed paper forms. Odd for a new piece of code to go that far back. Maybe there's more training data from before the Scoutbook digital workflow. 

Scoutly:A unit leader approves a Scout to start a Merit Badge by: 1. Providing the Scout with a Merit Badge counselor's name who specializes in the badge's subject 2. Giving the Scout a signed Application for Merit Badge form 3. Encouraging the Scout to contact the counselor to discuss the badge requirements

I "refined" the query to add "in Scoutbook" as we used to add "in ___" jokingly. 

How does a unit leader approve a Scout to start a Merit badge in Scoutbook?

And waited. Minutes of three waving dots. The session reset, so I asked the same question again and got a quicker reply.

The answer was a "reply hazy, try again" style non-answer.

Scoutly:In Scoutbook, the process for a unit leader to approve a Scout starting a Merit Badge typically involves: 1. The Scout discusses their interest in a specific Merit Badge with the unit leader 2. The unit leader helps connect the Scout with an appropriate Merit Badge counselor 3. The Scout obtains a signed Application for Merit Badge 4. The counselor works with the Scout to explain requirements and begin the process While the exact Scoutbook digital approval process isn't detailed in the available information, the key is communication between the Scout, unit leader, and Merit Badge counselor.

Differently wrong. 

And yet another diff on the tell me three times it is true:

While the specific Scoutbook approval process isn't detailed, here's the typical Merit Badge approval process:

. The Scout discusses their interest in a specific Merit Badge with their unit leader

. The Scout reviews the requirements for the Merit Badge

. The unit leader provides the name of a qualified Merit Badge counselor

. The Scout obtains a signed Application for Merit Badge from the unit leader

For the exact Scoutbook digital approval process, I recommend contacting your local Scouting America council or checking the Scoutbook help resources directly.

When I went to the Google search page with the same query it knew more about Scoutbook than Scouting's app, for now at least. 

https://help.scoutbook.scouting.org/knowledge-base/how-do-i-assign-merit-badge-counselors-to-a-scout/

Pictures or it didn't happen. Really. 

First contact--blue card

Revision 2, waiting

Revision 2, still waiting 

Waiting 

A different answer

The first 2 answers had numbered segments while the third, with the same question as the 2nd, shows bullet points. Sure, why not.

"Check the Scoutbook help resources directly."

Dude,  give me a link. How hard is it to show that?


Monday, June 30, 2025

Take the 40 Bus Route Out Route 40

 After getting feedback, advice, and counsel about my recent public transport mapping updates in OpenStreetMap, I started adding the missing "40" bus route to the Baltimore Metro Area. Part of the planning was whether to include bus stops and stopping locations, part was identifying errors and omissions in the OSM base, and part was being able to visualize current routes with OSM data.


Then a fragment:



I can see the 40 on existing bus stop signs:




This stop includes the OR (Orange), the 40, 59, 62, and 160 routes. When I looked, both the OSM data for stop 7597 and the MTA feed showed '59;62;160;OR', with no 40, as if the bus went to the Essex Park + Ride on the other side of the street instead of this stop.


So, I am unsure what the right data should be, until I ride the bus, or observe at the stop.

The guidance for adding a new route ("Step 1 - Make sure that each bus stop in the route has been added to OpenStreetMap") said to check all the nodes. Given the above discrepancy, I hope to add at least one direction next; small fixes can happen after, like when the routes/schedules change.

The last overview map update page from MTA [ mta.maryland.gov/transit-maps ] is from 2022, and the 40 route is not on the "BaltimoreLink Interactive System Map". That's a map from Google rather than hosted by the government. However, extrapolating shows this page is available in 2025:


Methodology

For myself, and for others who may wish to get some GIS mojo working, here are the general steps I've taken to gather data, analyze it, and visualize it in various ways to find improvements to OSM content. The earlier post [ https://jspath55.blogspot.com/2025/06/which-bus-such-bus.html ] covers some of this; apologies for repeating myself.

MTA Data

The Maryland Department of Transportation includes the Mass Transit Administration. In addition to the schedule site listed above, there is a GIS portal with bus stops and routes, though I've not gotten much out of the latter.


Home > services > Transportation > MD_Transit (FeatureServer)


Then:


And end up here:



GIS ARC

Part 2 (right side)


The way to pull the data is via:

https://geodata.md.gov/imap/rest/services/Transportation/MD_Transit/FeatureServer/9/query

https://geodata.md.gov/imap/rest/services/Transportation/MD_Transit/FeatureServer/9/query?where=stop_id+is+NOT+NULL ...


"All" query: stop_id is NOT NULL

Output fields: stop_id, stop_name, Routes_Served, shelter

Results in a linear format:

# records: 4549

stop_name: Cylburn Ave & Greenspring Ave
Routes_Served: 31, 91, 94
Shelter: Yes
stop_id: 1
Point:
X: -8533795.9128
Y: 4772066.8330999985

stop_name: Lanier Ave & Sinai Hospital
Routes_Served: 31, 91, 94
Shelter: No
stop_id: 3
Point:
X: -8534126.0864
Y: 4772153.208300002

[...]

I parsed this into a CSV format to load into QGIS. The wrinkle was finding how to deal with the X Y coordinates returned rather than the expected latitude/longitude pairs. QGIS was happy when I set the coordinate reference system (CRS) to "ESRI:102100 - WGS_1984_Web_Mercator_Auxiliary_Sphere."
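For sanity-checking those X/Y values by hand: spherical Web Mercator is just a scaled longitude and a Gudermannian latitude, so the conversion back to degrees is two lines of math (6378137 m is the sphere radius the projection uses). A quick sketch, checked against stop 1 from the MTA extract above:

```python
import math

R = 6378137.0  # Web Mercator sphere radius, meters

def mercator_to_lonlat(x, y):
    """Convert EPSG:3857 / ESRI:102100 meters to (longitude, latitude) degrees."""
    lon = math.degrees(x / R)
    lat = math.degrees(math.atan(math.sinh(y / R)))
    return lon, lat

# Stop 1 (Cylburn Ave & Greenspring Ave) from the extract:
lon, lat = mercator_to_lonlat(-8533795.9128, 4772066.8330999985)
print(round(lon, 3), round(lat, 3))  # roughly -76.66, 39.35: Baltimore, good
```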

OSM data

I used OverPass-Turbo.eu to get extracts of bus stopping locations and bus stops, then separate them.

[out:csv("ref", "public_transport", "name", "route_ref", "highway", "network", "operator", ::lat, ::lon, ::id)];

{{geocodeArea:"Maryland"}}->.metroArea;

node(area.metroArea)[bus=yes];
out;

I used "Maryland" rather than "Baltimore" to pick up Anne Arundel County and other nearby locales, though this expanded the return to include eastern and western counties as well as DC Metro. Better too much data than not enough here.

The first tries did not include position or OSM node ID; eventually I found the special names (::lat, ::lon, ::id) needed to get these critical columns. After splitting into 2 files to allow QGIS layers for each, over 4,000 bus stops turned up, though under 1,000 stopping locations.

$ wc -l OSM_platforms_20250623.csv OSM_stops_20250623.csv
  4240 OSM_platforms_20250623.csv
   928 OSM_stops_20250623.csv
  5168 total

Sample export:



Add these 2 layers, plus the MTA extract, and start the reconciliation.

Things found:

  • MTA and OSM sequence the routes differently, and the MTA records appear in yet another order compared to bus stop signs. I wrote a Perl script to convert the MTA data into the order OSM expects to make it easier to update wrong data.
  • The bus stop names differ in upper case/lower case and in various abbreviations ("nb" versus "northbound", for example).
  • Typos in both OSM and MTA data conglomerating 2 or more routes into one number ("78150" instead of "78, 150" or "78;150").
  • Null data nodes: apparent OSM bus stops without details (e.g. openstreetmap.org/node/2743861086 ). These are presumably orphans, though they may be incomplete nodes that should remain. If no MTA stops are nearby, they are fixme candidates.
  • Typos of other sorts, missing stops, incomplete or duplicate descriptions.
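The Perl script isn't reproduced here, but the reordering it performs can be sketched in a few lines of Python: take the MTA's comma-separated Routes_Served value, sort numeric routes numerically ahead of lettered ones, and join with semicolons the way OSM's route_ref tag expects. The numbers-first ordering is my reading of the convention seen in the OSM data, not an official rule:

```python
def mta_to_osm_routes(routes_served: str) -> str:
    """Turn an MTA 'Routes_Served' string ('59, 160, 62, OR') into an
    OSM-style route_ref ('59;62;160;OR'): numbers first, then letters."""
    routes = [r.strip() for r in routes_served.split(",") if r.strip()]
    routes.sort(key=lambda r: (0, int(r)) if r.isdigit() else (1, r))
    return ";".join(routes)

print(mta_to_osm_routes("59, 160, 62, OR"))  # -> 59;62;160;OR
```

With both lists in the same order, a plain string comparison flags the stops whose route lists genuinely differ.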

Things fixed:

I went back and forth on simplifying the updates by looking only at platforms, but decided that version 2, with a stopping location paired with each platform, is the preferred way. I created a spreadsheet as a local index of the 40 bus routes in either direction. The columns included nodes for each type, stop numbers and names, and an alternate name per the official signage ("Eastpoint" as the area for Eastern & 54th, east- or westbound).

Only a handful of bus stop areas include a stopping location, so that is my current task. After that, on to steps 2 and 3:

Step 2 - Create the new bus relations
Step 3 - Create a route master relation


The QGIS map view, overlaying MTA, OSM stops/platforms:



By migrating these layers into a PostGIS/PostgreSQL database (using a CRS under 10,000), I can now write SQL queries with outer joins and other comparison techniques.

Stops in OSM but not MTA:

select
  ref,
  substr(trim(route_ref),1,8) as routes,
  trim(name) as name
from
  "OSM_platforms_20250623"
where
  cast(ref as text) NOT in (
    select
      field_4
    from
      "bus-stops-20250611"
  )
;
[...]
   14166 | 30;34    | Northern Parkway & Park Heights Avenue Westbound Far-side
   14215 |          | Tradepoint Atlantic Floor & Decor
   14216 |          | Tradepoint Atlantic Royal Farms
  190036 |          | Baltimore Greyhound Terminal
 3002008 |          | Good Luck Rd at Oakland Ave
 3003202 |          | Good Luck Rd at Parkdale High School
 3003205 |          | Good Luck Rd at Oakland Ave
(133 rows)


Stops in MTA but not OSM:

select
      upper(substr(mta.field_1,1,40)) as nm,
      substr(mta.field_2,1,32) as f2,
      mta.field_4,
      osm.ref
from
      "bus-stops-20250611" mta
left outer join
      "osm_platforms_20250623_md" osm on mta.field_4 = cast(osm.ref as text)
where
  osm.ref is NULL
or
  cast(osm.ref as float) <= 0
order by
  cast(mta.field_4 as float)
;

This one included DC Metro areas, so needs more tuning to be useful.

Show mismatched route lists per stop:

select
      upper(substr(mta.name,1,40)) as nm,
      substr(mta.route_osm_style,1,32) as mta_route_osm,
      substr(osm.route_ref,1,32) as osm_route,
      mta.stop,
      osm.ref
from
      "bus-stops-20250611+0626" mta
left outer join
      "osm_platforms_20250623_md" osm on cast(mta.stop as text)  = osm.ref
where
  (osm.ref is NULL or cast(osm.ref as text) >= '0')
and
  mta.route_osm_style != osm.route_ref
order by
  mta.stop
;

The later version of the MTA list includes the reordered route list, as comparing the strings in SQL was not an obvious approach.  Example results:

[...]
      upper(substr(mta.name,1,40)) as nm,
      substr(mta.route_osm_style,1,32) as mta_route_osm,
      substr(osm.route_ref,1,32) as osm_route,
      mta.stop
[...]
LOCH RAVEN BLVD & STATE HWY DR SB 53 53;103;104 422
LOCH RAVEN BLVD & SAYWARD AVE SB 53;GR 53;103;104;GR 435
SAINT PAUL ST & 31ST ST SB 51;95;SV 95;SV 482
SAINT PAUL ST & 29TH ST SB 51;95;SV 95;SV 483
CHARLES ST & REDWOOD ST NB 51;65;67;95;103;410;411;420;GR;S 51;65;67;95;103;164;410;411;420; 506
SAINT PAUL ST & FAYETTE ST FS SB 67;76;95;103;120;210;215;310;410 67;76;95;103;120;164;210;215;310 516
CHARLES ST & HYATT NB 51;67;94;95;103;SV 51;67;94;95;103;103;164;SV 518
CHARLES ST & FAYETTE ST NB 51;95;103;410;411;420;GR;SV 51;95;103;410;411;420;425;GR;SV 519
CHARLES ST & READ ST NB 51;95;103;GR;SV 51;95;103;410;GR;SV 525

I photographed the bus stop sign at Charles and Read Streets (northbound) only yesterday, confirming the list MTA had. And updated the node: openstreetmap.org/node/6254842377

Charles + Read Streets Baltimore

U.S. Route 40, as opposed to MTA Bus Route 40, is the "National Pike"; see also wikipedia.org/wiki/Old_National_Pike.