NOTE: This is an abandoned attempt. Worked OK but not fast enough on the display refresh and display too small. See another article in this website for the attempt with a low-power transflective LCD. You are welcome to read the first few sections of this article for motivations however.
I am a retired electronics geek with some experience writing software for small microcontrollers and I love the idea of putting that to work in the field of hiking. I guess the main target will be navigation devices using GPS and maps. (I can't think of anything else I want to lug around out in the bush that needs batteries and microcontrollers. Except camera timer or motion stages for astrophotography gear, communications, temperature logging in the snow, torches, e-readers, heaters, rechargers, solar power, drones... oh all right, maybe there might be more than navigation stuff.)
NOTE: Unlike my hiking and running and other sporty endeavors, this blog is pretty dry and technical. It is also incomplete as I make no effort to put all the gritty details into it. It's more about the concepts, thoughts and results than about exactly how to do it. Expect some ranting though...
YDAW! (Your <del>dinosaurs</del> devices are wrong)
One of my main disappointments with off-the-shelf GPS devices or smartphone apps is the lack of control over what detail of the maps gets displayed. Here's an example from a recent trip where the hut wasn't shown until I zoomed right in on where I knew it was, and then it showed as a weeny-tiny little thing with almost unreadable text.
Another experience is frustration when trying to see the name of a track when it only appears at a single zoom level and you don't seem to be able to 'pinch' the phone screen the right way to see it.
Or small watercourses, critical for resupply, not showing up at all. Are they in the map data? Who knows?
What I envision is a method whereby you can select a category and have the software search for all the key words that are associated with it (like 'river', 'creek', 'ck', 'water', 'lake', 'dam', 'stream', 'rivulet', 'burn', 'tributary' etc (there are lots!)) and have those details displayed regardless of zoom level. Clutter will probably result, but maybe that's better than not being able to find stuff.
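A first cut at that category search could be a keyword table checked against every tag value. This is only a sketch in desktop Python (CircuitPython's re module is more limited, and the category lists here are hypothetical stand-ins that would need a lot of tuning against real OSM tags), but it shows the word-boundary matching needed so a short keyword like 'ck' doesn't fire on every word containing those letters:

```python
import re

# Hypothetical keyword lists per category -- the real lists would be
# much longer and tuned against actual OSM tag values.
CATEGORY_KEYWORDS = {
    "water": ["river", "creek", "ck", "water", "lake", "dam",
              "stream", "rivulet", "burn", "tributary"],
    "shelter": ["hut", "shelter", "cabin", "refuge"],
}

def matches_category(tags, category):
    """Return True if any tag value contains a whole-word keyword
    belonging to the chosen category."""
    words = CATEGORY_KEYWORDS[category]
    for value in tags.values():
        text = str(value).lower()
        # \b anchors mean 'ck' matches "Smith Ck" but not "back"
        if any(re.search(r"\b" + re.escape(w) + r"\b", text)
               for w in words):
            return True
    return False
```

So a feature tagged {"name": "Simpson's Hut"} gets picked up by the 'shelter' category even though the mapper never used a dedicated hut tag.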
Apart from the hardware/software details of managing and displaying maps and GPS position, I think that managing the level details like 'text on map' is going to be the hard part, and may require some thinking outside of the box. Don't even get me started on contour lines or shading to show terrain!
Update April 15 2022. I did recently (since getting well into this project) learn about the Bangle.js v2 - Hackable Javascript Smart Watch : ID 5427 : $89.95 : Adafruit Industries. It has the ability to load OSM map tiles, has bluetooth and a GPS, and best of all it is programmable, using Javascript, which is not my favourite by any means. It has a color screen, daylight readable, but at 176x176 it is quite small (even smaller than the 200x200 e-ink display I'm playing with). On the other hand, it is small, light, waterproof and supposedly has a 4 week battery, though how long that'd last with the GPS on is another question. (I read a test someone did whilst cycling and 3 hrs ate 60%, so no, not 4 weeks.) It is attractive, but one thing it lacks is easy hardware extensibility. On the other hand, it probably has everything in it that I'd want. It uses a Cortex M4, has about the same RAM and I think twice as much flash as the ItsyBitsy. Dang it, why didn't I find it sooner? Now I have to get one and see what I can do with it. Awwww...
What ya gonna do about it?
I've played around with a few things, like how to get the maps and how to interpret them. Also how to connect an e-ink device to a microcontroller.
Maps
Here's the result of an experiment where I coded up (in Python) something that would parse an OSM (JSON version) map, pull out the positions of huts and rivers and roads, and make an image for later display.
I made use of OSM maps because they are free, there are tools to get just the 'tiles' of interest without having to download huge files, their internals are well-described, and I can get them in a JSON database format that Python can deal with happily. See Export | OpenStreetMap for where to get map data. I also wrote my own code to convert the somewhat bloated XML format OSM maps into JSON, leaving out all the stuff like 'who put this point here and when' that, whilst needed for map maintenance, I don't need to cart around with me. By also leaving out 'relations', which are area-based objects, usually administrative and not all that necessary for navigation (I hope), and de-referencing 'nodes' into the 'ways' (i.e. finding the coord pair referenced by an id and compiling them into the list of coords directly), I get something like a 10x reduction in map size.
I did find that map makers (often more than one person) are not terribly consistent with how they add features. Something like a hut could be a single node (like a waypoint) with a tag that could say 'Simpson's hut' or 'Building' or 'shelter' or... you get the picture. Or it could be a collection of nodes into a 'way' to define a region around the hut, or even a 'relation' which is a collection of ways. See Elements - OpenStreetMap Wiki. So any attempt to pull out and display details about all 'huts' will have to be fairly sophisticated about how it looks for them, and not simply rely on the word 'hut' in the tag nor assume that they are a single coordinate.
Pretty crude but it let me play with how to present the map and showed me that things like avoiding text collisions were non-trivial.
The other issue I'd like to tackle would be power, and hence the interest in e-ink. One problem with e-ink at the moment is color, or lack of it, and the other is display refresh. Tri-color displays are available at a reasonable cost and there are even some 7 color devices. However, one thing that the monochrome displays are better at is faster refresh. The main reason for wanting color is to emphasise the current position, using a red dot. So a tri-color display could do that. On the other hand, a monochrome device could do that too by blinking the dot, especially if it supported partial refresh.
JSON and the Argonauts
OSM maps are usually in XML format, and that's fine, but for my purposes I'd like JSON (initially mainly because the ArduinoJson library is pretty good at deserialising it, but eventually because I found it so much easier to work with JSON than XML).
I can get JSON files from overpass turbo (overpass-turbo.eu) using a query like:
[out:json];
( //make a union of queries using ();
relation({{bbox}}); //give me all the relations
way({{bbox}}); //and ways
>; //recurse down and find all the nodes in the ways
);
out body; //output all the interesting stuff like coords and tags but cuts user, uid, changeset etc
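The {{bbox}} placeholder is overpass turbo syntax; when talking to the Overpass API directly you have to substitute the bounding box yourself as (south,west,north,east). A small Python sketch of building that query string, equivalent to the query above:

```python
def overpass_query(south, west, north, east):
    """Build the raw Overpass QL version of the query above.

    overpass turbo expands {{bbox}} itself; the raw API wants the
    box spelled out as (south,west,north,east)."""
    bbox = "({:.6f},{:.6f},{:.6f},{:.6f})".format(south, west, north, east)
    return (
        "[out:json];"
        "("                  # union of queries
        "relation{b};"       # all the relations
        "way{b};"            # and the ways
        ">;"                 # recurse down to the nodes in the ways
        ");"
        "out body;"          # coords and tags, no user/uid/changeset
    ).format(b=bbox)
```

The resulting string could then be POSTed to an Overpass endpoint (e.g. with urllib.request); I've only sketched the query-building part here.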
Note however that there is an XML library (YXML) for Arduino too, and maybe I'll look back into that in the future... It is naturally good at deserialising in chunks but doesn't lend itself to handling nested elements (which the ArduinoJson lib is good at... hmmm).
Kapoo
What CPU 'engine' to use to drive all this? It needs to be light, low-powered, reasonably powerful, lots of RAM and non-volatile memory, be able to be coded in Python ('cos I don't wanna go back to C++) able to drive appropriate peripherals like GPS and display and buttons, have pre-existing libraries so I don't have to re-invent all the code. Maybe talk to me? At least a buzzer.
As for why Python when it is slower than C++ by orders of magnitude? Well, this project is mainly io-bound not compute-bound. i.e. it spends most of its time waiting for the GPS, the buttons or the display, so Python's relative slowness in calculation compared to C++ is not so important. More important, to me at least, is ease of implementation and for me, Python is much easier to program in than C++.
Need wifi or bluetooth? Not during normal use though something needed to update maps and so-on. Not trying to re-invent the all-singing, all-dancing sports watch here. USB connectivity is common and will probably be enough.
There are more and more contenders these days for off-the-shelf microcontrollers. Real Python would be best but I kinda like the Circuit-Python or MicroPython capable devices though I need to be a little careful because not all of these cut-down Pythons support things like JSON. I played with one for a while that supported BSON but not JSON so that required another pre-processing step to make BSON maps. Doable, but better avoided. Fortunately, the people (Adafruit and Micropython) who are developing these low-footprint Pythons are catching up rapidly.
Ideally, something capable of a full Linux (which brings all sorts of familiar tools and drivers to bear) would be nice, like a Raspberry Pi but they are usually also quite current-hungry, though getting better.
Would 'bare metal' be better than the burden of an OS, or would a real OS make life so much easier? Can all the things that an OS would make easier be done offline and just the results stored with the map? (like do the search for 'huts' beforehand because yo' know you are gonna want it?)
I have an ItsyBitsy that has an ARM Cortex M4 that runs at 120MHz, has 200KB of RAM and 500KB of flash for program. It can be coded using CircuitPython or Arduino. It's pretty powerful as microcontrollers go so I'll play with that first to get some ideas...
Already available?
There are a few devices already available (maybe) that look good. But when I tried to buy this one, Inkplate 6, a while ago (end 2020), I was 'beaten around the bush' for weeks before the vendor admitted that they couldn't get it. So angry! However, they do say that they have 4 available on the shelf right now (Jan 2022). Not tricolor though. Boo.
GitHub - DasBasti/IndiaNavi_Firmware: ESP32 firmware for IndiaNavi outdoor navigation project looks like it's a similar project...
M5Stack® M5Paper ESP32 Wifi + bluetooth Development Core Board V1.1 with Touch EInk Display 4.7inch 960X540 180° Viewing Angle Sale - Banggood Australia looks interesting... Not color.
Watch yer lookin' at?
The display is going to matter a lot. A nice bright vivid current-hungry OLED? Nope, obviously not. Something really exotic like a small, red, dot-shaped robot that crawls around on a paper map? Would love to but can't think how to make it robust and portable and useable in rough conditions.... Hmm, trained lady-bugs? Cockroach with Bluetooth?
No, the best I can think of at the moment is an e-ink device, either color or monochrome. Preferably at least black-white-red so that I can use the color for emphasis (i.e. the 'you are here' dot or crosshair). Low-power, good in sunlight, holds display when off, simple interface...
It's all very preliminary now. I'm currently playing with a small 1.54 inch display and an ItsyBitsy microcontroller to learn some more about the characteristics of these displays. (Note that the Jaycar link shows it as monochrome but the blurb there says it's red-white-black, and it is. That's not all they got wrong. Their doc sux.)
First thing I've learned is that just because the manufacturer gives a time for a fast partial refresh (thus implying that it does support partial refresh) doesn't actually mean that there is fast partial refresh support in the downloaded libraries to drive the device nor any attempt to demonstrate how to use it in the example code.
Second thing is that drivers may not be available for the device in the framework you want. The 1.5" e-ink I have IS supported by the supplier for Arduino, but the example doesn't lend itself to expansion. It IS supported even better by the GxEPD2 library written by Jean-Marc Zingg (but only just, as the chip is marked obsolete). It ISN'T supported by Adafruit CircuitPython (but I hacked their EPD libraries and made it work, read on). It might be supported by normal Python on the RPi (the supplier claims it is and provides code) but I haven't tried that yet.
Display refresh is not terribly fast, taking about 3 seconds for the full screen, with lots of flickering and carrying on. I see that the red color bits fade in more slowly than the black bits, but in the end they are nice and vivid.
It was nice to turn the power off and see that the display stayed undisturbed. Perfect.
There's a 'hibernate mode' as well, I see, that turns the power use way down without the need to remove it completely. Issue a reset via a digital input to wake it up.
The display is crisp, and in a different trial I could see that it was capable of displaying very small features quite clearly (it's a 1.54" 200x200 display). However, one reason for this DIY project is to have a biggish display, so this is certainly not the final device. Still, it rivals the Fenix 7 sports watch display for size and res, which is 1.3" and 260x260.
Wiring
I tried using Fritzing to draw up a nice picture of the wiring, even paid for a license for it. However, I got tired of searching the libraries for missing components and decided just to draw boxes and labels by hand. Much faster, more flexible. Not as pretty.
Thanks for the memories
The Itsybitsy has about 200KB of RAM and about 512KB of flash memory and I've put an SD card reader arrangement on it. (There's a neat way to do this that uses one of the many microSD->SD adapter thingies you have hanging around, soldering to its contacts, plugging them into the itsybitsy SPI bus and then inserting your microSD card into the adapter. This works because the microSD card itself has the SPI interface and controller on it. All you really need is access to the contacts!) Something like Cheap Arduino SD Card Reader : 6 Steps - Instructables except in my case as my microcontroller is 3.3v and microsd cards use 3.3V I don't need the resistor dividers.
However, whilst being able to read gigabytes of SD files is nice (and the OSM maps I want to use are all of that big), how to do it in such a way that the limited memory of the micro can hold enough at one time to be useful? It certainly can't hold a very big map with all its 'nodes', 'ways' and 'relations' at once. Even if it could, searching through the nodes database when drawing a way (which is a collection of nodes) is very slow on a micro. I've seen a single 'relation' containing many 'ways' referencing significant 'nodes' that took up 18KB all by itself, and that was just one of many 'relations' you'd find in a typical map. 200KB is just not enough for a whole map. (A raster tile image, maybe...)
I've worked out a way to read each 'node' or 'way' or 'relation' in an JSON file one at a time, using something like this:
mapFile.find("\"elements\": [");          //find the array of elements
do {                                      //then read one element
  DeserializationError error = deserializeJson(mapDoc, mapFile);
  if (error) break;                       //check for errors
  //... do something with the deserialised element now in mapDoc
} while (mapFile.findUntil(",", "]"));    //until end of array
This works because almost everything is inside an array of elements in the JSON file that Overpass puts out if you ask nicely. Each element is bracketed by {} and the deserialiser stops when it reaches a matching }. e.g.
{
"version": 0.6,
"generator": "Overpass API 0.7.57 93a4d346",
"osm3s": {
"timestamp_osm_base": "2022-02-15T04:19:05Z",
"copyright": "The data included in this document is from www.openstreetmap.org. The data is made available under ODbL."
},
"elements": [ <---start after this
{ <---starts here to read first element
"type": "node",
"id": 280706159,
"lat": -37.9158100,
"lon": 145.2752878
}, <---ends first element at the } then find the , then
start again
{
"type": "node",
"id": 308123416,
"lat": -37.9106087,
"lon": 145.2761147
},
{..... and so-on
Standalone drawable ways
Here is my recipe for how I make a file that has a list of 'ways' with all the 'nodes' associated with it in one file, for a small map, say 10km square. Of course, the missing magic sauce is the Python code in read_osm_to_micropython.py. Eventually I will put links to Github for that and other code.
Go to https://www.openstreetmap.org
Get the area you want to map
Press the export button
Choose an area about 10km sq using the coordinate tools
Make note of the 4 latitudes and longitudes you used to generate the map. They go in the bounds.jsn file made manually later
Press Export to download an XML map with a suffix of .osm
Put it in d:\maps.osm\something.osm
Set mapdir and mapfiles and mapfile in top of C:\Users\terry\PycharmProjects\read_osm\read_osm_to_micropython.py on Hipower. pc (Obviously you can't do that because this is my home PC)
I SHOULD be able to open that python code in PyCharm or in VSCode and run it from there (use the D:\anaconda3\python.exe interpreter), but they have been shitting me lately about some missing module, so maybe I need to run it in an Anaconda console (from the start tiles) using python read_osm\read_osm_to_micropython.py
Then run it (use the D:\anaconda3\python.exe interpreter) to make a something.osm.jsn file that ends up in C:\Users\terry\PycharmProjects\read_osm\
Rename something.osm.jsn to ways.jsn
Make a bounds.jsn manually (Get the min and max latitudes right. Max is the least negative)
Put the output jsn files on the SD card on ItsyBitsy
Restart the ItsyBitsy (making sure that the import of loadmap is uncommented in the code.py file) and check that it loads the map. If there's a problem (i.e. the map doesn't show after a few minutes), check the bounds and maybe try it all again. Maybe the map is too big. Monitoring the ItsyBitsy with the USB-serial console can be instructive. A large map (e.g. lake_house) can take a while: 30s to load and draw. The drawing is fast; it's actually the loading that takes the most time. Nonetheless, the code should cull to avoid drawing things off the screen.
So, the new idea is to preprocess the OSM maps and find all the coordinates needed for a way using Python code on a conventional PC, putting them all into a file where each way is a separate, self-contained (i.e. with node coordinates already found) JSON object. Then they can be read sequentially from the file and drawn without having to go searching for each node by id to get its coords. Much faster to get the map drawn (10 or so secs for a fairly busy map). It does make searching for things a bit dicier, but we'll see. The time saver is that it is 'single pass', and that's the memory saver too. I need to look after single nodes that have tags, like "Fred's hut" for example, which may not even be referenced by a way, but I can deal with that in the pre-processing (which is FAST on the PC): just create new 'ways' that only have one node.
This worked well. To read and draw the coordinates of all 3761 ways in this approx 10km square map near the Lysterfield area, for example, only took ten seconds. A larger map takes 30s.
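The core of that preprocessing step could be sketched like this in Python. The input field names ("nodes", "tags", "lat", "lon") follow the Overpass JSON; the output format here is just a guessed-at minimal self-contained way, not the exact file layout:

```python
def flatten(elements):
    """Pre-process OSM elements into self-contained drawable ways:
    each way carries its node coordinates directly, and tagged
    standalone nodes become one-node 'ways' so they aren't lost."""
    coords = {}        # node id -> (lat, lon)
    referenced = set() # node ids used by some way
    out = []
    for el in elements:
        if el["type"] == "node":
            coords[el["id"]] = (el["lat"], el["lon"])
    for el in elements:
        if el["type"] == "way":
            referenced.update(el["nodes"])
            out.append({
                "tags": el.get("tags", {}),
                "coords": [coords[n] for n in el["nodes"] if n in coords],
            })
    # tagged nodes not referenced by any way (e.g. "Fred's hut")
    # become one-node ways
    for el in elements:
        if (el["type"] == "node" and el.get("tags")
                and el["id"] not in referenced):
            out.append({"tags": el["tags"],
                        "coords": [(el["lat"], el["lon"])]})
    return out
```

On the device, each output object can then be drawn in one pass with no id lookups at all.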
When I was trying to work out just what I had mapped, it was difficult to line the e-ink display up against Google Maps. The lack of distinction, e.g. for the lake, doesn't help, though it is discernible as a blank area when you know where to look. Marking water in red might help. This means programmatically scrutinising the tags to look for words like river, lake, water, creek, sea etc. Filling in areas with color or shading would help, but this e-ink can't do that... unless I write some code for hatching. Hmmm.
Zoomage!
Setting a zoom factor when drawing the map is interesting. It slows things down, with the current code. I fixed that by not bothering with anything that falls outside the display window after the zoom is applied. For the Lysterfield Lake map above (about 10km square) it still takes 10s to load and draw even when zoomed in by 10.
A 1km square map is quite reasonable for pathfinding (though less so for planning navigation for a route. Hmm, that gives me a name for the device: E-Ink GPS PathFinder™). If I limited the maps to being that small to start with, they'd be a lot faster to load as well. More pfaffing about getting the maps 'tiled' though. Perhaps I'll try it with a map centred on my 'lab' position so I can see what total map redraws during cleanups and updates from the GPS look like.
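The tiling pfaffing is mostly simple arithmetic: roughly 111 km per degree of latitude, with longitude scaled by cos(latitude). A sketch of cutting a bounding box into roughly 1km tiles (the tile tuple layout is just an assumption):

```python
import math

def tile_bounds(min_lat, min_lon, max_lat, max_lon, tile_km=1.0):
    """Split a bounding box into roughly tile_km-square tiles,
    each returned as (min_lat, min_lon, max_lat, max_lon).
    ~111 km per degree of latitude, longitude scaled by cos(lat);
    plenty accurate for map tiling."""
    dlat = tile_km / 111.0
    mid_lat = (min_lat + max_lat) / 2
    dlon = tile_km / (111.0 * math.cos(math.radians(mid_lat)))
    tiles = []
    lat = min_lat
    while lat < max_lat:
        lon = min_lon
        while lon < max_lon:
            # clamp edge tiles so they don't overshoot the box
            tiles.append((lat, lon,
                          min(lat + dlat, max_lat),
                          min(lon + dlon, max_lon)))
            lon += dlon
        lat += dlat
    return tiles
```

A 10km x 10km box then comes out as about a 10x10 grid of tiles, each small enough to load in a few seconds.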
Text me...
I made a simple alphanumeric font set plus some icons, using JSON-encoded 'strokes', quite similar to the idea behind the way the maps are drawn. I did this despite there being a very nice font available in the GxEPD2 libs, because that font isn't rotatable and my vector font will be. Here's a sample (this is still in Arduino; I hadn't deserted to CircuitPython yet):
const char vtx_A[] = "{\"A\":[ [[0,10],[4,0],[8,10]], [[2,5],[6,5]] ]}";
const char vtx_B[] = "{\"B\":[ [[0,0],[6,0],[8,2],[8,4],[6,5],[8,6],[8,8],[6,10],[0,10],[0,0]], [[0,5],[6,5]] ]}" ; //B has 2 strokes
There isn't a lowercase set yet. The figures are drawn in an 8x10 rectangle and can be scaled by a float, and the thickness of the stroke can be multiplied by a float as well. The photo shows two scales. The smaller scale (1x1 with thickness 1.5) is probably best suited for text on the map. I'll give text a white background. (Or maybe a red one, just for fun.) A nice way to backdrop the text would be to write it twice, once in white with a thicker stroke, and then once more in black with the final thickness.
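Decoding one of those glyphs into drawable line segments is straightforward. A Python sketch (the Arduino version would do the same thing with ArduinoJson), using the 'A' glyph above:

```python
import json

def glyph_segments(glyph_json, scale=1.0, origin=(0, 0)):
    """Expand one vector glyph (JSON-encoded strokes in an 8x10
    design box) into a flat list of scaled line segments
    ((x0, y0), (x1, y1)) ready for a display line-drawing call."""
    # each glyph JSON holds a single key (the character) mapping
    # to a list of strokes, each stroke a list of [x, y] points
    (_, strokes), = json.loads(glyph_json).items()
    ox, oy = origin
    segs = []
    for stroke in strokes:
        pts = [(ox + x * scale, oy + y * scale) for x, y in stroke]
        segs.extend(zip(pts, pts[1:]))  # consecutive points -> segments
    return segs
```

The 'A' glyph, for example, expands to three segments (the two legs and the crossbar), which rotate trivially since they're just coordinate pairs.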
The Nine Billion Names...
One thing is certain: I need to winnow labels and be selective about what names get shown on the map (which takes me back to the original reason for this work, to find a better way to display labels). I've seen that some roads, like Napoleon Road below, are represented multiple times because they consist of multiple 'ways'. In an urban setting there are lots and lots of street names. Here's what my little local map looks like if you print all the names, even at half scale. The bigger 'HERE' in the middle shows where the GPS says we are. It's also clear that the labels need to be applied to the map after all the ways are drawn, else they get obscured by the lines of the map. On the other hand, then the lines of the map get obscured by labels. Hmmm. I think we are at the crux of the problem: how to see labels without obscuring everything else?
Legends!
Especially on a small screen, trying to mix text labels with roads, rivers and so-on leads to a very cluttered display. Maybe a better approach is to use an icon to indicate that there is a label, like a hut icon, road, river etc. Then select/touch the label from a pageable list with a thumbwheel/touchscreen and the label pops up on the map. It stays there, on the map, until you make it go away. This is one way to make sure you see the detail you want. Icons could indicate that the POI is a house, or road, or water. A button could make a subset of the labels, say huts, appear on the map, then all go away when you want to unclutter, except perhaps the ones you've somehow selected to keep. It's somewhat like the idea of some labels appearing only at appropriate zoom levels, but more controlled and independent of the zoom level.
Avoid Text Trampling
At the very least, we want to avoid text on top of text. This disregards text on top of map which, unless I adopt the 'legends' idea or a separate display for text (see below at Two mice or one cat? and Off the Grid), is kinda unavoidable. (Although, for a sparse enough map, it might be possible to arrange text to have minimal impact on the map, say, move it a bit to avoid existing map 'bits'. Or use an 'xor-ing' effect where pixels on pixels show underlying pixels. Like the red-white flip stuff I do later for the crosshair. See below.)
To achieve this we'd need a class of object in the code, call it a labelList, that keeps track of what labels have been put where and gives new coordinates to put text at if a collision is detected. In Python this would be a doddle, and in C++ it isn't a lot harder, just a bit fiddly. The question is: should I bother writing this for Arduino, or should I call the Arduino development to a halt, get a bigger display and move on to doing this in CircuitPython?
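A sketch of what that labelList might look like in Python. The nudge-downwards strategy and the 8x10 character cell (matching the vector font) are just assumptions to make it concrete:

```python
class LabelList:
    """Track the bounding boxes of placed labels; nudge a new label
    down the screen until it no longer overlaps anything already
    placed. Character cell assumed 8x10 design units times scale."""
    def __init__(self, step=12):
        self.boxes = []
        self.step = step   # pixels to nudge per retry

    def _overlaps(self, box):
        x0, y0, x1, y1 = box
        return any(x0 < bx1 and bx0 < x1 and y0 < by1 and by0 < y1
                   for bx0, by0, bx1, by1 in self.boxes)

    def place(self, x, y, text, scale=1.0, max_tries=10):
        """Return an adjusted (x, y) with no label-on-label collision."""
        w = len(text) * 8 * scale
        h = 10 * scale
        for _ in range(max_tries):
            box = (x, y, x + w, y + h)
            if not self._overlaps(box):
                self.boxes.append(box)
                return x, y
            y += self.step   # collision: nudge down and try again
        # give up gracefully and draw it anyway
        self.boxes.append((x, y, x + w, y + h))
        return x, y
```

So two ways that both want a label at the same spot (Napoleon Road again...) end up stacked instead of printed on top of each other.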
What sort of display do we need?
Large enough but not too big. 4" minimum, 10" max?
High resolution. 500x500 at least, higher better.
Partial refresh. So we can highlight things, popup text etc
Built-in memory so that the demands on the micro are reduced.
Tricolor or Shades of grey. 3 distinct ones at least.
Fast refresh. Coupla seconds, not 10s of secs
Buttons, lots of buttons. An alphanumeric keyboard would be useful.
Touchscreen or joystick or direction pad.
Waterproof. This is especially important with the touchscreen. A non-touchscreen is more easily encapsulated but touch would be very useful.
Flexible rather than brittle glass screen.
Must be supported by Arduino and CircuitPython libs.
SPI bus. Or I2C. Not parallel, preferably, though if it is very fast...
Two mice or one cat? Are more displays useful?
It is proving difficult to satisfy the display criteria outlined above. Adafruit, who maintain the CircuitPython libs I ultimately want to use because I like Python better than C/C++, do seem to support up to 4.2" tricolor e-ink via their e-ink friend with SRAM, but I need to find the actual display somewhere else as Adafruit don't sell it. I'm finding (see Low power displays comparison) that 4.2" displays with IL0398 driver chips might be the most promising large-ish e-ink displays for support by the Adafruit displayio CircuitPython module. However, I am beginning to wonder if maybe two smaller displays might be better than one larger one. There is a 2.9" e-ink display available from them, and it is a flexible one, which has advantages, but the resolution, 290x150, is not that great compared to the 200x200 of the 1.54" one I have.
So imagine a device with 2 x 1.54" displays (Or even bigger). It'd be like having 2 Fenix 5s side by side. (Not quite, but maybe you get the idea) One could be used for the map graphic, and one could be used for the legend text.
I'd have to make sure to get one that CircuitPython supports (it doesn't support the chip on the one I currently have, for example). The chips for 1.54" 200x200 e-inks that are supported are the SSD1608 (mono) and SSD1681 (tricolor). These Adafruit displays come with built-in SRAM and a microSD card slot, both of which would be useful.
See here for the CircuitPython bundle GitHub - adafruit/Adafruit_CircuitPython_Bundle: A bundle of useful CircuitPython libraries ready to use from the filesystem.
I can imagine using one controller and two displays, or even two controllers both getting GPS strings from a single GPS receiver but acting differently: one showing the map graphic and position, the other showing text with legend and bearing, for example. Given that each display comes with a microSD card slot, I could put an SD with the map files on both. Another idea is to use one display for a zoomed-out version of the map, and the other for a zoomed-in version, with the text on the map but, because of the zoomage, not so cluttered.
I'd have to do something about touchscreen though. I wonder if I can get a 1.54 " touchscreen? Or maybe one big enough to cover both at once? Even with some left over for touch buttons on the side? Hmm.
There's a possibility of mixing tech here too. Maybe if I can get a 4" color epaper, low refresh rate, reasonable res, and a high refresh rate Sharp Memory single bit 400x240 display, I could use them together. Use the mono for a zoomable, scrollable but low res map with labels and the epaper for a better map but with no labels and not needing fast refresh. Have same crosshair on both so I can see where I am. Maybe tracklog on the lowres hi refresh one.
Another approach would be to use the e-ink as the large non-changing zoomed out map showing context, the intended route, waypoints but minimal text, whilst another fast refresh map like Sharp Memory LCD shows the quickly changing zoomed in map with text labels, maybe on the map, maybe in a scrollable grid-referenced list. Sounds like 3 displays now. 2 graphic, one text, though a graphic one could do text too, I suppose.
On Our Selection
It does occur to me (and experiments have proven it) that a legend wherein you have numerical labels shown on the map is not going to work well if you have too many labels. Showing all the labels in a text list will need the ability to page through them, and you'd want a fast refresh for paging. The map can get crowded even with just a 2-digit label. (Think urban setting again, with lots of streets, and each street has a number to refer to.) What would be nice would be a way to select the item of interest by touch or crosshair, with no need for a label. But the crosshair idea needs a fast-refreshing screen, so we are left with touch. A touchscreen is not out of the question... but resolution might be an issue. I still think that 'filtering' the labels is the way to go. Roads, water, buildings. And maybe a sub-filter, like major roads, tracks etc. I wonder if the map has enough info to discriminate? Maybe not. I see all the streets labelled Highway regardless. Alright, maybe show just some of the streets? Maybe just the one(s) selected?
Off the Grid
Latest idea of how to deal with labels. Have two displays, one for maps and one for labels. Have the labels indexed with a grid coordinate. E.g. F4. Have a printed grid on the map display (i.e. not in e-ink taking up precious screen space, but permanently printed along the sides). Have a scroll button/joystick/whatever to allow scrolling down the label list which will be quite long. Paging rather than scrolling, I think. So the label display needs to be a fast refresh. Maybe mono? (That appears to be fastest) Maybe Sharp Memory LCD (It's also very low power)
Calculating the grid coords for features from their lat and long is trivial; I do that already to get the pixel coordinates for drawing them, and these F4 etc. are just low-res pixel coordinates. (Could use int(x,y)/20, but better to convert x to letters so it's clear that Eastings come before Northings.) Grey grid-lines would help too.
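That conversion really is a one-liner each way. A sketch, assuming 20-pixel grid cells and integer pixel coordinates:

```python
def grid_ref(x, y, cell=20):
    """Convert pixel coordinates to a printed-grid reference like
    'F4'. Letters for eastings (x), numbers for northings (y),
    so the ordering is unambiguous."""
    col = chr(ord("A") + x // cell)   # 0-19 -> A, 20-39 -> B, ...
    row = y // cell
    return "{}{}".format(col, row)
```

On a 200x200 display with 20-pixel cells the references run from A0 in one corner to J9 in the other, so two characters always suffice.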
Get in line
Another possibility is to write the labels inline, for polylines ('ways') if they'll fit. If they don't, well they don't deserve to be shown. So there. Then you need the label list and grid coords to deal with those.
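The 'will it fit' test only needs the polyline's total pixel length compared against the text's width. A sketch, assuming the 8-unit character width of the vector font:

```python
import math

def fits_inline(way_pixels, text, char_width=8, scale=1.0):
    """Decide whether a label fits written along a polyline, by
    comparing the polyline's total pixel length against the
    rendered width of the text."""
    length = sum(math.hypot(b[0] - a[0], b[1] - a[1])
                 for a, b in zip(way_pixels, way_pixels[1:]))
    return length >= len(text) * char_width * scale
```

Ways that fail the test then fall back to the label list and grid coords.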
What about text on wavy lines, like rivers will be? You'd need to either follow the wavy lines or some approximation of them. Yuk. I can do this in Python... Don't want to do it in Arduino C/C++. Should this be part of the preprocessing? Should it in fact just produce a bitmap? What a cop-out!
Getting directions
One of the most used features of my off-the-shelf GPS devices is direction to go in to get to a planned destination.
I usually have a red line showing the route I want to take. Getting this onto the map is a whole new endeavour, but it will essentially be a polyline (a 'way') consisting of a list of lat/lon coords. Preferably a .GPX file or something easily generated from a .GPX. It will need a colored line.
The map will always be in 'North Up' mode. The device would need to be rotated to align the map with Magnetic North (taking declination into account!), and then looking at the route line will show which direction to go in. I could do all this with a separate magnetic compass; I'd have to make sure the compass isn't messed up by being close to the electronics. The alignment and 'looking for route direction' processes can easily be done separately, so the compass doesn't need to be close to the device.
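The direction-to-next-waypoint part is the standard forward-azimuth formula. A Python sketch (this gives true bearing, so magnetic declination still has to be applied before comparing with a compass):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in
    degrees clockwise from true north (standard forward-azimuth
    formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = (math.cos(p1) * math.sin(p2)
         - math.sin(p1) * math.cos(p2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360
```

Feeding it the current GPS fix and the next point of the route polyline gives the heading to show alongside the red line.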
It would be nice to have the compass built in. Maybe an electronic one using the 2nd display ( switch between labels and other functions, like heading, distance travelled, ETA, pace etc) is not such a stretch. In which case, making sure the electronic compass isn't messed up by e-noise is definitely an issue. Power is a consideration too. No need to waste it.
CircuitPython
So far I had done the code in the C/C++ framework of the Arduino IDE and libraries, and it has served me well, but the ItsyBitsy Cortex M4 board has enough flash and memory to run CircuitPython, and in many ways I am more comfortable with Python than with the C/C++ platform of Arduino. Especially when it comes to things that Python provides, such as automatic type-handling, lists and dictionaries, JSON, generators and iterators, and even cooperative multitasking using asyncio. It's all doable in Arduino, but it is just fiddly to code it myself and make it work, whereas in Python a lot of it just works out of the box. (Hah!)
One thing that will be harder is support for the e-ink display, because whilst I have Python code examples for this 1.54" display (based on the superseded IL0376F), there isn't a nice library implementation for it like the Arduino GxEPD2 lib, which ties into the Arduino GFX graphics library. However, there are CircuitPython graphics packages I can use (I hope) that are even better (I hope)...
Converting Itsybitsy to Circuitpython platform
This is pretty easy.
Double-reset the board when connected to USB and a drive called ItsyM4Boot appears.
Drag adafruit-circuitpython-itsybitsy_m4_express-en_US-7.2.3.uf2 onto that drive. (You can get it from Downloads (circuitpython.org).)
That drive disappears and another called Circuitpy appears. (It already has a bunch of .py files on it from previous work. )
It'll run the file called code.py at startup, so in there do an import of the file you want to run.
Converting back to Arduino
This is easy too.
Plug board into USB and double-reset it to get ItsyM4Boot drive to appear.
Use Arduino IDE to put a program on it.
This doesn't destroy any .py files on the Itsybitsy, but it's a good idea to back them up nonetheless.
Programming the ItsyBitsy in CircuitPython-land
Just edit offline and then drag onto Circuitpy disk, or even edit it online. Put a line in code.py that imports the .py file you want to run at startup. Then press the reset button, just once, to start running. Or you can run the code from within the editor, using the REPL.
Mu, Thonny and Pycharm will all work as editors. They all know something about CircuitPython, which is useful. They all provide the ability to run the REPL, and all can run (and stop) the code from the editor without having to reset the board. Mu is annoying as it tries to run the code as soon as you save it. (I think you can turn that off.)
I prefer Pycharm. It is more sophisticated than the other two and its ability to navigate around the code is far, far better. You do have to manage file saving carefully though, because it is quite proactive about saving the file even when you don't want it to, and with Circuitpython that causes execution of the code to restart. Find solutions by Googling.
In the end I ended up using VSCode. Pycharm had some issues I couldn't get past. VSCode is better in some ways, worse in others, but will have to do.
Make the e-ink display work with CircuitPython.
This is where the hard work starts.
While there are e-ink display drivers for CircuitPython, they are all targeted at displays with other chips (ssd1681, IL0373 or IL0398) and so won't work with the one I have (IL0376F), which is annoying. However, the code structure for the IL0373 is quite close to what's required (except for the 2-bit gray shades).
I do have Python code examples for this display though, from Duinotech 1.54 Inch Monochrome E-Ink Display Module | Jaycar Electronics
Using the following pinout.
spi = board.SPI() # Uses SCK and MOSI
epd_cs = board.D10
epd_dc = board.D9
epd_reset = board.D5
epd_busy = board.D7 #TC all modified for my itsybitsy wiring
However, the Python code available from Duinotech via Jaycar (and ultimately from the manufacturer, Waveshare) is aimed at Raspberry Pi or Jetson boards and isn't going to work unless I port it to CircuitPython pin driving. Looking at epdconfig.py in the XC3747-softwareMain.zip, it looks fairly easy to port from rpi GPIO commands to circuitpython digitalio commands, but even then I wouldn't have access to any graphics libraries unless I make a displayio driver based on what I discover. The example code really only shows loading a bitmap and has its own setup and display commands. Might be worth a try though. You can get the datasheet for this chip at IL0376F.pdf (adafruit.com)
Make copies of epd1in54b.py and epdconfig.py from the XC3747-softwareMain.zip from Jaycar. Start with epd1in54b.py because I used the Arduino epd1in54b file successfully with this display. In epdconfig.py make a class Itsybitsy similar to class RaspberryPi with the pin definitions and digitalwrite stuff aimed at CircuitPython digitalio rather than rpi GPIO and make the implementation target the ItsyBitsy class by default.
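The port might look roughly like the sketch below. This is untested guesswork: the method names (digital_write, digital_read, delay_ms) are assumed from the Waveshare RaspberryPi class pattern, and I pass the pins in so the class itself stays hardware-agnostic.

```python
import time

class ItsyBitsy:
    """Sketch of an epdconfig-style backend exposing the same
    digital_write / digital_read / delay_ms surface as the Waveshare
    RaspberryPi class, but driving CircuitPython digitalio pins
    instead of RPi.GPIO ones."""

    def __init__(self, rst, dc, cs, busy):
        self.RST_PIN, self.DC_PIN = rst, dc
        self.CS_PIN, self.BUSY_PIN = cs, busy

    def digital_write(self, pin, value):
        pin.value = bool(value)      # digitalio pins are set via .value

    def digital_read(self, pin):
        return pin.value

    def delay_ms(self, delaytime):
        time.sleep(delaytime / 1000.0)

# On the device you'd construct it something like this (my wiring):
#   import board, digitalio
#   def out(p):
#       d = digitalio.DigitalInOut(p); d.switch_to_output(); return d
#   def inp(p):
#       d = digitalio.DigitalInOut(p); d.switch_to_input(); return d
#   config = ItsyBitsy(out(board.D5), out(board.D9),
#                      out(board.D10), inp(board.D7))
```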
I got it 'working' in Circuitpython on the ItsyBitsy using the code above with just a few changes, but it's pretty crude. Black is 2 bits per pixel, which gives me shades (which I didn't have in the Arduino code), and the bitmaps are packed binary.
Displayio anyone? (Don't read, didn't work)
Need to convert to displayio. To do that I think I have to work out the init_sequence as per External Display | CircuitPython Display Support Using displayio | Adafruit Learning System. The good thing is that I have code like adafruit_il0373.py to learn from. This implements a displayio driver for a 'similar' e-ink display. Note that you can get the py bundle (rather than the mpy bundle that you usually want) from somewhere like Release March 18, 2022 auto-release · adafruit/Adafruit_CircuitPython_Bundle · GitHub
Quirky-worky display driver
I have a base displayio_il0376f.py 'working' now to provide a IL0376F class based on displayio.EPaperDisplay and it kinda works, but has some quirks. Also, if I leave the display for a while, something, dunno what, tries to write text to it! Very odd.
What I did was to use a driver designed for the IL0373 chip and modify the start_sequence and stop_sequence to make it conform to the IL0376F information I had from epd1in54b.py which was a driver provided by Waveshare, I think.
Where I looked to get info about the displayio-compatible adafruit_il0373.py was:
EPaperDisplay | CircuitPython Display Support Using displayio | Adafruit Learning System (NOTE: The Adafruit website maintainers were kind enough to correct that webpage and put in links to the displayio driver code when I pointed out to them that it was incorrectly pointing off to older EPD library code, which works but isn't displayio compatible. And they did it overnight! Wow, speedy.)
So I have a partially working displayio-compatible e-ink display driver. The problem is that the code for the memory access is deep in the displayio core and it seems to assume that the black RAM is 1 bit wide, whereas I need it to be 2 bits wide. As a result, I have to pull tricks with the screen resolution (pretend it's 400x400 but internally keep it at 200x200) and so far can't get shades of grey, just black and white, though I can kinda get red, but it doesn't line up with the black. Here's a picture of what I mean.
As a result, unless I copy the displayio core and munge it around to make my own, it looks difficult to continue with this approach.
The reason I want it displayio compatible is that there is a lot of support in CircuitPython for it, shapes, text, bitmaps etc, etc and it would be nice to leverage them. It is a pity that the creators of displayio built too much of the memory specific stuff into displayio core and there is no way I can see to correct that from the driver. There is even a Youtube video where one of the maintainers of the code whinges about the weirdness of how the memories and color conversions are managed for epaper. About 1 hr 57 mins into it.
The displayio sources can be perused at:
and discussion of displayio in general at: https://docs.circuitpython.org/en/latest/shared-bindings/displayio/index.html
The memory access seems to rely on uint8_t pixels_per_word = (sizeof(uint32_t)*8) / self->core.colorspace.depth; but I can't see where to set core.colorspace.depth
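That expression just computes how many packed pixels fit in a 32-bit word for a given colour depth; the same arithmetic written out in Python shows why a hard-wired depth assumption matters so much:

```python
def pixels_per_word(depth_bits, word_bits=32):
    """Packed pixels per machine word, as in the displayio core's
    (sizeof(uint32_t)*8) / colorspace.depth calculation."""
    return word_bits // depth_bits

# 1-bit black/white packs 32 pixels per 32-bit word; the 2-bit grays
# I need pack only 16, so every address calculation in the core changes.
```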
I give up on displayio. For my needs, it is too complicated and stuff I need to change is too deeply buried in the core. Sigh. So close.
LATE NOTE: Apparently now Adafruit sells a 2.9" e-ink display using the IL0373 chip that DOES have 2 bit grayscale and apparently IL0373.mpy version 1.31 DOES have code for 2 bit ram. I'll have another look! Also I might have been looking at old source code (I checked, really I did...)
circuitpython/EPaperDisplay.c at main · adafruit/circuitpython · GitHub is where I should look, not sourcegraph, though sourcegraph is nice to navigate!
Another approach using Circuitpython. Use EPD drivers
There is another Circuitpython approach to epaper (older, deprecated in favour of displayio, but that isn't helping me right now) that is simpler than displayio (and not needlessly complicated by groups, tiles, palettes etc). This is adafruit_epd.epd - Adafruit EPD - ePaper display driver — Adafruit EPD Library 1.0 documentation (circuitpython.org)
and
My code on the ItsyBitsy in E:\epd1in54b.py (copied and modified from Waveshare examples) has enough info to let me copy and modify the il0373.py example found in the GitHub above, and the memory handling in the epd.py base class looks like I could modify it fairly easily to get the 2-bit grayscale and 1-bit red RAM handling working properly (if it doesn't already work out of the box). It has rect, text (horizontal only), line, image and circle. I could also re-implement in Python the vector text that I did for Arduino, to give me a text method that can be angled or made to follow a path.
Taking a bit of buffering
Hit my first obstacle with the EPD code. It uses an adafruit_framebuf.FrameBuffer class and THERE IS NO SUPPORT FOR 2-BIT GRAYSCALE. There is an MHMSB format, but it is only single-bit. Arrrgh! I might have to copy and munge that too.
After a lot of pain (modifying il0373.py to make il0376F.py with the LUTs and sequences peculiar to that chip, and making custom versions of adafruit_epd.py, adafruit_framebuf.py and epd_test.py to add in 2-bit grey shades) I finally got somewhere. The color-handling code (both before and after I hacked it) is rubbish because of the two separate color frames with different formats involved, not to mention having to correct a LOT of code where the assumption was built in that the frames would be single-bit and able to be treated like booleans. Eventually, I found I could get a variety of shades of red, which is a surprise, but I initially couldn't manage to get dark grey. So I had 6 distinct colors, 1 more than I expected, but I'd still like to get the missing 2: dark grey and dark pink. (Read on for joyous news!)
50 shades of grey. Well two...
OK, I managed to change lut_g1 (the phase lookup table used to manipulate the black spheres for the first of the grey shades) so I can now see a difference between g1 and g2. Oddly, lut_g1 and lut_g2 were originally identical, so no wonder there was no difference! According to the datasheet, the LUT bytes are grouped in 3-byte chunks, with the first two bytes providing info on voltage and duration and the third byte on repetitions. I found that by changing the voltage and duration in the 13th byte of lut_g1 I could darken it slightly. Presumably, this acts to pull the black spheres forward more than before by holding a higher voltage than originally during that phase: 0x06 means the VCM_DC voltage (somewhere between 0 and -3V depending on other settings) for 6 frames, and 0x42 means VCM_DC+15V for 2 frames. Anyway, it works and so I am happy!
#old table
#lut_g1 = bytearray([0x8E, 0x94, 0x01, 0x8A, 0x06, 0x04, 0x8A, 0x4A, 0x0F, 0x83, 0x43, 0x0C, 0x06, 0x0A, 0x04])
#New table
lut_g1 = bytearray([0x8E, 0x94, 0x01, 0x8A, 0x06, 0x04, 0x8A, 0x4A, 0x0F, 0x83, 0x43, 0x0C, 0x42, 0x0A, 0x04]) #changed 13th byte from 06 to 42
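For what it's worth, my reading of those voltage/duration bytes (an inference from the 0x06 and 0x42 examples, not a verbatim datasheet formula) can be written as a little decoder:

```python
# Assumed interpretation: high nibble selects the voltage level,
# low nibble is the frame count. Only the two levels I actually used
# are named here; anything else comes back as a raw nibble.
LEVELS = {0x0: "VCM_DC", 0x4: "VCM_DC+15V"}

def decode_phase_byte(b):
    """Split a LUT voltage/duration byte into (level, frames)."""
    level = LEVELS.get(b >> 4, "level 0x%X" % (b >> 4))
    frames = b & 0x0F
    return level, frames
```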
So now I have 8 different 'colors' I can work with. Much better than the original 3 (red, black and white) that the original sample code gave me. They are well-separated colors too; I can clearly see the difference between each one, at least when they are side-by-side. (OK, so the two darker pinks aren't particularly distinct...) I can imagine using black for walking tracks and huts (because they are most important), dark grey for large roads, light grey for minor roads or contours, red for the 'you are here' indicator, dark pink for water and the lightest two pink shades for terrain highlighting to indicate steepness. So many possibilities... Plus I can use hatching or multi-bit texture in administrative areas.
The speed of the Circuitpython code to write the display is dominated by the display itself so I can't tell yet if this is going to be a dog in terms of performance. I'll see that when it comes to rendering maps, which is the next step.
I also played a bit with 'fast refresh', which seemed to have both promise and problems. There is no information from the manufacturer on how to speed up display refresh or reduce flickering, but I saw a suggestion at https://benkrasnow.blogspot.com/2017/10/fast-partial-refresh-on-42-e-paper.html
that if you kept just the last voltage/period/repeat sequence in a lookup table (say, changing both lut_b and lut_w) it would achieve the same result without all the flickering. I did try it and it did work: fast and non-flickery. However, it left a faint ghost pattern when black was erased, and the black came out slightly brown when written on a white background. I can see it could be useful, but it would take some fiddling and thinking to work out how to get the best results. Maybe sometimes you'd need to use the original full LUT (say when erasing the screen) and sometimes you could use the faster non-flickery one. It'd be better just to get a display that supports fast refresh natively.
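The trick amounts to keeping only the final 3-byte voltage/period/repeat chunk of each waveform table; a sketch, assuming the 3-byte chunking described above (the lut_b/lut_w names are from my driver code, not a standard API):

```python
def fast_lut(full_lut, chunk_bytes=3):
    """Keep only the last voltage/period/repeat chunk of a waveform
    table, per Ben Krasnow's fast-partial-refresh suggestion.
    Assumes the table length is a multiple of the chunk size."""
    return bytearray(full_lut[-chunk_bytes:])

# e.g. applied to both tables before a fast update:
#   epd.lut_b = fast_lut(lut_b)
#   epd.lut_w = fast_lut(lut_w)
```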
Make MicroSD reading JSON work with CircuitPython.
Whilst I can read and write the flash drive from CircuitPython, it only has a 2MByte capacity, so it couldn't hold much in the way of map files. I'll need the MicroSD card reader. There is a Circuitpython lib for that, and it works. Just need to remember to make an SPI object only once, not once for the SD and once for the display.
Reading JSON files works as expected and reads in one JSON object at a time as it did in Arduino code.
import sdcardio
import board
import digitalio
import storage
import os
import json
import busio
spi = busio.SPI(board.SCK, MOSI=board.MOSI, MISO=board.MISO)
SD_CS = board.A1 #chip select for SD card
sd = sdcardio.SDCard(spi, SD_CS)
vfs = storage.VfsFat(sd)
storage.mount(vfs, '/sd')
print(os.listdir('/sd'))
boundsFile = open("/sd/bounds.jsn",'r')
waysFile = open('/sd/ways.jsn', 'r')
boundsDoc = json.load(boundsFile)
minlat = boundsDoc["bounds"][0] #minlat
....
boundsFile.close()
....
waysFile.close()
storage.umount('/sd')
I tested loading in and drawing the small JSON map I have for my neighbourhood and it takes about 10 seconds to read, draw and display. This is comparable to how long it took the Arduino code, so that's a win! Though the lines are a bit chunky. I'm guessing a fast-ish Bresenham's algorithm and hence no antialiasing going on there. With the use of a couple of gray shades and antialiasing I could make it look better... if I want to be bothered. It would slow things down though, and I'm trying to avoid that. Leave it alone for now.
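For reference, the chunky-but-fast line drawing I suspect is going on is classic integer Bresenham; a minimal sketch with a pluggable set_pixel callback (pure integer stepping, no antialiasing, hence the chunkiness):

```python
def bresenham_line(x0, y0, x1, y1, set_pixel):
    """Classic integer Bresenham line: calls set_pixel(x, y) for each
    pixel on the line from (x0, y0) to (x1, y1). No floats involved."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                # error term accumulates both axes
    while True:
        set_pixel(x0, y0)
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:             # step in x
            err += dy
            x0 += sx
        if e2 <= dx:             # step in y
            err += dx
            y0 += sy
```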
Blitting
I made some routines for filling triangles and circles, and for copying rectangles from one framebuffer to another and back, to restore an image or part thereof. They all work now, so it'll be easier to do 'animations' of the 'you are here' indicator using info from the GPS to show position on the map. I don't want to scroll the map itself, because that involves too much drawing and redrawing.
On the other hand, if I am only going to update the display every 3 minutes at the fastest, then I really do have time for doing all that, don't I? Hmmm. Worth a thought. I could even rotate the map if I know my current bearing, rather than leaving it always 'North up' like I was going to. For now, I'll stick to the original plan of moving the dot rather than the map, but it is worth thinking about.
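The save/restore part of those routines boils down to pixel-copying between two 2-bit packed buffers; a sketch using the same index/offset arithmetic as my framebuffer code (function names are mine, and I assume the buffer width in pixels is a multiple of 4, as my 200-wide one is):

```python
def get_pix2(buf, stride, x, y):
    """Read one 2-bit pixel from a packed buffer (4 pixels per byte).
    stride is the buffer width in pixels."""
    index = (y * stride + x) // 4
    offset = 6 - (2 * x) % 8
    return (buf[index] >> offset) & 0b11

def set_pix2(buf, stride, x, y, c):
    """Write one 2-bit pixel into a packed buffer."""
    index = (y * stride + x) // 4
    offset = 6 - (2 * x) % 8
    buf[index] = (buf[index] & ~(0b11 << offset)) | ((c & 0b11) << offset)

def copy_rect2(src, dst, stride, x, y, w, h):
    """Copy a w*h rectangle between two same-sized 2bpp buffers:
    save before drawing the indicator, restore afterwards."""
    for yy in range(y, y + h):
        for xx in range(x, x + w):
            set_pix2(dst, stride, xx, yy, get_pix2(src, stride, xx, yy))
```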
Involutory operation as an indicator
There was a time when one would XOR a crosshair onto a display, which left an image where the underlying pixels were still visible, just in a contrasting color. And then another XOR would undo the crosshair, leaving the image unsullied. I have got something similar to work by turning a white pixel to red, and a red to white, and leaving the blacks alone, within a rectangle (or a crosshair, which is just 2 skinny rectangles). Doing it twice in a row restores the original image (hence the description as an involutory operation: one that is its own inverse). There needs to be enough white background in the rectangle for the resulting red indicator to stand out though. Not sure what to do about pinks and greys... However, it does work well for the sort of map I've been looking at so far. I especially like that I can use a quite prominent 'you are here' indicator and it doesn't obscure stuff too much.
A filled rectangle is easy and fast to draw and shows up well, but a big crosshair using a vertical and horizontal line shows up even better because it's an extended figure. I'll go with that.
The bother with buffers
The code I inherited from the Adafruit EPD stuff is elegant in a way, in that it defines operations like fill_rect() in a FrameBuffer format class, so each color format (single-bit binary red and 2-bit grayscale in this case) knows how to do a rectangle, which will be different for each format. It then has a function called _color_dup() that takes care of 'duplexing' the color so that each buffer format has the appropriate function called and can handle its share of getting the bits set in its plane, without knowing or caring what the other formatted bitplanes are doing.
All very well, but what if you want to do something like flip red and white and leave black and grey alone? The planes are no longer independent (because in this case 'red' involves bits on both planes: it needs the black bits turned on as well as the red ones. Don't ask me why, epaper is weird) and what goes back into the bitplanes depends on a combination of the existing planes rather than a simple replacement on each plane. You need something that can consider the values in the two bitplanes together, not separately, and _color_dup() doesn't lend itself to that very well. So far I manage by defining pixel-by-pixel operations that read a pixel (which gets the bits from both planes and combines them by shifting and or-ing) and decide what to do as a result. But that is slow, and I'd have to replicate code for circles and triangles and so on, whereas I already have fast code for each bitplane. I want to re-use that fast code. So I need to replace the part of the code in the framebuffer format class that deals with the bits, which varies from format to format and for 2-bit grayscale looks something like:
for _x in range(x, x + width):                 #for each pixel column from x to x+width
    offset = 6 - (2 * _x) % 8                  #bit offset within the byte (2 bits per pixel)
    for _y in range(y, y + height):            #for each pixel in the column
        index = (_y * framebuf.stride + _x) // 4   #byte index for this pixel (4 pixels per byte)
        bits = (color & 0b11) << offset        #the code computing 'bits' is what needs changing
        framebuf.buf[index] = (framebuf.buf[index] & ~(0b11 << offset)) | bits
So I can see now which code to change (now that I separated out the and-ing and the or-ing of the new bits) but what to change it to? It is agnostic of what is in the other format's bitplane buffer, but the bitwise value at each pixel depends on the other format's bitplane's bit value. Hmm. Need a better method, one that still uses format-wise formulae to work out the address of each bit and how to represent the color but which can access both bitplanes to work out what the resulting value in each bitplane should be.
At the moment, _color_dup() gets the appropriate function, like fill_rect, from each format class instance and then calls them both, and they run independently. It passes the 'color' to both, so if the 'color' contained info about the operation as well as the color itself (which is represented by a 3-bit code: bit 2 = red, bits 1,0 = grayshade, and they are inverted, e.g. (1)(11) --> WHITE), then with appropriate extra bits of information the format class's function would be able to work out that the 'color' is actually an operation. Say, use 'color' values above 7 for operations like rw_flip, invert, etc. So far so good, but it still wouldn't be able to access the other bitplane's values, which is the sticking point...
On the other hand, does the 'inversion' operation really need to know both bitplanes' values? Why not just invert each bitplane value? Because whilst RED (0)(00) would turn to WHITE (1)(11) OK, BLACK (1)(00) would turn to PALEPINK (0)(11), and I don't want that; I need it to stay (1)(00). So yes, we need both format class instances' current values. Grr.
On the gripping hand, why not decode the 'color' into an operation at the display class level, rather than the framebuffer class level, whilst we still have access to both bitplanes? This is what I currently do, but as stated it is slow, because it accesses each pixel on both planes, working out the pixel bit index in full every time, reading its value and returning it for each pixel independently. What I want is an efficient pixel-after-pixel-within-the-byte-array method, and that method would change depending on the figure being drawn, i.e. for a circle the bit address of the next pixel could be quite different from that in a rectangle.
Could it be that simple? Have a method that generates a sequence of values, incrementing away each time it is called? I.e. use a Python 'generator' to produce an iterator that has the bitwise index for each plane and that can be used to work out what to write back to each plane. Yeah, great idea! I knew that writing to myself would be useful! Noice!
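To make the plane-coupling concrete before diving into the generators, here is the 3-bit encoding from above with the red/white flip written as a plain function (only the four colours named in the text get constants; the helper names are mine):

```python
# 3-bit 'color': bit 2 = red-plane bit, bits 1..0 = gray bits, inverted,
# so (1)(11) = white. These four codes follow from the examples above.
WHITE, RED, BLACK, PALEPINK = 0b111, 0b000, 0b100, 0b011

def redwhite_flip(color):
    """The involutory indicator op: swap red and white, leave everything
    else (black, grays, pinks) untouched. Note it needs BOTH planes'
    bits, which is exactly what per-plane framebuffer code can't see."""
    if color == WHITE:
        return RED
    if color == RED:
        return WHITE
    return color

def split_planes(color):
    """What each bitplane stores for a given 3-bit color:
    (red-plane bit, gray bits)."""
    return (color >> 2) & 0b1, color & 0b11
```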
Crank up the Generators
I'd need a generator for each type of drawing object: one for rect, one for circle, one for triangle, one for line (able to be angled), one each for hline and vline (not angled), one for text, maybe. But each of those generators would have a lot in common depending on the buffer format: the code that gets the next bits within a byte, for example. It would be nice if the object yielded could be used to both read and write the 'color' value at that pixel. The following code works. Took me a while to realise that after a send() you need to have a yield at the other end to keep in sync. Also some more time to work out that I had to deal with rotation of the framebuffer.
#TC wrote this. Does an involutory (i.e. it is its own inverse) operation to make a pixel highlighted
#by changing white to red and vice versa. Leaves blk, gry and pink alone.
#Uses the new-style format.fill_rect_get_pix_gen() to get generators
#that read and write colors to buffers and do it faster, because the generators
#keep track of where they are in the rectangle of bits in each buffer so indices and offsets
#aren't repeatedly calculated. Won't make much difference to speed, but is more elegant...
#plus lets me play with generators... I do like them, but takes a bit to get used to.
#Have realised that this code has to handle the buffer rotation properly
def gen_redwhite_flip_rect(self, x, y, w, h):
    blk = self._blackframebuf  #get the two colorplane buffers for this display type
    red = self._colorframebuf
    #You need to take care of rotation somewhere, and because format.fill_rect_get_pix_gen doesn't
    #do it, for various reasons, you need to do it here. Yuk!
    #You need to do this for every format function you call direct, if you bypass a function
    #in the FrameBuffer that would normally do the rotation for you.
    rotation = blk.rotation  #I'm gonna assume red rotation is the same because it would be silly if they weren't
    if rotation == 1:
        x, y = y, x
        w, h = h, w
        x = blk.width - x - w
    if rotation == 2:
        x = blk.width - x - w
        y = blk.height - y - h
    if rotation == 3:
        x, y = y, x
        w, h = h, w
        y = blk.height - y - h
    blk_pix_gen = blk.format.fill_rect_get_pix_gen(blk, x, y, w, h)
    #the gens are specific to shapes
    red_pix_gen = red.format.fill_rect_get_pix_gen(red, x, y, w, h)
    #zip takes two (or more) iterators and runs them in parallel,
    #so blk_pix, red_pix in the following are the two planes' values for the same pixel
    for blk_pix, red_pix in zip(blk_pix_gen, red_pix_gen):
        #for each pixel in the filled rectangle (the pix gens yield them in sequence)
        old_blk = blk_pix  #get current colors from their generators
        old_red = red_pix
        col = old_blk | (old_red << 2)  #make a single variable so we can compare it to self.WHITE
        leave_it = False  #assume we are gonna replace it
        if col == self.WHITE:  #if white, make it red
            new_blk = 0b00  #00 means blk
            new_red = 0b0   #0 means red
        elif col == self.RED:  #if red, make it white
            new_blk = 0b11
            new_red = 0b1
        else:
            leave_it = True  #leave it alone
        if not leave_it:
            blk_pix_gen.send(new_blk)  #set the new color
            red_pix_gen.send(new_red)

#TC wrote this. Makes a cross using the red_white_flip
def gen_redwhite_flip_cross(self, x, y, t):
    x0, y0 = int(x - t/2), 0
    x1, y1 = 0, int(y - t/2)
    w0, h0 = t, self.height
    w1, h1 = self.width, t
    self.gen_redwhite_flip_rect(x0, y0, w0, h0)
    self.gen_redwhite_flip_rect(x1, y1, w1, h1)

#This exciting func returns a generator for the pixels in a fill_rect.
#Allows the user of the generator to perform both reads and writes to the
#buffer faster than repeated calls to get_pixel and set_pixel,
#because it keeps the already-calculated stuff handy.
#Plus it knows what a filled rectangle is!
def fill_rect_get_pix_gen(framebuf, x, y, width, height):
    for _x in range(x, x + width):  #for each pixel column from x to x+width
        offset = 6 - (2 * _x) % 8   #bit offset within the byte (2 bits per pixel)
        for _y in range(y, y + height):
            index = (_y * framebuf.stride + _x) // 4  #byte index for this pixel
            color = yield ((framebuf.buf[index] >> offset) & 0b11)
            if color is not None:
                #a color was sent back to the generator, so write it to
                #the buffer; otherwise just move on to the next pixel
                newbits = color & 0b11
                yield newbits  #need a yield to keep in sync with send()
                newbits = newbits << offset  #shift it up
                framebuf.buf[index] = (framebuf.buf[index] & ~(0b11 << offset)) | newbits
I used the display.gen_redwhite_flip_cross() method that I developed to do a crosshair. I like it. It is easy to see, precise as to where the position is, doesn't obscure black lines underneath it and drawing it twice causes it to disappear. So I can draw it, display it and then draw it again without displaying and the framebuffer is left with just the map, ready for next time we get a fix.
Make GPS work with CircuitPython.
The GPS is an Ultimate GPS from Adafruit and so has Circuitpython support. I've played with it previously and it works: https://learn.adafruit.com/adafruit-ultimate-gps-featherwing/circuitpython-library I am using it to generate the image above with the crosshair in it. It requires you to poll the GPS hardware, connected by UART, quite fast, like every 200ms. I am using an asyncio framework in the code to manage multiple 'cooperative concurrent' coroutines, so as long as I yield often enough in the mapping coroutine to give the GPS coroutine a chance to do its polling, it works fine. As I'm not doing much in the mapping until I get a GPS fix anyway, this isn't hard. Asyncio is supported by Circuitpython and seems to work just fine.
I have noticed that the GPS can give results that are quite a long way out, like hundreds of metres away from my accurately known position. Powering down the GPS and forcing a cold start sometimes fixes it. This is not good....
Hansel und Gretel
I need breadcrumbs so I can see where I've been. They mustn't be a solid red line, as that would be confused with the GPX course discussed in the next section. However, a series of red/white-flipped rectangles ought to do it. Like breadcrumbs.
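A sketch of the breadcrumb bookkeeping I have in mind (class name and thresholds are my guesses): keep a capped list of recent fixes, and skip any fix too close to the previous one so the trail doesn't clump when I stop for a breather.

```python
class Breadcrumbs:
    """Ring-buffer-ish trail of recent GPS fixes. min_step is a crude
    degrees-based spacing filter (0.0002 deg of latitude is roughly
    20 m); a proper distance check would use the map projection."""

    def __init__(self, max_points=200, min_step=0.0002):
        self.points = []
        self.max_points = max_points
        self.min_step = min_step

    def add(self, lat, lon):
        """Record a fix; returns True if it was kept."""
        if self.points:
            plat, plon = self.points[-1]
            if abs(lat - plat) < self.min_step and abs(lon - plon) < self.min_step:
                return False              # too close to the last crumb
        self.points.append((lat, lon))
        if len(self.points) > self.max_points:
            self.points.pop(0)            # drop the oldest crumb
        return True
```

Drawing is then just a loop over breadcrumbs.points, putting a small red/white-flipped rectangle at each one.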
GPX
I need to be able to mark a route using a GPX so that I can see where I need to go. This could be a simple red line, similar to a 'way' i.e. a list of joined lat/lon coordinates. The fact that it would be wriggly would differentiate it from the crosshair.
Button up your DACs
I need some push buttons, and I think that a single-wire keypad will work for me, like Technoblogy - One Input Keypad Interface.
It uses resistors, an analog input and a key matrix. Depending on which key is closed, the resulting analog voltage can be used to determine the input. However, it can't detect multiple key presses, which might not actually be a problem. It's a neat solution and uses standard resistor values.
The Matrix Reloaded
I'll make my own keyboard with discrete buttons using the 'tactile' switches Jaycar SP0603 and SP0602 . I used the resistor values suggested above (slightly adjusted because I had only one 1.5K). I use 3.3V rail as the +ve instead of 5V and I use A1 as the input pin.
Here's what I read for each switch (full scale of 65536, divided by 100). The Itsybitsy gives 0-65535 but I decided to divide by 100. Does this mean it's a 16-bit ADC? No: the SAMD51's ADC is 12-bit, but CircuitPython's AnalogIn always scales readings to a 16-bit range (0-65535) regardless of the hardware resolution.
646 = no key. There is reasonable separation between all the values, more than 10 after division by 100 (closest is 12, between keys 10 and 11). I will apply a tolerance of at least +/- 5 when decoding, so bins will start 5 below the measured values and end 5 above. In the table below, each entry shows the measured value, then the ordinal of the key (its position in ascending value order), then the letter I assign to it in software. (Note how ordinals 2 and 3 are unexpectedly swapped. Probably due to some fudging of resistor values on my part when I used a 1.1k instead of a 1.5k. But I'll leave it for now. She'll be right, the software guy can fix it... Oh, wait, that's me.) E and C are for Enter and Cancel, the <, >, ^, v are for panning and maybe zooming at some point, and H is for Home.
286 9 k | 305 10 m | 317 11 n | 429 15 C |
208 6 g | 237 7 h | 255 8 ^ | 403 14 r |
85 2 c | 130 4 < | 160 5 H | 370 13 > |
1 0 S | 60 1 D | 97 3 v | 353 12 E |
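The decode step can then be a lookup against those measured values with the +/-5 tolerance; a sketch (the bin logic and names are mine):

```python
# Measured value (AnalogIn.value // 100) -> assigned key letter,
# straight from the table above.
KEYMAP = {
    1: 'S', 60: 'D', 85: 'c', 97: 'v', 130: '<', 160: 'H',
    208: 'g', 237: 'h', 255: '^', 286: 'k', 305: 'm', 317: 'n',
    353: 'E', 370: '>', 403: 'r', 429: 'C',
}
NO_KEY = 646
TOLERANCE = 5

def decode_key(raw):
    """raw is the analog reading divided by 100. Returns the key letter,
    or None for no key / an out-of-tolerance (bouncing) reading."""
    if abs(raw - NO_KEY) <= TOLERANCE:
        return None
    for nominal, letter in KEYMAP.items():
        if abs(raw - nominal) <= TOLERANCE:
            return letter
    return None   # between bins: probably a bounce or multi-key mush
```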
Keys are detected on press and decoded on release. The way that the decoder deals with multi-key presses is that the last key released is the one that is decoded. The electronics of the key matrix does not lend itself to multi key presses though it might be useful to take some measurements and see what voltages are produced. For now, it's strictly one key at a time.
I used two different key sizes, making a little left-right, up-down diamond out of slightly raised keys. This will hopefully help me when I'm banging on it with gloves on. The keys aren't waterproof so they'll need a membrane. (As will the whole shebang! Maybe a condom. Perhaps a ziplock bag.)
Go sleepy byes little Itsybitsy
NOTE: When usb is plugged in it pretends to sleep but doesn't save power.
I can put the itsybitsy to deep sleep (and there is similar for light sleep) using
time_alarm = alarm.time.TimeAlarm(monotonic_time=time.monotonic() + 180)
# Do a deep sleep until the alarm wakes us.
alarm.exit_and_deep_sleep_until_alarms(time_alarm)
# never gets here because a deepsleep causes it to restart code
This appears to work. It runs, then goes to sleep for 3 mins, then starts over again.
I added code to turn off leds, display and disable GPS. Tests show I can get down to 14mA in either lightsleep or deepsleep (so deepsleep gains me nothing and is a pain because it resets.) Normally the device pulls 50mA when running with display and GPS and the dotstar led only just on. (The dotstar led only pulls a few mA when set to brightness 1/255).
The timer that wakes it from sleep seems to work, though it gave me some issues at first. No doubt this is one of the things consuming that 14mA when it's otherwise all asleep. We could do better, maybe, using an external RTC and a pin interrupt.
Twinkle twinkle little dotstar
There's a good chance the dotstar LED on the Itsybitsy board draws a reasonable current even when turned off. I might have to remove it. Don't want it for this app anyway.
Battery over-discharge issue
The battery is connected to the VBat input of the ItsyBitsy and as far as I know there is nothing stopping it from being overdischarged when flat which would damage the battery.
AP2112 (diodes.com) is the datasheet for the regulator and shows an enable pin (3). The itsybitsy circuit at the Adafruit Learning System shows that pin 3 is pulled high to the battery. I imagine that adding a pulldown resistor might disable the regulator when the voltage drops far enough. The enable input needs to be above 1.5V to enable the regulator and below 0.4V to disable it.
Basecamp to Circuitpython
I've serendipitously found a great way to send GPX files from the Garmin Basecamp software to the ItsyBitsy Pathfinder GPS I am developing. Basecamp is an old, probably deprecated, stand-alone program from Garmin, but it is the best thing I know for preparing courses and getting them to and from Garmin devices. It is much better than all the cloud-based crap that Garmin pushes nowadays.
I found that when I had the ItsyBitsy microcontroller plugged into the Highpower PC and in CircuitPython mode, Basecamp (the application) recognised the E: CircuitPython disk as a device it could send GPX files to. By selecting a course on the map and choosing 'send to', it created a folder called /Garmin/GPX and stuck a track01.gpx on the CircuitPython device! Great!
I don't know what it was about the CircuitPython 'disk drive' that made Basecamp think it was a device. There wasn't a /Garmin/GPX folder on it before (i.e. Basecamp put it there). Also noticed that Garmin Express 'discovered' the ItsyBitsy and thought it was a device too. Hmm. Do need to be careful that Basecamp or Express don't scrag the Python code on the E: drive! Also probably best to stop any code running in CircuitPython, to avoid it restarting all the time as files are added to the E: drive. (Use the serial monitor and drop to the REPL with CTRL-C.)
This also implies that I could create gpx files on the Pathfinder and upload them to Basecamp. Now all I need to do is work out the Circuitpython code for how to read and write gpxes... The fun never ends! GPXes are written in XML so it can't be all that hard...
..... header stuff
<trkseg>
<trkpt lat="-37.049420177936554" lon="147.32086859643459">
<ele>1310.1171875</ele>
</trkpt>
<trkpt lat="-37.048656586557627" lon="147.32238940894604">
<ele>1313.05078125</ele>
</trkpt>
..... more points and elevations to follow
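As a first stab at reading GPX, trackpoints like those above can be pulled out with plain string handling, no XML library needed (which suits CircuitPython). This is only a sketch: it assumes the lat="..." lon="..." attribute order and one <ele> per <trkpt>, which holds for Basecamp output like the sample:

```python
def parse_trkpts(gpx_text):
    """Extract (lat, lon, ele) tuples from GPX text by string matching.
    Sketch only: assumes lat then lon attribute order and one <ele>
    element per <trkpt>, as in the Basecamp sample above."""
    points = []
    pos = 0
    while True:
        start = gpx_text.find("<trkpt", pos)
        if start == -1:
            break
        end = gpx_text.find("</trkpt>", start)
        chunk = gpx_text[start:end]
        lat = float(chunk.split('lat="')[1].split('"')[0])
        lon = float(chunk.split('lon="')[1].split('"')[0])
        ele = float(chunk.split("<ele>")[1].split("</ele>")[0])
        points.append((lat, lon, ele))
        pos = end
    return points

sample = '''<trkseg>
<trkpt lat="-37.049420177936554" lon="147.32086859643459">
<ele>1310.1171875</ele>
</trkpt>
<trkpt lat="-37.048656586557627" lon="147.32238940894604">
<ele>1313.05078125</ele>
</trkpt>
</trkseg>'''

print(parse_trkpts(sample))
```

Writing GPX back out is even easier, since it's just string formatting into the same template.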
BTW, this is also a simple way to get GPX files out of Basecamp and onto Google Drive so I can upload them to the Fenix, now that it doesn't plug successfully into the PC any more. I think. See https://terrysconfluencecloud.atlassian.net/l/c/72Ms5Hak
oooops....
I found a thing I don't like. I looked more closely at the map I exported from OSM and it appears to have missed some streets. Sure, they were close to the edge, but even so they were inside the area I specified, I thought. I need to be careful and leave a good margin around the edges of the exported maps.
Found two bugs to fix in the e-ink code. In map.map.__init__ the maphScale should be set to -mapwScale. It works as is because the display is square, but if it weren't, it would introduce a skew. Second, map.map needs a set_zoom() method, because when you change zoom you also have to change the scaling. As I don't change the zoom yet, I hadn't hit that bug.
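A set_zoom() along these lines is what I have in mind. The class here is a hypothetical skeleton, not my actual map.map code; only the maphScale/mapwScale names and the two fixes come from the real thing:

```python
class Map:
    """Hypothetical skeleton of map.map, just enough to show the two fixes:
    maphScale derived from mapwScale, and a set_zoom() that recomputes both."""

    def __init__(self, width_px, span_deg, zoom=1.0):
        self.width_px = width_px    # display width in pixels
        self.span_deg = span_deg    # degrees of longitude across the map at zoom 1
        self.set_zoom(zoom)

    def set_zoom(self, zoom):
        # Both scales must be recomputed whenever the zoom changes.
        self.zoom = zoom
        self.mapwScale = self.width_px * zoom / self.span_deg  # px per degree, x
        # Same pixels-per-degree vertically, negated because screen y
        # grows downward while latitude grows upward. This is the fix
        # for the square-display shortcut.
        self.maphScale = -self.mapwScale

m = Map(width_px=200, span_deg=0.05)
m.set_zoom(2.0)
print(m.mapwScale, m.maphScale)
```

(This ignores the cos(latitude) correction for longitude spacing, which a real version would fold into mapwScale.)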
Changed my mind about e-ink
Having gone a long way toward making this a usable mapping GPS with the e-ink display, I have decided that it really isn't gonna cut it. It's too small, too slow. See another article in this website about a second attempt using a SHARP Memory LCD which is also very low power but very fast by comparison. And twice as big, or more. Is monochrome though...
There is a possibility I could end up using both displays as discussed above but I don't think so. I think the Sharp LCD is fast enough that swapping displays between map and info if needed is doable. Or having animations and floating labels on the map, which I can't do on the e-ink because it takes such a long time to redisplay anything. So anyone who started out with the firm conviction that e-ink wasn't gonna work, I sadly bow to your wisdom.