Friday, December 25, 2020

Printalyzer Project Update

Since I wrote my original introductory blog post on this project, I've made considerable progress. The first prototype is now fully constructed, and I'm making significant headway in the development of the firmware. As such, I think it's time for another project update.

Given that the project is being developed entirely as open-source hardware and software, the latest schematics and code can always be found in the project's Github repository.

Main Unit Construction

The main unit consists of a rather large PCB, with a double-sided design. The reason it is large is that all the components and most of the ports of the Printalyzer are directly mounted to the board. With the approach I settled on, the board is actually mounted in the enclosure upside-down. This puts the buttons and display on the "bottom", while everything else is on the "top". I took this approach because several of the components are rather tall, and there wouldn't be enough clearance if they had to fit within the height limits of the buttons. From my teardowns of other pieces of darkroom electronics, this construction approach is actually quite common.

For the enclosure, I designed something using Front Panel Express and their housing profile script. While rather expensive for building in quantity, at the prototype stage this gave me very nice results.



For the top of the unit, I installed a piece of Rosco Supergel #19 (Fire Red) sandwiched between some transparent material in the gap between the top of the display and the enclosure. This both protects the display and keeps its output spectrum paper-safe.

Finding a nice metal knob for the encoder, without a pesky direction-mark, took some effort. Ultimately I went with a type of knurled guitar knob (from mxuteuk), which seems to be the magic set of keywords to find these things.

For the keycaps, I ultimately went with black. This was mostly because it was the only color for which I had all the necessary parts at the time, given that these are often backordered or special order with lead time. When looking at the unit on the desk, black also seems like the nicest choice. However, in the darkroom, they are hard to see. I also have these keycaps in dark gray, and they can be special-ordered in light gray. (Having that array of LEDs around the keys really helps, of course.)


Meter Probe Construction

The meter probe is a much simpler device. It essentially consists of a light sensor, a button, and minimalist support circuitry to handle power regulation and voltage level shifting. Right now the sensor I'm using is the TCS3472, which has well-documented lux and color temperature calculation guides and might be useful for color analysis as well.

(One benefit of my design here is that I can make future versions of this meter probe using basically any sensor I want. The only requirement is that the sensor, or really just the front-end of this circuit board, has an I2C interface.)
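
To give an idea of how simple that sensor interface is, here's a rough sketch of reading the TCS3472. The i2c_write_reg()/i2c_read_bytes() helpers are hypothetical stand-ins for whatever I2C functions the host firmware provides, and the register values are from my reading of the datasheet, so double-check everything before reusing any of this.

/* Minimal TCS3472 read sketch. Register addresses per the TCS3472
 * datasheet as I recall them; i2c_write_reg()/i2c_read_bytes() are
 * hypothetical HAL helpers, not part of the actual firmware. */
#include <stdint.h>

#define TCS3472_ADDR      0x29  /* 7-bit I2C address */
#define TCS3472_CMD       0x80  /* command bit */
#define TCS3472_AUTO_INC  0x20  /* auto-increment addressing */
#define REG_ENABLE        0x00  /* PON=0x01, AEN=0x02 */
#define REG_ATIME         0x01  /* integration time */
#define REG_CDATAL        0x14  /* clear/red/green/blue data (8 bytes) */

extern int i2c_write_reg(uint8_t addr, uint8_t reg, uint8_t value);
extern int i2c_read_bytes(uint8_t addr, uint8_t reg, uint8_t *buf, int len);

void tcs3472_start(void)
{
    /* ~154ms integration time, then power on and enable the ADC */
    i2c_write_reg(TCS3472_ADDR, TCS3472_CMD | REG_ATIME, 0xC0);
    i2c_write_reg(TCS3472_ADDR, TCS3472_CMD | REG_ENABLE, 0x01 | 0x02);
}

void tcs3472_read(uint16_t *c, uint16_t *r, uint16_t *g, uint16_t *b)
{
    uint8_t d[8];
    i2c_read_bytes(TCS3472_ADDR, TCS3472_CMD | TCS3472_AUTO_INC | REG_CDATAL, d, 8);
    *c = d[0] | (d[1] << 8);
    *r = d[2] | (d[3] << 8);
    *g = d[4] | (d[5] << 8);
    *b = d[6] | (d[7] << 8);
}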


Since the enclosure requires a somewhat custom shape, I had to go with 3D printing for this one. For the two prototypes I've built so far, one was self-printed and the other I had printed by Shapeways. The latter looks a lot nicer, so that's what I'm showing here.


The hardest part of assembling this was probably the fiddly process of connecting the cable. For the cabling, I went with a 6-pin Mini-DIN connector. In practice, that meant finding someone selling M-M black PS/2 cables and slicing one in half.

One last thing this meter probe needs is some sort of cover for the sensor hole. My plan is to have one printed/cut for me out of something like polycarbonate, from a place that makes panel graphic overlays. Ideally this overlay will be light in color, with markings pointing to the sensor hole. Thus far I've been putting off this part, but getting it in place is going to be essential to finalizing my calibration of the sensor readings.


Firmware Development

I'm now at the stage in firmware development where I have all the hardware inside the Printalyzer working, and have a stable base from which to build up the rest of the device's functionality. My initial goal here is to fully flesh out the functionality of a mostly-full-featured f-stop enlarger timer, then to later follow on with the exposure metering functions. I'm going about it this way because metering requires a lot of testing that's hard to do "outside the darkroom," and I'll need the timing functions regardless.

(In all the screenshots below, I apologize for the "gunky" picture quality. The red gel sandwich over the display tends to reflect a fair bit of dust, which ends up looking far more distracting in pictures than it does in real life.)

Home Display

The home display shows the current time and selected contrast grade. It also has a reserved space for the tone graph. While I have figured out what that graph will look like, it's not easy to show here until I do actual metering.

From this display it's also possible to change the adjustment increment, and obviously to start the exposure timer.


Another feature of the home display (not shown here) is fine adjustment. Using the encoder knob you can enter an explicit stop adjustment (in 1/12th stop units), or even an explicit exposure time. I expect this capability to be useful for repeating previous exposures, in lieu of re-metering.
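
Under the hood the math is simple: each stop of adjustment doubles (or halves) the time. A minimal sketch of the conversion (not the actual firmware code, which does things a bit differently, but the relationship is the same):

#include <math.h>

/* Sketch: convert a base exposure time plus an adjustment in
 * 1/12th-stop units into seconds. Each full stop doubles the time. */
float adjusted_exposure_time(float base_seconds, int twelfth_stops)
{
    return base_seconds * powf(2.0f, twelfth_stops / 12.0f);
}

For example, a 10 second base exposure adjusted by +3/12 of a stop comes out to roughly 11.9 seconds.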


Test Strip Mode

The test strip mode supports both incremental and separate exposures for test strips, and making strips in both 7-patch and 5-patch layouts. The design places the current exposure time in the middle, and then makes patches above and below this time in units of the current adjustment increment.
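
As a rough illustration of how those patch times fall out (a sketch based on the description above, not the actual firmware code):

#include <math.h>
#include <stdio.h>

/* Sketch: compute 7-patch test strip times centered on the base exposure.
 * For "separate" strips each patch gets its full time; for "incremental"
 * strips each patch's added exposure is the difference from the previous one. */
void print_test_strip(float base_seconds, float increment_stops)
{
    float prev = 0.0f;
    for (int i = -3; i <= 3; i++) {
        float total = base_seconds * powf(2.0f, i * increment_stops);
        printf("patch %+d: total %.2fs, incremental %.2fs\n", i, total, total - prev);
        prev = total;
    }
}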

Another nice feature, made possible by having a graphical display, is that it actually tells you what patches should be covered for the next exposure. This way it's harder to lose your place during test strip creation.


Settings Menu

The settings menu is fairly rough at the moment, and probably does need some user interface improvements. That being said, it shows another benefit of using a reconfigurable graphical display. I can show nice text menus that actually tell you what each setting is, and then return to a full-screen numerical time display when done.


Enlarger Calibration

This little tidbit is the first feature I've worked on so far that actually uses the sensor in the meter probe. It's not yet complete, from a user-interface and settings point of view, but the underlying process is fully implemented now.

The basic issue is that any enlarger timer is effectively just "flipping a switch." It can control when that switch turns on, and when it turns off, but little else. Unfortunately, lamps typically do not turn on or off instantly. On both ends, there is a delay and a ramp time. While these are typically short, not accounting for them can cause consistency issues with short exposures. This problem becomes especially troubling when doing incremental short exposures, such as with test strips and burning.

The goal of the calibration process is to make sure that the "user visible exposure time" is the time the paper is exposed to the equivalent of the full light output of the enlarger, rather than simply the time between the on and off states of its "power switch."

Calibration works by first measuring reference points with both the enlarger on (full brightness) and the enlarger off (full darkness). It then runs the enlarger through a series of on/off cycles while taking frequent measurements. When complete, it generates 6 different values:

  • Time from "power on" until the light level starts to increase
  • Time it takes the enlarger to reach full brightness (rise time)
  • Full brightness time required for an exposure equivalent to what was sensed during the rise time
  • Time from "power off" until the light level starts to decrease
  • Time it takes the enlarger to become completely off (fall time)
  • Full brightness time required for an exposure equivalent to what was sensed during the fall time

These values are then fed into the exposure timer code, and used to schedule the various events (on, off, tick beeps, displayed time, etc) that occur during exposure.
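
To give a sense of how those values get used, here's a simplified sketch of just the relay timing calculation. The real code also has to schedule beeps and display updates, and clamp exposures that are too short for the profile; this glosses over all of that.

#include <stdint.h>

/* Simplified sketch of compensating the relay timing with an enlarger
 * profile. All times are in milliseconds, with the "equiv" values being
 * full-brightness-equivalent exposure times. */
typedef struct {
    uint32_t on_delay;    /* power on -> light starts to rise */
    uint32_t rise_time;   /* light rising to full brightness */
    uint32_t rise_equiv;  /* full-brightness equivalent of the rise */
    uint32_t off_delay;   /* power off -> light starts to fall */
    uint32_t fall_time;   /* light falling to fully off */
    uint32_t fall_equiv;  /* full-brightness equivalent of the fall */
} enlarger_profile_t;

/* How long to keep the relay closed so the paper sees the desired
 * equivalent exposure. (A real implementation must clamp the case where
 * desired_ms is smaller than the rise/fall contributions.) */
uint32_t relay_on_duration(const enlarger_profile_t *p, uint32_t desired_ms)
{
    /* full-brightness time needed once the rise and fall contributions
     * are accounted for */
    uint32_t full_ms = desired_ms - p->rise_equiv - p->fall_equiv;

    /* relay stays on from t=0 until full brightness has lasted full_ms,
     * minus the extra light the paper still gets after power-off */
    return p->on_delay + p->rise_time + full_ms - p->off_delay;
}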

Using the above values as a rough example of what this all means: without profiling, a simple 2 second exposure would actually expose the paper to light for about 2.4 seconds, yet deliver only a 1.88 second equivalent exposure.

These numbers come from a simple halogen desk lamp, which is more convenient for basic testing. I've also run the same calibration cycle on my real enlarger. It takes longer to rise to full brightness, but is faster at turning off. (In its case, the same exposure works out to about 2.2 seconds of light with a 1.78 second equivalent exposure.)

Now I'll admit this doesn't seem like much, but enlargers do vary and it can suddenly become critical if you're making <1s incremental exposures for test strips or burning.

Blackout Mode

This feature is a little "quality of life" nicety that I haven't seen anyone else do. Every once in a while, some of us want to print on materials, such as RA-4 color paper, that have to be handled in absolute darkness. This means once your enlarger is all set up for the print, you need to turn out all of the lights in the darkroom before taking out the paper. Having to manually turn off your safelights and throw a cover over your illuminated enlarger timer gets a bit annoying.

This is why I added the "Blackout" switch to the Printalyzer! Flipping this switch turns off the safelight and all of the light-emitting parts of the device. Of course everything still works when in this mode, so you can still make prints and test strips in complete darkness. (Eventually I'll probably add additional audio cues and lock some ancillary features to improve usability.)


Next Steps

At the outset of this project, I collected a long list of "good ideas" for things the device should do. That list is still growing, of course. Eventually I will get to them, but right now my priority is to get the standard features implemented and stable.

I have a few more things to do for the basic f-stop timer functionality, such as burn exposure programs. After that, I need to work on an actual user interface for adding/changing/managing enlarger profiles.

Once these things are done, I'll finally move on to paper profiles and print exposure metering. I expect that to be a long-tail item, because it will require a lot of work on sensor calibration and learning about sensor behaviors. However, I plan to do as much of this as possible by first converting raw sensor readings into stable and standardized exposure units. This way, it will be possible to experiment with different sensors (as needed) without dramatic changes to how the metering/profiling process works.

Another thing I'll likely do, interspersed with this, is start taking advantage of the USB port I put on the device. There are a few features I want to implement with it, including:

  • Backup/Restore of user settings and profiles with a thumbdrive
  • Firmware updates with a thumbdrive (right now it requires a special programmer device, which isn't practical for "end user" use).
  • Keyboard entry of profile names (I'll make sure they can be entered without a keyboard, but being able to use one is a nice thing.)
  • Connection to densitometers to help with paper profiling (not sure how this will work, but it can't hurt to try).


I'm still not sure whether, or at what point, I'll proceed with transforming this project into an actual "product" or "kit." The biggest hangup is really that the device necessarily plugs into mains power and switches mains-powered devices. That means it may be hard to safely sell/distribute units without first forming a company and going through various painful and expensive certification processes. However, I'm not going to worry too much about that until I have something rather complete.

Regardless, all the necessary data to build one yourself will always be freely and publicly available. However, actually assembling one of these does require circuit board assembly skills and tools that not everyone is likely to have.

Wednesday, November 18, 2020

Introducing the Printalyzer!

The Printalyzer is a new project I've been working on for several weeks now. It aims to become a modern full-featured darkroom enlarger timer and exposure meter.

 

Printalyzer Main Unit

What is it?

At its core, this is a timer for a darkroom enlarger. That means it has a switchable outlet on the back, and can turn an enlarger on and off for a set amount of time. However, it is going to be far more sophisticated than a simple clock. It's going to allow the adjustment of exposure time in "stops," making it what is known as an "f-stop timer." That means you adjust time in logarithmic units of exposure, rather than linear increments. Here's an article that attempts to explain the concept.

Of course it'll also include features to help with making test strips, and calculating dodge/burn exposures in stop units as well.

In addition to all this, the device is going to include an exposure metering probe. This probe will let you measure and visualize the contrast range of a negative before printing it. The idea is that you can figure out a fairly decent choice of both contrast grade and exposure time, without even making a single test strip.  Of course you still can make the strip, but it becomes more about fine-tuning than making your initial decisions.


Why are you doing it?

Why have I chosen to take on this project? Don't similar devices already exist? Well, technically they do. Several, in fact. You can find everything from polished commercial products to crowdfunding campaigns to homebrew hobbyist projects.

Okay then, why am I bothering?

Well, in short, I've found myself growing increasingly frustrated with what is currently out there.

Most of the commercial products in this space are quite expensive, and were designed and built in the mid-to-late 1990's. While they do work, they tend to suffer from all the limitations of "built to cost" embedded devices from that era. This often means a limited user interface that can be difficult to use without constantly referencing a manual, firmware that is difficult or impossible to update, features that are constrained by the capabilities of an 8-bit microcontroller, fixed peripheral choices, and a general lack of new development work.

Many of the more modern projects tend to forget things that made those old devices great, like a sturdy case, real buttons, and a no-nonsense primary interface. They also sometimes try to do too much, such as attempting to be a general purpose darkroom timer. Finally, it's rare that they tackle the exposure metering problem.

 

What are your project goals?

Basically, to take what I like from those older products and to bring it up to date with more modern embedded technology.

Fundamentally, I like to think of this project as building a platform, rather than a static appliance. Sure, it's going to have all the necessary hardware to be an enlarger timer and meter. However, it's also going to have enough excess capacity so that it can be continuously updated to increase its functionality.


My goals for the Printalyzer include:

  • Use real buttons for the user interface, and have enough of them that the need for awkward combinations is minimized. A rotary encoder knob may be used where it makes sense, but not as a catch-all.
  • Use a graphical display whose layout can be changed depending on the device mode. It can emulate the look of 7-segment and bar-graph LEDs when that makes sense, but it can also display nice menus for setting things up.
  • Use a flexible interface for connecting the metering probe, such that the choice of light sensor isn't baked into the device. It should be possible to use different sensors as desired, simply by plugging in a different probe. While I'll initially focus on B&W metering, being able to function as a color analyzer is absolutely a stretch goal.
  • Have enough program memory that all desired features can be implemented. There should never be a reason to need different versions of the device to offer different feature sets.
  • Have enough user memory that settings and calibration profiles are not arbitrarily limited, and can even be accompanied with meaningful descriptions.
  • Include a USB port, so that settings can be saved/loaded from thumbdrives. It should also be possible to use this port to connect to other peripherals. This could include keyboards, for typing profile names, or even densitometers to automate the process of creating profiles.
  • Include a "Blackout" switch that turns off all illumination on the device, enabling the user to perform basic functions by relying entirely on audio cues. This will be of great benefit when doing color printing, where paper must be handled in total darkness.
  • Able to run off all common mains voltages, without modification (except maybe a different fuse).
  • All hardware and software for the device will be completely open source, so that anyone can build or modify it.

 

My anti-goals, or things I explicitly do not plan to include:

  • It will not be a multipurpose darkroom process timer. That means no timing of film development. This timer is meant to sit next to the enlarger, not to be carried around and used for everything.
  • It will not require any sort of smartphone app or computer interface to set this thing up. It should be possible to use the device as a completely standalone unit, where all of its capabilities should be locally configurable.

 

How will it work?

The device will consist of a main unit that looks somewhat like every other enlarger timer, and a connected metering probe.

Printalyzer Main Unit Rear

 

Meter Probe

 

Here's a basic block diagram that shows the main components:

Block Diagram


The main unit has buttons, a display, illumination LEDs, a buzzer, and relays to control both the enlarger and the safelights. The metering probe has a button, to trigger readings, and a light sensor.

 

How far along is the development?

The first revision of all the schematics and PCB layouts is finished. I've constructed the first prototypes of the metering probe, and am in the process of constructing the first prototype of the main unit. Once that is done, I'll have to do a lot of testing and begin to write the software. Since this is all open-source, you can see the nitty-gritty of this work on Github.

As for the theory and process of print exposure adjustment and metering, I've also done a lot of research to prepare for the project. In addition to what I can figure out on my own, it also helps that there are a lot of long-expired patents that go into a good amount of relevant detail on the subject. Additionally, I'm hoping I can make the calibration profiles for this device compatible with other existing devices, to make initial setup as easy as possible.


Will I be able to get one?

That is the hope. While the first prototypes are mostly for me to be able to tinker around with an enlarger timer/meter that I can reprogram myself, I would eventually like to make this into a real product. I'm not yet sure if it'll be a DIY thing or a completely constructed unit. It's also likely that there will be non-trivial legal, regulatory, and manufacturing hurdles to make this happen. I'll also need to get the cost down, as nice-looking prototype enclosures can be quite expensive.


Tuesday, July 10, 2018

Nestronic Input Board (Part 4)

Introduction


When I designed the main board of the Nestronic, I was busy enough that I didn't want to hold up progress on account of other areas of the project. I basically just figured out what interface pins I might need for an input board, brought them out to a connector, and set that part aside.

I figured I'd need an I2C bus to interface with some sort of I/O expansion chip, and a few support GPIO pins. I also thought it might be neat to try experimenting with the ESP32's touch sensor capabilities, so I made sure at least one of the GPIO pins could be used for that.

(For more project background, please read the [Introduction], [Architecture], and [Prototype Assembly] blog posts.)

Monday, May 28, 2018

Building the Nestronic Prototype (Part 3)

(For background, please read the [Introduction] and [Architecture] blog posts.)

Introduction

I eventually got to the point where the schematic was mostly finalized and the whole thing was working on a breadboard. While a big achievement in and of itself, it was far from a finished product.

First, breadboards usually look kinda messy. After all, they tend to evolve as you tinker with the circuit design. Also, breadboard wires are never the exact right length for any given connection. While you can make a clean-looking breadboard, it often requires far more time and effort than one is willing to put in for a simple prototype.

Second, breadboard connections are a little flaky. Breathing on your circuit the wrong way can cause this wire or that resistor to have just enough of a contact issue that something starts or stops working reliably.

Third, breadboards are noisy and full of stray capacitance. I could detect clock signals from one end to the other. Digital connections sometimes had just enough noise to make the difference between a high and a low. And to quote Dave Jones, you sometimes have to "hold your tongue at the right angle" to get things to even start up correctly.

Breadboard Nestronic
At this point I exported the netlist for my schematic, pulled it into KiCad's PCB tool, and began the layout and routing process. After a lot of work, I eventually ended up with a complete board design. The core of this layout was the NES CPU section, with all its parallel bus traces. The rest was kinda squeezed in around it. Perhaps not the most optimal of layouts, but good for the general size and shape of the enclosure I wanted to put it in.

PCB Layout

Sunday, March 04, 2018

Nestronic System Architecture (Part 2)

(For an introduction to this project, please read the previous blog post.)

System Design

Now that the design of the system is mostly complete, I'd like to present it in a little more of a top-down fashion. First, I'll present a block diagram of the complete system. Then I'll go in-depth on how I arrived at all the relevant elements of the system. Finally, I'll show the detailed circuit schematics this all represents.

The design of the Nestronic consists of two major subsystems. First, there's the NES CPU Section, which builds around the RP2A03 CPU and contains everything necessary to actually synthesize video game music. Second, there's the ESP32 Microcontroller Section, which provides the front-end and completes the project.

Block Diagram

Thursday, February 22, 2018

Introducing the Nestronic! (Part 1)

Project Background

Brainstorming

After finishing my previous project, I felt I needed something new to keep up the momentum. There were a lot of things I learned about during the development of the project, which I didn't use but wanted to explore. I also liked the idea of projects that combined modern and retro elements.

I spent a lot of time scouring the Internet for ideas, but was frustrated with what I found. Far too many projects seemed to be rehashes of the same old ideas, often involving interfacing someone's favorite microcontroller to some sensor or display device. Plenty of elements that would be fun to use as part of a project, but nothing too inspiring on its own. In other words, I really didn't want to build yet another weather station.

One idea that did mildly pique my interest was that of a "modernized" alarm clock (kinda like this). You know, one that could actually sync its time via NTP and that wouldn't lose minutes if left unplugged for too long. I wasn't sure if this idea was tempting enough on its own, but perhaps it could be combined with something else...

As the search wore on, I eventually stumbled upon a project that really caught my eye. This project was Aiden Lawrence's Sega Genesis Music Player. It took the original synthesizer chips from the Sega Genesis, combined them with a modern microcontroller, and used all this to "authentically" play video game music. This idea was almost perfect!

Okay, I say almost, because I didn't actually have a Sega Genesis when I was growing up. Instead, I had a Nintendo Entertainment System (NES). So what if I could build a similar project around the NES? What would it take? And could I combine it with my previous idea to make the ultimate "Nintendo Alarm Clock"?

Unfortunately, unlike the Sega Genesis, the NES does not actually use separate audio chips. Instead, its CPU is actually a modified 6502 core combined with an integrated audio processing unit (APU). As such, I'd need a fundamentally different design. Instead of simply interfacing with audio chips, I'd have to actually build a system capable of running code on that CPU. Thus began a lot of research, including reading all the NES and Famicom schematics posted online. I also found the resources from the NESizer2 project to be quite valuable.

So I finally had my idea! Build a device that combined a NES game music player with an alarm clock! I could wake up to various video game soundtracks, and even get cool sound effects when pushing the buttons.

Initial Approach and Project Goals

The first thing I'd need for this project is a way to actually run code on a NES CPU from within my own system. This would make it possible for the NES CPU to communicate with a data source and poke values into its audio registers.

There were two ways I could go about accomplishing this:

The hack'ish way involved a modern microcontroller messing with (or pretending to be) the memory on the CPU's bus. (The NESizer2 implemented something like this.) This approach certainly required fewer components, but would involve chasing clock timings. It also seemed a little bit too quick-and-dirty for my tastes.

The more conventional way, which I ultimately opted for, involved putting an SRAM, EEPROM, address decoding, and I/O peripheral ICs onto the NES CPU's bus. Doing this, I'd then write a complete program in 6502 assembly code to transform the NES CPU into an externally controllable audio synthesizer.

The second thing I'd need was a modern microcontroller to act as the front-end. I decided to use this as an excuse to play with the Espressif ESP32, which I had learned about during the development of my last project (which used an ESP8266).

Finally I'd need a bunch of peripherals... A nice display, an SD card interface to provide access to data, some input devices, etc.


At this point, I set out a number of basic goals for the project:
  • Use an authentic NES CPU (RP2A03) to synthesize audio
  • Use an Espressif ESP32 as the front-end microcontroller
  • Use entirely modern components (beyond the NES CPU)
  • Finally embrace surface mount components, which I'd explicitly avoided in all my previous projects

Music Source

At the outset of this project, the VGM (Video Game Music) format seemed like an ideal thing to start with. It's what Aiden's Sega Genesis project used. There were a lot of game rips already available, and the format was literally "poke this value into that register, sleep for X samples, poke this value into that register, etc." It seemed almost trivial to write code that could transform a VGM file into actual NES APU writes.

As I got deeper into the project, I also learned about NSF (NES Sound Format). It seems like NSF has an even larger catalogue of rips and is more popular with the chiptune community. Unfortunately, NSF seems to only be viable on emulators (or an architecturally correct NES with whichever bank switching chip a particular game felt like using). So when/if I decide to start dealing with NSF files, I think I'm going to have to basically convert them to VGM first. (Either by hacking an emulator on my PC, or by running a stripped down emulator on the ESP32 that captures APU writes and sends them across.)


Early Tinkering

I began my actual work on the project by proving out various elements of the NES portion of the system. The first step was to get myself an SRAM chip, an EEPROM chip, some 74-series glue logic, and actually build myself a functional "computer" around the RP2A03.

Breadboard NES Core


To prove that I could actually execute my own code on this thing, I fired up my old HP 1630G logic analyzer. It was of similar vintage as the NES itself, which somehow felt appropriate:

HP 1630G Front Display


I felt a huge sense of victory when I finally got to the point where I could write code, compile it, put it on the EEPROM, and see the RP2A03 actually executing it on the logic analyzer:


Logic Analyzer Probes

Program Code Execution

At this point, I duplicated the audio output components from the original NES schematic, wired them up to a pair of amplified computer speakers, and began running some APU test code that I found online:




Since everything was still working, I decided to move on to the next step.

I read up on the VGM format specification, and began writing a C program to parse the NES-specific subset of it. Testing with the theme of a famous video game, I started by trying to see if I could programmatically convert that theme's VGM data directly to 6502 assembly for the NES CPU. It actually wasn't that hard at all! But it was big! Trimming it down to fit within my paltry 8KB EEPROM only got me a few seconds!
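
The NES-relevant part of the VGM command stream is tiny, so the core of such a converter boils down to a loop like this (a sketch based on my reading of the VGM spec, not the actual converter code):

/* Sketch of walking the NES-specific subset of a VGM command stream.
 * Command bytes per the VGM spec as I understand it: 0xB4 aa dd is an
 * APU register write; the rest handled here are waits or end-of-data. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

void parse_vgm_nes(const uint8_t *data, size_t len)
{
    size_t i = 0;
    while (i < len) {
        uint8_t cmd = data[i++];
        if (cmd == 0xB4) {                       /* write dd to APU register $4000+aa */
            uint8_t reg = data[i++], val = data[i++];
            printf("write $%02X -> $40%02X\n", val, reg);
        } else if (cmd == 0x61) {                /* wait nnnn samples (44100 Hz) */
            uint16_t n = data[i] | (data[i + 1] << 8);
            i += 2;
            printf("wait %u samples\n", n);
        } else if (cmd == 0x62) {                /* wait 1/60th of a second */
            printf("wait 735 samples\n");
        } else if (cmd == 0x63) {                /* wait 1/50th of a second */
            printf("wait 882 samples\n");
        } else if (cmd >= 0x70 && cmd <= 0x7F) { /* short wait */
            printf("wait %u samples\n", (cmd & 0x0F) + 1);
        } else if (cmd == 0x66) {                /* end of sound data */
            break;
        } else {
            /* commands for other chips would need to be skipped properly */
            break;
        }
    }
}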


But these were a glorious few seconds of victory!


I then modified my C program to produce output in a more compact binary form, and modified my 6502 assembly program to iterate over this binary data and "play" it. This made room for a few more seconds of song, and gave me a more useful test-bed to evaluate some ideas for how to actually feed data into the system during the next phase of the project.

I also built out a basic audio amplifier circuit, using an LM386 and a small 8 ohm speaker:



Thus, the final proof of concept!

Next Steps

The next step is to take what has been accomplished so far, and to build a complete system design around it. That involves selecting a display device, determining how best to configure and use the ESP32 as a front-end, and most importantly, figuring out exactly how I'm going to make the ESP32 feed data into the NES CPU. I'll be detailing this in the next blog post.

Afterward, I'll need to design an enclosure, control pad, and write plenty of software.

Monday, November 20, 2017

Building a Seeburg Wall-O-Matic Interface (Part 6)

Developing the Software

This post is definitely not going to completely describe the software. The software is always going to be a work-in-progress, so follow the Git repository for more specific updates. Instead, it's going to describe a bit about the development setup, and some basics about what the software is doing.

Wall-O-Matic Microcontroller Interface ESP8266 Code [github.com]

Preparing a development rig

The first step in this stage of the project was to actually have a development board to work with. While I could have simply used my real circuit board (as described in the previous post), doing so had a number of disadvantages:
  • Somewhat tedious to switch in-and-out of flash download mode
  • Required an extra adapter to connect to USB
  • Designed to be powered by a bulky transformer connected to AC mains
  • No way to test the coin switch outputs without a multimeter
  • No way to test the pulse decoding input without a real wallbox
In other words, using the real circuit board was not well suited to working at an office desk, away from my electronics workbench.

To do the majority of the software development, I really only needed to test the ESP-12S itself and basic interactions on its I/O pins. Thankfully, this did not actually require the full circuit board. So, to make my life easier, I cobbled together a little breadboard development rig.

Development Rig Breadboard

For the ESP-12S part of this rig, I used an Adafruit Feather HUZZAH. It's a convenient breakout board that includes a USB port, reset button, and some nifty wiring that allows the USB serial controller to automatically switch the ESP-12S in and out of flash download startup mode.

To simulate the signal pulses of the wallbox, I used an Arduino Nano. This module also has its own USB port, and I/O pins that could easily be programmed to simulate any digital pulse stream I needed. Since I'm not testing the full filtering path, I really only need to simulate a clean series of logic-level pulses. The code that runs on this Arduino can be found here: [PulseSim.ino]

For the coin switches, three LEDs in different colors were enough. After all, I just needed to test that I could make them blink.

Since the Arduino is a 5V system, and the ESP8266 is a 3.3V system, I leveraged my existing bin of components to link these together. I used a spare opto-isolator and inverter, which did the trick. This had the benefit of making the two microcontroller modules completely isolated from each other, so they could be powered independently.

In schematic form, it looked like this:
Development Rig Schematic

Setting up the toolchain

The easiest way to get the development tools up and running was to leverage the "esp-open-sdk" project. This project is essentially a Makefile and some scripts that download, compile, and configure everything you need to write code for the ESP8266. This part was surprisingly painless, and decently covered in that project's README.

Decoding Pulses (For Real)


When I was originally figuring out the pulse protocol and filtering circuitry, I used an Arduino-based test decoder. The code for this basically used the "pulseIn()" function in a loop. Upon further investigation, I found that "pulseIn()" just busy-waits on the I/O pin. While perfectly fine for basic testing, this was definitely not a good solution for a complete system.

The approach I took here involved configuring an interrupt to trigger on any transition on the signal pulse's input pin. Inside the interrupt handler function, I then populated an array with the pulse durations and pulse gap widths and kicked off a timer.  If no pulses happened for a certain amount of time, the timer callback function would then inspect the array and determine what the song selection was. This approach ended up working quite well.
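
In rough outline, the structure looks something like this. The gpio_attach_isr() and timer_arm_once() helpers are hypothetical stand-ins for the ESP8266 SDK's GPIO interrupt and os_timer calls, so treat this as a sketch of the shape of the code rather than the real thing.

#include <stdint.h>
#include <stdbool.h>

#define MAX_EDGES  128
#define IDLE_MS    250   /* no edges for this long -> selection complete */

static volatile uint32_t edge_times[MAX_EDGES];
static volatile int edge_count;

extern uint32_t micros_now(void);                  /* e.g. a microsecond tick source */
extern void timer_arm_once(uint32_t ms);           /* (re)start the idle timeout */
extern void gpio_attach_isr(int pin, void (*fn)(void));

/* Called on every edge of the filtered wallbox signal. */
static void pulse_isr(void)
{
    if (edge_count < MAX_EDGES)
        edge_times[edge_count++] = micros_now();
    timer_arm_once(IDLE_MS);   /* push the "pulse train finished" deadline out */
}

/* Timer callback: the pulse train has gone quiet, so interpret it. */
static void decode_timer_cb(void)
{
    /* Walk edge_times[], turn durations and gaps into pulse/group counts,
     * then map those counts onto a letter/number song selection. */
    edge_count = 0;
}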

The code that implements this is here: user_wb_selection.c

Inserting Coins (For Real)

While precise timing is really not necessary for the wallbox's coin switch mechanism, I still went and measured how long a real coin would actually trip the switches for (40-60ms). I then wrote some simple timer functions that would trigger the selected coin switch for the appropriate amount of time.
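
The shape of that code is roughly as follows, with a hypothetical one-shot timer helper and output function standing in for the SDK's os_timer API and the real relay-driving code:

#include <stdint.h>
#include <stdbool.h>

/* Sketch: close a coin switch output for roughly as long as a real coin
 * would (about 50ms). coin_output_set() and timer_once() are hypothetical. */
#define COIN_PULSE_MS  50

extern void coin_output_set(int coin, bool closed);
extern void timer_once(uint32_t ms, void (*cb)(void *), void *arg);

static void coin_release_cb(void *arg)
{
    coin_output_set((int)(intptr_t)arg, false);
}

void coin_insert(int coin)
{
    coin_output_set(coin, true);
    timer_once(COIN_PULSE_MS, coin_release_cb, (void *)(intptr_t)coin);
}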

The code that implements this is here: user_wb_credit.c

Interfacing with Sonos

This is the step where things started to get complicated. Some of the blog posts I read at the outset of this project had led me to believe it was just a matter of spitting the right HTTP POST at a Sonos speaker's IP address, but is anything actually that simple?  Especially if you want a robust and reliable solution?

As I worked through this process, I had decided that there were certain things I definitely wanted to have:
  • Sonos device configuration should be easy (i.e. no hard-coded IP addresses)
  • Can't make any assumptions as to the state of the Sonos speaker
  • The Sonos device's playlist should be treated as if it was a jukebox
The goal was not only to make setup easy, but to also ensure that I should never have to fiddle with the Sonos app on my phone to get things into (or back into) a functional state. The Wall-O-Matic interface should be as standalone as technically possible.
Of course doing all of this meant that I needed to figure out quite a bit more of the Sonos protocol than a single enqueue request.

To figure all this out, I spent a lot of time staring at Wireshark while fiddling with both the official Sonos app and the node-sonos-http-api project's utility commands.

I'm not going to even attempt to document the Sonos protocols here, as that would take way too much time and effort to write up. I'll just be giving a short summary of the parts I used, then linking to my code that implements them.

Discovery Protocol

The Sonos discovery protocol is based on SSDP, and involves two kinds of device discovery: explicit searches and notifications.

Explicit searches begin by sending a UDP multicast to IP address 239.255.255.250 (port 1900) with the following payload:

M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:reservedSSDPport
MAN: ssdp:discover
MX: 1
ST: urn:schemas-upnp-org:device:ZonePlayer:1

All Sonos devices on the network will then reply to the sender, by sending something like this to the source port of the discovery request:

HTTP/1.1 200 OK
CACHE-CONTROL: max-age = 1800
EXT:
LOCATION: http://192.168.1.42:1400/xml/device_description.xml
SERVER: Linux UPnP/1.0 Sonos/37.12-45270 (ZP120)
ST: urn:schemas-upnp-org:device:ZonePlayer:1
USN: uuid:RINCON_D6E477CEC684A1400::urn-schemas-upnp-org:device:ZonePlayer:1
. . .

Additionally, Sonos devices will periodically broadcast their presence on port 1900 without any discovery requests. When that happens, the payload looks similar:

NOTIFY * HTTP/1.1
HOST: 239.255.255.250:1900
CACHE-CONTROL: max-age = 1800
LOCATION: http://192.168.1.42:1400/xml/device_description.xml
NT: upnp:rootdevice
NTS: ssdp:alive
SERVER: Linux UPnP/1.0 Sonos/37.12-45270 (ZP120)
USN: uuid:RINCON_D6E477CEC684A1400::upnp:rootdevice
. . .

Regardless of how it came in, I collected the IP address, port, and UUID from these payloads. Then, to get the zone name itself, I made an HTTP GET to the following URL:

http://192.168.1.42:1400/status/zp

The response to this was a big XML blob that contained the zone name, as well as other device information.
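
For illustration, here's roughly what the search side looks like written against plain BSD-style sockets. The actual firmware goes through the ESP8266's own UDP connection APIs instead, and a real implementation would set a receive timeout and loop over replies.

/* Sketch of the SSDP search using BSD-style sockets for readability. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>

static const char msearch[] =
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    "MAN: ssdp:discover\r\n"
    "MX: 1\r\n"
    "ST: urn:schemas-upnp-org:device:ZonePlayer:1\r\n"
    "\r\n";

int sonos_search(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dest = { 0 };
    dest.sin_family = AF_INET;
    dest.sin_port = htons(1900);
    inet_pton(AF_INET, "239.255.255.250", &dest.sin_addr);

    sendto(sock, msearch, sizeof(msearch) - 1, 0,
           (struct sockaddr *)&dest, sizeof(dest));

    char buf[1024];
    ssize_t n = recvfrom(sock, buf, sizeof(buf) - 1, 0, NULL, NULL);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);   /* parse LOCATION/USN out of this reply */
    }
    close(sock);
    return 0;
}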

This info was then collected in a data structure, and later used as follows:
  • IP and Port - Used to actually connect to the device and to send it commands
  • UUID - Used as the reference to the device kept in the software configuration, and used as part of certain control requests. 
  • Zone Name - Used for display purposes, so the UI could give friendly information on which zone was selected.
The code that implements this is here: user_sonos_discovery.c

Control Requests

To instruct the Sonos device to actually do things (e.g. add song to queue, play, query status, etc), I had to implement the following control requests:
  • AddURIToQueue - Adds a file or stream URI to the playback queue
  • GetPositionInfo - Gets position info on the current track
  • SetAVTransportURI - Sets the transport URI to play (e.g. stream source or local file queue)
  • Seek - Select the queued track to play
  • Play - Start or resume playback
The code that implements this is here: user_sonos_request.c
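
To give a flavor of what these requests look like on the wire, Play is a small SOAP POST roughly like the following (reconstructed from memory, with headers such as Content-Length omitted, so treat the details as approximate):

POST /MediaRenderer/AVTransport/Control HTTP/1.1
HOST: 192.168.1.42:1400
CONTENT-TYPE: text/xml; charset="utf-8"
SOAPACTION: "urn:schemas-upnp-org:service:AVTransport:1#Play"

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
    s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:Play xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">
      <InstanceID>0</InstanceID>
      <Speed>1</Speed>
    </u:Play>
  </s:Body>
</s:Envelope>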

Event Subscriptions

Unfortunately, the normal control interface provides no way to distinguish between the playing and paused states. The only way to do this was to listen for event notifications. There's another protocol Sonos devices support where you can tell them to notify you whenever there's a state change. Thankfully, these notifications include the play/paused state.

The code that implements this is here: user_sonos_listener.c

Putting it all together

At boot, the device discovery listener is started and a discovery request is broadcast. As soon as the configured Sonos device UUID is detected, it is set as the active device and an event subscription request is made.

When a song selection is made on the wallbox, the following sequence of operations is initiated (sketched in code after the list):
  • Construct a track URI for the selected song
  • Add the track URI to the Sonos device queue (AddURIToQueue)
  • Get the current position info (GetPositionInfo)
  • If the current transport is not a file URI, then switch to the local queue (SetAVTransportURI)
  • If the current transport is a file URI, and currently playing, then do nothing
  • Otherwise:
    • If not currently playing, start playing
    • If on a previous track, and not currently playing, then seek to the added track and start playing
    • If on a previous track, and paused, then seek to the next (or added) track and start playing.

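Condensed into code, that decision flow looks roughly like this. The helper functions and struct here are hypothetical stand-ins; the real logic, with proper error handling around each request, lives in user_sonos_client.c.

#include <stddef.h>
#include <stdbool.h>

typedef struct {
    bool transport_is_file_uri;   /* transport is the local queue, not a stream */
    bool playing;
} position_info_t;

extern void build_track_uri(char *buf, size_t len, int selection);
extern int  sonos_add_uri_to_queue(const char *uri);        /* AddURIToQueue */
extern void sonos_get_position_info(position_info_t *pos);  /* GetPositionInfo */
extern void sonos_set_transport_to_queue(void);             /* SetAVTransportURI */
extern void sonos_seek_track(int track_no);                 /* Seek */
extern void sonos_play(void);                               /* Play */

void wallbox_selection_made(int selection)
{
    char uri[128];
    build_track_uri(uri, sizeof(uri), selection);   /* base path + song file */

    int track_no = sonos_add_uri_to_queue(uri);
    position_info_t pos;
    sonos_get_position_info(&pos);

    if (pos.transport_is_file_uri && pos.playing)
        return;                       /* already playing the queue; new track waits its turn */

    if (!pos.transport_is_file_uri)
        sonos_set_transport_to_queue();

    sonos_seek_track(track_no);       /* make the added track the one that plays */
    sonos_play();
}
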
The code that implements this is here: user_sonos_client.c

Providing a User Interface

At runtime, the wallbox itself kinda is the user interface for the project. Well, with one exception: inserting coins. I needed some way to instruct my board to "insert a nickel/dime/quarter" remotely, either via a manual interaction (i.e. click a button on an app/website) or via some sort of home automation setup. To accomplish this, I decided I'd simply expose some sort of basic HTTP API for triggering the coin switch relays.

At setup time, it was a different story. There was actually a lot to configure for this thing to work correctly:
  • Wi-Fi Network Setup
  • Selecting the Sonos device to play through
  • Selecting the type of the connected wallbox
  • Configuring which song file to play for each wallbox selection code
Some might simply bake all this into the firmware. I really didn't want to do this, since you'd never design a "real" product like that.

So the first thing I did was to embed a web server into the device. To do this, I leveraged the libesphttpd project (and its sample esphttpd app). This project provides a simple webserver written against the ESP APIs, and a fair amount of sample/utility code to handle support functions (Wi-Fi setup and firmware updating) that I knew I was going to need.

As I began to explore this project, I soon noticed that it hadn't been updated in a while. However, there seemed to be a number of forks that had been picking up improvements and fixes. Ultimately, I settled on using Chris Morgan's fork of the project.

After several iterations, I ended up with a (mobile friendly) landing page that looked like this:
Local Web Interface

The setup and firmware pages were pretty much straight lifts from the esphttpd sample project, albeit with some minor style tweaks. The Sonos zone selection page was also fairly simple.  The most complex of these pages was the "Wallbox Configuration" page:

Wallbox Configuration Page
The "wallbox type" determines how many song selections there are, how they're numbered, and what the format of the wallbox's signal pulses are. (Contrary to a previous post, I did eventually get the model "200" mostly working and added code to support its signal pulse format.  The format is different from the "100", and actually slightly simpler.)

The (long) "base folder path" and the (short) "song" filenames are combined to form the playback URI for each song selection. Buttons help automate the otherwise-tedious entry process.

The layout of this page reflects the approach I took towards configuring the playback URIs. While I could have allowed a completely custom per-song-selection URI, doing so would have required quite a lot of configuration memory.

The ESP SDK's APIs for "safe" persistent data work in triplets of 4KB flash pages. So 50KB of URI data (for a 200-selection wallbox) would inflate to 150KB of flash. With a custom replacement for those built-in APIs, I could probably drop this down a bit, but it would still be more complexity than I wanted to deal with.

So instead, I decided that it made more sense to prepare a folder on the file server just for wallbox-triggered playback. Given that you'd have to organize your music by song codes anyways (to print title strips), it didn't seem like that big of a deal. This way you'd have a single "wallbox" folder with simple and short file names.

By doing this, on-device configuration was mostly automatic. In my own setup, I really only had to configure the base path. For the songs, I just went with the default names and only changed them if my files were something other than MP3s (like M4A or FLAC).

The code that implements this is here: user_webserver.c
The associated HTML and assets are here: html/

Conclusion

Combining everything above with some nasty looking Perl CGI on my home webserver, and fumbling my way through Google Assistant and IFTTT, I eventually ended up with something like this: