
Frugal Glass v2

“One never notices what has been done; one can only see what remains to be done.”
– Marie Curie

I must say that's the case with Frugal Glass. I'm unimpressed with the performance of the current version – the eyepiece assembly is heavy enough to pull my glasses sideways, it blocks vision out of one of my eyes, the casing is rough, and the weight hurts my ear after about an hour of use – so I decided to remake it. The new display module will use a Feather microcontroller I picked up as a freebie, programmable through the Arduino IDE, a small OLED screen driven over SPI, and an Inertial Measurement Unit (IMU) consisting of an accelerometer, gyroscope, and magnetometer.

I think I'll keep the Raspberry Pi 2 – the Feather isn't powerful enough to let me expand the software while making the hardware more compact. The Pi will serve as the CPU, while the Feather manages the heads-up graphics.

This split should allow faster, more accurate rendering. I intend to have multiple layers of rendering, mainly of two types: a static heads-up display, and a 3D environment stabilized by the accelerometer/gyroscope breakout. The latter will be more difficult to implement – I can't find any free libraries for 3D rendering, so I may have to write my own, a field in which I have no experience. Maybe I'll look at the OpenGL source?

The eyepiece is getting a significant upgrade. An OLED screen will reflect off a semi-transparent mirror into the user's eye, while still letting light through. The accelerometer/gyro/magnetometer will be mounted behind it, and a mini-camera designed for first-person drone piloting will feed into an NTSC/PAL USB capture dongle on the Pi.

I'll also be upgrading the Jasper conversation system. A conversation in Jasper's current state might look like this:

USER: Jasper

JASPER: <high beep>

USER: What’s the time?

JASPER: <low beep>

JASPER: The time is 10:20 AM right now.

This has three main problems.

The first is that you have to say the keyword first, then wait for the system to start listening, which usually takes 2-3 seconds. This is fixable by listening constantly for an inline mention of the keyword. To prevent accidental triggers, I'll use the Natural Language Toolkit (NLTK) to analyze the surrounding words for context.
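Here's a minimal sketch of what that might look like in Python. The transcript source and the command-vs-mention heuristic are placeholders of my own; only the NLTK calls are real:

import nltk  # needs nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

KEYWORD = "jasper"
WINDOW = 3  # words of context to inspect on each side

def find_inline_keyword(transcript):
    """Return POS-tagged context around the keyword, or None if absent."""
    words = nltk.word_tokenize(transcript.lower())
    if KEYWORD not in words:
        return None
    i = words.index(KEYWORD)
    context = words[max(0, i - WINDOW):i] + words[i + 1:i + 1 + WINDOW]
    # Tagging the surrounding words lets the caller decide whether this is
    # a command ("Jasper, take a photo" – keyword followed by a verb) or
    # just a passing mention ("I read about Jasper yesterday").
    return nltk.pos_tag(context)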

The second is that you have to say the keyword before every command. This is fixable with a command-chain timer: the user can end the chain manually by saying so, and it times out automatically after 10 seconds of inactivity.
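A rough sketch of that timer, assuming a hypothetical listen() that returns a phrase (or None if nothing was heard) and a handle() that dispatches it:

import time

CHAIN_TIMEOUT = 10  # seconds of inactivity before the chain ends

def command_chain(listen, handle):
    """Accept keyword-free commands until 10 seconds pass in silence."""
    last_heard = time.time()
    while time.time() - last_heard < CHAIN_TIMEOUT:
        phrase = listen()  # hypothetical: returns None when nothing is heard
        if phrase:
            handle(phrase)
            last_heard = time.time()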

The third is that for each command, you always get the same format of answer (for instance, "The time is <time> right now."). This is sort of fixable by using formats selected by random.choice, but that's not the path I'll take. I've been playing with Markov chains, and by passing in a blacklist and a whitelist and seeding from a random.choice, I think I can create a form of Markov chain that generates natural-sounding language with the gist of the desired message. It will also double-check the grammar of the sentence(s) with NLTK before passing them to the text-to-speech engine. The downside is that Markov chains need source material, and the more material, the better.
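To illustrate the idea (this isn't Jasper code, just a toy), here's a bare-bones word-level Markov chain with a blacklist, seeded from a whitelist with random.choice:

import random
from collections import defaultdict

def build_chain(words):
    """Map each word in the corpus to the words that follow it."""
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, whitelist, blacklist, max_words=10):
    # Assumes at least one whitelisted word appears in the corpus
    word = random.choice([w for w in whitelist if w in chain])
    out = [word]
    while len(out) < max_words:
        options = [w for w in chain[word] if w not in blacklist]
        if not options:
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)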

RTC Setup

One of the features I wanted to implement was the ability to keep time without a network connection. To do this, I needed a real-time clock (RTC). I found one at SparkFun, based on the popular DS1307 chip, that was flexible enough for my needs.

After a bit of Google-probing, I found another article on adding RTCs which claimed that many other guides did it improperly or left out important steps.

I went through it, and at every step, sudo i2cdetect -y 1 returned nothing. Not a 0x68 to be seen, not even a UU (which would mean the address is in use by a kernel driver).

I ran through every step again. Still no sign of the clock.

I pulled out my trusty Arduino. About 15 minutes later, I had a sketch that requested the time from the clock. I uploaded it and found that Wire (Arduino's I2C library) has a bug in which endTransmission() and requestFrom() both hang for a variable amount of time before continuing.

Abandoning the debug attempt, I plugged the clock back into my Pi. I had forgotten which pins the clock was supposed to sit on, so I pulled up a pinout chart. It was then that I found I had been using the wrong pins: I had SCL in the right place, but the clock's SDA was connected to GPIO pin 4. (facepalm)

Now it works perfectly – when I run

sudo hwclock

it gives me the time to within about 0.7 seconds. I still need to adjust for drift, but it's good enough for now.
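As a sanity check, you can also read the DS1307's time registers straight from Python with smbus. A sketch, with one caveat: it only works before the kernel's RTC driver claims the chip (once i2cdetect shows UU at 0x68, go through hwclock instead):

from smbus import SMBus

def bcd(b):
    # The DS1307 stores each time field as binary-coded decimal
    return (b >> 4) * 10 + (b & 0x0F)

bus = SMBus(1)  # I2C bus 1 on the Pi
sec, mins, hrs = bus.read_i2c_block_data(0x68, 0x00, 3)
# mask off the clock-halt bit in seconds; assume 24-hour mode for hours
print("%02d:%02d:%02d" % (bcd(hrs & 0x3F), bcd(mins), bcd(sec & 0x7F)))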

Jasper Modules, Part 1: Camera

Writing Jasper modules isn't exactly fun. Jasper is an open-source interface to many speech-to-text and text-to-speech engines, and it provides a conversation interface on top of them. Jasper modules can be standard – they wait for the user to initiate contact – or notification – they monitor a stream and report back every now and then. For instance, I just wrote a module for capturing photos on request, which makes it a standard module.

When I started, I wasn’t exactly fresh on my Python, but the past 8 hours have more than re-educated me.

After poking around on the internet, I found that taking a still image from a USB webcam and saving it to a file isn’t too difficult. In bash, all you have to do is

fswebcam <file>

And Python can run shell commands with subprocess.call. The method I found for screenshots was just as easy:

scrot <file>
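Gluing those commands into Python is about as simple as it sounds – a minimal sketch (the file paths here are made up):

import subprocess

def take_photo(path="/home/pi/photo.jpg"):
    subprocess.call(["fswebcam", path])  # grab a frame from the USB webcam

def take_screenshot(path="/home/pi/screen.png"):
    subprocess.call(["scrot", path])  # capture the current screen contents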

Video capture, by comparison, makes those capture methods look like a walk in the park. First, you need to know when to start and stop the recording. Second, you need to know what command to run. After several false starts I found

avconv <file>

I decided to run that command in a separate thread, and when the user says something like "STOP RECORDING", run

killall avconv

which forces the avconv process to close. Before exiting, though, avconv saves the recorded file. Then I found a problem: when avconv and Jasper run in parallel threads, the TTS audio becomes spottily modulated.
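Put together, the recording logic looks roughly like this. The avconv input flags are assumptions about my setup (a V4L2 webcam on /dev/video0), not something from the module itself:

import subprocess
import threading

def record(path="/home/pi/video.avi"):
    # Blocks until avconv exits; on SIGTERM, avconv finalizes the file
    subprocess.call(["avconv", "-f", "video4linux2", "-i", "/dev/video0", path])

recorder = threading.Thread(target=record)
recorder.start()

# ...later, when the user says "STOP RECORDING":
subprocess.call(["killall", "avconv"])
recorder.join()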

While writing this, I encountered a number of errors–mostly with finding the right libraries–but I had a bit of help:

[Image: a fake StackExchange ebook cover. Image credit Bruce Sterling.]

The folks over at StackExchange are life-savers.

The Jasper modules will be available on the Download page.

Jasper & ALSA

I just spent the last day configuring Jasper with ALSA. Jasper ties a suite of speech-to-text and text-to-speech engines together behind a nice(ish) interface. ALSA is the Advanced Linux Sound Architecture, the Linux sound layer, which comes with utilities for playing and recording .wav files.

I was having issues with Jasper – it was throwing an IOError -9997 (PortAudio's invalid-sample-rate error) in one of the core PyAudio (PortAudio's Python binding) files. I tried a number of fixes – at first, re-installing PyAudio from scratch seemed to work. But then I tinkered with my ALSA config, and the next time I started Jasper, it threw a bunch of errors.

After a bit of poking through my reset-PyAudio script, I noticed that it removed PulseAudio – a sound mini-server that "serves as a proxy to using existing sound components like ALSA or OSS" (ArchLinux Wiki). I re-installed it, and the next time I started Jasper, it worked again!

Video glasses & Pi: just add solder

I've been tinkering with these video glasses from Adafruit. Inside the box was a cable that seemed to be exactly what I needed – a cord running from the glasses' 2.5mm TRRS jack to the Pi's 3.5mm composite-video TRRS jack. But when I plugged it in, the screen flashed for a fraction of a second and went black… The glasses were still alive, though – I plugged them into my DVD player's RCA jack and they worked fine.

I took out my multimeter and switched it to continuity mode. Yep, it was just a standard straight-through cable: tip to tip, ring 1 to ring 1, ring 2 to ring 2, and sleeve to sleeve.

I went back to the Adafruit site and found just what I was looking for: someone had already hit this problem and posted a fix. Apparently, the Pi's pinout is VID-GND-L-R, but the glasses want GND-VID-L-R. A very quick way to see whether a kludge will work is to plug the cord in, but not all the way. If you do it right, you get a circuit from the Pi's video pin to the glasses' video pin, with the return current flowing along the glasses' ground onto the Pi's left-audio contact. Not the best solution, but it shows whether the glasses work.

I stripped away the first, black sheath, then the second, white one, and found four wires inside: red, yellow, green, and blue. Nothing obvious there. Again, the handy multimeter in continuity mode told me I needed to swap the red and yellow wires.

The insulation on these wires is that horrible enamel you normally have to sand away. The 48-gauge wires looked like they would sooner break than lose their insulation, so I decided to use molten solder to burn through it. I know it's not the best technique, but I did wipe off the first round and follow with a second, cleaner one. To my surprise, the second round spat more streaks of acrid smoke than the first. I wiped it off and tried a third round, which burned through the wires. So I re-stripped the cable and tried again on this definitely-not-a-masterpiece, this time using only two rounds of solder.

[Photo: soldering the TRRS wires]

I got the video and ground wires soldered, but what I was breathing was rapidly becoming more mosquito than air, so I packed up and headed in. I tried connecting the system, and sure enough, it worked! Only the audio pins are left, and I think I should be able to finish those pretty quickly.

Update – March 7, 17:14:

I soldered the last two wires just before the rain hit! Still have the Real-Time Clock to do, though…

Anyway, I tested it again and wrapped the whole assembly in multiple layers of electrical tape, since I’m out of heat-shrink tubing. I’d like to make it a bit more secure but am not sure how to…

[Photo: the soldered TRRS cable]

It seems to work fine with the Pi, but I can't get any audio. Maybe it has something to do with my USB sound card?

Project Week is here!

Time for Project Week! Project Week is an event at my school in which we students (and some teachers as well) take the second week of March – the week before Spring Break – off to take a deep dive into our interests. Planning is extensive, sometimes starting in January, sometimes as early as November.

This year, I will try to make a Google Glass-type device using a Raspberry Pi. Not everything going into it is new work, so consider this project a culmination of mini-projects.

I've been out sick these past few days, so I've been sitting around trying to get things off the ground. I've been having some difficulties with ALSA, a terminal audio interface, and with connecting my Edimax EW-7811Un USB Wi-Fi dongle.

SparkFun: BadgerHacked

Every year, Austin, Texas hosts an event for makers, tinkerers, and other nerds who like to build: SXSW Create. This year, I was manning a booth with The Robot Group. SparkFun Electronics was giving away these little badges:

[Image: the SparkFun badge, courtesy learn.sparkfun.com]

They gave you a bag of parts and you waited until there was a soldering station open. Then you just soldered the headers on, threw in some batteries, attached a lanyard, and you had a microcontroller around your neck.

They said they were reprogrammable, so I decided to turn mine into a name tag.

I investigated and found the support files; the page containing them and the installation instructions can be found here. Then I downloaded the Arduino IDE, which would let me reprogram the Badger.

I initialized the LED matrix:

#include <SparkFun_LED_8x7.h>
#include <Chaplex.h>

// Global variables
static byte led_pins[] = {2, 3, 4, 5, 6, 7, 8, 9}; // Pins for LEDs

void setup() {

  // Initialize LED array (Plex is the global display object
  // that the SparkFun_LED_8x7 library provides)
  Plex.init(led_pins);

  // Clear display
  Plex.clear();
  Plex.display();
}

Next I looked at SparkFun_LED_8x7.h. (.h files are to .cpp files what tables of contents are to books.) I found the scrollText() and stopScrolling() methods, and after messing around for a bit, I found that I need to call scrollText(), wait a bit, then call stopScrolling().

In loop(), I made the text “The Robot Group” scroll across the screen.

  Plex.scrollText("The Robot Group", 1);
  delay(10000);
  Plex.stopScrolling();
  delay(2000);

Then I poked around a bit more and found the drawing methods. I prefixed the text with a very pixelated version of the group's logo.

  // Logo
  Plex.clear();
  
  // Logo: Circle
  Plex.line(2, 0, 0, 2);
  Plex.line(0, 2, 0, 4);
  Plex.line(0, 4, 2, 6);
  Plex.line(2, 6, 5, 6);
  Plex.line(5, 6, 7, 4);
  Plex.line(7, 4, 7, 2);
  Plex.line(7, 2, 5, 0);
  Plex.line(5, 0, 2, 0);
  
  // Logo: Man
  Plex.line(1, 7, 3, 3);
  Plex.line(6, 7, 4, 3);
  Plex.line(0, 2, 7, 2);
  Plex.line(4, 1, 3, 1);
  
  // Logo: Display for 5 seconds
  Plex.display();
  delay(5000);

The Robot Group's logo:

[Image: The Robot Group's logo]

BadgerHack version:

[Image: the logo rendered on the badge's 8x7 LED matrix]

And so I reprogrammed my badge, then did likewise with some of the other members' badges. It was quite entertaining for me.

Here is the finished code: BadgerStickTRG.zip

430AlarmClock

This piece of code was made possible by Trey German, Robert Wessels, and Cathy Wicks of Texas Instruments. Without Trey’s knowledge and Cathy’s encouragement, I wouldn’t have been able to make this program. Robert is the one who told me about this project.

When creating 430AlarmClock, I adhered to my usual procedure for designing programs:

0. Make a general outline
1. Get the hardware
2. Write the basic stuff
3. Tweak, test, repeat until it works
4. Add/remove/tweak features until you are satisfied
5. Publish, add license, do whatever else you need to do

I made the outline first. It looked like this:
• Clock with increment of 1s/s, overflow each digit pair after 23:59:59
• At least 1 alarm
• If user turns off alarm but fails to get up within 10 minutes, connect to social media, email, etc. to say something like “I turned my alarm clock off but never got up.”

For hardware, I used an MSP430F5529 LaunchPad with a Grove BoosterPack.

I poked around and found the required libraries. They can be found on the Grove wiki.

After that, I started in on the code. The first segment was pretty simple: make the clock tick.

#include <TM1637.h>
TM1637 out1(9, 10); //HH:MM
TM1637 out2(3, 4);  //SS

Here I initialize a pair of TM1637 7-segment LED drivers. They communicate over a simple two-wire serial interface (similar to I2C), so they only need 2 pins each.

int8_t startTime[3] = {12,49,30};
int8_t hours, minutes, seconds;

startTime holds the clock's starting time; hours, minutes, and seconds will hold the current time.

void setup() {
  out1.init();
  out1.set(BRIGHT_TYPICAL);
  
  out2.init();
  out2.set(BRIGHT_TYPICAL);
}

The setup() function runs only once, so it is useful for one-time tasks such as initializing devices, setting pinModes, and starting serial communication. Here I initialize the 7-segment displays and set their brightness to BRIGHT_TYPICAL, which is best for indoor use. For outdoor use, change this to BRIGHTEST.

void loop() {
  out1.display(0, (hours-(hours%10))/10);
  out1.display(1, hours%10);
  
  out1.display(2, (minutes-(minutes%10))/10);
  out1.display(3, minutes%10);
  
  out2.display(0, (seconds-(seconds%10))/10);
  out2.display(1, seconds%10);
  
  seconds = getSeconds()%60;
  minutes = getMinutes()%60;
  hours = getHours()%24;
  
  delay(100);
}

The first six lines tell the TM1637s to display the current time (which, on the first pass, is 00:00:00); then we update the time. Doing it in this order means the very first pass tests the displays, and every pass after that sends out the freshly updated time.

long getSeconds() {
  // long, not int: on the MSP430, int is 16-bit and would
  // overflow after about 9 hours of millis()
  return millis()/1000 + startTime[2];
}

long getMinutes() {
  return getSeconds()/60 + startTime[1];
}

long getHours() {
  return getMinutes()/60 + startTime[0];
}

The getSeconds(), getMinutes(), and getHours() functions took me some time to work out, but eventually I figured it out: getSeconds() calculates how many seconds have gone by and adds the starting seconds. getMinutes() converts getSeconds() into minutes and adds the starting minutes, and so on up the chain.

The next step was to implement the alarm. It was pretty simple:

Add uint8_t alarm[] = {HH, MM, SS};.
Add a boolean (alarmActive) that says whether the alarm is going off.
Also, #define the buzzer pin.
Then, add this into loop():

  if((seconds == alarm[2]) && (minutes == alarm[1]) && (hours == alarm[0])) alarmActive = true;
  
  if(alarmActive) {
    // Pitch varies with the time, so it changes every second.
    // (Watch out: seconds%minutes is a modulo by zero when minutes == 0.)
    tone(buzzer, 200*(seconds%minutes));
  } else {
    noTone(buzzer);
  }

The first line makes the alarm go off when the current time equals the alarm time. The if statement after it controls the buzzer, whose pitch changes every second to be super annoying.

I used an ultrasonic tripwire to detect if the user had gotten up or not.

I imported Ultrasonic.h, made a new Ultrasonic object called tripwire, and assigned it to pin 39.
I then took a reference reading (tripwireRef) so that the tripwire wouldn't trigger constantly. It is initialized in setup():

  tripwireRef = getDistance();

I also had to write my own getDistance() wrapper so that detecting nothing (closest object beyond the sensor's range) wouldn't disable the alarm. When the sensor detects nothing, the raw reading defaults to 0, so I remap anything under 2 inches to 127, i.e. "very far away":

int8_t getDistance() {
  int8_t value = tripwire.MeasureInInches();
  if(value < 2) value = 127; // "nothing detected" reads as ~0; treat it as far away
  return value;
}

Then I added the code to turn the alarm off:

  if(getDistance() < tripwireRef-2) alarmActive = false;

The tripwireRef-2 is there because the tripwire's reading fluctuates, but generally by no more than ±2 inches.

Then I found out that the WiFi BoosterPack was fried, so I couldn’t go any further.

Here is the final version of my code: 430AlarmClock.zip. Enjoy!
