Letsencrypt, Duckdns, and Cox

Like some other ISPs, Cox blocks all incoming access to port 80 on residential connections. They also use DHCP to assign dynamic IP addresses, which can and do change occasionally, especially when you reboot your router. That’s fine in most cases, but can be a real pain in the ass if you run any local services that you need to access from outside the home. For example, if you run your own email and want to use IMAP, you’re likely going to need an SSL certificate. You also need a way to update your DNS to point to the new IP when it changes.

One way to do all of this without paying subscription fees is with Duckdns and Letsencrypt. Duckdns is a free DNS service with an easy-to-use API that can be updated by a script when your IP address changes. Letsencrypt is a free SSL certificate authority; I can’t say enough good things about Letsencrypt and encourage you to support them with a donation, as I have.

So. First we can use cron to run a command that updates our duckdns IP address every ten minutes or so.

echo url="https://www.duckdns.org/update?domains={my_domain}&token={my_token}&ip=" | curl -k -o ~/duck.log -K -

Simple, right? Now we have a hostname that always points to our own home IP address – or at least always does within ten minutes of an IP address change, which is probably good enough for most purposes.
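
To run it unattended, save that command as a small script and call it from cron. A minimal sketch; the ~/duckdns/duck.sh path is just my choice, use whatever you like:

#!/bin/sh
# ~/duckdns/duck.sh -- just the update command from above
echo url="https://www.duckdns.org/update?domains={my_domain}&token={my_token}&ip=" | curl -k -o ~/duckdns/duck.log -K -

And the crontab entry:

# Run the Duckdns updater every ten minutes
*/10 * * * * ~/duckdns/duck.sh >/dev/null 2>&1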

Now for the SSL certificate. Letsencrypt will happily issue a free 90-day SSL cert for your domain. Normally, one runs a script from cron that renews the certificate if it’s expiring in less than 30 days. If you can expose port 80 to the web, even temporarily, then life is good: just run ‘certbot renew’ once a day, or even once a week, and everything happens for you in the background. If, however, your ISP filters port 80… well, there’s the pain-in-the-ass part. The certbot renew process will only work if you have port 80 open to the web. I haven’t found a way to get Letsencrypt’s server to use any other port to reach your web server, so forwarding a non-blocked port (8880, for example) to your local server’s port 80 does you no good.

All is not lost; it just means a bit more work. Letsencrypt will also issue certificates using DNS challenges for authentication: you place specific TXT records to prove that you control the domain or subdomain in question. The process looks like this:

certbot certonly --manual --preferred-challenges dns -d example-com.duckdns.org

The certbot script will tell you to create TXT records in DNS for your domain, and will wait for you to do so before proceeding. You can use your DNS provider’s web or API interface to add or change the TXT record accordingly. Duckdns now supports TXT records in addition to A records, and updating yours is simple:

curl 'https://www.duckdns.org/update?domains={my_domain}&token={my_token}&txt={my_txt}&verbose=true'

Once you’ve verified that the TXT records are there (using, say, ‘dig _acme-challenge.{my_domain}.duckdns.org TXT’), simply hit ENTER to let the script finish. You should end up with a renewed SSL cert.

My previous ISP didn’t block port 80, so I never had to do any work at all for this. I ran the ‘certbot renew’ command from cron once a day, and it automatically updated the certs for me. Now that port 80 is no longer an option, I will need to manually renew the certificate every 90 days. I’ll actually do it at around 75 days, because Letsencrypt helpfully sends out emails to let you know when your certificate is within 15 days of its expiration.
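
In principle, even the DNS challenge can be automated. Certbot supports a --manual-auth-hook option, and it hands the hook script the TXT value it wants to see in the CERTBOT_VALIDATION environment variable. A rough sketch using the Duckdns TXT API from above (I haven’t put this into production myself, and the script path and sleep time are guesses):

#!/bin/sh
# duckdns-auth.sh -- hypothetical certbot auth hook for the DNS-01 challenge.
# Certbot exports CERTBOT_VALIDATION, the TXT value it expects to find.
curl -s "https://www.duckdns.org/update?domains={my_domain}&token={my_token}&txt=${CERTBOT_VALIDATION}"
sleep 60   # give the TXT record a minute to propagate before certbot checks

Then the request becomes something like:

certbot certonly --manual --preferred-challenges dns --manual-auth-hook /path/to/duckdns-auth.sh -d example-com.duckdns.org

If that works, ‘certbot renew’ should be able to reuse the hook on later renewals, which would take the calendar reminder out of the loop entirely.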

A year’s worth of updates

Time flies when you’re ignoring a blog, right? I’ll catch up.

  • The Mercedes is gone. After everything I’d fixed on it, when the transmission decided it didn’t want to work reliably any more, that was the last straw; I was done. It was an awesome car to drive, but not so much fun to own. I replaced it with a much newer 2018 BMW 540i xDrive, which has been wonderful.
  • Still flying occasionally, but nowhere near as much as I should or want to.
  • Nothing’s happened with the Mustang, other than getting the engine put back together.
  • We’ve picked up a couple more rental houses; that enterprise is going pretty well overall.
  • We switched from Visible to T-Mobile. Visible had great service when we signed up; it slowly degraded to barely usable. TMO has been better, but not great.
  • I just dumped CenturyLink. Our CenturyLink fiber service has been down since Wednesday morning (it’s Friday now). It took me three hours to get through to a human there, on the phone, who told me they could have someone out Saturday morning. Absolutely appalling service. We were up and running on Cox within an hour of leaving the house to go pick up their equipment.
  • Now I remember why I didn’t like Cox’s equipment… zero flexibility, no control over your own local network at all. You can’t even set your own DNS, so my Pi-hole is not functional. I’ve got new equipment coming this afternoon: new cable modem, router, and mesh Wi-Fi.
  • I left my long time employer (a bank) a little over a year ago and now work for another bank.

Moved.

I’ve moved the blog to a new web service… one of the AWS virtual server offerings. So far, so good… and dirt cheap.

The sad state of application programming

The Hulu app froze again yesterday and required a force stop. We had another episode of the house phones (the Panasonic DECT 6.0 cordless set) not seeing the line from the Ooma, and the Ooma Telo box needed to be power-cycled to fix it. I’ve had to power-cycle the Fire TV Cube a couple of times since I installed it a couple of weeks ago. It seems that the Fire TV Cube and the Ooma box will just need regular power cycles to keep them from hanging. This kind of stuff is becoming more and more common… apps are stable for a few hours or a few days, but past that your chances of things working as they should decline rapidly.

I think software development is really being taken over by people who are only marginally competent. You probably know the type. They’ve been to all the classes, got the degrees, can write the code, but really don’t understand how things work, and their code is functional only under ideal conditions. I work with these types daily. They’re unable to think about what happens when things don’t work exactly as they should. The typical conversation consists of me asking one of them what happens when X breaks, which results in a puzzled look. X isn’t supposed to break, you see, and if it does then X is at fault and should be fixed. It never occurs to them to allow for X breaking as a known possibility. Problem is, the guy who wrote X is also a marginally competent idiot, so in the end everything breaks and no one understands why.

We seem to be accepting this as the norm. I talk to people a generation younger than myself and either they are incredibly lucky, or I’m incredibly unlucky, or I’m the only one in the world that ever has an application misbehave. They seem to just accept it as normal and move on. A quick power cycle, a quick reboot, force stop and move on, whatever. As do I, but I do notice it. I can remember when applications being unstable was not unusual, but everyone understood that it was a problem and something to be fixed. Now it just seems that no one cares. OK, if we’re talking about some time sucking game, I don’t care either… but we’re not. We’re talking about systems that should be at least as reliable as what they replace, but turn out to be a pile of crap. I can’t count how many working hours are wasted on bad phone connections, twitchy chat sessions breaking, crappy remote meeting sessions, and slipshod work by people who should know better.

Cord cutting update

Well, we’ve been watching Amazon Prime and Hulu Live for a week now. We have not yet needed to switch back to cable, which is good. It has not been quite the seamless transition one would hope for, but it’s not a complete disaster either. Compared to watching cable, it’s a lot more labor-intensive: lots of button pushing, menu navigating, and a disruption of some sort at least once a night. Wrong video streams, app crashes, Fire TV reboots, etc. It may not be a deal-breaker, but then again it may be. It certainly is a pain in the ass.

My short take on it is, this whole thing is great. Or it would be, if the apps were written by people who actually gave a damn whether things worked for more than a few hours at a time. I’ve started doing a power-on reset of my Ooma box once a week to keep it from wandering off the path of righteousness; it looks like the Fire TV Cube may need that once a day or so. Unfortunately, there is no way to force a reboot of either one remotely, so it turns into me remembering to go unplug the stupid things.

Here’s the good, the bad, and the ugly so far…

The Good:

  • The shows we watch are automatically recorded, so we can watch them whenever we please.
  • Video and audio quality seem to be very good. I haven’t tried any lower quality settings to see how it impacts things.
  • So far, we haven’t found any of our shows that we can’t watch.

The Bad:

  • Navigation is just clunky; there’s no other way to describe it. There’s lots of button pushing, and you have to be careful of lag and slow response.
  • Different apps for different shows. Amazon Prime for Jack Ryan and a couple of others, Hulu for most things. Not a huge deal, but integration could certainly be better.
  • Data burn. We’re on a 1 TB/month plan. We had been using 2-5 GB/day; now we’re hitting peaks of 25 GB or more. The average seems to be around 15 GB/day, which works out to roughly 450 GB a month against the 1 TB cap. That’s still OK, but we’ll actually need to pay attention to our data usage, which is not ideal. Obviously streaming video is going to burn bandwidth; this was not unexpected.

The Ugly:

  • Alexa commands are a joke. Tell Alexa “Tune Discovery on Hulu”… no dice, Alexa says Hulu can’t find that channel. We use the remote for most everything.
  • The Hulu app is not what I would call stable. I have started force terminating it once a day, just to keep it from crashing at inopportune times.
  • The Fire TV Cube is also not what I would call stable. Roughly every other night, it will just spontaneously crash and reboot in the middle of a show.
  • Hulu’s inexplicable and stupid lack of a program guide. It’s idiotic; there’s really no other way to describe it. Guys, you’re selling this as a LIVE TV service, why not act like it and put up a damned program guide?
  • Occasionally, our sound bar will simply power itself off in the middle of watching something. What turned it off? Why? No indication, it’s a mystery. And of course, that means you have to grab another damned remote… unless you tell Alexa to turn the sound bar on, which Alexa will, and then you lose the audio stream from the Hulu app.



Cutting the cord? Or part of it…

So the Cox bill has been getting out of control.  After the latest package deal ran out, the bill bumped up to nearly $240 per month, mostly for crap (in the form of TV channels and phone features) that we don’t want.  That’s a ton of money.

The requirements are:

  • Landline with caller ID
  • Live TV with the channels WE watch.  Local channels, Fox News, History, Discovery, AMC, HGTV, several others. 
  • Internet to support full time telecommuting

I already switched the phone service over to Ooma. I bought a Telo and signed us up for Ooma Premier service. That gives us caller ID, voicemail, and unlimited calling in and out. That will reduce the monthly phone service spend from $53.62 (I shit you not, that’s what Cox was charging me) to less than $20 per month, for more service.

Now, next up is cable TV. Cox’s bill comes to a little over $154, including taxes and fees and surcharges.  I could reduce that by about $24 by dropping HBO and Showtime, which suck anyway and we only have because they were included in the discount package that has expired.  Still WELL over $100 a month for, quite frankly, an awful lot of crap.  200+ channels, but of course they include crap we’d never watch in a hundred years just to try to justify the insane price. 

The last time I looked at alternatives like Hulu, Netflix, Sling, etc. — and it was not that long ago — they all fell woefully short of meeting any of our requirements.  We stuck with cable TV simply because there was no other way to watch, for example, The Walking Dead, or Fox News, or Nebraska football games, live.  A few hours or days or a year after the fact, sure.  Or not at all, depending on the service.  And we’d probably need to sign up for several, resulting in a total bill exceeding what we were paying for cable in the first place.  Oh, and get an antenna up that would work for the local channels, since NONE of them covered those.

Well, it seems the picture has changed significantly.  For about $40 a month Hulu will give you all their stuff, plus live TV covering all the channels we watch (BTN for Husker football included, woohoo!) and a DVR service.  It’s worth a try.  We already have Amazon Prime, mostly for the shipping.  The decision to go with a Fire TV Cube was pretty simple.  I received and installed that yesterday, and signed up for a free trial week of Hulu with live TV.  Oh, and as a side benefit…  it looks like this may also negate the need to try and find yet another “universal” remote control, potentially saving another few rubles.

Last night was our first night watching Hulu on the Fire TV Cube. Overall the user interface ranges from “fair, needs improvement” to “frustratingly clunky” to “ridiculously obtuse”. Some of that’s the Fire TV, some is Hulu. It’s bearable, and I hope it improves with future app updates. We also had not one, but THREE screwups while trying to watch live TV. The first was innocuous and not a big deal: we were watching the news, but the program guide listed it as some oddball foreign cartoon name. OK, no big deal. Then we tried watching Vikings on History Channel. Several minutes into the episode it restarted, restarted again, and when we tried to get back to the live stream it switched to some episode of “Forged in Fire”. Horrifically frustrating. 10-15 minutes later we got back to Vikings, but of course missed part of the episode. We’ll have to watch it again.

Then we tried watching another show, “Curse of Oak Island”.  What we got was an old episode of “Stargate SG-1”, which most definitely has not improved with age.  It would have been funny if it were not for the fact that we couldn’t watch the damn show we wanted to watch.

I will say that non-live streams seem to work perfectly, and the video quality seems to be great. And we can watch some channels for hours with zero issues. I chatted with Hulu support today, and the agent says it’s a “known issue” that they’re working to resolve. If they resolve it soon, and completely, we’ll have a winner. If they do not, we’ll need to decide whether we stick with Hulu and adapt (watch things delayed a little), or scrap it and pare our Cox cable back to the minimums and deal with the expense. Or something else entirely.

Once we have a final solution to this question, I’ll post a monthly spend and savings analysis. I think we can probably save about $100 a month, to be honest. I’m glad I don’t own stock in Cox or any other cable company. We’ll still have to use them for Internet access, of course, but who knows how long that will be true?


Weather station reporting

I recently put my AcuRite weather station back up after having it sitting in the garage for a year or so.  I have the Internet Bridge, which recently got a firmware update, and wanted to have it reporting to both AcuRite and Weather Underground.

AcuRite’s site will apparently update WU, but only at 15-minute intervals. And I wanted to collect the data locally as well, so I can feed it into Splunk or some other tool for my own use.

Problem is, the AcuRite gateway box only sends data (via HTTP GET) to one fixed, hard-coded hostname: hubapi.myacurite.com. SO… first we intercept those DNS queries and send them where we want them. In named.conf:

// Match the weather bridge, and only the weather bridge
acl wxbridge-only {
        ip.of.wx.bridge/32;
};

view "wxbridge-view" {
        match-clients { wxbridge-only; };
        // Serve our own zone for AcuRite's hard-coded hostname
        zone "hubapi.myacurite.com" {
                type master;
                file "hubapi.myacurite.com";
        };
};

And the zone file:

$TTL 14400
@               IN      SOA     localhost. dale.botkin.org. (
                                2016081803      ; serial
                                3600            ; refresh
                                3600            ; retry
                                604800          ; expire
                                14400 )         ; minimum TTL

@               IN      NS      localhost.

hubapi.myacurite.com.   IN      A       ip.of.local.server

Now the weather bridge, and ONLY the weather bridge, gets your local machine’s IP address for hubapi.myacurite.com.  So next we create a PHP script and use Apache to point /weatherstation to it (ScriptAlias /weatherstation /var/www/cgi-bin/updateweatherstation.php in my case).  The script sends the original HTTP request to hubapi.myacurite.com, then reformats it and sends it to wunderground.com.  It’s also preserved in the Apache access log, so you can ingest it into Splunk.  You could also syslog it or write it to a file, whatever you want.  I started out using a script I found that Pat O’Brien had written, but ended up rewriting it almost entirely.  It’s been years since I wrote a PHP script.

<?php
 // First send it to AcuRite, no massaging needed...
 $acurite = file_get_contents("http://hubapi.myacurite.com/weatherstation/updateweatherstation?" . $_SERVER['QUERY_STRING']);
 echo $acurite;
 // Now re-format for wunderground.com. We don't always
 // get every parameter, so only send those we do get and
 // strip out those that wunderground won't accept.
 $msg = "";
 $winddir = (isset($_GET['winddir']) ? "&winddir=".$_GET['winddir'] : null);
 $windspeedmph = (isset($_GET['windspeedmph']) ? "&windspeedmph=".$_GET['windspeedmph'] : null);
 $humidity = (isset($_GET['humidity']) ? "&humidity=".$_GET['humidity'] : null);
 $tempf = (isset($_GET['tempf']) ? "&tempf=".$_GET['tempf'] : null);
 $rainin = (isset($_GET['rainin']) ? "&rainin=".$_GET['rainin'] : null);
 $dailyrainin = (isset($_GET['dailyrainin']) ? "&dailyrainin=".$_GET['dailyrainin'] : null);
 $baromin = (isset($_GET['baromin']) ? "&baromin=".$_GET['baromin'] : null);
 $dewptf = (isset($_GET['dewptf']) ? "&dewpointf=".$_GET['dewptf'] : null);
 $msg .= "dateutc=now";
 $msg .= "&action=updateraw";
 $msg .= "&ID=<your weather station ID here>";
 $msg .= "&PASSWORD=<your weather station password here>";
 $msg .= $winddir;
 $msg .= $windspeedmph;
 $msg .= $humidity;
 $msg .= $tempf;
 $msg .= $rainin;
 $msg .= $dailyrainin;
 $msg .= $baromin;
 $msg .= $dewptf;
 // Note: no trailing newline -- $msg is appended directly to the request URL.
 $wunderground = file_get_contents("http://rtupdate.wunderground.com/weatherstation/updateweatherstation.php?".$msg);
 // Let's log any failures with the original message, what we sent,
 // and the response we got:
 if (trim($wunderground) != "success") {
     openlog('weatherupdate', LOG_NDELAY, LOG_USER);
     syslog(LOG_NOTICE, $_SERVER['QUERY_STRING']);
     syslog(LOG_NOTICE, $msg);
     syslog(LOG_NOTICE, $wunderground);
 }
?>
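
A quick way to test the relay by hand is to hit it the same way the bridge does. The parameter values here are invented, but the shape matches what the bridge sends:

# Simulate one bridge update against the local relay (values are made up)
curl 'http://ip.of.local.server/weatherstation/updateweatherstation?id=XXXXXXXX&tempf=72.5&humidity=41&baromin=29.92'

If everything is wired up right, you should see AcuRite’s response echoed back and a matching line in the Apache access log.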

So far it’s been working fine for a couple of days. I have noticed that the AcuRite 5-in-1 station will go for extended periods without sending some data; it seems to send only what has changed, plus a seemingly random assortment of other readings. For example, it may send the barometric pressure even if it hasn’t changed, but not the temperature or wind direction if they’re stable. It’s weird. Of course, now I understand why they only send periodic updates to Weather Underground. AcuRite’s own site seems to mask this behavior, but Weather Underground does not. I’m thinking about keeping a persistent state file and sending every parameter with every update, or collecting updates and just sending WU a digest every minute or two. But that’s a project for another day.

ADS-B followup

Fun stuff…  so I’m playing around with several different aviation apps on my Android tablet, with a Stratux setup just sitting on the window sill of the spare bedroom where it can “see” enough GPS satellites to get a position fix.  I’ve got one SDR radio receiver on it, set up for 1090 MHz to catch transponders in passing aircraft.  I went in to plug the power in to charge the tablet — I’d left it in there overnight — and saw half a dozen targets displayed.  I zoomed in a little and there’s an American flight at 31,000… A Virgin flight headed for Newark…  Hey, wait a minute — one looks familiar!

[Screenshot: the tablet’s traffic display, 2016-03-16]

N151MH – a friend and fellow EAA Chapter 80 member, out in his ADS-B “out” equipped RV-12.  Absolutely beautiful day for it, too!  Have fun, Mike!

Sorry, I don’t read Chinese…

For the past several weeks I’ve been getting a fairly large amount of Chinese-language spam leaking through. Since nearly all of the data (From:, Subject:, etc.) are Chinese characters, my regular Postfix spam filters have not been effective in eliminating it. I finally got tired enough of it to do a little Googling. It turns out to be trivially simple to reject any incoming email with Chinese characters in the subject line:


/^Subject:.*=\?GB2312\?/ REJECT Sorry, this looks like SPAM (C1).
/^Subject:.*=\?GBK\?/ REJECT Sorry, this looks like SPAM (C2).
/^Subject:.*=\?GB18030\?/ REJECT Sorry, this looks like SPAM (C3).
/^Subject:.*=\?utf-8\?B\?[456]/ REJECT Sorry, this looks like SPAM (C4).
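
For anyone wanting to replicate this: those lines live in a pcre header_checks table. Wiring it up in Postfix looks like this, assuming the conventional file locations:

# /etc/postfix/main.cf
header_checks = pcre:/etc/postfix/header_checks

# pcre tables are read directly -- no postmap needed; just reload:
#   postfix reload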

I made the change last night, and this morning came in to find no Chinese spam and several rejects in the mail log… all from pretty obvious spam sources, like this one:

Jul 6 01:12:51 newman postfix/cleanup[30385]: 99EB31A6D3: reject: header Subject: =?utf-8?B?44CQ5Lqk6YCa6ZO26KGM5L+h55So5Y2h5Lit5b+D44CR5bCK6LS155qEZGFpbmlz?=??=?utf-8?B?6I635b6XMTAw5YWD57qi5YyF5aSn56S85rS75Yqo6LWE5qC877yM6aKG5Yiw5bCx5piv6LWa?=??=?utf-8?B?5Yiw?= from spamtitan2.hadara.ps[217.66.226.109]; from=<wkh@p-i-s.com> to=<dale@botkin.org> proto=ESMTP helo=<spamtitan2.hadara.ps>: 5.7.1 Sorry, this looks like SPAM (C4).

Halt and Catch Fire premiere

Last night I watched the first episode of Halt and Catch Fire on AMC.  I wanted to love it, was tempted to hate it, and in the end opted for neither one.

For those of you who don’t know me, I lived through the period in question, and in the same industry…  although not working for TI, or a fictitious Texas OS vendor, or even directly in the PC end of things.  Still, those were some pretty exciting times.  I was fixing mainframes for a living, but lived and breathed microcomputers every day.  When micros first came on the scene (we didn’t call them “PCs” until well into the 80s), it was like the Wild West, in all the good ways.  There was opportunity around every corner.  I would be hard pressed to count the number of companies making computers in the pre-IBM days; some very cool things were being done by a lot of gifted and smart people.  I remember one in particular, a machine made by Ohio Scientific that had multiple processors (6800, 6502 and Z-80 if I remember right) and could boot different operating systems depending on your mood.

Anyway, the first bit of bad news came during the opening scene: a typed-text description of the “HALT AND CATCH FIRE” machine instruction. It’s a simple concept, easy to explain and even a little humorous. And they got it completely wrong. Stupidly wrong, in fact. I felt like a doctor watching Grey’s Anatomy or a cop watching Blue Bloods. Sigh…

It got a little better from there, but there was some really stupid technical nonsense thrown in for no good reason. Something real and believable would have been just as dramatic, or maybe even better. You can’t cut a soda can in half with a pencil soldering iron, and why would you need to in order to fix a Speak & Spell? I especially loved the scene where he’s tediously de-soldering connections on the back of the circuit board, then triumphantly extracts the chip FROM ITS SOCKET. And then of course there is the biggest non sequitur: ALL of the IBM Personal Computer’s schematics, as well as the complete assembler listings for the BIOS, were readily available from IBM in the Model 5150 technical reference manuals that anyone could buy.

So building a clone of the IBM PC was really pretty trivial from an engineering standpoint, and other manufacturers jumped in early and often.  Most tried to build better machines that ran their own version of MS-DOS, and most used the same bus so that expansion cards were interchangeable.  It took a while for the tyranny of the marketplace to grind everyone into making exact clones of the IBM machine, other than some speed improvements and of course much lower prices.

The list of ridiculously stupid technical gaffes is pretty impressive. The scene where they start reading out the BIOS? Well, first off, there were no white LEDs in 1983. You could have any color of LED you wanted as long as it was red, green or yellow. And binary 1101 is a hexadecimal D, not B. PC motherboards don’t arc and spark, and if one did it would be dead, dead, dead. His oscilloscope was displaying a stupidly Hollywood-ized pattern, and why would they need to use one anyway? Could they not read the pinout from a common EPROM data sheet? He’d just finished explaining how all the parts were off-the-shelf common stuff. And why would such a hotshot engineer not rig up an interface to his TRS-80 to read out the BIOS chip? For that matter… why not just type in a few lines of BASIC to read out the BIOS and save it to disk, print it, or display it on screen?

From a technical standpoint the show is senselessly over-dramatized in ways that really spoil a lot of the “geek appeal”. If you know much at all about the technical matter at hand, you’ll spend half your time shaking your head and saying, “Wha?? No…” They did, however, seem to do a fairly decent job of catching the general tone of the period, and the story line (other than the glaring issue of the whole made-up BIOS thing) has potential. I just wish they’d hired an actual technical consultant, or listened to him if they did hire one.