Rolling your own dynamic DNS

First let me acknowledge that there are many ways to accomplish this. An easy solution for me would have been to simply use dyndns.com, no-ip.com, or one of the other commercial DDNS services supported by my router. For various reasons, I decided not to use one of those. Actually I did use duckdns.org for a while, but there were occasional issues that I got tired of dealing with.

I’m currently using Porkbun for DNS. They’re cheap, reliable, and have a decent user interface. They, like many other DNS services, also provide an API to make changes programmatically, without needing to log into their web site and make manual changes.

In my case, I have a shell script that runs as a cron job every 5 minutes. It checks my router for the WAN address and compares it to the last recorded address. If the two are not the same, it emails me and runs a Python script to update DNS.

I realize that some of this is pretty specific to my setup. Still, it might be a useful starting point. I found the Python script to update Porkbun DNS on their web site. The command to check the WAN IP address at the router may work for yours, or you may need to take a different approach.

#!/bin/bash

# Read the old IP address from a file.  Anything after the address lands in a throwaway variable, so OldIP gets only the first field.
read OldIP b < /home/dale/myipaddress.txt
# Get our current IP address from the router.
OUTFILE=~/myipstatus.txt
MyIP=`ssh -o StrictHostKeyChecking=no [username]@[router.ip] "ifconfig eth0 \
| grep inet | sed -e 's/.*addr:\([^ ]*\) .*/\1/'"`

while [[ $MyIP == "" ||  $MyIP == "192.168."*  ]] ; do
 sleep 10
 MyIP=`ssh -o StrictHostKeyChecking=no [username]@[router.ip] "ifconfig eth0 \
 | grep inet | sed -e 's/.*addr:\([^ ]*\) .*/\1/'"`
done

if [  "$OldIP" != "$MyIP" ] ; then
  echo "`date`" > $OUTFILE
  echo "Found new IP $MyIP, which is different from our previous $OldIP!" >> $OUTFILE
  echo "Updating Porkbun DNS entries..." >> $OUTFILE
  python3 ~/porkbun/porkbun-ddns.py ~/porkbun/config.json <mydomain.com> <hostname> >> $OUTFILE
  mail -s "IP address change detected" <myemail@domain.com>  < $OUTFILE
  echo $MyIP > ~/myipaddress.txt
else
 echo -n "." >> $OUTFILE
fi
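
For reference, the cron entry that drives all of this looks something like the following (the path here is hypothetical — use wherever you saved the script):

# m h dom mon dow  command
*/5 * * * * /home/dale/bin/check-wan-ip.sh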

In practice, this can result in a 5-10 minute lag between the time your IP address changes and the time your DNS is updated. If your ISP changes your IP address frequently, it may be too long. In my case, our ISP only changes our IP on rare occasions — typically less than once a year.

Again, there are other approaches, but most will not update DNS entries in your own domain. You can get around this to a certain extent by using CNAME entries, but this was the best way that I found to update my own domain’s DNS.
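
If you do go the CNAME route, the idea is to alias a name in your own zone to a DDNS name, something like this (all names here are made up for illustration):

; In your own domain's zone file: point a host at a DDNS name.
home.mydomain.com.      IN      CNAME   mydomain.duckdns.org.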

Running Web Servers on Residential Internet

Once upon a time, residential home Internet connections — cable modem and DSL being the choices at the time — were unfiltered and un-firewalled. This had good and bad aspects to it. You may or may not have been firewalled off from your neighbors. I remember a guy who worked for me demonstrating on the TV news one night how he could see every Windows PC in his neighborhood, and send print jobs to random people’s printers if he wanted to. Even after they wised up to that little bit of “openness”, it was still possible to run your own services — mail, web, and so on.

Since then ISPs have come a long way. Residential cable, DSL, and fiber connections, often topping out at 1 Gbps or even higher, are tightly restricted. Your ISP really wants to support only web browsing and gaming, and most certainly does not want any services running on their network. No web servers, no email (inbound or outbound). Anything inbound on ports 80, 25, and often 443 is blocked, as is outbound port 25.

So, you’ve got your own little web server you run for your own blog (like this one)… or one you run for a nonprofit, club, whatever. You’ve got your own domain and want to run your own email. The solution is usually some combination of a hosted VM, Google, what have you. But it can get a little expensive, and of course you’re dependent on others for critical bits and pieces of your infrastructure. I can’t take all of that pain away, but I can maybe help to reduce it somewhat.

So let’s look at the issues you may face, and how to solve them. I’ll detail each solution in subsequent blog posts, with solutions that may work for you as they have for me.

  • Your IP address is dynamic, and you need reliable DNS. This can be fixed using a script to detect when your IP address changes, and update your DNS accordingly. It’s not perfect in that there will be a delay before the IP address change is detected and updated, but if your IP only changes occasionally it’s “good enough”. Of course there are dynamic DNS (DDNS) solutions that will do this as well, if you don’t mind paying for them. I’m a cheap bastard and I like a challenge, so I rolled my own.
  • Your ISP blocks connections on port 25 (SMTP). This is pretty much going to require an external mail relay. I have yet to find a way to get the rest of the world to use any port other than 25 for SMTP connections… it really is too bad there’s not a DNS-based way around this, like an SRV record (see RFC 6186). Until that happens, I use a small external hosted VM relaying mail on a different port. It could actually be a lot simpler, but I prefer to keep our actual email on a server here, at my house.
  • Your ISP blocks incoming traffic on web ports 80 and 443. Easy. Nginx is your answer, what was the question? (See the sketch after this list.)
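
As a preview of the web piece: one shape this can take is a reverse proxy running on a small hosted VM with unfiltered ports, passing traffic back to your home server on a port your ISP doesn’t block. A minimal nginx sketch — the hostnames and the 8443 back-end port are placeholders, not my actual setup:

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # Forward to the home server on a non-blocked high port.
        proxy_pass https://home.example.com:8443;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}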

The fun part is sizing this stuff. If you’re used to working in a corporate environment like I have been for the past (mumble) years, you’re thinking, “OK, a 4 CPU 16 GB machine for a mail server, then another one for the proxy… that might be OK… ” Nah. You might be shocked at just how little power it takes to do this stuff. After all, we’re just passing packets around. The TLS encryption is the most heavyweight thing being done, I think. If I had a solid place to hang a Raspberry Pi where it would have a static IP and no filtering of privileged ports, it wouldn’t break a sweat — though I’ve had too many of them just stop working to trust them for this kind of stuff, really.

Using Nomorobo to block calls in Asterisk

Nomorobo is a fantastic service. It’s not perfect; plenty of illegal phone spammers are using throwaway numbers and/or illegally spoofing caller ID numbers to make calls that appear to be from random numbers — usually in your own area code. Short of using a strict whitelist, I don’t see a real way to get rid of those. Using Nomorobo, though, will dramatically cut down on the number of junk calls you will receive.

There’s a little problem, though… while many phone providers offer the service (we’ve been using Ooma), Nomorobo doesn’t appear to offer it to individuals or small businesses who run their own phones.

I ran my own Asterisk PBX for several years, supporting our home phones as well as a separate line I used for work, and even a toll-free number for my side business. Life was good for quite a while, but eventually it got to be quite a hassle trying to keep up with all the junk calls. Then my VOIP carrier changed their pricing, making the service much less attractive from a cost standpoint. Eventually we switched to Ooma. They’ve been good, but not without issues. The Telo Air occasionally loses communication with the mothership, and if you don’t see the red light you won’t know that your phones aren’t working. The cost has gone up, now running over $20 per month for the Ooma Premier, which includes what I consider to be some pretty basic features — like call blocking, for example.

Now we have some family members who need a home phone, but I just can’t bear to see them get roped into paying really stupid monthly costs for a simple phone line. That, and our Ooma service is getting more expensive and (it seems) less reliable by the year. Time to switch back. But how can I keep Nomorobo? It would be a tough sell to do without that!

Well, Twilio to the rescue! They offer a Nomorobo lookup API that costs a tiny amount per lookup — $.003, or 0.3 cents per incoming call lookup. Put another way, that’s 333 lookups per dollar. Not bad; I’ll gladly pay that to avoid taking telemarketing or scam robocalls. Now, if only we could get Nomorobo to list all of the numbers used by political “push polls”, recorded messages, and other political campaign silliness!

Twilio’s call rates are not outrageously high either, and their monthly costs for DIDs (phone numbers) are pretty reasonable. The only thing I’ll fault them on is too much hassle to set up CNAM for your outbound calls, so unless you go through that process everything shows up as the number only with no CID name. Flowroute is MUCH better for this, so I route most of my outbound calls through them.

So — how to get Asterisk to do the lookup? After several hours of playing around with this, I found that it’s pretty easy to do. While it wouldn’t be terribly helpful (or smart) for me to post my entire dialplan here, I’ll include enough to get you going. I put this very near the top of the context I use for incoming calls from PSTN trunks. There’s no sense in burning CPU cycles on a call if you’re just going to drop it anyway.

First, you’ll need a Twilio account. They’re even nice enough to give you some credit on your account if you’re new, and it’s enough for quite a bit of learning and development work. I funded my account so I can use them for international calls — they’re ridiculously cheap for most destinations. They’re also a good solution if you want to get DIDs in countries outside the US.

Once you have a Twilio account established, use your account SID and auth token to set CURLOPT() with your username and password. This will be used in the next line to make the curl call to the API:

same = n,Set(CURLOPT(userpwd)=username:password)

Now, make the call to Twilio’s API to get the spam score. The result is a block of JSON that gets saved as TWILIO_RESULT:

same = n,Set(TWILIO_RESULT=${CURL("https://lookups.twilio.com/v1/PhoneNumbers/${CALLERID(num)}?AddOns=nomorobo_spamscore")})

Since we’ve got a block of JSON, we’ll need to extract the one wee bit we need. Fortunately Asterisk has a solution for that as well, so we don’t need to resort to anything drastic like a shell command:

same = n,Set(SPAMSCORE=${JSON_DECODE(TWILIO_RESULT,add_ons.results.nomorobo_spamscore.result.score)})

Now we use that result to branch to a hangup if the score says it’s spam:

same = n,GotoIf($["${SPAMSCORE}" = "1"]?dropcall)

Later in the dialplan, after we’ve done the whole “call the user, drop to voicemail if they don’t answer, yadda yadda yadda” we have this:

same = n(dropcall),Hangup(21)

The Hangup(21) tells the caller that their call was rejected — cause code 21 is “call rejected”. There are other, even more creative codes to use… like these (list courtesy of voip-info.org):

  • 1 – Unallocated number
  • 22 – Number changed
  • 27 – Destination out of order
  • 38 – Network out of order
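
Putting the pieces together, the whole check drops into the top of the incoming context as just a few lines. A rough sketch — the context name, extension pattern, and everything from the Dial() on down are placeholders for whatever your dialplan already does:

[from-pstn]
exten => _X.,1,NoOp(Inbound call from ${CALLERID(num)})
 same = n,Set(CURLOPT(userpwd)=ACCOUNT_SID:AUTH_TOKEN)
 same = n,Set(TWILIO_RESULT=${CURL("https://lookups.twilio.com/v1/PhoneNumbers/${CALLERID(num)}?AddOns=nomorobo_spamscore")})
 same = n,Set(SPAMSCORE=${JSON_DECODE(TWILIO_RESULT,add_ons.results.nomorobo_spamscore.result.score)})
 same = n,GotoIf($["${SPAMSCORE}" = "1"]?dropcall)
 same = n,Dial(${HOMEPHONES},25)      ; normal call handling goes here...
 same = n,Voicemail(100@default)      ; ...voicemail, and so on
 same = n,Hangup()
 same = n(dropcall),Hangup(21)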

Letsencrypt, Duckdns, and Cox

Like some other ISPs, Cox blocks all incoming access to port 80 on residential connections. They also use DHCP to assign dynamic IP addresses, which can and do change occasionally — especially when you reboot your router. That’s fine in most cases, but can be a real pain in the ass if you run any local services that you need to access from outside the home. For example, if you run your own email and want to use IMAP, you’re likely going to need an SSL certificate. You need a way to have your DNS update to point to your new IP when it changes.

One way to do all of this without paying subscription fees is with Duckdns and Letsencrypt. Duckdns is a free DNS service with an easy to use API that can be updated by a script when your IP address changes. Letsencrypt is a free SSL certificate CA; I can’t say enough good things about Letsencrypt and encourage you to support them with a donation as I have.

So. First we can use cron to run a command that updates our duckdns IP address every ten minutes or so.

echo url="https://www.duckdns.org/update?domains={my_domain}&token={my_token}&ip=" | curl -k -o ~/duck.log -K -

Simple, right? Now we have a hostname that always points to our own home IP address – or at least always does within ten minutes of an IP address change, which is probably good enough for most purposes.

Now for the SSL certificate. Letsencrypt will happily issue a free 90-day SSL cert for your domain. Normally, one runs a script from cron that renews the certificate if the cert is expiring in less than 30 days. IF you can expose port 80 to the web, even temporarily, then life is good — just run ‘certbot renew‘ once a day, or even once a week, and everything happens for you in the background. If, however, your ISP filters port 80… well, there’s the pain-in-the-ass part. The certbot renew script will only work if you have port 80 open to the web. I haven’t found a way to get Letsencrypt’s server to use any other port to reach your web server, so forwarding a non-blocked port (8880, for example) to your local server’s port 80 does you no good.

All is not lost; it just means a bit more work. Letsencrypt will also issue certificates using DNS challenges for authentication, placing specific TXT records to prove that you have control of the domain or subdomain in question. The process looks like this:

certbot certonly --manual --preferred-challenges dns -d example.com -d example-com.duckdns.org

The certbot script will tell you to create TXT records in DNS for your domain, and will wait for you to do so before proceeding. You can use your DNS provider’s web or API interface to add or change the TXT record accordingly. Duckdns now supports TXT records in addition to A records, and updating yours is simple:

curl 'https://www.duckdns.org/update?domains={my_domain}&token={my_token}&txt={my_txt}&verbose=true'

Once you’ve verified that the TXT records are there using, say, ‘dig _acme-challenge.{my_domain}.duckdns.org TXT‘ — simply hit ENTER to let the script finish. You should end up with a renewed SSL cert.

My previous ISP didn’t block port 80, so I never had to do any work at all for this. I ran the ‘certbot renew’ command from cron once a day, and it automatically updated the certs for me. Now that port 80 is no longer an option, I will need to manually renew the certificate every 90 days. I’ll actually do it at around 75 days, because Letsencrypt helpfully sends out emails to let you know when your certificate is within 15 days of its expiration.
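
One more thought for later: certbot can run a script for you at challenge time via --manual-auth-hook, which should automate the whole dance. I haven’t wired this up yet, so treat it as a sketch — the hook path and the token file are my own inventions:

#!/bin/bash
# Certbot exports CERTBOT_VALIDATION containing the TXT value it wants published.
TOKEN=$(cat ~/.duckdns-token)
curl -s "https://www.duckdns.org/update?domains={my_domain}&token=${TOKEN}&txt=${CERTBOT_VALIDATION}&verbose=true"
# Give the record a moment to propagate before certbot verifies it.
sleep 60

Renewal would then be a single command, suitable for cron:

certbot certonly --manual --preferred-challenges dns --manual-auth-hook /path/to/duckdns-hook.sh -d example-com.duckdns.org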

A year’s worth of updates

Time flies when you’re ignoring a blog, right? I’ll catch up.

  • The Mercedes is gone. After everything I’d fixed on it, when the transmission decided it didn’t want to work reliably any more — screw it, I was done. It was an awesome car to drive, but not so much fun to own. I replaced it with a much newer 2018 BMW 540i xDrive, which has been wonderful.
  • Still flying occasionally, but nowhere near as much as I should or want to.
  • Nothing’s happened with the Mustang, other than getting the engine put back together.
  • We’ve picked up a couple more rental houses; that enterprise is going pretty well overall.
  • We switched from Visible to T-Mobile. Visible had great service when we signed up; it slowly degraded to barely usable. TMO has been better, but not great.
  • I just dumped CenturyLink. Our CenturyLink fiber service has been down since Wednesday morning (it’s Friday now). It took me three hours to get through to a human there, on the phone, who told me they could have someone out Saturday morning. Absolutely appalling service. We were up and running on Cox within an hour of leaving the house to go pick up their equipment.
  • Now I remember why I didn’t like Cox’s equipment… zero flexibility, no control over your own local network at all. You can’t even set your own DNS, so my Pi-Hole is not functional. I’ve got new equipment coming this afternoon. New cable modem, router, and mesh wifi.
  • I left my long time employer (a bank) a little over a year ago and now work for another bank.

Moved.

I’ve moved the blog to a new web service… one of the AWS virtual server offerings. So far, so good… and dirt cheap.

The sad state of application programming

The Hulu app froze again yesterday and required a force stop. We had another episode of the house phones (the Panasonic DECT6.0 cordless set) not seeing the line from the Ooma, and the Ooma Telo box needed to be power cycled to fix it. I’ve had to power-cycle the Fire TV Cube a couple of times since I installed it a couple of weeks ago. It seems that the Fire TV Cube and the Ooma box will just need regular power cycles to keep them from hanging. This kind of stuff is becoming more and more common… apps are stable for a few hours or a few days, but past that your chances of things working as they should decline rapidly.

I think software development is really being taken over by people who are only marginally competent. You probably know the type. They’ve been to all the classes, got the degrees, can write the code, but really don’t understand how things work, and their code is functional only under ideal conditions. I work with these types daily. They’re unable to think about what happens when things don’t work exactly as they should. The typical conversation consists of me asking one of them what happens when X breaks, which results in a puzzled look. X isn’t supposed to break, you see, and if it does then X is at fault and should be fixed. It never occurs to them to allow for X breaking as a known possibility. Problem is, the guy who wrote X is also a marginally competent idiot, so in the end everything breaks and no one understands why.

We seem to be accepting this as the norm. I talk to people a generation younger than myself and either they are incredibly lucky, or I’m incredibly unlucky, or I’m the only one in the world that ever has an application misbehave. They seem to just accept it as normal and move on. A quick power cycle, a quick reboot, force stop and move on, whatever. As do I, but I do notice it. I can remember when applications being unstable was not unusual, but everyone understood that it was a problem and something to be fixed. Now it just seems that no one cares. OK, if we’re talking about some time sucking game, I don’t care either… but we’re not. We’re talking about systems that should be at least as reliable as what they replace, but turn out to be a pile of crap. I can’t count how many working hours are wasted on bad phone connections, twitchy chat sessions breaking, crappy remote meeting sessions, and slipshod work by people who should know better.

Cord cutting update

Well, we’ve been watching Amazon Prime and Hulu Live for a week now. We have not yet needed to switch back to cable, which is good. It has not been quite the seamless transition one would hope for, but it’s not a complete pain in the ass either. Compared to watching cable, it’s a lot more labor intensive. Lots of button pushing, menu navigating, and we seem to have a disruption of some sort on average at least once a night. Wrong video streams, app crashes, Fire TV reboots, etc. It may not be a deal breaker, but then again it may be. It certainly is a pain in the ass.

My short take on it is, this whole thing is great. Or it would be, if the apps were written by people who actually gave a damn whether things actually worked for more than a few hours at a time. I’ve started doing a power-on reset of my Ooma box once a week to keep it from wandering off the path of righteousness; it looks like the Fire TV Cube may need that once a day or so. Unfortunately, there is no way to force reboot either one remotely so it turns into me remembering to go unplug the stupid things.

Here’s the good, the bad, and the ugly so far…

The Good:

  • The shows we watch are automatically recorded, so we can watch them whenever we please.
  • Video and audio quality seem to be very good. I haven’t tried any lower quality settings to see how it impacts things.
  • So far, I don’t think we have found any of our shows that we can’t watch.

The Bad:

  • Navigation is just clunky, there’s no other way to describe it. There’s lots of button pushing, and you have to be careful of lag and slow response.
  • Different apps for different shows. Amazon Prime for Jack Ryan and a couple of others, Hulu for most things. Not a huge deal, but integration could certainly be better.
  • Data burn. We’re on a 1 TB/month plan. We had been using 2-5 GB/day; now we’re hitting peaks of 25 GB or more. Average seems to be around 15 GB/day, which is still OK… but we’ll actually need to pay attention to our data usage, which is not ideal. Obviously streaming video is going to burn bandwidth; this was not unexpected.

The Ugly:

  • Alexa commands are a joke. Tell Alexa “Tune Discovery on Hulu”… no dice, Alexa says Hulu can’t find that channel. We use the remote for most everything.
  • The Hulu app is not what I would call stable. I have started force terminating it once a day, just to keep it from crashing at inopportune times.
  • The Fire TV Cube is also not what I would call stable. Roughly every other night or so, it will just spontaneously crash and reboot in the middle of a show.
  • Hulu’s inexplicable and stupid lack of a program guide. It’s idiotic, there’s really no other way to describe it. Guys, you’re selling this as a LIVE TV service, why not act like it and put up a damned program guide?
  • Occasionally, our sound bar will simply power itself off in the middle of watching something. What turned it off? Why? No indication, it’s a mystery. And of course, that means you have to grab another damned remote… unless you tell Alexa to turn the sound bar on, which Alexa will, and then you lose the audio stream from the Hulu app.



Cutting the cord? Or part of it…

So the Cox bill has been getting out of control.  After the latest package deal ran out, the bill bumped up to nearly $240 per month, mostly for crap (in the form of TV channels and phone features) that we don’t want.  That’s a ton of money.

The requirements are:

  • Landline with caller ID
  • Live TV with the channels WE watch.  Local channels, Fox News, History, Discovery, AMC, HGTV, several others. 
  • Internet to support full time telecommuting

I already switched the phone service over to Ooma.  I bought a Telo and signed us up for Ooma Premier service.  That gives us caller ID, voicemail, and unlimited calling in & out.  That will reduce the monthly phone service spend from $53.62 (I shit you not, that’s what Cox was charging me) to less than $20 per month — for more service.

Now, next up is cable TV. Cox’s bill comes to a little over $154, including taxes and fees and surcharges.  I could reduce that by about $24 by dropping HBO and Showtime, which suck anyway and we only have because they were included in the discount package that has expired.  Still WELL over $100 a month for, quite frankly, an awful lot of crap.  200+ channels, but of course they include crap we’d never watch in a hundred years just to try to justify the insane price. 

The last time I looked at alternatives like Hulu, Netflix, Sling, etc. — and it was not that long ago — they all fell woefully short of meeting any of our requirements.  We stuck with cable TV simply because there was no other way to watch, for example, The Walking Dead, or Fox News, or Nebraska football games, live.  A few hours or days or a year after the fact, sure.  Or not at all, depending on the service.  And we’d probably need to sign up for several, resulting in a total bill exceeding what we were paying for cable in the first place.  Oh, and get an antenna up that would work for the local channels, since NONE of them covered those.

Well, it seems the picture has changed significantly.  For about $40 a month Hulu will give you all their stuff, plus live TV covering all the channels we watch (BTN for Husker football included, woohoo!) and a DVR service.  It’s worth a try.  We already have Amazon Prime, mostly for the shipping.  The decision to go with a Fire TV Cube was pretty simple.  I received and installed that yesterday, and signed up for a free trial week of Hulu with live TV.  Oh, and as a side benefit…  it looks like this may also negate the need to try and find yet another “universal” remote control, potentially saving another few rubles.

Last night was our first night watching Hulu on the Fire TV Cube.  Overall the user interface ranges from “fair, needs improvement” to “frustratingly clunky” to “ridiculously obtuse”.  Some of that’s the Fire TV, some is Hulu.  It’s bearable, and I hope it improves with future app updates.  We also had not one, but THREE screwups while trying to watch live TV.  The first was innocuous and not a big deal — watching the news, but the program guide listed it as some oddball foreign cartoon name.  OK, no big deal.  Then we tried watching Vikings on History Channel.  Several minutes into the episode it restarted, restarted again, and when we tried to get back to the live stream it switched to some episode of “Forged in Fire”.  Horrifically frustrating.  10-15 minutes later we got back to Vikings, but of course missed part of the episode.  We’ll have to watch it again.

Then we tried watching another show, “Curse of Oak Island”.  What we got was an old episode of “Stargate SG-1”, which most definitely has not improved with age.  It would have been funny if it were not for the fact that we couldn’t watch the damn show we wanted to watch.

I will say that non-live streams seem to work perfectly, and the video quality seems to be great.  And we can watch some channels for hours with zero issues.  I chatted with Hulu support today, and the agent says it’s a “known issue” that they’re working to resolve.  IF they resolve it soon, and completely, we’ll have a winner.  If they do not, we’ll need to decide whether we stick with Hulu and adapt (watch things delayed a little), or scrap it and pare our Cox cable back to the minimums and deal with the expense.  Or something else entirely. 

Once we have a final solution to this question, I’ll post a monthly spend and savings analysis.  I think we can probably save about $100 a month, to be honest.  I’m glad I don’t own stock in Cox or any other cable company.  We’ll still have to use them for Internet access, of course, but who knows how long that will be true?


Weather station reporting

I recently put my AcuRite weather station back up after having it sitting in the garage for a year or so.  I have the Internet Bridge, which recently got a firmware update, and wanted to have it reporting to both AcuRite and Weather Underground.

AcuRite’s site will apparently update WU, but only at 15 minute intervals.  And, I wanted to also collect the data locally so I can feed it into Splunk or some other tool for my own use.

Problem is, the AcuRite gateway box only sends data (via HTTP GET) to one fixed hostname that’s hard coded, hubapi.myacurite.com.  SO…  first we intercept those DNS calls to send them where we want them.  In named.conf:

acl wxbridge-only {
        ip.of.wx.bridge/32;
};

view "wxbridge-view" {
        match-clients { wxbridge-only; };
        zone "hubapi.myacurite.com" {
                type master;
                file "hubapi.myacurite.com";
        };
};

And the zone file:

$TTL 14400
@               IN      SOA     localhost. dale.botkin.org. (
                                2016081803
                                3600
                                3600
                                604800
                                14400 )

@               IN      NS      localhost.

hubapi.myacurite.com.   IN      A       ip.of.local.server
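
To sanity-check the split view, you can temporarily add a test machine to the wxbridge-only ACL and query from it — something along these lines (placeholder addresses, like the rest of this config):

# Run from a host matched by the ACL; should return your local server's IP.
dig @ip.of.your.nameserver hubapi.myacurite.com A +short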

Now the weather bridge, and ONLY the weather bridge, gets your local machine’s IP address for hubapi.myacurite.com.  So next we create a PHP script and use Apache to point /weatherstation to it (ScriptAlias /weatherstation /var/www/cgi-bin/updateweatherstation.php in my case).  The script sends the original HTTP request to hubapi.myacurite.com, then reformats it and sends it to wunderground.com.  It’s also preserved in the Apache access log, so you can ingest it into Splunk.  You could also syslog it or write it to a file, whatever you want.  I started out using a script I found that Pat O’Brien had written, but ended up rewriting it almost entirely.  It’s been years since I wrote a PHP script.

<?php
 // First send it to AcuRite, no massaging needed...
 $acurite = file_get_contents("http://hubapi.myacurite.com/weatherstation/updateweatherstation?" . $_SERVER['QUERY_STRING']);
 echo $acurite;
 // Now re-format for wunderground.com. We don't always
 // get every parameter, so only send those we do get and
 // strip out those that wunderground won't accept.
 $msg = "";
 $winddir = (isset($_GET['winddir']) ? "&winddir=".$_GET['winddir'] : null);
 $windspeedmph = (isset($_GET['windspeedmph']) ? "&windspeedmph=".$_GET['windspeedmph'] : null);
 $humidity = (isset($_GET['humidity']) ? "&humidity=".$_GET['humidity'] : null);
 $tempf = (isset($_GET['tempf']) ? "&tempf=".$_GET['tempf'] : null);
 $rainin = (isset($_GET['rainin']) ? "&rainin=".$_GET['rainin'] : null);
 $dailyrainin = (isset($_GET['dailyrainin']) ? "&dailyrainin=".$_GET['dailyrainin'] : null);
 $baromin = (isset($_GET['baromin']) ? "&baromin=".$_GET['baromin'] : null);
 $dewptf = (isset($_GET['dewptf']) ? "&dewpointf=".$_GET['dewptf'] : null);
 $msg .= "dateutc=now";
 $msg .= "&action=updateraw";
 $msg .= "&ID=<your weather station ID here>";
 $msg .= "&PASSWORD=<your weather station password here>";
 $msg .= $winddir;
 $msg .= $windspeedmph;
 $msg .= $humidity;
 $msg .= $tempf;
 $msg .= $rainin;
 $msg .= $dailyrainin;
 $msg .= $baromin;
 $msg .= $dewptf;
 $msg .= PHP_EOL;
 $wunderground = file_get_contents("http://rtupdate.wunderground.com/weatherstation/updateweatherstation.php?".$msg);
 // Let's log any failures with the original message, what we sent,
 // and the response we got:
 if (trim($wunderground) != "success") {
     openlog('weatherupdate', LOG_NDELAY, LOG_USER);
     syslog(LOG_NOTICE, $_SERVER['QUERY_STRING']);
     syslog(LOG_NOTICE, $msg);
     syslog(LOG_NOTICE, $wunderground);
 }
?>

So far it’s been working fine for a couple of days. I have noticed that the AcuRite 5-in-1 station will go for extended periods without sending some data – it seems to send only what has changed, plus a seemingly random assortment of other readings.  For example, it may send the barometric pressure even if it hasn’t changed, but not the temperature or wind direction if they’re stable.  It’s weird.  Of course now I understand why they only send periodic updates to Weather Underground.  AcuRite’s own site seems to mask this behavior, but Weather Underground does not.  I’m thinking about keeping a persistent state file and sending every parameter with every update, or collecting updates and just sending WU a digest every minute or two.  But that’s a project for another day.
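
For whenever I get around to it, the state-file idea would look roughly like this: merge each update into a saved array, so the full set of parameters is always on hand to send to WU. A sketch, with an arbitrary file path:

<?php
 // Merge the parameters from this update into a persistent state file.
 $statefile = '/var/tmp/wxstate.json';
 $state = file_exists($statefile) ? json_decode(file_get_contents($statefile), true) : array();
 if (!is_array($state)) { $state = array(); }  // handle a missing or corrupt file
 foreach (array('winddir','windspeedmph','humidity','tempf','rainin','dailyrainin','baromin','dewptf') as $p) {
     if (isset($_GET[$p])) {
         $state[$p] = $_GET[$p];
     }
 }
 file_put_contents($statefile, json_encode($state));
 // $state now holds the last known value of every parameter,
 // ready to be formatted into the wunderground.com query string.
?>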