Blog

I've had my Kodi install running on a Raspberry Pi for a while and have been happy with it.  But when a Mac Mini freed up I wanted to move my setup over to it and free up the Pi for other uses.  I was not prepared for the shoddy state of Kodi packaging.

First off, if you're about to install Fedora 25 on a box you want to run Kodi on, stop.  As of this writing there are two places to get Kodi for Fedora: RPMfusion and United RPMs.  You can't enable those repositories concurrently, United RPMs doesn't have any of the tvheadend code, and RPMfusion just generally has more software.  However, RPMfusion currently only has Kodi 17 (Krypton) which is in beta and horribly unprepared for the world (the changes in Kodi 17 have all but neutralized what little good Kodi documentation is out there).

My current recommendation is Fedora 24 with Kodi 16 (Jarvis).  Ubuntu and OpenELEC are also good choices (and of course, Raspbian) but didn't work for my situation.

What you need to know, and what the majority of the Internet apparently has failed to realize, is that Kodi stopped shipping many plugins - such as HTS, the TVHeadend client - and you can't just pull them from a repository.  If you search for instructions for how to enable any of this you're going to find tons of articles, forums, and email list posts basically saying "OMG, just enable it, GAWD".  These people are running some distribution which has already packaged all the necessary stuff.  RPMfusion used to do this for Fedora but stopped after Fedora 22 and Kodi 14.

So where do you begin?  Well, you need to have Fedora 24 installed.  I started with the Cinnamon spin, which means many packages were included (such as gcc and make).  In my case my tuner card is in the same box and I already installed TVHeadend and set it up.  Here's what you want to do next:

$ sudo dnf install kodi kodi-devel kodi-platform-devel git cmake gcc-c++

Now you're going to want to pull down the source for HTS:

$ git clone -b Jarvis https://github.com/kodi-pvr/pvr.hts.git
$ cd pvr.hts
$ mkdir build

Note that we're specifically pulling the "Jarvis" branch.  If you didn't heed my warning and went with Fedora 25 and Kodi 17 you can omit the -b stuff.

Here's where it gets really interesting.  The HTS source refers to the platform library as plain "platform" in its source and headers.  However, the Fedora platform RPM uses p8-platform for its include paths and P8PLATFORM for its namespace.  So we have to update the HTS source code for these differences.

There are three changes we need to make in CMakeLists.txt (presented as a unified diff):

--- CMakeLists.txt.dist	2016-12-04 16:23:21.767653860 -0600
+++ CMakeLists.txt	2016-12-04 16:23:40.781270537 -0600
@@ -6,10 +6,10 @@
 
 find_package(kodi REQUIRED)
 find_package(kodiplatform REQUIRED)
-find_package(platform REQUIRED)
+find_package(p8-platform REQUIRED)
 
 include_directories(${kodiplatform_INCLUDE_DIRS}
-                    ${platform_INCLUDE_DIRS}
+                    ${p8-platform_INCLUDE_DIRS}
                     ${KODI_INCLUDE_DIR}
                     ${PROJECT_SOURCE_DIR}/lib)
 
@@ -95,7 +95,7 @@
 
 add_subdirectory(lib/libhts)
 
-set(DEPLIBS ${platform_LIBRARIES} hts)
+set(DEPLIBS ${p8-platform_LIBRARIES} hts)
 if(WIN32)
   list(APPEND DEPLIBS ws2_32)
 endif()

The remaining changes we can make programmatically:

$ for a in $(egrep -r 'platform|PLATFORM' debian/ lib/ pvr.hts/ src/ | egrep 'include| PLATFORM|\tPLATFORM' | awk -F: '{print $1}' | sort -u); do
    sed -i -e 's,platform/,p8-platform/,g' -e 's,PLATFORM,P8PLATFORM,g' "$a"
  done
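If you're curious what those sed expressions actually do, here's the same pair of substitutions applied to a couple of sample lines (the header path and namespace are just illustrations, not necessarily lines from the real source):

```shell
# Same sed expressions as the loop above, applied to sample lines
echo '#include "platform/threads/mutex.h"' \
  | sed -e 's,platform/,p8-platform/,g' -e 's,PLATFORM,P8PLATFORM,g'
# Prints: #include "p8-platform/threads/mutex.h"

echo 'using namespace PLATFORM;' \
  | sed -e 's,platform/,p8-platform/,g' -e 's,PLATFORM,P8PLATFORM,g'
# Prints: using namespace P8PLATFORM;
```

Note the order matters: the lowercase path is rewritten first, so the uppercase substitution never touches the "platform" in "p8-platform/".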

Now we can compile and install the plugin:

$ cd build
$ cmake -DCMAKE_INSTALL_PREFIX=${HOME}/.kodi/addons -DPACKAGE_ZIP=1 ..
$ make
$ make install

Now you can follow all those guides that say "OMG, just install it, GAWD!"  When you start Kodi the plugin will already be enabled (assuming you built/installed this under the same user as you run Kodi) and you can enable Live TV in the settings.

Fun With Time Lapse

Call me silly but I like to see work being done.  There are many times when I've stood back at the end of a large project with visible results and said "man, I wish I had a recording of that".  I kept saying that I wanted to set up a webcam to take pictures for a time lapse and today I finally did.

The What:

  • Logitech Pro 9000
  • Old laptop running Linux
  • Tripod

The How:

Hook up the webcam to your laptop and point it where you want to take the shots.  I recommend using camorama to get things lined up since it will show you the real-time video.  If you only want to take a shot every minute you could also use it to take the pictures as it has a timer built in.  However, you can't customize your image type or quality.  I chose to use fswebcam to capture my images.  There are at least a dozen different apps out there that will grab images for you so just choose what works best for you.  You can even use FFmpeg to grab images if you'd like.

Once you have your shot framed you'll want to start grabbing images.  I wrote the following simple script to grab images:

/usr/local/sbin/camshot.sh
#!/bin/bash

JPG_PATH="/home/camuser/Pictures/$(date '+%Y.%m.%d-%H:%M:%S').jpg"

fswebcam \
  -q \
  -p YUYV \
  -r 1600x1200 \
  -S 15 \
  --no-banner \
  "$JPG_PATH"

chown camuser: "$JPG_PATH"

You may have to adjust the flags to suit your camera. For example, the -p YUYV and -r 1600x1200 flags will only work on higher-end cameras.

Depending on your setup you may not have to run the image capture as root, but I didn't feel like dinking with it and the old laptop I used has nothing on it but these frame grabs.

For my camera the -S 15 was absolutely critical.  That option instructs fswebcam to discard the first 15 frames after initializing the webcam before saving a shot.  This gave the camera time to adjust the focus and levels on the image.  Without this every image I took was washed out.

If you modify the scheme for naming the images, make sure the files can still be listed in chronological order easily (such as with a simple ls).
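The timestamp format in the script above has that property: the names sort lexically in the same order as chronologically.  A quick check with some made-up filenames:

```shell
# Names in %Y.%m.%d-%H:%M:%S format sort lexically == chronologically
printf '%s\n' \
  '2016.12.04-09:05:00.jpg' \
  '2016.12.03-23:59:59.jpg' \
  '2016.12.04-08:00:00.jpg' | sort
# The 2016.12.03 shot comes out first, as expected
```

This works because the fields run from most significant (year) to least significant (second), all zero-padded.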

Once you have a script that will grab and save an image, schedule it in cron:

# Take webcam snapshots every 10 seconds-ish
* * * * * root /usr/local/sbin/camshot.sh
* * * * * root sleep 10 && /usr/local/sbin/camshot.sh
* * * * * root sleep 20 && /usr/local/sbin/camshot.sh
* * * * * root sleep 30 && /usr/local/sbin/camshot.sh
* * * * * root sleep 40 && /usr/local/sbin/camshot.sh
* * * * * root sleep 50 && /usr/local/sbin/camshot.sh

I chose to take an image every ten seconds.  It's really a matter of preference how often you take images.  My take on it is that you can always discard images but you can't use what you don't have.  So as long as you have the hard drive space, the more the merrier.  Each of my captures was around 250k.
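At those numbers the disk usage is easy to estimate (using my ~250 KB per capture; yours will vary with resolution and scene):

```shell
# Back-of-the-envelope disk usage at 6 shots/minute, ~250 KB each
shots_per_day=$((6 * 60 * 24))
mb_per_day=$((shots_per_day * 250 / 1024))
echo "$shots_per_day shots/day, roughly $mb_per_day MB/day"
# Prints: 8640 shots/day, roughly 2109 MB/day
```

So call it about 2 GB per day of continuous capture - easily a week or two of shooting on even a modest drive.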

After you've gathered your shots you can convert them into a video.  This is where it can get confusing because you have so many options.  I used FFMPEG (avconv), but many other people have had success with mencoder and various ImageMagick tools (and other tools on Windows and OS X).  If you're going the FFMPEG route you first need to rename your images in a sequential order:

$ cd /home/camuser/Pictures
$ mkdir ffmpeg
$ i=1
$ for a in *.jpg; do
>   mv "$a" ffmpeg/$(printf "%04d" $i).jpg
>   ((i++))
> done

This will go through each image and move it to a directory called ffmpeg with a zero-padded name (such as 0001.jpg) in sequential order (assuming you followed the note above regarding your file naming scheme).  You can then spit out a movie with a command such as this:

$ cd ffmpeg
$ avconv -f image2 -i %04d.jpg -r 24 -s uxga -vcodec libx264 ~/Videos/lapse.mp4

The command above generates an MPEG4 video using H.264 encoding at 1600x1200 (the same resolution the pictures were taken at) using 24 frames (pictures) for each second.  If your picture resolution is different you will want to alter the -s uxga flag (you can specify a resolution instead of a name).  Avconv is supposed to match the resolution of the output to that of the input by default, but that was not working for me.  If you want to change the frames per second adjust the -r 24 flag.

The position of the arguments is important in avconv. For example, if you move the -r 24 flag so that it precedes the -i flag then it becomes an input parameter instead of an output parameter.

If you want your video to play slower, decrease your -r value.
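The math behind that is simple: playback duration is just frame count divided by -r.  For example, one hour of capturing at one shot every ten seconds:

```shell
# One hour of capture at one shot per 10 seconds, played back at 24 fps
frames=$((3600 / 10))
echo "$frames frames -> $((frames / 24)) seconds of video"
# Prints: 360 frames -> 15 seconds of video
```

Halving -r to 12 would double that to 30 seconds of (choppier) video from the same captures.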

The wife doctored up my resulting video in Adobe Premiere, but here's the result of our Saturday:

Avconv is an extremely powerful tool.  You can use a command such as this to resize your video:

avconv -i ~/Videos/lapse.mp4 -c:v libx264 -q:v 5 -vf scale=640:-1 ~/Videos/lapse640.mp4

Or if you want to convert it to Flash Video:

avconv -i ~/Videos/lapse.mp4 -c:v libx264 -q:v 5 -vf scale=640:-1 ~/Videos/lapse.flv

Enjoy!

Modern Linux distributions typically have LVM resting between physical storage devices and the file system.  This is great for your OS and other data drives that are installed in your system or otherwise "permanently" attached.  The ext file system is designed to allow resizing as the underlying block devices grow (or are even shrunk in the right scenario).  But this does require that the file system reserve space to allow what's called the "block group descriptor table" to grow.  USB drives don't magically change in size (outside of you messing with partitioning) and because of this don't need to reserve that space.

To get the most file system bang for your buck out of your USB drive use the following mke2fs command:

# mke2fs -m 1 -O ^resize_inode -j -t ext4 /dev/sdz1

Replace /dev/sdz1 with the path to the partition on your USB device.

The -m 1 argument reduces the amount of space reserved for the super user to 1%.

The -O ^resize_inode argument disables the reserving of space for resizing the file system.

Check out the defaults in /etc/mke2fs.conf and man mke2fs for more information about these options and what else gets applied to your file system.
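To put the -m 1 flag in perspective, here's the space reclaimed versus the default 5% reservation on a hypothetical 16 GB stick (the size is just an illustration):

```shell
# Reserved-block savings: default 5% vs. -m 1 on a 16384 MB partition
size_mb=16384
echo "reclaimed: $((size_mb * 5 / 100 - size_mb * 1 / 100)) MB"
# Prints: reclaimed: 656 MB
```

That's over half a gigabyte back on a drive where root-reserved blocks buy you nothing.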

Searching for "os x wireless connection timed out" will bring up hundreds of posts about people having issues getting their Mac's wireless card connected to a new (or just different) router.  Most answers start with the typical knee-jerk "I wanted to be the first post" responses of reboot your router, reboot your Mac, change the wireless channel, check your SSID and passphrase... Because, you know, when stuff that was working minutes ago breaks it's because you typed your password wrong.

Dig through the replies a bit more and you will find insightful advice such as deleting system preferences, creating new network profiles, and clearing out your keychain.  When my sister's MacBook Pro suddenly wouldn't connect to a new router using the same security settings as the old, I started running through the gauntlet of troubleshooting.  I did all the resetting (including my SMC) and rebooting and whatnot just to make sure I wasn't "that guy".  I set static channels, downgraded the speed from N to G to B, and tried various combinations of WPA, WPA2, TKIP, and AES/CCMP.  I deleted every network preference I could find.  The thing just wouldn't work.

Then I remembered an issue I had with my Raspberry Pi a few weeks earlier.  I could get my Pi to associate with the wireless network but it wouldn't get an address from the gateway's built-in DHCP server.  Setting a static address worked fine.  After a couple days of scratching my head I figured out that a hidden SSID was the culprit.  Interestingly enough, un-hiding my SSID also fixed a digital picture frame that had mysteriously stopped connecting to the wireless (even though it was connecting earlier with a hidden SSID).

But alas, the SSID at my mom's house wasn't hidden.  I fired up the Network Preferences again on the MacBook, set a manual IP for the wireless connection, and ta da!  The laptop connected to the network and I was able to browse the Internet.  Even after reverting the settings to return to using DHCP the connection still worked (with an appropriately assigned address from the pool).  I did find one reference of someone fixing this same error message by setting a static assignment on his router for the Mac.

I can't say for sure if the issue was with the Mac or the router.  I tend to lean in the direction of the Mac as it seems to store information about past connections in so many places, and even when you delete these old connections from your settings they are still there in *.plist files on your system.  That and every other device was working just fine, including the Mac when connected via ethernet cable.

There are quite a few great articles on the Internet about how to make your Raspberry Pi into a digital picture frame from a hardware standpoint, but I haven't found a good set of end-to-end instructions for the software side.  I decided to go down that journey myself.  I didn't want to have to keep a copy of my images on a USB drive (or the Pi's SD card).  I just wanted the frame to pull images from our desktop where the wife has already spent time organizing and touching up the photos.

At first I had the Pi doing all the image processing (resizing to make things fit on the display plus a simple web interface) but found that it just couldn't keep up with processing and displaying a new image (from a large-format JPEG) every ten seconds.  The images were frequently appearing corrupted.  So instead I wrote some Python code that runs on my Windows desktop to stage the images for the Pi.  Then all it has to do is grab them and display them.

You can find all of the code and configurations I used along with instructions here:

https://github.com/jbnance/jPiFrame

It's extremely easy to modify this code for your own desires.  If people actually start to become interested in that work I'll put together a proper HOWTO with more details and options.

My Windows 7 desktop has a 75G SSD for a boot drive and a 1TB RAID1 array for a data drive.  That's a lot of stuff to lose if your computer goes tango uniform.  I've been hobbling along with a 1.5TB external drive for backups, but as my data drive fills up my backup drive is having difficulty keeping up.  About once a month Windows will tell me that the backup drive is full (which is funny considering that I have the "Let Windows Manage The Space On The Backup Drive" option selected).  So when Costco had a 4TB USB3 drive for $140 I was all over it.

I should have known I was in for trouble when it was a Seagate drive, though.  I've never had luck with Seagate drives, and my luck didn't change in this case.  At first things looked like they were going great.  I unpacked the drive and plugged it in and it immediately mounted and started working.  But the moment I tried to use Windows Backup I felt nothing but pain.  The backup would run for a while copying files, but at the very end fail with I/O errors every time.  Drop "Windows Backup 0x8078002A" into Google and you have a few days worth of reading to do.  I mean there is tons of information on this subject.

Basically, it boils down to this: Windows Backup doesn't work with the 4K sectors found in Advanced Format (AF) drives.  I tried patches and re-partitioning and changing my disk label to GPT and creating two smaller partitions and sector alignment and on and on and on.  On a side note, I found that the drive came with a hidden 50G partition that required a boot into GParted to blow away.  I know that is a drop in the bucket compared to 4TB, but really, Seagate?  Really?

There are reports that Western Digital released a tool to reconfigure their AF drives to work with Windows Backup, but I cannot confirm those reports and I suspect that they are false.  All of the patches and workarounds thus far have been aimed at getting AF drives to work with Windows in general (as data drives), not at making them work with Windows Backup.  Given that Windows Backup is more than just a simple file copying utility, I don't see any of these hacks changing the fact that Microsoft has flat out said AF drives don't work with it.  It sounds like Windows 8 and Windows Server 2012 will be the first versions to fully embrace AF.