Category Archives: Computers

reading, ejecting, ripping and polling dvd devices in linux; also, notes on my dvd library

It has been a looong time since I both posted in my blog and worked on my custom DVD ripper scripts.  Apparently the last time I worked on the code was last June, and even then I didn’t make many updates.

I’ve been spurred onto building up my DVD library again by a couple of things. First, I realized that my Blu-ray player has  support for Matroska videos with VobSub and SRT subtitle support!  I was not expecting that.  In fact, it’s way better than what my PS3 can playback, which is … depressing.

I put away my HTPC about two years ago, when I was living in my previous apartment.  I moved into a place that was probably about 550 square feet.  Pretty tiny, and I liked it, but no room for a fantabulous multimedia setup.  So I sacked it for a while and was okay with that.  The fact is I actually spent more time getting it up and running and customizing it than using it.  Which is weird.  Actually I spent even *more* time ripping the DVDs and then not watching them.  But that’s okay.  It wasn’t until recently that I found a setup I think I’d like even more.

For now, I’m preferring having *less* hardware, and so just sticking a small 8GB USB thumb drive in my Blu-ray player with a smattering of samples of shows suits me just fine.  It’s no amazing thundershow of hardware and multimedia, but it *does* get me actually watching the content, so there.  I imagine if (and when) I have a house where I can properly get loud without upsetting neighbors, that’s when I’ll whip the big speakers back out and deck it out properly.  Some day. :)

In the meantime, today, I've been working on my DVD scripts.  I call it dart for "dvd archiving tool."  It's a complex set of scripts that I've been putting together for years, and it is highly customized for my own setup, with a CLI tool to read and access DVDs, then archive them in a database.  I also have a web frontend that I use to tag tracks, titles, episodes, and so on.  If it wasn't so unwieldy I'd throw the source out there, but the thought of having to explain to *anyone* how to get it up and running makes my head hurt.  So, if you want a good DVD ripper, here's my advice: use Handbrake.

One problem I was trying to solve tonight was checking for these three statuses of my DVD drive: is the tray open, is the tray closed, is there media in the tray (while closed).  I have to use different tools for each one, but the problem that I always run into is this: it’s impossible (as far as I have been able to discover) to know when a DVD tray is both closed and ready to access.

The problem is that you can run eject just fine to close the tray, but once the command exits successfully, that doesn't mean the drive can be accessed.  That is, if you run "eject -t /dev/dvd" and then "mplayer dvd://" in sequence, mplayer will complain that there's no DVD device.

What's the solution to all this?  Well, wait four seconds after running "eject."  That's simple, but I still spent hours today trying to find out if there was another way to do it.  While I never found one (and ended up using 'sleep'), I did find some cool stuff for polling and reading DVD devices.
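For what it's worth, this is roughly the sequence my scripts end up using (a minimal sketch; the four-second pause is just what has worked on my hardware, not a magic number):

#!/bin/sh
# close the tray, then give the drive a few seconds to spin up and settle
# before anything tries to read from it
eject -t /dev/dvd
sleep 4
mplayer dvd:// -dvd-device /dev/dvd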

blockdev

blockdev basically displays some interesting information about the block devices — in this case, /dev/dvd.  Now, for my library, one thing I have been doing lately is storing the size of the DVD in my database, so I can get an accurate number of how much HDD space I need when I want to archive the UDF or rip it.

You can use blockdev to get the size in bytes like this:

blockdev --getsize64 /dev/dvd

Now if you want to see that in kilobytes, just divide it by 1024:

expr `blockdev --getsize64 /dev/dvd` / 1024
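And if you'd rather see megabytes in one shot, awk can do the math (just a convenience one-liner along the same lines):

blockdev --getsize64 /dev/dvd | awk '{printf "%d MiB\n", $1 / 1048576}'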

udisks

Next up is udisks, which can get information about the DVD device itself.  In this instance, I use it to see if there is media (a DVD) in the tray or not.

Running "udisks --show-info /dev/dvd" spits out all kinds of interesting information, but what I'm looking for is the "has media" field.

udisks --show-info /dev/dvd | grep "has media"
has media:                   1 (detected at Wed Jul  3 23:21:23 2013)

Now, that will say 1 *if* both the disc tray is closed and there is something in there — and if the drive has settled down enough for the command to work (again, sleep 4 seconds after closing the tray).

It will display a zero if there is no media *or* if the DVD tray is open.  Here’s a simple command to get just the number:

udisks --show-info /dev/dvd | grep "has media" | awk '{print $3}'

cddetect

This is an old small command-line tool I’ve used in the past.  It polls the drive to see if there’s something in there or not, and if the tray is open or not.  Sounds great, right?  It should do everything I want, solving all my problems … except that it doesn’t build on my system (Ubuntu 12.10 with gcc 4.7.2).  It used to, on my older setup, which would have been about 2.5 years ago.

It's just a small C program, just over 500 lines; you can find it here on Freshmeat.  If someone wants to patch it to get it working, I'll personally deliver you a plate of brownies.  Mmmm, brownies.

I actually *do* have an old 64-bit binary that I built way back when, because I kept a copy of my old development filesystem.  So I have a working blob, but it only half works: I can use it if the tray is open, or if the tray is closed and empty, but if I run it with a disc in there, it pukes on me.  So the way I check whether a device is empty and closed in my script is to first poll the drive with udisks to see if it has media, and only if it doesn't, run cddetect.  It's a hack, I know, but whatever.
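Spelled out as a script, the workaround looks something like this (a rough sketch — the cddetect invocation is a placeholder, since the exact arguments depend on how it was built):

#!/bin/sh
DEVICE=/dev/dvd

# ask udisks first: a "1" means the tray is closed and there is a disc in it
HAS_MEDIA=$(udisks --show-info "$DEVICE" | grep "has media" | awk '{print $3}')

if [ "$HAS_MEDIA" = "1" ]; then
	echo "tray closed, disc present"
else
	# no media reported: the tray is either open, or closed and empty,
	# which are the only cases the old cddetect binary handles for me
	cddetect "$DEVICE"    # placeholder invocation; check its usage output
fi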

qpxtool and readdvd

This is the project I ran into today, and I am super, super excited about it.  The QpxTool project is full of way cool little utilities for accessing your drive settings.  Honestly, I didn't look at the other ones, because I was so hyperfocused on 'readdvd'.

From the man page, "readdvd reads even a corrupted dvd and writes the result into a new image file on your harddisk."  This is awesome, because it's the first utility I've found *specifically* for creating an exact image of a DVD filesystem (UDF).  In the past, I've always used dd, but now I'm onto this one.  It skips over bad sectors and gets the image squeaky clean off of there, and I could not be happier.  This one ranks up there with Handbrake in both awesomeness and must-have-ness.  I should add that it's also in Ubuntu's default repos, so have fun.

Just run “readdvd -o movie.iso /dev/dvd”.  Pretty simple.

That’s pretty much it for now.  There are other great tools out there: lsdvd also ranks in the “must have” category.  I couldn’t do anything without it.

I mentioned dd earlier, and I actually use pv with it to give me a nice progress bar (also in Ubuntu repos).  It works just fine; I've been using this approach for years.

pv -ptre /dev/dvd | dd of=movie.iso

One more thing I wanted to mention.  Sometimes, some errors get thrown to the syslog because either an application or the DVD drive itself is being fussy.  I haven't quite narrowed down which it is, but I'm betting it's the firmware on the DVD drive complaining, since some brands (Memorex) complain and some do not (BenQ).  By far, the best-quality DVD drive I've had to date was actually a Sony BD-ROM drive.  At least, I think it was Sony.  Here are some of the errors I get sometimes:

Jul 3 18:37:04 localhost kernel: [11955.073772] sr 0:0:0:0: [sr0]
Jul 3 18:37:04 localhost kernel: [11955.073784] sr 0:0:0:0: [sr0]
Jul 3 18:37:04 localhost kernel: [11955.073795] sr 0:0:0:0: [sr0]
Jul 3 18:37:04 localhost kernel: [11955.073808] sr 0:0:0:0: [sr0] CDB:
Jul 3 18:37:04 localhost kernel: [11955.073826] end_request: I/O error, dev sr0, sector 4096
Jul 3 18:37:04 localhost kernel: [11955.074286] Buffer I/O error on device sr0, logical block 512

To avoid issues like this, I run a small command to just decrypt the CSS on the DVD so it can kind of clear its head a bit.  Just run mplayer on it, reading about 60 frames (roughly 2 seconds' worth of video), and dump the output to nowhere.  The whole point of it is to decrypt the DVD, and move on with your life.  And here you are:

mplayer dvd:// -dvd-device /dev/dvd -frames 60 -nosound -vo null -noconfig all

I don’t pretend to understand how or why that helps, but I know it does.  If someone knows why the drives are doing that, I’d love to know.

The only other app I can think of right now off the top of my head is ‘dvdxchap’, which is part of ‘ogmtools’.  I know ogmtools is old, and the OGM container isn’t popular anyway (that I’ve seen), but it’s perfect for getting the chapter information out.  Although I may use something else now (lsdvd?).  I can’t remember, and I haven’t had to mess with chapters lately.
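If it helps anyone, pulling the chapters looks something like this (the title number is just an example — use whichever title you're actually ripping):

# dump chapter timestamps for title 1 of the disc
dvdxchap -t 1 /dev/dvd > chapters.txt

# lsdvd can show per-title chapter information as well
lsdvd -c /dev/dvd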

That’s it for me.  Have fun, rip away, and watch some cool Super Friends DVDs.  There are a LOT of seasons out there.  It’s great. :)


Filed under bend / dvd2mkv, MPlayer, Multimedia

znurt.org cleanup

So, I finally managed to get around to fixing the backend of znurt.org so that the keywords would import again.  It was a combination of the portage metadata location moving and a small set of sloppy code in part of the import script that made me roll my eyes.  It's fixed now, but the site still isn't importing everything correctly.

I've been putting off working on it for so long, just because it's a hard project to get to.  Since I started working full-time as a sysadmin about two years ago, my hobby of tinkering with computers has pretty much died off.  My attitude shifted from "this is fun" to "I want this to work and not have me worry about it."  Comes with the territory, I guess.  Not to say I don't have fun — I do a lot of research at work, either related to existing projects or new stuff.  There's always something cool to look into.  But then I come home and I'd rather just focus on other things.

I got rid of my desktops, too, because soon afterwards I didn't really have anything to hack on.  Znurt went down, and I didn't really have a good development environment anymore.  On top of that, my interest in the site had waned, and the whole thing just adds up to a pile of indifference.

I contemplated giving the site away to someone else so that they could maintain it, as I’ve done in the past with some of my projects, but this one, I just wanted to hang onto it for some reason.  Admittedly, not enough to maintain it, but enough to want to retain ownership.

With this last semester behind me, which was brutal, I’ve got more time to do other stuff.  Fixing Znurt had *long* been on my todo list, and I finally got around to poking it with a stick to see if I could at least get the broken imports working.

I was anticipating it would be a lot of work, and hard to find the issue, but the whole thing took under two hours to fix.  Derp.  That’s what I get for putting stuff off.

One thing I've found interesting in all of this is how quickly my memory of working with code (PHP) and databases (PostgreSQL) has come back to me.  At work, I only write shell scripts now (bash) and we use MySQL across the board.  Postgres is an amazing database, and even after not using it regularly in a while, it all comes back to me.  I love that database.  Everything about it is intuitive.

Anyway, I was looking through the import code, and doing some testing.  I flushed the entire database contents and started a fresh import, and noticed it was breaking in some parts.  Looking into it, I found that the MDB2 PEAR package has a memory leak in it, which kills the scripts because it just runs so many queries.  So, I’m in the process of moving it to use PDO instead.  I’ve wanted to look into using it for a while, and so far I like it, for the most part.  Their fetch helper functions are pretty lame, and could use some obvious features like fetching one value and returning result sets in associative arrays, but it’s good.  I’m going through the backend and doing a lot of cleanup at the same time.

Feature-wise, the site isn’t gonna change at all.  It’ll be faster, and importing the data from portage will be more accurate.  I’ve got bugs on the frontend I need to fix still, but they are all minor and I probably won’t look at them for now, to be honest.  Well, maybe I will, I dunno.

Either way, it's kinda cool to get into the code again, and see what's going on.  I know I say this a lot with my projects, but it always amazes me when I go back and realize how complex the process is — not because of my code, but because there are so many factors to take into consideration when building this database.  I thought it'd be a simple case of reading metadata and throwing it in there, but there are all kinds of things that I originally wrote, like using regular expressions to get the package components from an ebuild version string.  Fortunately, there are easier ways to query that stuff now, so the goal is to get it more up to date.
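One candidate for that (just an example of the sort of tool I mean — I haven't settled on anything): qatom, from app-portage/portage-utils, splits a package atom into category, name, version and revision, which is exactly the kind of parsing the old regular expressions were doing by hand.

# split an ebuild atom into its components instead of regexing it apart
qatom dev-db/postgresql-server-9.2.4-r1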

It’s kinda cool working on a big code project again.  I’d forgotten what it was like.


Filed under Gentoo

gentoo, openrc, apache and monit – proper starting and stopping

I regularly use monit to monitor services and restart them if needed (and possible).  An issue I've run into with Gentoo, though, is that openrc doesn't act as I expect it to.  openrc keeps its own record of the state of a service, and doesn't look at the actual PID to see if it's running or not.  In this post, I'm talking about apache.

For context, it's necessary to share what my monit configuration looks like for apache.  It's just a simple 'start' command for startup and a 'stop' command for shutdown:

check process apache with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start" with timeout 60 seconds
    stop program = "/etc/init.d/apache2 stop"

When apache gets started, there are two things that happen on the system: openrc flags it as started, and apache creates a PID file.

The problem I run into is when apache dies for whatever reason, unexpectedly.  Monit will notice that the PID doesn’t exist anymore, and try to restart it, using openrc.  This is where things start to go wrong.

To illustrate what happens, I’ll duplicate the scenario by running the command myself.  Here’s openrc starting it, me killing it manually, then openrc trying to start it back up using ‘start’.

# /etc/init.d/apache2 start
# pkill apache2
# /etc/init.d/apache2 status
* status: crashed
# /etc/init.d/apache2 start
* WARNING: apache2 has already been started

You can see that 'status' properly returns that it has crashed, but when running 'start', it thinks otherwise.  So, even though an openrc status check reports that the process is dead, 'start' only consults openrc's own internal record to decide whether the service is running.

This gets a little weirder in that if I run 'stop', the init script will recognize that the process is not running, and resets openrc's status to stopped.  That is actually a good thing, and it makes running 'stop' a reliable command.

Resuming the same state as above, here’s what happens when I run ‘stop’:

# /etc/init.d/apache2 stop
* apache2 not running (no pid file)

Now if I run it again, it checks both the process and the openrc status, and gives a different message — the same one it would give if the service had been stopped normally.

# /etc/init.d/apache2 stop
* WARNING: apache2 is already stopped

So, the problem this creates for me is that if a process has died, monit will not run the stop command, because the process is already dead and there's no reason to run it.  It will run 'start', which will insist that the service is already running.  Monit (depending on your configuration) will try a few more times, and then just give up, leaving your process dead.

The solution I'm using is to tell monit to run 'restart' as the start command, instead of 'start'.  The reason is that restart doesn't care whether openrc thinks the service is stopped or started; it will get it running again either way.
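With that change, the check block from earlier just becomes (same configuration as above, with restart swapped in as the start program):

check process apache with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 restart" with timeout 60 seconds
    stop program = "/etc/init.d/apache2 stop"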

I’ll repeat my original test case, to demonstrate how this works:

# /etc/init.d/apache2 start
# pkill apache2
# /etc/init.d/apache2 status
* status: crashed
# /etc/init.d/apache2 restart
* apache2 not running (no pid file)
* Starting apache2 …

I don't know if my expectations of openrc are wrong or not, but it seems to me like it relies on its internal status in some cases instead of seeing if the actual process is running.  Monit takes on that responsibility, of course, so it's good to have multiple things working together, but I wish openrc did a bit stricter checking.

I don’t know how to fix it, either.  openrc has arguments for displaying debug and verbose output.  It will display messages on the first run, but not the second, so I don’t know where it’s calling stuff.

# /etc/init.d/apache2 -d -v start
<lots of output>
# /etc/init.d/apache2 -d -v start
* WARNING: apache2 has already been started

No extra output on the second one.  Is this even a ‘problem’ that should be fixed, or not?  That’s kinda where I’m at right now, and just tweaking my monit configuration so it works for me.


Filed under Gentoo

freebsd, quick deployments, shell scripts

At work, I support three operating systems right now for ourselves and our clients: Gentoo, Ubuntu and CentOS.  I really like the first two, and I'm not really fond of the other one.  However, I've also started doing some token research into *BSD, and I am really fascinated by what I've found so far.  I like FreeBSD and OpenBSD the most, but those two and NetBSD are similar enough in a lot of ways that I've been shuffling between focusing solely on FreeBSD and occasionally comparing it with the other two.

As a sysadmin, I have a lot of tools that I use that I've put together to make sure things get done quickly.  A major part of this is documentation, so I don't have to remember everything in my head alone — which I can do, up to a point; it just gets really hard trying to remember certain arguments for some programs.  In addition to reference docs, I sometimes use shell scripts to automate certain tasks that I don't need to watch over so much.

In a typical situation, a client needs a new VPS setup, and I'll pick a hosting site in a round-robin fashion (I've learned from experience to never put all your eggs in one basket), then I'll use my reference docs to deploy a LAMP stack as quickly as possible.  I've gotten my methods refined pretty well so that deploying servers goes really fast — in the case of doing an Ubuntu install, I can have the whole thing set up in close to an hour.  And when I say "set up" I don't mean "having all the packages installed."  I mean everything installed *and* configured and ready with a user shell and database login, so I can hand over access credentials and walk away.  That includes things like mail server setup, system monitoring, correct permissions and modules, etc.  Getting it done quickly is nice.

However, in those cases of quick deployments, I've been relying on my documentation, and it's mostly just copying and pasting commands manually, running some sed expressions, doing a little vim editing and being on my way.  Looking at FreeBSD right now, and wanting to deploy a BAMP stack, I've been trying things a little differently — using shell scripts to deploy them, and having that automate as much as possible for me.

I’ve been thinking about shell scripting lately for a number of reasons.  One thing that’s finally clicked with me is that my skill set isn’t worth anything if a server actually goes down.  It doesn’t matter if I can deploy it in 20 minutes or three days, or if I manage to use less memory or use Percona or whatever else if the stupid thing goes down and I haven’t done everything to prevent it.

So I’ve been looking at monit a lot closer lately, which is what I use to do systems monitoring across the board, and that works great.  There’s only one problem though — monit depends on the system init scripts to run correctly, and that isn’t always the case.  The init scripts will *run*, but they aren’t very fail-proof.

As an example, Gentoo's init script for Apache can be broken pretty easily.  If you tell it to start, and apache starts running but crashes after initialization (there are specifics, I just can't remember them off the top of my head), the init script thinks that the web server is running simply because it managed to run its own commands successfully.  So the init system thinks Apache is running, when it's not.  And the side effect is that, if you try to automatically restart it (as monit will do), the init scripts will insist that Apache is already running, so things like executing a restart won't work, because running stop doesn't work, and so on and so forth.  (For the record, I think it's fair that I'm using Apache as an example, because I plan on fixing the problem and committing the updates to Gentoo when I can.  In other words, I'm not whining.)

Another reason I'm looking at shell scripting is that none of the three major BSD distros (FreeBSD, NetBSD, OpenBSD) ship with bash by default.  I think all three of them ship with either csh or tcsh, and one or two of them have ksh as well.  But they all have the original Bourne shell.  I've tried my hand at doing some basic scripting in csh because it's the default on FreeBSD, and I thought, "hey, why not, it's best to use the default tools that it ships with."  I don't like csh, and it's confusing to script for, so I've given up on that dream.  However, I'm finding that writing for the Bourne shell is not only really simple, but it's also going to be portable to *all* the distros I use it on.

All of this brings me back to the point that I’m starting to use shell scripts more and more to automate system tasks.  For now, it’s system deployments and system monitoring.  What’s interesting to me is that while I enjoy programming to fix interesting problems, all of my shell scripting has always been very basic.  If this, do that, and that’s about it.  I’ve been itching to patch up the init scripts for Gentoo (Apache is not the only service that has strange issues like that — again, I can’t remember which, but I know there were some other funky issues I ran into), and looking into (more) complex scripts like that pushes my little knowledge a bit.

So, I'm learning how to do some shell scripting.  It's kind of cool.  People always talk, in general, about how UNIX-based systems and clones are so powerful because of how shell scripting works: piping commands, outputting to files, etc.  I know my way around the basics well enough, but now I'm running into interesting problems that are pushing me a bit.  I think that's really cool too.  I finally had to break down the other day and figure out how in the world awk actually does anything.  Once I wrapped my head around it a bit, it made more sense.  I'm getting better with sed as well, though right now a lot of my usage is basically clubbing things to death.  And just the other day I learned some cool options that grep has as well, like matching an exact string on a line (without regular expressions … I mean, ^ and $ are super easy).
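For the curious, the grep options I'm talking about are -F and -x (fixed-string and whole-line matching):

# -F treats the pattern as a fixed string (no regex), and -x only matches
# when the whole line is exactly the pattern
grep -Fx "exact line contents" somefile.txt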

Between working on FreeBSD, trying to automate server deployments, and wanting to fix init scripts, I realized that I'm tackling the same problem in all of them — writing good scripts.  When it comes to programming, I have some really high standards for my scripts, almost to the point where I could be considered obsessive about it.  In reality, I simply stick to some basic principles.  One of them is that, under no circumstances, can the script fail.  I don't mean in the sense of running out of memory or the kernel segfaulting or something like that.  I mean that any script should always anticipate and handle any kind of arbitrary input where input is allowed.  If you expect a string, make sure it's a string, and that its contents are within the parameters you are looking for.  In short, never assume anything.  It might seem like that makes scripts take longer to write, but for me it's always been such a standard principle that it's just part of my style.  Whenever I'm reviewing someone else's code, I'll point to some block and say, "what's gonna happen if this data comes in incorrectly?" to which the answer is "well, that shouldn't happen."  Then I'll ask, "yes, but what if it *does*?"  I've upset many developers this way. :)  In my mind, could != shouldn't.
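As a tiny illustration of that principle (just a sketch, not from any of my real scripts): even a throwaway Bourne shell script can refuse to trust its input before doing anything with it.

#!/bin/sh
# refuse to continue unless the first argument is a non-empty string of digits
case "$1" in
	''|*[!0-9]*)
		echo "usage: $0 <positive integer>" >&2
		exit 1
		;;
esac
echo "got a sane value: $1"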

I'm looking forward to learning some more shell scripting.  I find it frustrating when I'm trying to google some weird problem I'm running into, though, because it's so difficult to find specific results that match my issue.  It usually ends up with me just sorting through man pages to see if I can find something relevant.  Heh, I remember when I was first starting to do some scripting in csh, and all the search results I got were about why I shouldn't be using csh.  I didn't believe them at first, but now I've realized the error of my ways after banging my head against the wall a few times.

In somewhat unrelated news, I’ve started using Google Plus lately to do a headdump of all the weird problems I run into during the day doing sysadmin-ny stuff.  Here’s my profile if you wanna add me to your circles.  I can’t see a way for anyone to publicly view my profile or posts though, without signing into Google.

Well, that’s my life about right now (at work, anyway).  The thing I like the most about my job (and doing systems administration full time in general) is that I’m constantly pushed to do new things, and learn how to improve.  It’s pretty cool.  I likey.  Maybe some time soon I’ll post some cool shell scripts on here.

One last thing, I’ll post *part* of what I call a “base install” for an OS.  In this case, it’s FreeBSD.  I have a few programs I want to get installed just to get a familiar environment when I’m doing an install: bash, vim and sometimes tmux.  Here’s the script I’m using right now, to get me up and running a little bit.  [Edit: Upon taking a second look at this -- after I wrote the blog post, I realized this script isn't that interesting at all ... oh well.  The one I use for deploying a stack is much more interesting.]

I have a separate one that is more complex that deploys all the packages I need to get a web stack up and running.  When those are complete, I want to throw them up somewhere.  Anyway, this is pretty basic, but should give a good idea of the direction I’m going.  Go easy on me. :)

Edit: I realized the morning after I wrote this post that not only is this shell script really basic, but I’m not even doing much error checking.  I’ll add something else in a new post.

#!/bin/sh
#
# * Runs using Bourne shell
# * shells/bash
# * shells/bash-completion
# * editors/vim-lite

# Install bash, and set as default shell
if [ ! -e /usr/local/bin/bash ] ; then
	echo "shells/bash"
	cd /usr/ports/shells/bash
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "shells/bash - found"
fi
if [ "$SHELL" != "/usr/local/bin/bash" ] ; then
	chsh -s /usr/local/bin/bash > /dev/null 2>&1 || echo "chsh failed"
fi

# Install bash-completion scripts
if [ ! -e /usr/local/bin/bash_completion.sh ] ; then
	echo "shells/bash-completion"
	cd /usr/ports/shells/bash-completion
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "shells/bash-completion - found"
fi

# Install vim-lite
if [ ! -e /usr/local/bin/vim ] ; then
	echo "editors/vim-lite"
	cd /usr/ports/editors/vim-lite
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "editors/vim-lite - found"
fi

# If using csh, rehash PATH (rehash is a csh builtin, so this only has an
# effect when these commands are run from within a csh session)
cd
if [ "$SHELL" = "/bin/csh" ] ; then
	rehash
fi


Filed under Computers, Gentoo, Programming, Uncategorized

freebsd

I’ve started looking at FreeBSD at work this week, because I was reading some blog posts about how MySQL performs well on a combination of that and ZFS together.  I haven’t gotten around to getting ZFS setup yet, but I have been looking into FreeBSD as an OS a lot, and so far, I like it.

This makes the second distro in the past year that I’ve really started to seriously look into, the other one being Ubuntu.  I’m still trying to wrap my head around the whole FreeBSD design structure and philosophy, and for now I’m having a hard time summing it up.  In my mind, it kind of feels like a mashup of functionality between Gentoo and Ubuntu.  I like that there is a set group of packages that are always there, kind of like Ubuntu, but that you can compile everything from source, like Gentoo.

What has really surprised me is how quickly I’ve been able to pick it up, understand it, and already work on getting an install up and running.  I think that having patience is probably the primary reason there.  Figuring out how things work hasn’t really been that hard, but I say that because of past Linux experience that has helped me figure out where to look for answers more easily.  That is, when I get stuck on something, I can usually figure it out just by guessing or poking around with little effort.

Years ago, if I would have looked at any BSD, I would have been asking “why?”  I still don’t know why I’m looking at it, other than I believe it’s not a good idea to put all your eggs in one basket.  At work we already support CentOS, Gentoo and Ubuntu, and it’d be awesome to add FreeBSD to the list.

I’m really enjoying it so far.  It’s easy to install packages using the ports system.  I tried going the route of binary packages at first, but that wasn’t working out so well for me.  Then I tried mixing ports and packages, and that wasn’t doing too great either, so I switched to just using ports for now.

The only thing I don't like so far is how it's kind of hard to find what I'm looking for.  I totally chalk that up to me being a noob, and not to any real flaw of the distro or its documentation — I just don't know where to look yet.  Fortunately, 'whereis' has saved me a lot of time.

The system seems familiar enough and easy to use for me, coming from a Linux background.  In fact, I really can’t find many differences.  The things I have noticed are that it uses much less memory, even on old underpowered boxes, and that it is relatively quick out of the box.  I never would have guessed that.

I'm curious to see how ZFS integrates into the system, if at all.  I like the filesystem and its feature set, but that's about it for now (I got to play with it a bit as we had a FreeNAS install for a few months).  If it's a major pain to integrate, I'm probably not going to push for it right now — I'm content with riding out the learning curve until I feel more comfortable with the system.

So, all in all, it’s cool to find something different, that doesn’t feel too different, but still lets me get my head in there and figure out something new.

If you guys know of any killer apps to use on here, let me know.  I’m kind of wishing I had an easier way to install stuff using ports aside from tromping through /usr/ports manually looking for package names.


Filed under Computers

rebooting my mini-itx

It's been a long time since I've worked on much of anything computer-related as a hobby.  Things have changed quite a lot in the past year.  I moved to a much smaller apartment in Salt Lake, which is about a third the size of my old place.  The idea was to trim the fat and focus on going back to school, which is my major direction in life these days.  When I moved in, I didn't have room to set up a desktop computer anywhere, so it's been just my netbook and me.  That suits me plenty fine, though — I wasn't really using it that much either.  I had just upgraded to a six-core so I could rip DVDs much faster, and now it was sitting headless wherever I could find room, and even then, only used occasionally.

It’s not just at home that things have been changing.  At work I got to make the transition from programmer to full-time sysadmin, and I’m absolutely loving it.  I knew I was getting tired of coding, and I had always enjoyed just taking care of servers, and now I get to do that all day long. When I initially started as a sysadmin, I didn’t think our small company would have enough work for me to do after a few months.  In actuality, I’m kept busy all the time.  The part I like the most is that part of my job is doing research, how to do things better, more efficiently, anything to make the workload easier.  It’s fun.

On top of all that, my school attendance is starting to ramp up, and I've been steadily adding more classes to my workload.  All this stuff has basically booted Linux out of my life as a hobby, so now I need things to "just work" without hassle, and I leave my installations alone.

One thing I’d been neglecting a little bit was my entire HTPC setup.  I hadn’t been using it much lately just because I would mostly stream some Netflix (yay, Doctor Who!).  My setup has been a beast though, normally running for months on end without the slightest hiccup.  What started to happen though is that I would come back to using it, switching my HDMI input over, and the box would be powered off for some reason.  Most of the time, I would either power it back on and go on with life or just ignore it.  Until one day it wouldn’t power on at all, and I just shrugged it off and determined to look at it later.

Well, later turned out to be finals week, when my brain has been working overtime, and I seriously needed a hobby.  I pulled out my main frontend and started looking at it to see what was going on.  It was plugged in properly and everything looked legit, but when I hit the power, the CPU fan would start up for a second and then everything would stop.  After fiddling with it for a bit, I started to notice that something was smelling burnt.  Once that happened, I abandoned my diagnosis.  Even if I did manage to get it working, I didn’t want it to catch everything on fire.

At the same time, my external USB drive enclosure died on me.  So even if I could have gotten it working, I still wouldn’t have had a way to watch my shows.  Them giving out on me hasn’t bothered me in the least — the entire setup has been running flawlessly for years, and I’d managed to get a lot of mileage out of them.

Now I had to decide what I was going to do.  I have a lot of hardware, but in pieces.  I have four mini-ITX boards altogether: two of them are VIA C7 boards, and the other two are Zotac boards, both running low-powered Celeron CPUs (around 35W if I remember correctly).  The power supplies for the VIA boards use 20-pin connectors and only run at about 80W, and aren't enough to handle the Zotac boards, which use 24-pin connectors.  So I have this mix of hardware, and nothing powerful enough to act as a frontend.

There are some great packaged systems out there now where, for between $200 and $300, you can get an entire package in one go that does exactly what I'm putting together myself.  I considered the idea of just starting over, but I decided that it'd be cheaper to just salvage what I could.

So this week I ordered a new USB HDD enclosure, and I also ordered a new power supply for the main Zotac board.  I found a site that sells really small power supplies for mini-ITX boards, called the picoPSU.  The design eliminates a lot of the hardware that I would normally need to get power to my box.  I was really skeptical when I first heard of it, but I did some looking around and it looks like it's exactly what I need.

In the meantime, I ripped my motherboard out of my desktop and put both Zotac boards in there to make sure they still work, and thankfully they do.  I got the old setup pieced together using my desktop case, and fired up the old system to play around with it.

I had started to forget how much time I put into this thing — countless hours stitching it together, running a custom build of Gentoo suited to small environments.  On top of that I made hacks to MythVideo to polish off some rough edges.  It just started to come back to me how much I'd worked on this … and how much fun it was. :)

I played around with my frontend a little bit, and fired up a few movies just to try out the surround sound.  It was awesome.  I’d forgotten how nice it was to have that huge library on demand, too.

So I’m excited now to get things up and running.  It’s been a good little while.


Filed under Hardware, Multimedia

multimedia reference guide: x264

It seems a little weird to me to post something on my blog that I already posted on our blog at work, but whatever. I figured it’d get more visibility if I wrote about it, since I already cover multimedia stuff sometimes, plus I’m excited about this thing anyway. :)

At work, I get to do all kinds of stuff, and working with video is one of them.  I threw together an x264 reference guide on my devspace for what the settings of each preset cover, compared to the defaults.  I've even translated it to Spanish!  Vamos, che!

The thing I like about this is that it helps me see which areas to start tweaking to get higher quality gains, and which ones to stay away from.  For instance, the settings that are changed on the ultrafast preset should never be messed with at all if you want a good outcome.  And on the flip side, the ones under the placebo preset are going to slow down the encode greatly if you start beefing them up.

Generally speaking, though, the best approach is to use the presets set by the developers.  Every now and then I get the idea in my head that I can somehow make things better just by tweaking a few of the variables.  That never works out too well.  I always end up spending like 60 minutes encoding a 5 minute video, and then I can't tell the difference afterward.  Whoopsie fail.
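For reference, this is the kind of invocation I mean — pick a preset and a quality target and leave the individual settings alone (an example command, not a recommendation for any particular content):

# let the preset pick the individual encoder settings; only the preset
# and the CRF quality target get chosen by hand
x264 --preset slow --crf 20 -o output.mkv input.y4m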

Next, I want to put together a similar guide for Handbrake presets, both to compare its presets to each other, and to show how to duplicate the same x264 settings using the x264 CLI encoder and libav.  The reason is that a lot of times I really like the output that Handbrake delivers, and I want to duplicate that using other encoders, but I'm not sure how.  That's what I'm planning to target.


Filed under Gentoo, Multimedia

digital trike

So, I don’t normally talk about work on my blog, just because … hey, who wants to work? I’d rather surround myself with Reese’s cups and watch Roger Ramjet. I totally recommend it.

Anyway, at Digital Trike, my current depriver of candy and animated features, I'm doing full-time systems administration.  It turns out I enjoy doing that quite a bit.  One thing they've let me start doing is writing blog posts that are howtos covering topics related to Linux.  I'm going to be doing mostly Gentoo posts, and some stuff related to CentOS as well, since we use both of them in development and production (yay, Gentoo!).

I just posted my first entry on their blog, which covers setting up collectd on both distros.  I'll warn you, it's a bit lengthy, but I tried to cover most of the bases as well as I could while keeping the setup pretty generic.  It's designed to be a two-parter, this being the first one, and I'll cover CGP, a PHP frontend for actually seeing the stats, probably next week sometime.

Lemme know what you guys think, I’d totally be up for some feedback. :)


Filed under Computers, General, Gentoo

git and acl effective mask

I have run into this funky problem with ACL and git at work, and I cannot for the life of me figure it out. I’m not sure if it’s a bug, wrong expectation on my part, or just plain ole user error.

I have a directory that is setting the default ACL permissions. Those are being inherited just fine by children (files and directories), including the effective mask. However, when I clone a new repository using git, the default effective mask is ignored. And I can’t figure out why.

Specifically, here’s what I’m looking at.

Setting the permissions:

# mkdir testing
# setfacl -m g:users:rwx testing
# setfacl -m d:g:users:rwx testing
# setfacl -m m:rwx testing
# setfacl -m d:m:rwx testing

The ACL permissions:

$ getfacl testing
# file: testing
# owner: root
# group: root
user::rwx
group::r-x
group:users:rwx
mask::rwx
other::r-x
default:user::rwx
default:group::r-x
default:group:users:rwx
default:mask::rwx
default:other::r-x

You can see that the default effective masks are properly set.

When I create a sub-directory, its ACL settings are inherited properly as well:


$ mkdir dir
$ getfacl dir
# file: dir
# owner: steve
# group: users
user::rwx
group::r-x
group:users:rwx
mask::rwx
other::r-x
default:user::rwx
default:group::r-x
default:group:users:rwx
default:mask::rwx
default:other::r-x

That works great and dandy and fine.

The problem I run into is when I use git to clone a repo:


$ git clone git@example.com:shell/shell.git
$ getfacl shell
# file: shell
# owner: steve
# group: users
user::rwx
group::r-x
group:users:rwx #effective:r-x
mask::r-x
other::r-x
default:user::rwx
default:group::r-x
default:group:users:rwx
default:mask::rwx
default:other::r-x

The effective mask has dropped from the inherited default (rwx) to something else (r-x), and I have *no* idea why.

Hopefully someone out there may have a clue. I’m stumped.


Filed under Computers, Gentoo

pear list

I’ve been tinkering with PEAR at work, switching between using portage to install stuff and sometimes using pear directly to install it.

One thing that’d be nice is to get a list of the packages installed in pear command-line syntax. I.e. pear install MDB2-beta.

So, here’s a quick reference to convert the output of “pear list” to a list you can use with pear:

pear list | egrep "(stable|beta|alpha)$" | while read line; do echo $line | cut -d " " -f 1,3 --output-delimiter=-; done

A sample output would be:

$ pear list
INSTALLED PACKAGES, CHANNEL PEAR.PHP.NET:
=========================================
PACKAGE VERSION STATE
Archive_Tar 1.3.7 stable
Auth_SASL 1.0.4 stable
Console_Color 1.0.3 stable
Console_Getopt 1.2.3 stable
Console_Table 1.1.4 stable
Crypt_HMAC 1.0.1 stable

to this:

Archive_Tar-stable
Auth_SASL-stable
Console_Color-stable
Console_Getopt-stable
Console_Table-stable
Crypt_HMAC-stable
etc …

For me it's just a nice way to back up the pear module list, or copy it to a file and then install the pear modules on another box.
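For example (a rough sketch of that second use), dumping the list to a file on one box and replaying it on another would look something like this:

# on the old box: save the installed package list in pear's own syntax
pear list | egrep "(stable|beta|alpha)$" | while read line; do echo $line | cut -d " " -f 1,3 --output-delimiter=-; done > pear-packages.txt

# on the new box: install each package from the list
xargs -n 1 pear install < pear-packages.txt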


Filed under Programming