Information Overload

My random, semi-coherent blabberings

Partial Functions vs Partially Applied Functions

So lately I’ve been doing a lot of playing around with scala. Quite a fun and interesting language. Pretty soon I plan on posting some of my initial thoughts about the language so far.

Anyway, a key concept in scala is that of partial functions. Until recently, I had this concept mixed up with that of partially applied functions, which, while it sounds very similar, is actually not related at all. The worst part is, apparently a lot of people are confused about this and there is a lot of misinformation out there. Because of this, I thought it might be a good idea to do a quick post on it and hopefully clear up some of the confusion.

Partial Functions

Basically, partial functions, like many cs concepts (in scala in particular), are very mathematical in nature. You can think of a partial function as a function that is only valid over a particular range. A classic example might be:

f(x) = 1/x

For almost all values, this function can be plotted on a graph, except at 0. It is undefined at that value.

In scala, you could express this as so:

object F extends PartialFunction[Double, Double] {
  def isDefinedAt(x: Double): Boolean = x != 0
  def apply(x: Double): Double = 1 / x
}

A partial function in scala is basically a subclass of a regular function that defines a contract it promises to adhere to. The type parameters [Double, Double] refer to the types of the argument and the return value. On its own, this isn’t anything new or exciting, but the neat thing about scala is that it has support for this construct built into the language. Another way of writing this would be:

val f:PartialFunction[Double, Double] = { case x if x!=0 => 1/x }

According to section 8.5 of the scala language specification, this is called a “pattern matching anonymous function” and can be expanded out into either a function or a partial function, so the type of f cannot be inferred and has to be written explicitly.

Partially Applied Functions

Partially applied functions, on the other hand, refer to the concept of taking a function of arity x and creating a new function of arity < x. In other words, supplying some of the arguments in advance, creating a new function that accepts fewer arguments.

val f = (x:String, y:String) => println(x + " " + y)
val f2 = f("hello", _:String)
f("hello", "there") // outputs "hello there"
f2("there") // also outputs "hello there"

I won’t go into detail about why this is useful, there’s plenty of material out there that describes that. Daniel Spiewak actually wrote a great writeup on partially applied functions and currying in scala.

To make things just a bit more confusing (as Daniel touches on in his article), partially applied functions in scala are slightly different from the strict definition above. They refer to the concept of supplying any number of arguments at a later time, possibly even none. What is the point of partially applying a function to which you supply no arguments, you ask? Well, in scala, you can partially apply a function, supplying no arguments, to tell the compiler that you want to treat the function as a value instead of trying to apply it. For instance:

def f(s:String) = println(s)
val f2 = f _ // shorthand for f(_:String)
// essentially the same thing:
val f3 = f(_:String)

So the arity doesn’t change, but using this, you can pass the function around as a value. This, I suppose, is necessary because scala allows functions to be called without parentheses, so it needs some mechanism for indicating that you don’t want to invoke it.

This gets done automatically for you in certain situations:

List.range(1,10).foreach(println)
// same as:
List.range(1,10).foreach(println _)

Anyway, hopefully this helps clear up some of the confusion on these otherwise pretty simple concepts.


So I successfully converted a crusty old ant build script for one of our smaller java projects to buildr today. It took a little bit of work to get everything rearranged and configured properly, but it was well worth it. I don’t have enough experience with buildr to do an exhaustive review of it, but my first impression is that it is pretty cool. Basically, the benefits vs our previous implementation are:

  • Simplicity. The conversion turned about 90 lines of cryptic xml into only 15 lines of relatively easy to read ruby.
  • Speed. For all the complaints people lay on ruby about speed, buildr is a good deal faster than both ant and maven.
  • Standardization. Buildr uses the same project structure as maven. I’m not a huge maven fan, but I generally agree with the way it enforces your project layout, and maven is somewhat of an industry standard.
  • Functionality. On top of all the above benefits, buildr also adds additional goodies such as continuous compilation, automated testing, and JavaRebel and Growl integration.

To demonstrate the first point, I’ll show the current implementation vs just a snippet of the old one. New version:

repositories.remote << ''

desc "Rosy parent app"
define "rosy" do
  desc "Billing app"
  define 'Billing' do
    compile.with Dir[_() + '/src/main/lib/*']
    test.with Dir[_() + '/src/test/lib/*']
    resources.filter.using :ant, Buildr.settings.profile
    package :jar
  end
end

Not too bad. The only part that is a little ghetto is how I handled including the libraries. I’m generally not a big fan of the way maven forces its style of dependency management down your throat, so I’m glad buildr gives you options here. It’s not perfect, but hey..

Anyway, compare this to one of my favorite parts of the previous script:

<condition property="" value="">
  <isset property="" />
  <available file="${}/" />
</condition>

Oh yeah. Some sweet xml programming right there. Xml is a programming language, not a markup language, right? Totally not the wrong tool for the job.. totally..

Update: In retrospect, looking back at the ant portion after learning lisp, I find it to be a bit lisp-like in nature, which is somewhat interesting. Sadly, it’s still xml.


So I finally got down and dirty with the ruby debugger recently. Not sure why I waited so long. I guess I was a bit frustrated with the apparent primitiveness of it. It’s like “Ew, command line. Who uses that anymore?” Oh wait. I do. Like every damn day.

So I guess it was a pretty stupid reason to avoid learning it. Well, that and I just didn’t come across a burning need to use it. That is, until recently.

My adventures with authlogic have had me delving deep into the rails framework, and there is some stuff in there that has rendered tag finding and regular code browsing quite inefficient. If my experiences with javascript have taught me anything, it is that with dynamic languages, sometimes it just really helps to have a debugger. Perhaps unfortunately, my experiences with javascript have also spoiled me with some pretty insanely amazing debuggers. But that’s another story.

Now I know I could go the easy way and just use something besides authlogic (devise, perhaps?) or go with some hacky fixes for the problems I’ve faced, but sometimes I just can’t help myself. Anyway, it gives me a good excuse to dig into some rails code.

Initially, I was trying to debug it using jruby, but for whatever reason I was having some problems with it. After butting my head against the wall with that for a little bit and then taking some time away from the problem, I resigned myself to just debugging with MRI. Perhaps when it becomes really necessary, I’ll try it on jruby again at some point.

Anyway, in the meantime, the MRI version has been fulfilling my needs pretty well. It offers your standard set of features for a debugger. Controlling it via command line is an interesting experience that maybe I will grow to like. One thing that I thought was kinda neat was that you can open up an IRB session at any point while debugging (although, once again coming from a javascript world this doesn’t exactly knock my socks off).

What I was really excited about, though, was that it got me through the problem I was facing. Here is where I got stuck:

send(named_route, *args)

Where did it go? Where?!

Well, with my trusty debugger friend, I just found out. It goes here, in a different file:

@module.module_eval <<-END_EVAL, __FILE__, __LINE__ + 1
  def #{selector}(*args)
    options = #{hash_access_method}(args.extract_options!)

    if args.any?
      options[:_positional_args] = args
      options[:_positional_keys] = #{route.segment_keys.inspect}
    end
    # (snip)
  end
END_EVAL


Ha ha! You sneaky little bastard, routing_set.rb.

In the near future, I hope to get the debugger integrated with emacs so I can try that out. But for now, I’d like to mess around with it as is so I can get a feel for what I get out of the box. Anyway, I’m happy. I can sleep well now.

Update: Found an article which suggested setting the following in your .rdebugrc file:

set autolist
set autoeval
set autoreload

This definitely has made rdebug much nicer to use, the autolist option in particular. One thing I’d still really like to see is for backtrace to accept a count argument so that it doesn’t puke all over your screen every time you run it. Some syntax highlighting would be sweet too. I’ll have to test out some gui versions of it, but for now it’s livable.


Been doing a good deal of messing around with rails on jruby lately. Every time I mess with rails I kick myself for not using it more often. Luckily, I’ve started introducing it at my work for some of our smaller projects so I get an excuse to use it more often.

One of the interesting challenges I came across while using it this time (as I did last time, too) was authentication logic. Both times, my problems seemed to stem from existing libraries being out of date. Last time, I tried using a login generator that failed miserably. I don’t recall all the details, but I do remember it was generating files in a deprecated format.

This time, I went with a neat little framework called Authlogic. I really like the way it works in that it treats people’s sessions as just another model. You can then use rails’ built-in scaffold generators to easily build a login page with just a few small modifications. It automatically handles a bunch of the messier details of authentication, like salted hashes and long term persistence of sessions. It’s also super easy to use and understand without having to really worry about those details.

It’s almost perfect. Almost. Unfortunately, not quite, as it doesn’t seem to be fully compatible with rails 3. Trying to use it gave me several errors, and I had to use hacky fixes to make it work.

One was caused by an error in the form_for helper in action view, which attempts to create a form for a model object. Part of that is attempting to create an id attribute for the html form. It does that by trying to obtain the primary key using the method below from an active record object:

# Returns this record's primary key value wrapped in an Array
# or nil if the record is a new_record?
def to_key
  new_record? ? nil : [ id ]
end

It goes on to append this id to the model name so you get an id on the form like ‘model_name_1’ where 1 is the primary key. For the fake active record object used by AuthLogic, however, this method isn’t present. To get around this, I had to create a sort of dummy method that just returns an arbitrary number (wrapped in an array) inside of my UserSession object.
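To make the workaround concrete, here is a minimal sketch of the idea with a made-up stand-in class (the real patch goes inside the UserSession object, which I’m not showing here):

```ruby
# FakeSessionModel is a stand-in for authlogic's session class, which
# quacks like an active record object but has no real primary key.
class FakeSessionModel
  def new_record?
    false
  end

  # Dummy to_key: the form id just needs *some* value to append,
  # so an arbitrary number wrapped in an array does the trick.
  def to_key
    new_record? ? nil : [1]
  end
end

puts FakeSessionModel.new.to_key.inspect # prints [1]
```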

Fixed, right? Wrong! After fixing that, it gives you another error about not having a persisted? method. To satisfy it, I just hardcoded that method to return false, and that seemed to work for the time being. I was about to look this one up in the code, but saw this get checked into authlogic’s master today:

def persisted?
  !(new_record? || destroyed?)
end

Sweet! Someone else was experiencing this problem too. And I’m not just crazy.

The last problem I encountered in setting this up was with the ActionView::Helpers::UrlHelper.link_to method. Attempting to use link_to in an erb file with model objects inheriting from Authlogic::Session::Base, for whatever reason, seems to give problems. Specifically, it causes problems when you try to use it to create a link to the destroy method of the controller responsible for the model. Used as such:

<%= link_to 'Log Out', UserSession.find, :method => :delete %>

It creates a link like this:

<a href="/user_sessions" data-method="delete" rel="nofollow">Log Out</a>

Because it is simply pointing to ‘/user_sessions’, it thinks you are trying to call the index method, even though delete is being used. To solve this, I just hardcoded the link instead of using link_to, as so:

<a href="/user_sessions/1" data-method="delete" rel="nofollow">Log out</a>

For a hacky fix, this works OK, except that it causes problems when you try to use warbler and move it to a J2EE server, where the URL may not be relative to the root. I’m trying to find a more permanent solution, but the rails code for routing gets a little crazy. In the polymorphic_routes file, it creates the string ‘user_session_path’ and then uses ‘send’ to call a method with that name. I’m guessing there is a method_missing on the self object, or something like that somewhere, that handles it, but I haven’t been able to find it. I need to try to get the debugger working with jruby or something to figure out where it goes.
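For what it’s worth, one plausible mechanism is the same module_eval trick from my debugging adventure above: the *_path methods get generated as real methods ahead of time, so send finds them without ever hitting method_missing. Here’s a made-up sketch of that general pattern (none of these names are rails’ actual code):

```ruby
# Hypothetical sketch: generate route helper methods as strings and
# eval them into a module, then dispatch to them dynamically via send.
module GeneratedRoutes
  def self.add_route(name)
    # Build the method body as a string, much like rails' route code does
    module_eval <<-END_EVAL
      def #{name}_path(*args)
        "/#{name}s/" + args.map { |a| a.to_s }.join('/')
      end
    END_EVAL
  end
end

class SessionsController
  include GeneratedRoutes
end

GeneratedRoutes.add_route(:user_session)

# send finds a real, previously generated method
puts SessionsController.new.send(:user_session_path, 1) # prints /user_sessions/1
```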

I will admit, while I do like ruby, sometimes I feel that it is more fun to write than it is to read. Being able to create methods on the fly and mix in various different modules can make it very difficult to follow the execution path, especially if you don’t know the starting point. For instance, while reading the authlogic code, I found a module that included about 20 other modules. Some of these modules had methods with the same name and would rely on the included modules calling super to go up the chain. Which method would be called first depended on the order in which they were included. If you started in one of those included modules and saw the call to super, you would have no idea where that call was going, as the module itself did not have a parent. Anyway, it’s an interesting challenge, so I don’t mind it too much. Hopefully I’ll have a better solution soon.
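A toy example of the include-order behavior I mean (module names here are made up):

```ruby
# Two mixins define the same method and rely on super to chain upward.
# Which one runs first depends purely on the order they were included.
module A
  def greet
    "A -> " + super
  end
end

module B
  def greet
    "B -> " + super
  end
end

class Base
  def greet
    "base"
  end
end

class C < Base
  include A
  include B # included last, so B is hit first in the lookup chain
end

puts C.new.greet # prints B -> A -> base
```

Reading A#greet in isolation, nothing tells you that super lands in Base rather than some other module; you have to know the include order in C.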

Rails 3 on Jruby

So, to continue where I left off yesterday, today I used warbler to get my sample rails app running on java. Luckily, the process was quite a bit easier than getting it working on the C implementation.

The first step was upgrading jruby so that I could get the 1.3.6 version of jruby’s rubygems (jgem). This (as mentioned in the previous post) is necessary to install the latest version of rails using rubygems. It was as simple as downloading the latest version (1.5.3 as of this writing) from the jruby site and setting my executable path to the bin directory in the new version.

Next step was changing the database driver to use jdbcsqlite3 instead of sqlite3. This had to be done both in the config/database.yml file as well as in the Gemfile.

Next up was installing all of the gems on jruby. To do this, I first installed bundler for my new version of jruby ‘jgem install bundler’, and then ran bundler in my new application ‘jruby -S bundle install’. Strangely, even though bundler seemed to install rails (which was part of the gemfile), I still had to run ‘jgem install rails’ to get the rails executable to work with ‘jruby -S rails’. I also used jgem to install activerecord-jdbcsqlite3-adapter manually as indicated by this post, however, I’m not sure this step was necessary. With this, I could run ‘jruby -S rails server’ to get the server up and running with webrick. Almost there..

Finally, to make the app run in a java web container, I used warbler to compile it into a war file. At first, I was making the mistake of using the regular rubygems version of warbler to do this, which was giving me some strange error about jdbcsqlite3. Then I realized I needed to install warbler on the jruby version of rubygems and use that instead. So a simple ‘jgem install warbler’ and then ‘jruby -S warble war’ and I had a war file that I could pop into any j2ee compatible server! Yay!


Alright, so just finished switching my blog over to jekyll from wordpress and this is my obligatory first posting after the switch describing how awesome jekyll is.

In short, it is pretty awesome. Making the switch was a bit of a pain, but I’d say it was worth it. Jekyll does offer migration help; however, I didn’t have a whole lot of luck with it. It managed to export all of my posts, but the formatting came out kinda screwed up. In the end, I just decided to go through each post and paste the content into textile files that I created manually. Textile does a pretty great job of formatting everything, and I don’t have to deal with a bunch of ugly autogenerated wordpress markup.

I also had to manually recreate some of the plugins I was using on wordpress. Right now, I’ve just done the ones for twitter, delicious, and a javascript syntax highlighter. The APIs for twitter and delicious are pretty easy to use, and I had done some messing around with them before, so creating those plugins wasn’t too bad. The javascript syntax highlighter I was using had a standalone version, so that was just a matter of creating the right tags around the code. Jekyll does come with a syntax highlighter called Pygments that is written in python, but I am a big fan of doing things on the client side, so I stuck with the javascript version.

The only thing I really have left to do is fix the links, which didn’t carry over when I copied and pasted. Hopefully, I’ll get to that in the next day or two.

Other than that, it has been pretty good. Moving to Disqus for the comments was a lot easier than I expected. It was just a matter of signing up and pasting two bits of js code in. Disqus seems to be pretty popular these days, so it’s nice that most people won’t have to set up accounts just to post a comment.

Being able to use any editor I choose to write my posts is also pretty sweet. I like writing my posts in textile a lot better than in some dumbed down WYSIWYG html editor. I feel like it gives you the power of pure html without all the work.

Having your blog served purely as static content simplifies things a lot. You can host it basically anywhere and don’t need access to a database. It does place some restrictions on what you can do, but with the power of javascript these days and the help of web services like disqus, it’s not that big of a deal to me.

Also, being able to version control my blog and host it for free on github is pretty cool.

One thing I would like to see is for Jekyll to move to more of a single page design, where the content would be served up and rendered by javascript. This would allow a lot more flexibility and power in how you could use javascript on the page. I suppose this could be possible without making changes to Jekyll if you just structure the design in the right way. I’ll have to look into it and give it some more thought.

Rails 3 Fun

I wanted to mess around a little bit today with Nick Sieger’s warbler. I figured I’d build a quick rails app to test it out. It had been a while since I used rails and I thought this would be a good opportunity to try out rails 3. Unfortunately, the process of trying out rails was quite the pain.

First, I had to update my rubygems, since apparently the new rails requires rubygems 1.3.6 or greater and I was still on 1.3.5. I thought I had installed rubygems using the debian packages, so I checked that real quick, only to realize that apparently debian sucks at package management sometimes and I hadn’t installed it as a debian package. No big deal. ‘gem update --system’ and my rubygems was up to date.

Next, to update rails. First I uninstalled the version I had and grabbed a new copy of version 3.0. So far so good. ‘rails new rails3-app’ and I had a new app. Attempting to run rails server inside the new app gave me an error about sqlite3-ruby something or other, suggesting I run bundle install. Ok. No problem. Attempting to run that, I get an error about needing the lib-sqlite-devel package required to build the native extension. Attempting again to use apt-get, I discover debian once again sucks and all of the sqlite libraries are out of date. Or at least too out of date to work with the version I needed for ruby. Alrighty then. I head to sqlite’s home page, download the sqlite-amalgamation file and do all the ‘configure’, ‘make’, ‘make install’ goodness. Everything looks good. Sqlite installs and gives me an executable I can run from /usr/local/bin. Excellent!

Unfortunately, running that executable gives me the error ‘sqlite3: symbol lookup error: sqlite3: undefined symbol: sqlite3_config’. WTF! I had to export the environment variable LD_LIBRARY_PATH=/usr/local/lib to make it run properly. This didn’t quite feel right, however.

Looking at how the resulting executable got linked:

ldd /usr/local/bin/sqlite3
    => (0xb78dc000)
    => /usr/lib/ (0xb7840000)
    => /lib/tls/i686/cmov/ (0xb783c000)
    => /lib/tls/i686/cmov/ (0xb7822000)
    => /lib/tls/i686/cmov/ (0xb76c4000)
    /lib/ (0xb78c2000)

You can see that it is being incorrectly linked to a library in /usr/lib instead of /usr/local/lib. Luckily, I found this article, which suggested setting the variable LD_RUN_PATH=/usr/local/lib prior to building sqlite3. And just like that, it worked!

I’m not sure why it was choosing the /usr/lib/ version, which came as part of the debian installation. You would think it would know where it was putting its own shared library files and choose those instead. Perhaps it was user error.

Anyway, after installing this library, I had to run ldconfig to update the cache of all my shared libraries (since I had just installed a new one). This is needed for the next step: installing and using the sqlite3-ruby gem.

The readme on the sqlite3-ruby github page indicates that you should run the gem command as:

gem install sqlite3-ruby -- --with-sqlite3-include=/opt/local/include \

..for sqlite3 installations in non-standard places. However, I had to run it as this:

gem install sqlite3-ruby -- --with-sqlite3-lib=/usr/local/lib

..in order for it to choose the correct version of sqlite3. Otherwise, it was choosing the older debian-installed one.

And that was it! I finally have a working version of sqlite3-ruby! And assuming ldconfig worked properly I can actually use it too! Yay! Next to make it work on the JVM…

Symlinking Ruby

Ran into this problem the other day. Thought I’d blog about it before it got too stale in my mind. In most typical entry points to an application you’ll see something like this:

$LOAD_PATH << File.join( File.dirname(__FILE__), '..', 'lib' )
require 'your-lib-name'

Here it is adding “../lib” to the load path and then loading your-lib-name.rb from that directory. However, when this script is run via a symlink, __FILE__ will be interpreted as the location of the symlink. Obviously, because “../lib” is a relative location, this can cause problems. It will give you the dreaded (cue search engine bait):

:in `require': no such file to load -- yourlib (LoadError)

And absolute paths really aren’t a very good option either. So what to do? Well, luckily, you can do this:

require 'pathname'
$LOAD_PATH << File.join( File.dirname(Pathname.new(__FILE__).realpath), '..', 'lib' )
require 'your-lib-name'

As you can see, Pathname#realpath resolves the symlink to the real location. I thought it was an interesting problem and am surprised I haven’t run into it sooner. Maybe that has something to do with the fact that most scripts are run as gems, which use stub files instead of symlinks? Or maybe this comes up a lot and I just haven’t noticed. Anyway, maybe this will help someone. Maybe that someone will be you. If I reach out and help just one person, then this blog post will have been worth writing. Ok, just kidding, but at least it gives me an excuse to blog.
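A quick runnable demo of the difference (the file names here are made up for illustration):

```ruby
require 'pathname'
require 'tmpdir'
require 'fileutils'

# Create a real file and a symlink pointing at it, then compare the
# naive path with the realpath-resolved one.
def naive_and_real(dir)
  real = File.join(dir, 'real.rb')
  link = File.join(dir, 'link.rb')
  FileUtils.touch(real)
  File.symlink(real, link)
  # expand_path leaves the symlink in place; realpath resolves it
  [File.expand_path(link), Pathname.new(link).realpath.to_s]
end

Dir.mktmpdir do |dir|
  naive, resolved = naive_and_real(dir)
  puts naive    # ends in link.rb -- what File.dirname(__FILE__) would see
  puts resolved # ends in real.rb -- the actual script location
end
```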

Vim Continued (Sortof)

So, it’s been about 2 weeks since I started messing around with vim. How have my experiences with it been, you ask? Well, actually, I quit using it (technically, anyway.. more on that in a second). Instead, I decided to pick up emacs. While my last post was pretty negative towards emacs, recently I made a discovery that has since had me singing a different tune. Previously, one of my biggest pet peeves with emacs (besides the default keybindings) was the lack of anti-aliased fonts. Superficial, I know, but when you are trying to decide whether or not it is worth investing a lot of time into something (especially if you are going to be spending a lot of that time looking at it), it really doesn’t help if it is butt ugly. However, since emacs 23, anti-aliased fonts are supported! And now emacs 23 is available as a debian package, which made installation a cinch. Just do a quick apt-get install on emacs23 and you’ve got yourself a brand spankin’ new purdy emacs.

How does vim fit into this? Well, you might have heard of a little thing called viper mode. Basically, it just adds a mode to emacs that emulates Vi’s key bindings. It apparently comes standard with most versions of emacs (the debian version has it, at least). Just type M-x viper-mode and you can prance around with all of the vim goodness you could ever want (ok, maybe not ALL of the vim goodness). You’ll probably want to install vimpulse to get stuff like visual mode and text objects. Keep in mind this doesn’t turn emacs into vim. You just get access to vim’s sweet key bindings and modal editing. You’ll still be using emacs. The full consequences of this, I couldn’t really tell you. I only used vim seriously for about a week, so I wasn’t quite intimate with it. At a minimum, it probably means you can kiss your vim plugins goodbye.

So, why did I do this? Because I’m crazy? Maybe. It definitely makes things interesting. And the lines between what is emacs and what is vim can be a little blurry at times.

However, as much as I liked vim, there were a number of things about it that really bothered me. First and foremost is vimscript. To me, it just feels really half-baked and extremely domain specific. If I’m going to have to learn/use a language to configure my editor, at least make it a cool one like lisp that can actually be used elsewhere. While you can configure vim using other languages, as this post indicates, it is not without its pitfalls. Also, I found myself agreeing with posts like this.

With emacs in viper mode, I really feel like I’m getting the best of both worlds: the extensibility of emacs combined with the kick ass modal editing and key bindings of vim. Anyway, my experiences with emacs/vim have been pretty good so far, although I still consider myself a noobie. I’ll continue to post updates on how it goes.


So the latest round of pro-vim blog posts has inspired me to take the plunge. I’ve only been using it for a few days but am really enjoying it so far. Sure, it is still pretty clumsy and gets in my way half the time because I don’t really know what I’m doing. But it has actually been pretty fun. No wonder I couldn’t get myself to like emacs. I guess I’m a vim guy. I probably would have tried it seriously a lot sooner, but I wasn’t aware of a good graphical version of it until recently. Enter gvim. Gvim gives it a semi-modern UI (post 1991, anyway) and gives you the comfort of having the mouse to fall back on. While the point of vim is to not use the mouse, and I’ve used it surprisingly little since making the switch, it’s still nice to have as a safety net when you’re first starting out.

It’s almost strange that I like it so much, since it retains one of the things that irritated me the most about emacs: the default key bindings are completely different from any other modern editor you’ve probably ever used. Sure, you can change them, but typing ctrl+z and watching it minimize the window instead of undoing your last change like every other sane editor would made me mad enough to want to quit right there. I never officially ‘gave up’ on emacs, but things like this would add up to me eventually getting frustrated and putting my experimentation with it on hold.

It turns out vim disregards these universally accepted defaults too, but I find myself forgiving it very easily because it is a completely different paradigm than any other editor I’ve used. The idea of having a command mode vs an insert mode is an interesting concept, and I’ve really run with it much quicker than I expected.

Anyway, I’m not sure if my excitement for it now is just excitement for trying something new or if it is a legitimately better editor. It definitely seems like it has a lot of potential, but I haven’t really started using any plugins for it yet and my .vimrc file is still almost completely empty, so it’s hard for me to decide exactly how much. Either way, I’m having fun and I haven’t been frustrated into quitting, so I would say that is a good thing.