This is a set of modified MacRuby presentation slides given at the Pittsburgh Ruby Brigade meeting on Nov 5, 2009. The original presentation was given by Patrick Thomson at C4[3] in September, 2009. Slides 68 and 69 were added by me for the PghRB talk.
Patrick's original slides are available at http://www.slideshare.net/importantshock/why-macruby-matters
14. In Objective-C:
1. create a static, shared instance
2. initialize once and only once
3. add a sharedInstance accessor method
…for every singleton class.
Tedious.
100. Last summer I had an internship at Apple.
My views here, however, represent those of an open-source contributor,
and not an Apple employee. Any speculations on the future of MacRuby
are entirely mine and are in no way representative of any plans, attitudes,
or future directions that Apple may take. This presentation is neither
sponsored nor endorsed by Apple.
Apple: please don’t sue me.
revolutionary graph-search algorithm that powers just about every network routing protocol
this quote of Dijkstra’s draws attention to a central conundrum facing Cocoa programmers today
the conundrum is this: Objective-C, the language we all know and love…
…provides us with an insufficiently powerful level of abstraction. We build the best desktop apps out there, but we do it in spite of ObjC.
So, as a language, what does ObjC lack?
I submit that perhaps the most fundamental problem with ObjC is its lack of support for code reuse. At the Objective-C level of abstraction, effective code reuse is like pulling teeth.
And sure, inheritance provides a good measure of code reuse. But for many use cases, it simply doesn’t apply.
Consider the singleton. It’s a great example of a common Objective-C pattern which cannot be abstracted out, forcing us to resort to boilerplate code.
Every singleton class needs its own static, shared instance. Single inheritance can’t solve the problem, and it’s questionable that multiple inheritance is the right solution in this case.
So, creating a singleton in Objective-C is really tedious. Add to that the fact that very few people agree on the right way to go about this tedium - should you synchronize on sharedInstance, or init? Should you check for NULL or use pthread_once or dispatch_once? - and ensuring proper behavior seems hardly worth the trouble. It’s too tedious.
And tedium sucks.
A sufficiently powerful language would provide us with a code reuse mechanism like mixins.
Mixins, for those who haven’t heard of them, provide another level of abstraction over class definitions. Like classes, you define them and give them method implementations; however, instead of placing them in the class hierarchy, you mix them into classes - the resulting class gains copies of the methods defined in the mixin.
Here’s an example. In this language, the “include” keyword mixes a mixin into a class. Including the Singleton mixin in this class defines the initialization and accessor methods and sets aside a static, shared instance.
Now we can access the shared instance. Easy.
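Ruby ships exactly this mechanism in its standard library. A minimal sketch (the AppLogger class here is a hypothetical example, not from the slides):

```ruby
require 'singleton'

class AppLogger
  include Singleton   # adds the .instance accessor and makes .new private
  attr_reader :lines

  def initialize
    @lines = []
  end

  def log(msg)
    @lines << msg
  end
end

AppLogger.instance.log("started")
AppLogger.instance.equal?(AppLogger.instance)  # => true - one shared object
```

One `include` line replaces all the Objective-C boilerplate: the shared instance, the once-only initialization, and the accessor.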
And sure, I understand that the Objective-C language designers held off on implementing New and Fancy Ideas. And I agree with them.
But mixins have been around since Symbolics Lisp. Anyone remember that? I don’t. I wasn’t even born yet.
That was a while ago. And we, as Mac programmers, should ask more from our development tools.
Though Objective-C has a huge advantage in that it’s based on C, it also has a huge disadvantage - that it’s based on C.
Being a representation of a Von Neumann machine, C can solve any computable problem. But what’s interesting is that C is actually pretty expressive, even when you need to do fancy, comp-sci-sexy things. You’ve got a reasonable approximation of continuations with setjmp, of exceptions with sigsetjmp, and of first-class functions with function pointers.
But all of these things are hideously unsafe. We’re programmers, not superheroes - if our tools allow us to make a mistake, we will make it.
And C’s unsafeness is visible in almost every part of Objective-C.
We program in a high-level language. Why do we still have the ability to shoot ourselves so profoundly in the foot with raw pointers? You, or the library functions you call, can stomp all over memory silently and watch as you drown in bizarre and irreproducible behavior.
C’s ability to treat just about anything as a memory address means that you can bamboozle the hell out of the garbage collector.
Nowhere is the unsafe and backwards nature of C - and consequently, Objective-C - more apparent than in exception handling. C really doesn’t play nice with the notion of exceptions at all, and has therefore hobbled ObjC exception support throughout ObjC’s lifetime.
ObjC exception handling is horrifically inefficient. Since 32-bit exceptions are based on setjmp and longjmp, pretty much everything about them is expensive. Exception creation and try-blocks got much faster with the 64-bit C++-compatible exceptions, but throwing them got even slower.
And the kicker is that the vast majority of Cocoa frameworks aren’t exception-safe. So there is absolutely no guarantee that objects will be cleaned up or finalized properly if you throw an exception.
So thanks to this, nobody ever checks for exceptions - honestly, when was the last time you checked for alloc throwing an NSMallocException? - and we make do with functions that take pointers to NSErrors, and others that return Carbon, Cocoa, POSIX, or Mach error codes.
Go ahead. Call me lazy.
But I’m tired of creating arrays manually. I want a language that has sufficient syntactic abstraction so that I can just create arrays inline. I mean, imagine if we had to do this to create NSStrings - it would be endlessly tedious!
And don’t even get me started on how tedious NSDictionary creation is. These may seem like syntactic quibbles to some of you, but I know that in the past I’ve avoided using dictionaries in favor of long if-else statements, just because creating dictionaries is so tedious!
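Here is the kind of inline literal syntax I mean, in Ruby (under MacRuby, per the talk, these same objects are NS/CFArrays and NS/CFDictionaries):

```ruby
# Inline collection literals: no alloc/init dance, no nil terminator.
fruits = ["apple", "pear", "plum"]
prices = { "apple" => 0.50, "pear" => 0.75 }

prices["plum"] = 0.25      # insertion is a single expression
fruits.include?("pear")    # => true
```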
Similarly, I want operator overloading, because it lets me say what I mean. I want the less-than sign to be transformed into a call to compare - and let’s be frank, operator overloading is never going to come to Objective-C.
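In Ruby this works exactly as described: define one comparison operator and mix in Comparable, and the relational operators are derived from it. A sketch (the Version class is a hypothetical example):

```ruby
# Defining <=> and mixing in Comparable gives <, <=, ==, >, >= for free.
class Version
  include Comparable
  attr_reader :parts

  def initialize(*parts)
    @parts = parts
  end

  def <=>(other)
    parts <=> other.parts   # element-wise array comparison
  end
end

Version.new(1, 2) < Version.new(1, 10)   # => true
```

The less-than sign really is just a message send that bottoms out in your compare method - exactly what the slide asks for.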
Basically, I want a language that’s as well-designed as Cocoa.
And so far, the closest I have come to that ideal is when I write code in MacRuby.
For those of you that haven’t heard of it, MacRuby is a new implementation of Ruby, a scripting language from Japan.
MacRuby differs from the standard implementation of Ruby 1.9 in that its virtual machine, optimization routines, bytecode generation, and just-in-time compilation are all implemented on top of LLVM, the Low-Level Virtual Machine project. LLVM is already blazingly fast, and I’m confident that its performance will only continue to improve.
In addition to replacing the Ruby 1.9 VM with the LLVM one, we also reimplemented the standard Ruby data structures on top of CoreFoundation. In addition to providing us with a memory-efficient, fast, and mature set of data structures, implementing MacRuby on top of Core Foundation also ensured that…
…Ruby objects are absolutely indistinguishable from Cocoa objects - they even respond to the same API calls! NS/CFStrings are Ruby strings, NS/CFArrays are Ruby arrays, NS/CFDictionaries are Ruby hashes.
But we go beyond what Objective-C offers, and return to a Smalltalk heritage where there are no primitive types. Everything descends from NSObject, even floats and integers.
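You can see the “no primitives” model from any Ruby prompt - even numeric literals are full objects that receive messages (under MacRuby specifically, the slides say they descend from NSObject as well):

```ruby
# Every value is an object; there are no unboxed primitive types.
42.is_a?(Object)        # => true
42.respond_to?(:zero?)  # => true - integers answer messages
3.14.is_a?(Numeric)     # => true - floats sit in the class hierarchy too
```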
Though MacRuby is an Apple-supported project, it is released under the Ruby license so you can embed it into your commercial applications.
The MacRuby team is one of the smartest and most focused I’ve ever seen. Laurent Sansonetti, an employee at Apple, started MacRuby just as an experiment to see how well Ruby would run on top of the Objective-C runtime and garbage collector. Yet in its current form, I truly believe that it’s stable enough for serious use.
Clarification!
Bridges, as Tim pointed out in C4[1], are unreliable, difficult, and tend to be slow.
Explain the differences.
not a toy!
everyone has to deal with it sooner or later
explain what they are, what systems (Python, Ruby, Lua) use them
explain how much they suck
green threads suck
So. The big question.
MacRuby is fast. Not just Fast Enough, but fast.
This may surprise many of you, as most people know Ruby as “that weird, slow, Japanese space-Perl.” And the conception that Ruby is slower than other comparable languages has been true - up until now.
A good example of the speed boosts that we aim for is the Fibonacci sequence.
Take naïve implementations of the Fibonacci sequence, using recursion, in C…
and in Objective-C (using ObjC message sending),
and, predictably, C is going to be a lot faster. And trying to get MacRuby faster than C for everything is beyond the scope of this project.
But take the same Fibonacci implementation in Ruby, run it under MacRuby…
and we see that MacRuby is, in this benchmark, faster than Objective-C.
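For reference, the naïve recursive version being benchmarked is only a few lines of Ruby (a sketch - the exact benchmark source isn’t reproduced in the slides):

```ruby
# Deliberately naïve doubly recursive Fibonacci - the classic
# method-dispatch microbenchmark: every call is a dynamic dispatch.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

fib(10)  # => 55
```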
The answer lies in the --compile flag. MacRuby can compile Ruby source down to Mach-O x86 executables.
This is hugely exciting. Nobody else has done this before. And up until now, shipping a closed-source desktop application written in Ruby has been a Bad Idea - but now you can hide your source code from prying eyes.
For completeness, let’s compare C, Objective-C, compiled MacRuby, interpreted MacRuby, JRuby, and the stock OS X 10.6.1 ruby interpreter. OK... ouch. I dialed back from fib(40) to fib(35) because otherwise the stock ruby interpreter would still be calculating. JRuby’s better, but still no competition for MacRuby (interpreted or compiled).
So let’s throw out the stock ruby and JRuby data to get a better look at MacRuby (compiled and interpreted) vs. C and Objective-C.
keyword syntax is readable.
worst of both worlds: verbosity of keyword syntax and unreadability of ALGOL-style syntax
syntactic extensions to make ObjC calls look gorgeous.
optional set of layers
to make common idioms concise
nestable, elegant, yet completely optional
like in Lisp and Scheme
Ruby and Objective-C are almost absurdly similar - how similar, you ask?
contrived example, but talk about CoreImage, CoreGraphics, PDF documents
hooray for my GCD layer
talk about how things just work, and plug topfunky’s MacRuby screencast
hooray
whine about all of these features
be honest!
Though one can obviously treat Objective-C like a dynamic language, and give the ‘id’ type to everything, well-written and idiomatic Objective-C code takes advantage of static typing to catch errors before they happen. And because Ruby is a duck-typed language, you’re going to lose some power.
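A quick illustration of what duck typing trades away (the Circle and render names are hypothetical): nothing checks at compile time that the receiver actually implements the message, so a typo surfaces only when the call runs.

```ruby
# Duck typing: anything responding to #draw works with render,
# but a misspelled message is caught only at runtime.
class Circle
  def draw
    "circle"
  end
end

def render(shape)
  shape.draw   # no static check that shape has a #draw method
end

render(Circle.new)             # => "circle"
Circle.new.respond_to?(:drw)   # => false - calling it would raise NoMethodError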
And rather than viewing this as a problem for MacRuby, I think this is a great opportunity for another language to come in and apply all the innovations in the world of static typing - type inference, currying, existential types - to the world of Cocoa. So, uh, someone get on that.
The 0.5 release of MacRuby is almost upon us. This release will be the first one based on the new virtual machine.
If you want to play with the new features now, I recommend checking out the source from MacRuby.org
or getting builds for Snowy.
DISCLAIMER!
I’m here to talk about what is TECHNICALLY POSSIBLE, not what’s *going* to happen.
we want to be the fastest Ruby implementation around
we still need continuations, fibers, and to make all C exts compatible
The first thing people usually ask about MacRuby is “Does it run Rails yet?”
No. It doesn’t. And though we’d like to have it run Rails in the future, there’s still a lot of work to be done in that direction - revamping the sockets interface, improving IO speed and flexibility, and making sure that MacRuby supports all the maddening little metaprogramming quirks of which Rails takes advantage.
Much more interesting to me is the question of whether or not