Archive for October 2007
It has come to my attention that my writing here has been a bit boring. Dry. Stuffy. Well, Kevin Bourrillion has shown me the way. Time to liven it up a bit. Just a bit, mind you. Juuuuuuuuust a bit.
And please don’t give me trouble over my inability to post more than once a week. Really all I can say is ARRRRGH! Work, upcoming conferences, and raising two very young kids = JEEZ WHEN DO PEOPLE EVER GET TIME TO BLOG? And while we’re on that topic, I don’t see you blogging enough either, do I? Don’t be throwing any stones, Mr. Glass House.
OK, where were we? Ah yes, OOPSLA 2007. What a beautiful conference. Not that I attended or anything, but who needs to attend a conference when you have the Internet?
The dynamic vs. static languages flame war has bogged down the language community over the last decade. It’s great to see that, judging by some recent papers from the latest OOPSLA, the logjam has broken with a vengeance. There are all kinds of brain-bending new languages in the works, and it’s frankly exhilarating. (Sorry that some of these are only available through the ACM’s Digital Library… but at $99 per year, how can you afford NOT to be a member???)
First we have a great paper on RPython, a project which creates a bimodal version of Python. You can run your Python program in a fully interpreted way in a startup metaprogramming phase, and then the RPython compiler kicks in, infers static types for your entire metaprogrammed Python code base, and compiles to the CLR or to the JVM with a modest performance increase of EIGHT HUNDRED TIMES. Yes, folks, they’ve run benchmarks of Python apps that run 800x faster with their system than with IronPython (which is no slouch of a Python implementation AFAIK). If that isn’t a great example of how you can have your dynamic cake and eat it statically, I don’t know what is.
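To make the bimodal idea concrete, here's a toy sketch (mine, not the RPython team's) of the two phases: a fully dynamic bootstrap phase, followed by code written in the type-stable style that a translator could infer static types for. The `make_adder` name and the "friendly" function are illustrative, not from the paper.

```python
# A hypothetical sketch of the RPython idea: during the interpreted
# "metaprogramming" phase you can be as dynamic as you like, but the
# code you hand to the translator must keep each variable at a single
# static type so the compiler can infer everything.

def make_adder(n):
    # Fully dynamic bootstrap phase: closures, any types, no restrictions.
    def adder(x):
        return x + n
    return adder

# The result handed off for compilation is type-stable: 'x' and 'n'
# are always ints, so a compiler could infer 'int -> int'.
add_five = make_adder(5)

def rpython_friendly_sum(numbers):
    # Type-consistent loop: 'total' is always an int and is never
    # rebound to a different type -- the kind of code RPython compiles.
    total = 0
    for n in numbers:
        total += n
    return total

print(rpython_friendly_sum([1, 2, 3]))   # 6
print(add_five(10))                      # 15
```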
Then there’s an equally great paper on JastAdd, an extensible compiler for Java. The JastAdd compiler is built around an extensible abstract syntax tree. The abstract syntax tree is the ONLY data structure in the compiler — there are no separate symbol tables or binding tables; everything is implemented as extensions to the abstract syntax tree. The extensions are expressed with a declarative language that lets you define dataflow equations relating different elements in the tree — inherited elements (for referring to names bound in a parent construct, for example), or reference elements (for referring to remote type declarations, for example).
The compiler has an equation analysis engine that can process all these equations until it reaches a fixpoint, which completely avoids all the usual multi-phase scheduling hassles in compilers around interleaving type analysis with type inference, etc. It seems like The Right Thing on a number of levels, and it makes me want to hack around with building a compiler along similar declarative lines. They give examples of extending Java with non-null types and of implementing Java 5 generics purely as a declarative compiler extension. That, to me, pretty much proves their point. Bodacious! I had been thinking that executable grammars were a nice way to go, but seeing their declarative framework’s power is seriously making me reconsider that idea. What would you get if you combined OMeta and JastAdd? Something beautiful. I’m not sure how you’d combine the statefulness of OMeta with the declarativeness of JastAdd, but we must ponder it deeply, because the One True AST is a goal worth seeking.
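Here's a tiny toy in the JastAdd spirit (my sketch, nothing like their actual declarative language): the AST is the only data structure, and name lookup is an attribute computed on demand by delegating up the tree, instead of a separate symbol table.

```python
# Not JastAdd, just its flavor: "lookup" is a declarative attribute
# evaluated on demand by walking the AST -- no separate symbol table.

class Node:
    def __init__(self, parent=None):
        self.parent = parent

    def lookup(self, name):
        # Inherited attribute: delegate the question to the parent node.
        return self.parent.lookup(name) if self.parent else None

class Block(Node):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.decls = {}          # name -> declared type, stored in the AST

    def declare(self, name, typ):
        self.decls[name] = typ

    def lookup(self, name):
        # Equation: a use resolves to the nearest enclosing declaration.
        if name in self.decls:
            return self.decls[name]
        return super().lookup(name)

outer = Block()
outer.declare("x", "int")
inner = Block(parent=outer)
inner.declare("y", "str")

assert inner.lookup("y") == "str"   # found locally
assert inner.lookup("x") == "int"   # found via the inherited attribute
assert inner.lookup("z") is None    # unbound
```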
A truly mind-bending paper discusses breaking the two-level barrier. What’s the two-level barrier? Simple: it’s the class/object distinction. They point out that many kinds of modeling can’t be done with a class hierarchy. What you really want is a programmer-accessible metaclass hierarchy. (And not a weenie one like Smalltalk’s, either.) For example, consider an online store. You thought you knew everything about online stores? THINK AGAIN, JACKSON. Let’s say you have a DVD for sale, such as Titanic. That Titanic DVD is an instance of the DVD product class. The DVD product class is conceptually a further instance of the DigitalMedia product class. I meant exactly what I said there — in their framework, one class can be an instance of another class.
You can then state that the DigitalMedia metaclass defines a “categoryName” and a “netPrice”, requiring that “categoryName” be defined by instances of DigitalMedia, and that “netPrice” be defined by instances of instances of DigitalMedia. The DVD class then defines “categoryName” to be “DVD”, so ALL DVDs have the same category name. And then particular instances of DVD define their actual net prices individually. In this way, you can take the same kinds of “must provide a value for this field” constraints that exist in the class-object hierarchy, and extend them to multiple levels, where grandparent metaclasses can express requirements of their meta-grandchild instances.
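Python's metaclasses can actually sketch the two-level version of this (the paper goes further, but the flavor carries over). Here DVD is simultaneously a class of discs and an instance of DigitalMedia; names follow my example above, not the paper's:

```python
# DigitalMedia is a metaclass; DVD is both a class (of discs) and an
# instance (of DigitalMedia) -- roughly the paper's "clabject" idea.

class DigitalMedia(type):
    # Requirement on instances of DigitalMedia (i.e. classes like DVD):
    # they must define a categoryName shared by all their instances.
    def __new__(mcls, name, bases, ns):
        if "categoryName" not in ns:
            raise TypeError(f"{name} must define categoryName")
        return super().__new__(mcls, name, bases, ns)

class DVD(metaclass=DigitalMedia):
    categoryName = "DVD"         # fixed for ALL DVDs, per the metaclass

    def __init__(self, title, netPrice):
        # Requirement pushed down two levels: each individual disc
        # (instance of an instance of DigitalMedia) has its own price.
        self.title = title
        self.netPrice = netPrice

titanic = DVD("Titanic", 9.99)
assert isinstance(DVD, DigitalMedia)   # DVD is an *instance* of a class
assert DVD.categoryName == "DVD"
assert titanic.netPrice == 9.99
```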
(They use the abysmal word “clabject” — ClAss obJECT — to refer to entities that can be used to instantiate objects (like classes), but that ARE themselves instantiated (like objects). I think “clobject” would have been better, or maybe “obclass” or something. “Clabject” just sounds… I don’t know… disturbing. Like some kind of hapless crab that’s filled with techno-malice. But the concept itself is very interesting. I think that having two orthogonal hierarchies — the metaclass hierarchy and the subclass hierarchy — is potentially too confusing for most programmers, including me, but it’s nonetheless really thought-provoking.)
Those are just three of the highlights — I’m only about a third of the way through reading the OOPSLA papers this year — but I think they’re the top three when it comes to language design. It’s going to be a great next decade, as the whole static vs. dynamic war gives way to a myriad of bizarre hybrids and mutants, greatly enhancing (and confounding) the lives of hackers everywhere!
(Sigh, keeping to a schedule is pretty darn tough. Sorry for the week-long hiatus. Onwards! Finally finishing off this particular sub-series of posts.)
Conscientious Software, Part 3: Sick Software
An interesting thing happened when garbage collection came into wider use. Back when I was doing mostly C++ programming, a memory error often killed the program dead with a seg fault. You’d deallocate something and then try to use a dangling pointer, and whammo, your program core dumps. Fast hard death. Even once you fixed the dangling pointer bugs, if your program had a memory leak, you’d use more and more memory and then run out — and whammo, your program would crash, game over, man!
Once garbage collection came along, I remember being delighted. No more running out of heap! No more memory leaks! Whoa there — not so fast. You could still leak memory, if your program had a cache that kept accumulating stale data. But now, instead of killing your program immediately, the garbage collector would just start running more and more often. Your program started running slower and slower. It was still working… kind of. But it wasn’t healthy.
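The cache leak is worth seeing concretely. A cache that only grows keeps every entry reachable, so the collector can never reclaim anything; the usual cure is to bound it and evict. A minimal sketch (the class name is mine):

```python
# A GC'd program can still "leak": an ever-growing cache keeps stale
# entries reachable forever. Bounding the cache lets the GC do its job.

from collections import OrderedDict

class BoundedCache:
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)          # mark as most recently used
        # Evict the least recently used entry instead of growing forever.
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)

    def get(self, key):
        return self._data.get(key)

cache = BoundedCache(max_entries=3)
for i in range(10):
    cache.put(i, str(i))

assert len(cache._data) == 3     # stale entries were evicted, not leaked
assert cache.get(9) == "9"
assert cache.get(0) is None
```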
It was sick.
Sick software is software that is functioning poorly because of essentially autopoietic defects. It’s fail-slow software, instead of fail-fast software. A computer with a faulty virus checker, infected with trojans and spyware that are consuming its resources, is sick. A memory-leaking Java program is sick. These systems still function, but poorly. And the fix can be harder to identify, because the problems, whatever they are, are more difficult (or impossible) to trace back to your own (allopoietic) code. Fail-fast environments make it clear when your code is at fault. Fail-slow environments, self-sustaining environments, are trying to work around problems but failing to do so.
Now, there is every likelihood that as systems scale up further, we will have to deal more and more with sick software. Perhaps it’s an inevitability. I hear that Google, for instance, is having to cope with unbelievable amounts of link spam — bogus sites that link to each other, often running on botnets or other synthesized / stolen resources. In some sense this is analogous to a memory leak at Internet scale — huge amounts of useless information that somehow have to be detected as such. Except this is worse, because it’s a viral attack on Google’s resources, not a fault in Google’s code itself. (I realize I’m conflating issues here, but like I said, this posting series is about provocative analogies, not conceptual rigor.)
I think there is a real and fundamental tension here. We want our systems to be more adaptive, more reactive, more self-sustaining. But we also want guarantees that our systems will reliably perform. Those two desires are, at the extremes, fundamentally in opposition. Speaking for myself as an engineer, and even just as a human being, I find that certainty is comforting… humans fundamentally want reassurance that Things Won’t Break. Because when things stop working, especially really large systems, it causes Big Problems.
So we want our systems to be self-sustaining, but not at the cost of predictability and reliability. And as we scale up, it becomes more and more difficult to have both. Again, consider Google. Google has GFS and BigTable, which is great. But once it has those systems, it then needs more systems on top of them to track all the projects and the storage that exist in those systems. It needs garbage collection within its storage pool. The system needs more self-knowledge in order to effectively manage (maybe even self-manage) its autopoietic resources. And in the face of link spam, the system needs more ability to detect and manage maliciously injected garbage.
Returning to the original paper: the authors spend a fair amount of time discussing the desire for software to be aware of and compliant with its environment. They give many potential examples of applications reconfiguring their interfaces, their plugins, their overall functioning, to work more compatibly with the user’s pre-existing preferences and other software. While I find this extremely thought-provoking, I’m also somewhat boggled by the level of coupling they seem to propose. Exactly how does their proposed computing environment describe and define the application customizations that they want to share? How are the boundaries set between the environment and the applications within the environment?
In fact, that’s a fundamental question in an autopoietic system. Where are the boundaries? Here’s another autopoietic/allopoietic tension. In an allopoietic system, we immediately think in terms of boundaries, interfaces, modules, layers. We handle problems by decomposition, breaking them down into smaller sub-problems. But in an autopoietic system, the system itself requires self-management knowledge, knowledge of its own structure. Effective intervention in its own operation may require a higher layer to operate on a lower layer. This severely threatens modularity, and introduces feedback loops which can either stabilize the system (if used well) or destabilize the system (if used poorly). This is one of the main reactions I have when reading the original paper — how do you effectively control a system composed of cross-module feedback loops? How do you define proper functioning for such a system, and how do you manage its state space to promote (and if possible, even guarantee) homeostasis? This circles back to my last post, in that what we may ultimately want is a provable logic (or at least a provable mathematics) of engineered autopoietic systems.
There are some means to achieving these goals. You can characterize the valid operating envelope of a large system, and provide mechanisms for rate throttling, load shedding, resource repurposing, and other autopoietic functions that enable the system to preserve its valid operating regime. It’s likely that autopoietic systems will initially be defined in terms of the allopoietic service they provide to the outside world, and that their safe operating limits will be characterized in terms of service-level agreements that can be defined clearly and monitored tractably. This is like a thermometer that, when the system starts getting overworked, tells the system that it’s feverish and needs to take some time off.
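One of those mechanisms — rate throttling with load shedding — is simple enough to sketch. Here's a toy token bucket (my own minimal version, not from the paper): the system admits work while it's inside its envelope and fails fast once it isn't, rather than grinding itself sick.

```python
# A token-bucket throttle: a concrete autopoietic mechanism that keeps
# the system inside its valid operating envelope by shedding excess load.

class TokenBucket:
    def __init__(self, capacity, refill_per_tick):
        self.capacity = capacity
        self.refill_per_tick = refill_per_tick
        self.tokens = capacity

    def tick(self):
        # Called on a clock; restores capacity up to the envelope limit.
        self.tokens = min(self.capacity, self.tokens + self.refill_per_tick)

    def try_admit(self):
        # Admit a request only while we're inside the envelope.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False     # load shed: fail fast rather than get sick

bucket = TokenBucket(capacity=2, refill_per_tick=1)
results = [bucket.try_admit() for _ in range(4)]   # burst of 4 requests
assert results == [True, True, False, False]       # two were shed
bucket.tick()
assert bucket.try_admit() is True                  # envelope restored
```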
So the destiny of large systems is clear: we need to be more deliberate about thinking of large systems as self-monitoring and to some extent self-aware. We need to characterize large systems in terms of their expected operating parameters, and we need to clearly define the means by which these systems cope with situations outside those parameters. We need to use strong engineering techniques to ensure as much reliability as possible wherever it is possible, and we need to use robust distributed programming practices to build the whole system in an innately decomposed and internally robust way. Over time, we will need to apply multi-level control theory to these systems, to enhance their self-control capabilities. And ultimately, we will need to further enhance our ability to describe the layered structure of these systems, to allow self-monitoring at multiple levels — both the basic issues of hardware and failure management, the intermediate issues of application upgrade and service-level monitoring, and the higher-level application issues of spam control, customer provisioning, and creation of new applications along with their self-monitoring capabilities.
We’ve made much progress, but there’s a lot more to do, and the benefits will be immense. I hope I’ve shown how the concept of conscientious software is both valid and very grounded in many areas of current practice and current research. I greatly look forward to our future progress!
Along the lines of my post about avoiding religious wars over computer languages, I recently ran across two papers that got me seriously thinking about this whole “how to have the best of all worlds” problem.
Specifically, there was an excellent follow-on to the last OOPSLA conference, called the Library-Centric Software Design ’06 conference. Josh Bloch (of Effective Java fame) was one of the co-chairs. The papers were all focused on the general problem space of “active libraries” or “metaprogramming” or “extensible languages” — generally, the concept that your programming language can be extended by libraries that your programs use. Generally, this means that the typechecking of the language, or perhaps its very grammar, can be enhanced in an extensible way.
One of the more thought-provoking examples is a paper discussing Extending Type Systems in a Library, specifically about adding new “field types” to C++. One use for this is defining new type modifiers such as “notNull”, or “positive”, or “tainted”. These are all in some sense type template modifiers, in that for any pointer type T you could define a modified type notNull&lt;T&gt; with type assertions preventing you from having null instances of that type. Or, for “tainted”, you could take just about any type T and create a variant type tainted&lt;T&gt; indicating that the value of the base type originated from a suspect source.
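The C++ paper does this with templates; here's a rough Python analogue of the same idea — wrapper types that enforce the extra property. The names below are illustrative, not from the paper:

```python
# Wrapper "field types" enforcing extra properties on ordinary values,
# in the spirit of the C++ paper's notNull<T> and tainted<T> modifiers.

class NotNull:
    def __init__(self, value):
        if value is None:
            raise ValueError("NotNull cannot wrap None")
        self.value = value

class Tainted:
    # Marks a value as originating from a suspect source; code that
    # requires clean input can refuse to accept it.
    def __init__(self, value):
        self.value = value

def run_query(sql):
    if isinstance(sql, Tainted):
        raise TypeError("refusing to run tainted SQL")
    return f"ran: {sql}"

assert run_query("SELECT 1") == "ran: SELECT 1"
try:
    run_query(Tainted("SELECT 1; DROP TABLE users"))
    assert False, "tainted query should have been rejected"
except TypeError:
    pass
try:
    NotNull(None)
    assert False, "NotNull(None) should have been rejected"
except ValueError:
    pass
```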
The “tainted” use case tweaked my memory. The other recent work I’ve seen on taint analysis was the paper discussing how to use the BDDBDDB system to track tainted values through a Java program (previously discussed here). In that case, the system used sophisticated alias analysis to track the movement of values from tainted sources to vulnerable sinks (e.g. from dangerous methods receiving input from the Internet, to vulnerable methods issuing output to a database or filesystem or even another server connection).
Consider how similar these are. In the first case, you’re extending the language’s type system to distinguish tainted from untainted values. In the latter case, you’re analyzing a source base which has no such type distinctions, and statically proving whether or not the value flow does in fact permit tainted values to be used in a context that requires untainted values.
Yet also notice how different these are. In the extended-C++ case, actually using the tainted/untainted types would require you — in my reading of the paper — to essentially modify almost every method in your source code. All methods would need to declare their values as being either tainted or untainted, even methods which don’t actually ever use the possibly-tainted values in a vulnerable way. In other words, the property of taintedness is only relevant at one place in the program — the methods whose inputs are required to be untainted. But declaring the property as an explicit type annotation throughout the program forces your code to expose this distinction everywhere. Suddenly your language extension forces you into a truly massive refactoring!
In contrast, the BDDBDDB-based analysis essentially overlays the tainted property on the program’s alias graph. The analysis may well be no less sound… but how much more convenient for the programmer! Taintedness becomes a property that is derivable only when the programmer wishes to be aware of it. The rest of the time, the programmer can avoid worrying about it. In some sense, the security analysis becomes a “pay-as-you-go” property of the system, as opposed to a “pay-up-front” property in the fully statically typed technique.
Now, consider this a bit more broadly. Isn’t this exactly what dynamic language advocates hate about statically typed languages? Instead of worrying about tainted vs. untainted, let’s consider basic data typing: string vs. number. If you read in a string, and then you want to treat it as a number, isn’t that essentially exactly like a taint analysis? The only time you care whether the string is in fact not a number is if you actually try to use it as one. If you don’t ever try to use it as one, then why worry? Conversely, if you do try to use it as one, wouldn’t it be nice to have the system be able to trace its provenance back to where you originally read it, so you could decide if you wanted to insert a validity check there… without forcing you to declare its type at every intermediate program point where it flows?
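That provenance idea is easy to sketch. Here's a toy (entirely my own, handwaving as usual): a value that remembers where it was read, so that if it's ever used as a number and fails, the error points back to the original read site rather than some intermediate variable.

```python
# A value that carries its provenance, so a failed use-as-a-number
# reports the original read site instead of the point of use.

class Tracked:
    def __init__(self, raw, origin):
        self.raw = raw
        self.origin = origin      # e.g. "config.txt line 3"

    def as_number(self):
        try:
            return int(self.raw)
        except ValueError:
            raise ValueError(
                f"{self.raw!r} (read at {self.origin}) is not a number")

port = Tracked("8080", origin="config.txt line 3")
assert port.as_number() == 8080

bad = Tracked("eight", origin="config.txt line 7")
try:
    bad.as_number()
    assert False, "should have failed"
except ValueError as e:
    assert "config.txt line 7" in str(e)  # provenance survives to use site
```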
It seems to me that extensible languages will have to shift a lot of their safety checking and property checking away from the static type system that’s exposed directly to the programmer. Consider all the possible properties you might want to know about the values in your program: are they strings? Numbers? Complex numbers? Are they null? Never-null? Are they tainted? Untainted? Are they structurally compliant XML? Are they valid values for your object-relational database? Are their dimensions correct for the physics math you want to do with them? There is a literally infinite variety of possible types that you might want to track through your program. And if you have to declare all of these types on every variable and method you declare, you pay a “static declaration tax” equal to the number of extended type properties times the number of declarations in your program. Whereas if the system is able to derive, or trace, or deduce the flow of these properties, and if the programming environment gives you enough insight into how that derivation works and what annotations you can add (very selectively) at only the points where the distinctions are relevant, you offload that tax onto the programming environment.
Imagine an IDE for some Ruby++ language. You can write your Ruby code in the normal way. But you can also annotate various methods with fully optional types. Methods that read from an HTTP connection mark their values as tainted. Methods that write to the database require untainted values. Arithmetic methods mark their inputs as numeric. And even better, there are ways (handwaving furiously) to define new kinds of traits and types; so you can implement complex numbers in the language, and use them without needing to explicitly declare their type throughout the program, and the system can track their value flow and create optimized code for their use. The end result: an extensible language with far more ease of use than extended-Java or extended-C++, and far more static safety than basic Ruby.
Maybe it is possible to have the best of all worlds. And in fact, if we want to truly achieve the dream of extensible statically-analyzed languages with usable syntax, maybe it’s not only possible but necessary.
Up with pay-as-you-go! Down with the static declaration tax! Up with extensible languages! Down with pure dynamism!
Maybe I’m a bit more religious than I thought.
What would such a language look like? Stay tuned!
Anyway, I’m leaving the body of the post above as I originally wrote it, because the slightly uninformed enthusiasm gave it some good energy 🙂
One of the most exciting program analysis techniques I’ve ever seen (I love having my own geek blog, where I can say things like that with a straight face) is the BDDBDDB system, from Whaley and Lam at Stanford. (I will capitalize BDDBDDB here, though they don’t.)
Pointer alias analysis tracks the movement of object references through a large piece of code, across method calls and through heap references. I’ve been reading programming research papers for almost two decades now, and for much of that time, pointer alias analysis was a chimera. It just wasn’t possible to make it scale. The number of potential paths in even a moderately large piece of code is dizzying — numbers like 10 to the 14th power aren’t uncommon. And for most of the last two decades, this has been an uncracked nut.
Well, Whaley and Lam seem to have finally cracked it, by building on other research done using binary decision diagrams (BDDs) to compactly encode the movement of values through variables in a program, in a way that maximizes the amount of shared structure in the variable flow graphs. BDDs were originally created for hardware logic analysis, to track the movement of signals through a huge circuit. So it’s elegant that they can also apply to software analysis, tracking the movement of values through a large code base.
The further elegance in Whaley and Lam’s technique is the query language laid on top of the BDD-based analysis. Binary decision diagrams can be used to encode relations; they can be joined and queried much like relational tables. And there is a query language, Datalog, that is well suited for expressing queries over a BDD-based relational model. Datalog is a relational query language, similar to a more abstract version of SQL, except it supports recursive queries (which SQL does not). Whaley and Lam are able to encode a Datalog-based query and relationship-generation engine on top of their BDD aliasing structures. Hence, BDDBDDB — Binary Decision Diagram-Based Deductive DataBase.
One of the more amazing results in their paper is their discussion of how their analyses improved once they had the Datalog layer. Originally they wrote a context-specific alias analysis on top of a BDD engine without any intervening abstract query language. This took them a few months, the code was thousands of lines long, the performance was not good, and there were a number of subtle bugs. Once they wrote the Datalog engine and reformulated their analysis as Datalog formulas, the total size of the analysis dropped to just around a dozen lines (!) of Datalog, and the performance was one to two orders of magnitude faster.
Once you have a query engine that can efficiently query an entire software system to track value and type flow throughout, in a reasonably accurate way, there are many systems you can build on top of it. For instance, you can write an Eclipse plugin to track the exact lines of code that expose a web application to SQL injection attacks. This can be done by notating particular methods as sources of input data from the Internet, and other methods as destinations (“sinks”) of query text for databases, and then evaluating whether any values flow from source methods to sink methods. Any paths through the code that allow unfiltered input to travel from the Internet to your database are security holes. The BDDBDDB analysis can tell you exactly what code paths allow that value flow. And it can do it purely as a static analysis!
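BDDBDDB expresses that query as a couple of Datalog rules over BDD-encoded relations. Here's the same query in toy form (my sketch — a naive fixpoint over an explicit edge set, nothing like BDD scale, but the same shape; the method names are illustrative):

```python
# Datalog-style taint reachability, naively:
#   reaches(X, X) :- source(X).
#   reaches(S, Y) :- reaches(S, X), flow(X, Y).
# A vulnerability is reaches(S, K) where K is a sink.

def find_vulnerable_paths(flow_edges, sources, sinks):
    reaches = {(s, s) for s in sources}
    changed = True
    while changed:                      # iterate to fixpoint
        changed = False
        for (x, y) in flow_edges:
            for (s, t) in list(reaches):
                if t == x and (s, y) not in reaches:
                    reaches.add((s, y))
                    changed = True
    return sorted((s, k) for (s, k) in reaches if k in sinks)

edges = [("getParameter", "buildQuery"),
         ("buildQuery", "executeQuery"),
         ("readConfig", "openFile")]
vulns = find_vulnerable_paths(edges,
                              sources={"getParameter"},
                              sinks={"executeQuery"})
assert vulns == [("getParameter", "executeQuery")]   # the injection path
```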
They use Java as the basis of their system, for unstated but obvious reasons. (Namely: strong static typing makes their value flow analysis much more efficient, since it can eliminate many value flows that aren’t dynamically permitted; and the lack of pointer arithmetic means their analysis can be trusted without having to restrict itself to some “safe” subset of the language.) Many people have been wooed away from Java by the flexibility of dynamic language programming, where you needn’t encumber yourself with types; but those of us who remain in the static typing camp are pretty much hooked on the tools. And BDDBDDB is the harbinger of a whole new level of Java-based analysis tools that are going to take Java programming and safety checking to the next level.
One thing not mentioned in any of the BDDBDDB papers I’ve yet seen is reflection. I can’t see how reflection can be handled in the BDDBDDB system, since tracking reflective method invocations requires not just tracking object references through variable sites, but actually tracking method values (and even specific method name strings!) through reflective invocations. Certainly their system as described doesn’t address reflection in any way. This also seems to me to limit the applicability of BDDBDDB to languages like Ruby, which make heavy use of code generation (in Rails, for instance). Though I’m sure that some enterprising Rubyists will undertake to prove me wrong.
In general it’s very challenging to consider the interactions of static analysis with metaprogramming frameworks — runtime bytecode generation is becoming more common in Java systems, but BDDBDDB currently runs on a source analysis, without access to the runtime-modified bytecode. Metaprogramming can blur the line between compile-time and runtime, and BDDBDDB is solidly on the compile-time side… at least for the moment. I say this not to denigrate the importance of this technology, but rather to indicate an even further level to which it can go.
I look forward to what’s next for these analyses. Good times in the static typing world!
I am going to mostly avoid those debates here. Some days I feel that all these flame wars are more about personal taste than anything — and taste, being subjective, is definitely not worth huge flamewars. Other days I feel that all the flaming is simply missing an opportunity to ponder how to make things better overall.
I would like to have a language that combined the succinctness of Ruby with the static typing and IDE-assistance of Java. I would like to have a system that combined all the best qualities of OS X and Windows (for my personal values of “best”). I think competition is great, since it leads to opportunities for learning and improvement. And besides, the less dogmatic you are, the more able you are to appreciate all the many kinds of appeal in these different approaches… Ruby’s “it tells a story” syntax (well, aside from the @fields) is truly a thing of beauty, and yet who couldn’t be taken with the power of a C++ image library with near 100% code efficiency?
Too much flaming can wind up backing you into a corner, where you reject everything about the other side. And that way lies calcification and stagnation. Flamewars are the cholesterol of the technology world… they clog you up and can lead to heart attacks. (Although a little flaming can add spice — even cholesterol can be tasty in moderation! But some blogs serve up so much of it that it’d be superfluous here.)
So if you want incendiary language, there are plenty of other places to go, some of which are on my blogroll. But here I’m going to be all boring by comparison. Or… not… depending on your perspective!
And to try to move the conversation on a bit, stay tuned — my next few posts are on that very theme.
Very sorry for the hiatus, everyone (all twenty-some of you, according to Feedburner — it was forty-plus until Oct 1, then Feedburner’s stats rolled over and HALF OF YOU DISAPPEARED!!!). I know I just blew my “every two weeks” rule. And my hoary old excuse — “I’ve been sick” — has only one redeeming quality: boy, is it ever true. Our two-year-old brought home a cold that went straight to my throat and destroyed my vocal cords on the way to my lungs, where it transformed into a baker of the worst cookies ever. EVER. Combine that with some epic work-related stress immediately prior, and you have the perfect off-the-air recipe.
I’m not 100% yet, but I’ve gotten well enough to be able to think about things other than child care and how sick I am. So, my apologies. I’m about to go into high gear to make up for it — I’ve actually got about a dozen posts saved up and they’re going to start hitting every few days for a couple of weeks. Stay tuned, there’s a veritable manifesto on the way!
Ultimately I’m shooting for more of a weekly rhythm (if not more often), since biweekly is just too seldom.
Anyway, thanks for your patience, and I’ll try to stay healthy for a good long while. (Though with a baby AND a toddler in the house, and winter coming, good health is a guaranteed impermanent condition….)