SANE vs. CUPS

Every so often, I have to print things, sign them, and then scan them.


Average time to print something using CUPS: 2.5 hours. I have to know where my printer is plugged in, edit some configuration files, fiddle with a GUI control panel, and then with a web UI. All these steps are necessary because no single one of them exposes all of the functionality. Sometimes, I also have to download a "PPD" file.


Average time to scan something using XSANE: 3 seconds. I plug in my scanner. I click on the picture of the scanner in my menu. I click "scan". It scans.


Congratulations, SANE team. You know how to write software.


Better luck next time, CUPS guys.

Another Prediction

I'm seeing a lot of noise in the blogodrome about REST these days. Ian Bicking says that continuation-based frameworks are doomed because they are workarounds for the web, based on the fact that Bill de hÓra says REST has won some kind of epic battle against... what? I wasn't aware that REST was fighting a war against anyone. In fact, I see no mention of what the war was against, or what the observable signs of the victory are. Is it a victory against WS-*? Over stateful web UIs?

It reminds me of another war, one that America "won" several years ago. The triumphant cries of "mission accomplished!" in the REST community ring hollow to me, although maybe this is just because I am personally working very hard to work around the web.

Don't get me wrong, the web is great and all, and REST is definitely a superior alternative to WS-*, if you want to do server-to-server information exchange. However, being better than WS-* is pretty easy; web services as they stand today are a nearly unmitigated disaster. I'm very happy to never have to work with them. The brief run-ins I've had with them (like the SOAP stuff in Universal Plug and Play) have been hellish nightmares.

Let's come at this whole "REST vs. rich applications" thing from another perspective.

I don't want to be writing software for the web. I'd much rather be writing a Python desktop application which makes its own socket connections, over its own protocol, and distributing that. It would provide a much richer experience. If I had total control of the user's desktop that's what I'd be doing. However, there are certain advantages that the web provides on a logistical level, unrelated to application architecture:
  • Python isn't secure, and many security-conscious users (those in our target market) won't just go installing new applications willy-nilly (javascript is hardly as secure as it should be either, but at least there is an expectation that it should be, and constant effort put towards improving it)
  • every computer has a web browser: often users have to use applications on computers they don't control
  • "downloading" a web-based application is zero-friction; it by definition cannot involve any complex setup or barrier-to-entry
  • web browsers have a very rich presentation layer which would be hard to re-implement in a native application, and schemes to access it through "native" APIs like PyXPCOM are, although higher-performance than generating HTML and JavaScript strings, unfortunately unwieldy and complex
  • Many networking situations don't allow anything but HTTP, for completely ridiculous political reasons. This is a stupid, social restriction, usually put in place for "security" reasons by people who don't really understand the security issues involved, but nevertheless it is a very common problem
  • finally, JavaScript might not be Python, but it is a highly dynamic language, and many of its flaws can be fixed (especially with a "real" language like Python on the server to help you out)

None of this has anything to do with the statefulness of the API, or the greatness of REST. Right now, it just makes more sense to write software for the browser than for any other client API, because the browser happens to be the most portable and most seamless deployment target.

These advantages also don't mean that the browser is free of problems. In addition to being a highly convenient deployment environment, the browser is at times an insanely bad development environment. None of the frustrations and difficulties encountered trying to work with the browser as a programming environment, nor the poor performance and poor usability that come from trying to treat it as a "regular" browser and doing all the work on the server, have given me a great deal of confidence in the generality and power of REST as a programming model or the web as a model for applications.

I don't care about application architecture per se - what I care about is an architecture for building usable, powerful applications which make it easy for the user to do what they want without a lot of waiting. That's hard any way you slice it, and by forcing everything your application does into tiny, stateless chunks, REST doesn't help that very much.

So, yes, Divmod Athena, like Seaside, is a giant workaround for the web, and it's turning out to be a pretty good one, at that. I'm pretty happy about that.

The Internet is a global, stateful, interconnected mesh of computers. It's an amazing achievement. The web is a historical accident, a hopefully brief one, which throws away much of the Internet's utility by forcing every networked application with a user interface into an almost-but-not-quite-too-narrow mold. Cookies have replaced authenticated connections, blobs of ever-changing unparseable HTML have replaced application-specific protocols, and dumb clients have replaced intelligent peers.

Here's my prediction. REST will continue to be useful as an architectural pattern for certain applications. However, it is in no way the only way to program "for the web". Workarounds for the web, like Google Mail, Meebo, YouOS, and Divmod, will continue to be successful because they don't sacrifice the user's experience on the altar of architectural purity. There is no indication that they are going away any time soon.

Where does Divmod come from?

Early next week the Divmod Fan Club meets for the first time.

If you want to influence the direction of our future open source development, now's a good time to join. People on IRC have known about this for a while, but I don't think I've given it much attention publicly. Fans who join early are like low generation V:tM vampires - you can't decrease your "joined-on" date. Except, I guess, to continue the vampire analogy, you might be able to find a long-time fan and devour their soul. If you do that, let me know how it works out. I'm not too clear on the mechanics.

The Divmod Fan Club (DFC) is, for lack of a better description, a business-model experiment. Even after we are providing a commercial service, we feel that our open source contributions are of a pretty high value to the community. So, we're letting the community pay for it, to whatever extent that you all think it's worthwhile. We are still working on the incentive structure, but the basic idea for right now is:
  1. if you want support directly from Divmod team members, you really should be in the club, and
  2. fan club members can vote once a month on what we're doing. The aforementioned meeting is the first of those votes, which is why now's a good time to join.
However, it's also sort of like the "rep" program, a fixture of most indie bands' business models: an official way to collaborate and be involved in Divmod's success. (In a way, we are a bit more like an indie rock band than a software company.)

So, to answer the titular question: "Divmod is brought to you by the contributions of our listeners, this computer, and others like it."

The quote for bronze membership is not a joke.

Is "Framework" the new "Enterprise"?

I think that at some point, "Framework" became a dirty word, and I haven't noticed until now. From Guido's blog:
"Frameworks have no requirement to be minimal in size while maximal in features."

I notice that he describes what makes a good library good, but there is no mention of what a good framework might be. The implied conclusion, of course, is that there is no such thing as a good framework - once you're a framework, you've gone off track and created something that isn't very useful, and you should turn around and go implement a library. Guido is not the first person I've heard make this assumption, but he is the first person I'm inclined to take seriously.

My understanding of the distinction between "frameworks" and "libraries" is that frameworks, in general, tend to call your code through callbacks, whereas libraries don't need to call your code. Under that definition, all existing GUI-building tools would be "frameworks", but nobody calls them that; the convention in the GUI world is to call them "toolkits". The convention in the business software world seems to be to call them "application servers".
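That callback distinction can be sketched in a few lines of Python. This is a toy illustration, not code from any real toolkit; every name in it is hypothetical:

```python
# Library style: your application code drives, and the library
# function just computes and returns a value when you call it.
def slugify(title):
    return title.lower().replace(" ", "-")

# Framework style: you register a callback up front, and the
# framework's own loop decides when your code actually runs.
class TinyFramework:
    def __init__(self):
        self.handlers = []

    def register(self, callback):
        self.handlers.append(callback)

    def run(self, events):
        # Inversion of control: the framework owns the main loop
        # and calls *your* code back for each event.
        for event in events:
            for handler in self.handlers:
                handler(event)

handled = []
framework = TinyFramework()
framework.register(handled.append)
framework.run(["connect", "data", "close"])
# handled is now ["connect", "data", "close"]
```

On this reading, the difference is purely about who holds the control flow, which is why the same terms get relabeled "toolkit" or "application server" in different communities.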

I'm not sure that the definitions of these terms are generally accepted, though. FOLDOC says a framework is "a set of classes that embodies an abstract design for solutions to a number of related problems", and a library is "a collection of subroutines and functions stored in one or more files, for linking with other programs". Neither of those definitions says anything about what code calls what other code, or how such things are designed. In fact, the way that they are presented there, the terms are nearly synonymous in the context of Python.

I think that regardless of terminology, all software has the obligation to be minimal in size while maximal in features. Also, software should be incredibly fast, well-documented, and user-friendly. Of course it's usually impossible to be all of those things; the design constraints imposed are a complex fusion of the application domain and the intended audience.

However, in light of the growing acceptance of the word "framework" to mean "a really bad library that started out as an application", maybe we should stop calling Twisted a framework. What's a better word? "Toolkit"? "Environment"? "Integratotron"? Discuss.

A Point of Agreement

R0ml has been quoted by John Udell as saying that Open Source software can replace standardization. Many of the things I consider blogworthy are distillations of a position in an argument I've had with my father, but on this I think we are in violent agreement. Everybody loves a fight though, so I'll try to state this as antagonistically as possible - arguing against Mr. Udell's commentary. He raises three potential objections, asking if the idea is wrong in principle.

Objection: We don't write programs that way.
Who's "we", kimosabe? Speaking for groups that count me as a member, we do. Apparently some other people do too - David Wheeler claims that there are 30 million lines (more than $1 billion worth) in Red Hat Linux alone.

Objection: Our technology platforms are too balkanized to enable us to collaborate on common implementations.
Maybe Microsoft isn't going to start collaborating on Firefox, but who cares? If you follow the trends that W3Schools sets forth, IE's days are numbered anyway. Apple already uses an open-source browser engine.

Objection: Ditto for our political agendas.
This, I think, is the most telling objection. People who want to produce a good product for their customers will use open-source software and collaborate on its implementation, because that is quickly becoming the industry's best practice for producing high-quality software. People who instead have a strong political agenda, unrelated to the quality of their products, will develop standards bodies and waste their time re-implementing tons of code which is available elsewhere for free.

For an example of technology politicization, SQL is a "standardized" language, whereas PHP, Python, Perl, and Ruby are each open-source with a single implementation. Have you tried to write a whole program in SQL lately? Or even make a generated SQL statement run on more than one database at a time?
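To make the portability complaint concrete, here is a hedged sketch (a hypothetical helper, not any real library's API) of how little the SQL "standard" buys you in practice: even limiting a query to its first N rows is spelled differently by the major databases.

```python
# Hypothetical example: the same trivial query, rendered per dialect.
def first_n_rows(table, n, dialect):
    if dialect in ("postgresql", "mysql", "sqlite"):
        return f"SELECT * FROM {table} LIMIT {n}"
    if dialect == "mssql":
        return f"SELECT TOP {n} * FROM {table}"
    if dialect == "oracle":  # the classic ROWNUM spelling
        return f"SELECT * FROM {table} WHERE ROWNUM <= {n}"
    raise ValueError(f"unknown dialect: {dialect}")

# Three "standards-compliant" databases, three different statements:
first_n_rows("users", 10, "postgresql")  # SELECT * FROM users LIMIT 10
first_n_rows("users", 10, "mssql")       # SELECT TOP 10 * FROM users
first_n_rows("users", 10, "oracle")      # SELECT * FROM users WHERE ROWNUM <= 10
```

Any real query generator ends up accumulating exactly this kind of per-database branching, which is the gap between a paper standard and a shared implementation.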

There are still some areas where standardization is important - the web comes immediately to mind - but that is only because it is a platform still nominally controlled by entrenched proprietary interests. Imagine, for example, that Microsoft were to give up IE tomorrow and suddenly declare that they were simply packaging Firefox with Vista. What would the point of the W3C be at that point? To broker discussions between Firefox - with 99% market share - and Safari and Opera? It's unlikely. Safari, being open-source itself, can crib freely from Firefox's code where necessary, and Opera is already an also-ran, copying the behavior of the major browsers more than implementing an abstract standard.

While not quite as bad as SQL (it is possible to make pages which render in both Firefox and IE, at least), anyone who has done browser-portability work will tell you that standards aren't all they're cracked up to be in this arena either. On a daily basis I wish that Firefox were the only implementation I had to contend with.

There are more examples of this trend. There is a PNG standard but even Microsoft just uses libpng. Mozilla maintains what I think is the most exhaustive description of the failure of the C++ standardization process - I think it would be hard to argue that things wouldn't be better if the "standard" were in the hands of the GCC team. Jython and JRuby both elegantly prove that you can have implementations that bridge language gaps without a formal standard.

It's not that open source can replace "open" standards and the standardization process. Open source has replaced standardization. We've just been waiting for the world to catch on.