PC programmers originally wrote programs for DOS. It was simple. It was
direct. A program could do whatever it wanted. It could write directly to video memory; it controlled the whole screen.
Later, PC programmers had to learn to write programs for Windows. At first, that was a big pain. The Windows API, even Win16, is hundreds of times more complex than the DOS "API", if you could even say DOS had one. Applications had to be changed so that their UIs behaved the way Windows did, rather than in whatever way was "best" for their application domain. Most programs had to abandon
their crazy key-bindings to make room for C-c and C-v to mean "copy" and
"paste". (Even we geeks only have room in our hearts for one or two sets of crazy key bindings.)
Browsers are today's "video memory"; applications put information wherever
they want, with no user-interface conventions that aren't enforced by the
browser itself. There is no operating system for the web, no mechanism to
make different applications play nicely together on a single server, let
alone on a single page. The web is in need of a major shift.
Many would have you believe that "Web 2.0" - AJAX and such - is this shift.
"Web 2.0", insofar as I can tell that it refers to anything concrete, is the
idea that web applications should respond to the user interactively. This is
something that applications have been doing even before DOS; it is not a new
idea, and in a sense, it is only the web playing catch-up in an area where
desktop applications have long been the clear winner.
What I want to know is, where is the integration paradigm? Where is the
clipboard for the web?
I think that whatever answers this question will rightly deserve the moniker "Web 3.0". It will provide a leap forward over non-web applications that is really worth something, and hopefully it won't be worthy of brutal derision at the hands of industry pundits.
Web 3.0 is going to come with a price, though. By way of an analogy: DOS
developers didn't understand the value of Windows programming at first. They
objected: Windows is slow. It's ugly. The UIs are boring and everything
looks the same. It takes too much memory. Why couldn't they just use DOS
like they used to?
Today's web frameworks are generally more like DOS than like Windows, in
this respect. Don't get me wrong, I don't mean they're bad: DOS was a
huge step up from its predecessor, which was, basically, "you have
to write DOS yourself in every application". The "Web 2.0" frameworks I'm familiar with (Rails, Seaside, Subway, Django, and TurboGears) are geared (no pun intended) towards
providing those parts of your application that you'd have to write yourself,
if they weren't provided. They are sometimes libraries, sometimes
frameworks, but their purpose is to provide you with a quick path to
development and make your application interactive. They aren't really geared
towards providing a context in which your application can communicate with
other applications.
The infrastructure Divmod is building, we hope, will provide some of the
imposed structure that has thus far been lacking on the web. The first step,
by way of Mantissa and our recent work on the "Offering" system, is to provide a way for multiple applications to be easily installed on the same website,
an idea that Zope, among
others, pioneered with some success. That's the front-end, the
Windows-1.0 phase. Later, we hope to build clustering and inter-application
remote integration with Vertex, the "networking" and
"multitasking" that came with Windows 95, following in that same
metaphor.
Let's escape the metaphor for a second. Here is the kind of concrete feature we want to provide: say you want to write a social network application. Social networks implicitly use telephones and email. With Divmod's framework you can (in the future, when we have finished these systems) write that application as a set of plug-ins which will be activated by the Address Book page, the "Email Inbox" page, and the "Place Call" page from our VoIP application. You can automatically create To-Do items in the user's existing task-tracker. Similarly, if you provide appropriate plug-in interfaces (which the system does make pretty easy and idiomatic), other applications will be able to hook into yours the same way.
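To make the shape of this concrete, here's a minimal sketch in Python of how such a plug-in system might look. To be clear, it is a made-up illustration: IAddressBookAction, register_plugin, and plugins_for are hypothetical names invented for this example, not the actual Mantissa or Offering APIs.

    # A toy plug-in registry; hypothetical, not a real Divmod API.

    class IAddressBookAction:
        """An action the Address Book page could offer for each contact."""
        def action_label(self):
            raise NotImplementedError()
        def perform(self, contact):
            raise NotImplementedError()

    _registry = {}

    def register_plugin(interface, plugin):
        """Record a plug-in as a provider of the given interface."""
        _registry.setdefault(interface, []).append(plugin)

    def plugins_for(interface):
        """Find every plug-in registered for the given interface."""
        return _registry.get(interface, [])

    # The social network application ships this plug-in. The Address Book
    # page discovers it at render time; neither application needs to know
    # about the other in advance.
    class InviteToNetwork(IAddressBookAction):
        def action_label(self):
            return "Invite to my network"
        def perform(self, contact):
            print("Inviting %s <%s>" % (contact["name"], contact["email"]))

    register_plugin(IAddressBookAction, InviteToNetwork())

    # Roughly what the Address Book page would do while rendering a contact:
    contact = {"name": "Alice", "email": "alice@example.com"}
    for action in plugins_for(IAddressBookAction):
        print(action.action_label())   # label for the button
        action.perform(contact)        # what happens when it's clicked

The important inversion is that the hosting page, not your application, decides when your code runs: you write a small object that satisfies an interface, and the site's existing pages pick it up.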
In the process of writing this essay, I discovered that Wikipedia says others are already speculating about Web 3.0, with similar ideas: it will be distributed, it will be decentralized, it will involve
connections within and between web services.
So, how does this all relate to Divmod? What could we hope to gain by
providing this kind of technology, especially by giving it all away for
free? I'll explain the part of our business model that this impacts, because
I am personally always suspicious of people telling me of great stuff
they'll do for me without an idea of how it's helping them.
To start with, we have a lot of different ideas for applications, and we
want to make sure they can all inter-operate. All the standard reasons for
using open source apply - making bugs shallow with more eyeballs, etc - and
we also want to make sure that as we're focusing on our first few uses we
are preparing for a broader and broader use of our technology, and what
better way to do that than by having other people with other use-cases take a look at it?
We have another reason too, though. Once we're done with all of our ideas
for applications, (and by all that is good and holy in this world, one
day we will be done!) we want to step back and allow the service to
keep growing through the contributions of our users, and eventually
compensate those contributors who provide us with useful code. Taking a page from Red Hat's book, rather than attempting to sell a commodity into an existing market, we want to define the market for "Web 3.0" products and then lead it: provide a marketplace for good products, rather than simply selling our own.
By providing a structure that will allow lots of different applications to
be installed on the same site, we provide a way for independent developers
to provide us with applications that can be run together on our site. We can
then run all those applications through our account management system and
make it easy for those developers to get compensated based on usage patterns
and that sort of thing. Of course, any applications we're going to run on
our cluster will have to be licensed as open source so that we can have the
same expectation the developer does: we won't sue them for using our stuff,
they won't sue us for using theirs.
In other words, we're trying to leverage this technology shift to make a way
for hackers to get rich writing open source software, without going through
the process of starting a startup.
The first step is "I don't have a problem, I can stop whenever I want",
right?
This week I purchased some "gaming" input peripherals, because "gaming" has come to mean "good" in the eyes of peripheral manufacturers. I wasn't disappointed by that assumption.
G7 Gaming Mouse
Logitech really has no competition for mice. My previous mouse, an MX1000, was, when I purchased it, by far the best mouse I'd ever used. My only complaint about it, aside from the "doesn't work well on reflective surfaces" problem that many optical mice have, was that it was getting to the point where the (internal, non-replaceable) battery was run down all the time, and the transmitter / stand was slightly warped after much usage, so the contacts no longer directly lined up where they were supposed to and I had to spend 20 minutes fiddling with it every time I wanted to get it to charge.

Given this previous problem with stand-charger-based mice (the MX1000 is not the first wireless mouse I've had this happen to), the G7 was impressive right out of the box. There are 2 battery cartridges: one stays in the charger at all times, one is in the mouse. That means the charger is smaller, doesn't have to be located on my desk, and when the mouse does need charging, I have 30 seconds without a mouse instead of an hour and a half. The charger, while itself small, also has a tiny, detachable USB transmitter, making it a cinch to pop this mouse into my laptop bag for short trips.
What else can a mouse do? Comfortable in my hand: check. Tracks smoothly: very check. I have no objective way to measure it but it certainly feels at least as smooth as any other mouse I've used. Works on glossy surfaces... check? Color me impressed, it worked on at least 3 different surfaces, including my Wacom tablet, that caused my MX1000 grief. It still gets upset if I put it on a mirror, though.
The feature that impressed me the most, and that gives it a real claim to being a "gaming" mouse, was that it has a speed shifter. This never would have occurred to me. Two buttons under the scroll wheel increase and decrease the mouse's speed (in hardware, so it works with Linux) from "slow" to "fast" to "crazy". Playing Quake 4 this weekend, this feature was super-handy when getting into an armored vehicle that slows down the mouse to simulate a feeling of weight. Even using regular applications, it's handy; with 2560 pixels to cover on my desktop from edge to edge, it's nice to be able to crank up the speed, rocket over to my left desktop, slow down to pinpoint emacs's title bar, then speed up again to yank it all the way over to the right.
G15 Gaming Keyboard
In a word: huge. This keyboard is probably the largest that I've ever seen, let alone used or purchased. The IBM Model M, named the "desk-dominator" for its unnatural size, is 492 mm x 210 mm (19.4” x 8.3”). This thing is 546 mm x 267 mm (21.5” x 10.5”).
So far though, the size seems to be worth it. It has the best tactile response I've gotten from a membrane keyboard ever, blowing even the previous front-runner in that category, the Eclipse, out of the water. I can routinely do slightly better than usual in gtypist, even after only having used this keyboard for a day. The "squeak" I've mentioned in previous reviews is completely absent.
The basic layout is a no-nonsense 104-key PC layout: everything in the right place, with Escape offset slightly. I find I don't mind, but I suspect die-hard Vim fans will have a more intense reaction, whether they love it or hate it.
It also includes some special features, which are an annoyance on many keyboards, but which I am generally pleased with on this one. It has a standard set of media keys and a volume knob, all of which worked out of the box on Ubuntu. There's also a switch to turn off the "Windows key". That's handy when playing games on Windows, although it's obviously not a terribly useful feature in Linux, where Windows => Hyper and a stray press won't magically break you out of a running game. Nevertheless, the switch works in Linux as well.
It also features backlit keys, a first in a Logitech keyboard. The backlight is subtle, and when it's off, the keycaps are almost black. There's a switch to toggle it on and off, and between two levels of intensity.
The G15 also includes 2 USB ports, which is a welcome addition, and something I've wished every USB keyboard had since I stopped using a Mac as my primary machine. I now have somewhere convenient on my desk to plug in my USB SSH key and camera. Unfortunately, Windows seems to (wrongly?) believe that the keyboard hub doesn't have enough power to drive the thumb drive; Linux powers it without complaint. There are also 2 small grooves to run wires under the keyboard, which is great, as they allow me to run my headset cable underneath the keyboard without the keyboard rocking slightly where it would otherwise balance on the cord.
The special "G" keys on the left side of the keyboard are the main attraction. On Linux they are just repeats of F1-F12 and 1-6, which isn't great, but at least the keys provide some default behavior and they're not totally dead, as many special keys are. Since I regrettably must boot into Windows for the majority of gaming these days anyway, this lack of functionality didn't disappoint me too much.
On Windows, with the included driver software (which, I will note, did not ask me to reboot!) the G-key functionality really shines. The keys can be bound to any other key, or any sequence of keys, including delays. There is a Macro Record (MR) button which allows you to quickly and easily configure any key to be a timed sequence of keystrokes. This means that in World of Warcraft, I can script a sequence of attacks, including cooldown times, simply by hitting MR, a G key, doing the attacks, then the MR key again to finish. Unfortunately it's unlikely that this functionality will be useful in anything other than an MMORPG, but given how useful it is there, I think it's worth the extra few inches of desk real estate.
Finally, the keyboard also includes an LCD display. I wish I had more to say about this, since it seems like it could be a really killer feature, but the included applications are really sparse: a clock, a CPU meter, a volume meter. I'll be watching g15mods.com to see if anything interesting comes out (not least of which, Linux drivers).
Overall
The G7 is definitely the best mouse I've yet used, gaming or no; I think I'd recommend it to anyone looking for a good wireless mouse.

I can't be quite so unequivocal about the G15, but I'm still pretty pleased with it. The tactile response is good; the frame is incredibly sturdy; it looks cool, if massive; the USB ports are handy; and the G-keys are really useful in the place where they're supposed to be, to wit, a video game. The jury's still out on the LCD display, and lack of Linux support is always a problem with funky features, although offering the Windows SDK on the CD with the keyboard was a nice touch.
This might have gotten lost in my last post. It has nothing to do with copy
protection so I'm repeating it.
Jonathan Coulton. He sings about robots and fractals, and unlike many who cover those topics, he sings really well.
Listen to his songs.
Give him money.
Do it now.
Thanks to everyone who commented on my last entry, especially Mary, who
raised some very interesting issues. Thanks also to Jerub for posting it to MetaFilter under the
subject "did skynet need ethics?". Can you spot the secret reference to
Skynet in this article?
Unfortunately the contrast between the MeFi commentary and the comments on the entry implies to me that my point didn't make much sense to non-programmers. So, a few clarifications.
I am not advocating a specific plan. My point isn't "we should license and bond programmers", or "we should throw F4I's programmers in jail". I also don't care that much about the Sony/BMG case except as a specific example of where things have gone wrong already. I definitely don't think that we should crucify some random employees of a software company based on some moral code that they weren't a party to and didn't even know about when they wrote the offending code.
What I am saying is: people who create computer programs have a responsibility to the public. Malware, viruses, DRM, and a variety of other ways to subvert a user's computer against their will are all immoral and people who create them should care about that. I may have been overly narrow in simply addressing "programmers" since obviously management plays a role.
I also believe that the public should assert their rights in this regard. I see that Sony is coming under some incredible pressure in this case. That's great, but their executives are still all universally saying "we still believe in copy protection technology". I think the public should reply with a resounding, "no you don't" and continue to boycott Sony until it abandons all copy-protection "technology".
I do have a few new points to make while I've got the floor.
Some musicians aren't total asshats. If you're looking for some music to listen to while you're waiting a decade or five for Sony/BMG to actually listen to their customers, may I humbly suggest the musical stylings of Jonathan Coulton. He provides some awesome music for free download, and it just so happens that he has specifically said that he is anti-DRM and thinks the Sony rootkit is a travesty.
Also, copy protection just doesn't work. It never has, and it never will. It might discourage people from copying things a few times, for a few minutes, but in the large it has no impact. Tycho put this particularly well:
... people who pirate software enjoy cracking it. The game itself is orders of magnitude less amusing. And their distributed ingenuity will smash your firm, secure edifice into beach absolutely every Goddamn time. There are no exceptions to this rule.

The whole idea of "copy protection" is flawed. If you take a holistic view of the process, it doesn't even make any sense. The only way copy protection is coherent is if you ignore the part of the distribution process where the customer, you know, uses the thing they bought.
This post isn't about Divmod, exactly.
I've been mulling over these ideas for quite a while, and I think I may still have more thinking to do, but recent events have gotten me thinking again about the increasing urgency of the need for a professional code of conduct for computer programmers. Mark Russinovich reported on Sony BMG's criminal contempt for the integrity of their customers' computers, and some days later CNET reported on Sony BMG's halfhearted, temporary retraction of their crime. A day later, CNET's front page has news of Apple trying to institutionalize, as well as patent, a similar technique. While the debate over DRM continues to rage, there are larger issues and principles at stake here that it doesn't seem like anyone is talking about: when you run a program on your computer, who is really in charge?
I posit that, in no uncertain terms, it is a strong ethical obligation on the part of the programmer to make sure that programs do, always, and only, what the user asks them to. "The user" may in some cases be an ambiguous term, such as on a web-based system where customers interact with a system owned by someone else, and in these cases the programmer should strive to balance those concerns as exactly as possible: the administrator of the system should have no unnecessary access to the user's personal information, and the user should have no unnecessary control over the system's operation. All interactions with the system should faithfully represent both the intent and authority of the operator.
Participants in the DRM debate implicitly hold the view that the ownership of your operating system, your personal information, and your media is a complex, joint relationship between you, your operating system vendor, the authors of the applications you run, and the owners of any media that pass through that application. Prevailing wisdom is that the way any given software behaves should be jointly determined by all these parties, factoring in all their interests, and that the argument is simply a matter of degree: who should be given how much control, and by what mechanism.
I don't like to think of myself as an extremist, but on this issue, I can find no other position to take. When I hear lawmakers, commercial software developers, and even other open source programmers, asking questions like, "how much control should we afford to content producers in media playback programs?", I cannot help but think of Charles Babbage.
On two occasions I have been asked [by members of Parliament!], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

The "you don't own your computer" paradigm is not merely wrong. It is violently, disastrously wrong, and the consequences of this error are likely to be felt for generations to come, unless steps are taken to prevent it.
Computer programmers need a socially and legally recognized code of professional ethics, to which we can be held accountable. There have been some efforts in this direction, the most widely known being the Software Engineering Code of Ethics and Professional Practice. As long as I'm being extreme: this code of conduct is completely inadequate. It's sophomoric. It's confused about its own purpose. It sounds like it was written by a committee more interested in promoting "software engineering" techniques, as defined by the ACM, than in ethics. I'll write a bit about exactly what's wrong with it after I describe some similarities in existing professional codes of conduct which themselves have legal ramifications.
Although there are many different codes of ethics for medical doctors, a principle which echoes through them all is one which was formulated in ancient history, originally by Hippocrates but distilled into a catch-phrase by Galen: "First, do no harm."
The idea is that, if you are going to be someone's doctor, you have to help them, or at least, you shouldn't ever harm them. Doctors generally regard this as a sacred responsibility. This basic tenet of the doctor-patient relationship typically overrides all other considerations: the doctor's payment, the good or harm that the patient has done or may do, and the advancement of medical science all take a back seat to the welfare of the patient.
In this modern day and age, where doctors often perform general anesthesia on their patients to prepare them for surgery, this understanding is critical to the credibility of the medical profession as it stands. Who would knowingly submit to a doctor who might give them a secondary, curable disease, just to ensure the doctor got paid?
Lawyers have a similar, although slightly more nuanced, principle. Anybody who has watched a few episodes of Law and Order knows about it. A slightly more authoritative source than NBC, though, is the American Bar Association, who in their Model Code of Professional Responsibility (the basis for the professional responsibility codes of most states' Bar associations in the United States) declare:
The professional judgment of a lawyer should be exercised, within the bounds of the law, solely for the benefit of his client and free of compromising influences and loyalties. Neither his personal interests, nor the interests of other clients, nor the desires of third persons should be permitted to dilute his loyalty to his client.

For criminal defense lawyers, these "compromising influences and loyalties" may include a basic commitment to the public good. A lawyer who represents a serial murderer who privately admits to having committed heinous crimes must, to the best of their ability, represent the sociopath's interests and try to get them exonerated, or, failing that, the lightest sentence possible. Low as we as a society might consider a lawyer who defends rapists and murderers, we would think even more poorly of one who gave intentionally bad advice to people whom he personally didn't like, or sold out his client's interests to the highest bidder.
(emphasis mine)
A doctor's responsibility is somewhat the same. If a doctor is treating a deeply evil person, they are still obligated by the aforementioned sacred patient/doctor pact to honestly treat that person, not use their position as a doctor to proclaim a death sentence, or cripple them. They are obligated to treat that person equitably, even if that person's evil extends to not paying their medical bills.
This pattern isn't confined to professional trades. Catholic priests have the concept of the "seal of confession". If you confess your sins to a catholic priest, they are not to reveal those sins under any circumstances, regardless of the possible harm to others. A priest certainly shouldn't threaten their flock with knowledge of their confessed sins to increase contributions to the donation plate, even if one of them has confessed a murder.
In each case, society calls upon a specialist for navigating a system too complex for laymen to understand: the body, the law, and the soul. In each case, both society at large and individuals privately put their trust completely into someone allegedly capable of navigating that system. Finally, in each case, the trust of that relationship is considered paramount, above the practitioner's idea of the public good, above the practitioner's (and other's) financial considerations.
There is a good reason for these restrictions. Society has systems in place to make these judgments. Criminal defense lawyers are not allowed to judge their clients because that's the judge's job. Doctors aren't allowed to pass sentences on their clients because that's the legal system's job. Catholic priests don't judge their confessors because that's God's job. More importantly, each of these functions may only be performed with the trust of the "client" - and it is important for the client to know that their trust will not be abused, even for an otherwise laudable goal, such as social welfare, because notions of social welfare differ.
I believe that computer programmers are a fourth such function.
Global telecommunications and digital recording are new enough that I think this is likely to be considered a radical idea. However, think of the importance of computer systems in our society today. Critical functions such as banking, mass transit, law enforcement, and commerce would not be able to take place on the scale they do today without the help of computer systems. Apropos of my prior descriptions, every lawyer and doctor's office has a computer, and they rely on the information provided by their computer systems to do their jobs.
More importantly, computers increasingly handle a central role in our individual lives. Many of us pay our bills on a computer, do our taxes on a computer, do our school work or our jobs on computers. Sometimes all of these things even happen on one computer. Today, in 2005, most of those tasks can be accomplished without a computer (with the exception, for those of us with technical professions, of our jobs), but as the public systems we need to interact with are increasingly computerized, it may not be reasonable to expect that it will be possible to lead an average modern life in 100 years without the aid of a personal computing device of some kind.
If that sounds like an extreme time frame, consider the relative importance of the automobile, or the telephone, in today's society versus 1905. It's not simply a matter of convenience. Today it is considered a basic right for accused criminals to make a phone call. Where was that right when there were no telephones?
Another way to think of this relationship with technology is not that we do a lot of things with computers, but that our computers do a lot of things on our behalf. They buy things. They play movies. They make legal claims about our incomes to the federal government. Most protocol specifications refer to a program which acts on your behalf (such as a web browser) as a user agent to reflect this responsibility. You are not buying a book on Amazon with your computer; you click on some links, you enter some information, and you trust that your computer has taken this information and performed a purchase on your behalf. Your computer could do this without your help, if someone has installed a malicious program on it. It could also pretend to have made a purchase, but actually do nothing at all.
Here is where we approach the intersection between programming and ethical obligation. Every time a user sits down to perform a task with a computer, they are, indirectly, trusting the programmers who wrote the code they will be using to accomplish that task. Users give not only the responsibility of performing a specific task, they trust those programs (and thereby their programmers) with intensely personal information: usernames, passwords, social security numbers, credit card numbers - the list goes on and on.
There may be a technological solution to this problem, a way to limit the amount of information that each program needs, and provide users with more control over what different programs can say to each other on their own computer. Some very smart people are working on this, and you can read about some of that work on Ka-Ping Yee's "Usable Security" blog. Still, one of the experts there contemplates that, given the abysmal state of software today, perhaps the general public shouldn't even use the internet.
DRM is definitely a problem, but the real problem is that it's the top of a very long, very slippery slope. Its advocates point at the top of that slope and say "See, it's not so bad!" - but where will it end? While I am annoyed, I'm not really that concerned with the use of this kind of technology to prevent copyright violations. It's when we start using it to prevent other sorts of crimes that the real fear sets in.
Today, it's considered almost (but not quite) acceptable that Sony installs the digital equivalent of a car-bomb on my computer to prevent me from copying music. As I said at the beginning of this article, they don't think that the practice is inherently wrong, simply that there are some flaws in its implementation. Where will this stop? Assuming they can perfect the technology, and given that my computer has all the information necessary to do it, will future versions of Sony's music player simply install themselves and lie in wait, monitoring every download, automatically billing me for anything that looks unauthorized, and not telling me about it until I get my credit card statement?
Whether unauthorized copying should be a crime or not, preventing it by these means is blatantly wrong. Let me be blunt here. It is simply using a technique to wring more money out of users because the technique is there. Much like the doctor who cuts off your nose and won't reattach it until he gets paid for his other (completely legitimate) services, this is an abuse of trust of the worst order. It doesn't matter how much money you actually owe the doctor, or Sony: in any case, they don't have the right to do violence to you or to your computer because of it.
What of "terrorism"? Will mandatory anti-terrorism software, provided to Microsoft by the federal government, monitor and report my computerized activities to the Department of Homeland Security for review? From here, I'll let you fill in the rest of the paranoid ravings. I don't see this particular outcome happening soon, but the concern is real. There is no system in place to prevent such an occurrence, no legal or ethical restriction incumbent upon software developers which would prevent it.
This social dilemma is the reason I termed the IEEE/ACM ethics code "sophomoric". With the directionless enthusiasm of a college freshman majoring in philosophy, it commands "software engineers" to "Moderate the interests of [themselves], the employer, the client and the users with the public good", to "Disclose to appropriate persons or authorities any actual or potential danger to the user, the public, or the environment", and to "Obey all laws governing their work, unless, in exceptional circumstances, such compliance is inconsistent with the public interest." These are all things that a good person should do, surely, but they are almost vague enough to be completely meaningless. These tenets also have effectively nothing to do with software specifically, let alone software engineering. They are in fact opposed to certain things that software should do, if it's written properly. If the government needs to get information about me, they need a warrant, and that's for good reason. I don't want them taking it off my computer without even asking a judge first, simply because a helpful software engineer thought it might be a "potential danger to the public".
Software developers should start considering that accurately reflecting the user's desires is not just a good design principle, it is a sacred duty. Much as it is not the criminal defense lawyer's place to judge their client regardless of how guilty they are, it is not the doctor's place to force experimental treatment upon a patient regardless of how badly the research is needed, and it is not the priest's place to pass worldly judgment on their flock, it is not the programmer's place to try and decide whether the user is using the software in a "good" way or not.
I fear that we will proceed down this slippery slope for many years yet. I imagine that a highly public event will happen at some point, a hundred times worse than this minor scandal with Sony BMG, and users the world over will angrily demand change. Even then, there will need to be a movement from within the industry to provide some direction for that change, and some sense of responsibility for the future of software.
I hope that some of these ideas can provide direction for those people, when the world is ready, but personally I already write my code this way.
I wrote about this a couple of years ago, and I think there's more to the issue, but I feel like this key point of accurately relaying the user's intent is the first step to anything more interesting. I don't really know if a large group of people even agree on that yet.
So, like I said, this post isn't about Divmod - exactly - but when we say "your data is your data"... we mean it.