
12 ethical dilemmas gnawing at developers today

Peter Wayner | April 22, 2014
The tech world has always been long on power and short on thinking about the ramifications of this power. If it can be built, there will always be someone who will build it without contemplating a safer, saner way of doing so, let alone whether the technology should even be built in the first place. The software gets written. Who cares where and how it's used? That's a task for somebody in some corner office.

There's no simple answer to how much protection to apply. There are only guesses. More is always better — until the data is lost or the product doesn't ship.

Ethical dilemma No. 5: To bug-fix or not to bug-fix?

It's hard enough to negotiate the ethical shoals when they involve active decisions, but it's even harder when the problem can be pushed aside and labeled a bug that will be fixed eventually. How hard should we work to fix the problems that somehow slipped into running code? Do we drop everything? How do we decide whether a bug is serious enough to be fixed?

Isaac Asimov confronted this issue long ago when he wrote his laws of robotics and included one that forbade a robot from standing idle while a human came to harm through its inaction. Of course, his robots had positronic brains that could see all the facets of a problem instantly and solve them. The questions facing developers are so complicated that many bugs go ignored and unfixed because no one wants even to think about them.

Can a company prioritize the list fairly? Are some customers more important than others? Can a programmer play favorites by choosing one bug over another? This is even more difficult to contemplate when you realize that it's hard to anticipate how much harm will come from any given bug.

Ethical dilemma No. 6: How much to code — or compromise — to prevent misuse

The original Apple Web camera came with a clever mechanical extra: a physical shutter that blocked the lens when the camera was off. The shutter and the switch were linked together; there was no way to use the camera without opening the shutter yourself.

Some of the newer webcams come with an LED that's supposed to be illuminated when the camera is activated. It usually works, but anyone who has programmed a computer knows there may be a place in the code where the camera and the LED can be decoupled. If that can be found, the camera can be turned into a spying device.

The challenge for the engineer is anticipating misuse and designing to prevent it. The Apple shutter is an obvious and effective example of how it can be done elegantly. When I was working on a book about cheating on the SAT, I met one hacker who was adding networking software to his calculator. After some deliberation, he decided to support only wired protocols because he was afraid kids would sneak a Wi-Fi-equipped calculator into an exam. By supporting only wired protocols, he ensured that anyone in a test would need to run a wire to their neighbor's machine. He hated skipping the wireless protocols, but he felt the risk of abuse was too high.

