I’ve tried my best, but just can’t reconcile the complete abolishment of software patents with my innate sense of justice. The patent system is an utter mess, but let’s not be so quick to throw out the baby with the bathwater.
Among the friends I’ve made in the indie iOS development community, there is a sort of unwritten honor code related to software innovation. If someone comes up with a great implementation idea, we either ask for permission to use that specific implementation, or we iterate on that idea such that it has recognizable bits of the original implementation, but is clearly different—preferably improved. We borrow from each other constantly, but we do so respectfully.
That works well for a relatively small community of thoughtful, creative folks, but obviously doesn’t scale. As the App Store has grown, so too have all forms of blatant IP theft and attempts to profit on the ideas of others with little or no innovation. In the App Store this happens on a relatively small scale and the knockoffs don’t often gain traction.
Then there’s Google. Their knockoffs aren’t confined to the App Store, and with a huge pile of cash funding the efforts many have gained significant traction. As I said in a previous post, they have been “borrowing” at an alarming rate and seem to care less and less about their own innovation as they seek to capitalize on their ability to monetize software through advertising.
I don’t think Google set out to become the Chinese knockoff shop of the software industry, but the erosion of intellectual property rights is a slippery slope. It seems to have started with a disdain for the blatant abuses of software patents and dissolved into a sense of entitlement. Whether or not specific patents are valid, Google doesn’t have the right to trample on any and all software-related intellectual property rights. “Making enemies along the way” might seem like an appropriate way to create shareholder value, but it often comes back to bite you in the ass. And even when it’s not illegal, it’s disrespectful—both to the innovator(s) whose ideas were stolen and to the entire industry.
How, then, do we effectively protect the intellectual property rights of software developers while also spurring innovation? Fair use. Copyright law is another mess of good intentions and imperfect execution, but the concept of fair use has made copyright law, well, more fair.
Here’s an overview of the fair use provision in copyright law from the U.S. Copyright Office:
Section 107 contains a list of the various purposes for which the reproduction of a particular work may be considered fair, such as criticism, comment, news reporting, teaching, scholarship, and research. Section 107 also sets out four factors to be considered in determining whether or not a particular use is fair:
- The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes
- The nature of the copyrighted work
- The amount and substantiality of the portion used in relation to the copyrighted work as a whole
- The effect of the use upon the potential market for, or value of, the copyrighted work
Those four factors have obvious application in the realm of software patents.
Even in copyright law, fair use is confusing and far from perfect—and there’s still the threat of a lawsuit that one can’t afford to fight—but taking some of the teeth out of software patents with concepts similar to fair use could move us a long way toward a more balanced, productive approach to protecting the intellectual property rights of software developers. Other ideas, such as a non-practicing entity clause, would also help.
Unfortunately, adjusting patent law and interpreting it through the legal process will take years. And the legislation currently pending makes things worse rather than better. In the meantime I propose a stopgap. Software developers should treat each other, and the entire industry, with respect. And when we fail to do so, we should expect public shaming by our peers and industry journalists.
Stealing ideas for profit is shameful. Borrowing from the ideas of others and innovating on them is commendable. These two seemingly contradictory concepts are at opposite ends of a vast sea of grey, but some shades of grey are dark enough to be called black.
In “Everything is a Remix,” Kirby Ferguson makes a compelling and fascinating case that innovation and creativity lean heavily on prior art. That’s always been the case in technology, especially software, but I can’t recall a single company going so far in “borrowing” from product after product as Google has done recently.
I can, however, recall various groups that unabashedly borrow from other software products. GIMP, OpenOffice, and many other open source projects don’t just lean heavily on the work of Adobe, Microsoft, and others; they directly copy key features, often while adding little or no additional value. Because most such projects have yet to successfully disrupt the status quo, and because they do contribute to the “greater good” in various ways, the tech community has excused and at times even celebrated the borrowing.
Google’s use of “open” as a marketing term often seems to be a naive misappropriation of the concept, but more and more I think it’s a brilliant misdirection. Though many, if not all, software products borrow from existing ideas, we’re more likely to give Google a pass on excessive borrowing because of their perceived openness. In thinking about this post, I had to google whether or not Google Docs is open source. Even though it’s not, it has the feel of open source software since it can be used for free or on the cheap and is an attempt to disrupt the stranglehold of Microsoft Office.
Google has innovated in real-time collaboration and other areas, but the core concepts and even most of the features borrow heavily from Microsoft Office and other similar productivity suites. Shame on Goog… oh, wait… Apple did the same thing with iWork. And Microsoft is the one looking over Google’s shoulder in building Office 365. Everyone borrows.
Productivity suites are an easy target since many of the foundational concepts have been around for decades now, but I think it hints at a greater point—the borrowing of software features and other implementation concepts is so easy and so prevalent that it’s almost impossible to prevent or even stop. Apple, Microsoft, and others are trying, but enforcing software patents in court is a costly fight that can drag on for years.
Meanwhile Google is flaunting its ability to monetize through advertising by unabashedly borrowing from and commoditizing the world’s most used software. People spend a lot of time reading and replying to email—enter Gmail. People spend a lot of time working on documents, spreadsheets, and presentations—enter Google Docs. People spend lots of time in a browser—enter Chrome. People are spending more and more time on mobile devices—enter Android. People are spending an inordinate amount of time on social networks—enter Google+.
In most of its products Google innovates enough that we view any similarities to prior art as borrowing, but Google+ is the most flagrant product ripoff I’ve seen from such a large corporation (Android being a close second). The similarities between Google+ and Facebook were obvious when Google+ first launched. The UI is cleaner in various ways, and Google has innovated a bit with Circles, Huddles, and other features, but fundamentally Google+ is an unabashed ripoff of Facebook. And if you’re not convinced of that, take a look at the Google+ iOS app that just launched today.
The speculation surrounding Apple’s fall announcements has been focused on new versions of the iPhone and iPad, but we’ll undoubtedly see additional “Apple TV will get apps” speculation as the event nears. Developers, the tech press, and even tech-savvy users have been fascinated by the possibility of running iOS apps on the Apple TV. And that fascination reached a fever pitch last fall when Apple launched the iOS-based Apple TV 2.
This fall Apple may or may not announce a beefed-up Apple TV with the ability to download and run apps directly, but even if they do it won’t be quite as groundbreaking as many assume. The ground has already been broken with AirPlay and AirPlay mirroring in iOS 5.
There remains a fundamental misunderstanding of the difference between mouse-like pointing devices and touch screen interaction. I’ve written about this before, but I’ll do so again—direct manipulation of objects on a touch screen device is a fundamental change in human-computer interaction and is undoubtedly the future of most, if not all, consumer computing devices. The age of the mouse is ending, but the implications are still unclear to most.
When the Magic Trackpad launched, quite a few posts were written about the death of the mouse. The problem with most of those posts is that they conflated the Magic Trackpad with touch screens. The mouse as an object that physically moves around your desk is dead, but fundamentally the Magic Trackpad is just a better mouse-like pointing device. The Magic Trackpad is not the future, it’s just one of the last great pointing devices.
I think the best way to illustrate what I’m trying to say is to tell the story of my 2-year-old son, Luke, trying out AirPlay mirroring for the first time…
The other day I was watching the Tour de France on my TV using AirPlay mirroring of NBC’s TDF app running on my iPad 2. When Luke saw the iPad sitting there seemingly unused, he asked if he could play games on it. I did my best to explain to him that the iPad was busy sending video to our TV and was completely blown away by his response. In essence, he asked if he could play his games on the TV. He’s just a few months past his 2nd birthday, but he instantly grokked that the iPad was able to send video to the TV. Why not? He’s growing up surrounded by magic.
What’s interesting, though, is what happened next. When handed the iPad, he looked down at it and launched this week’s favorite app, The Monster at the End of This Book. He looked up at the screen and was excited to see Grover on TV. Then he looked down at the book and flipped the page. Then he looked up and was again excited to see Grover on TV. Then he looked down and turned the page. After just 60 seconds the thrill was gone and he was mostly just playing with the iPad, only intermittently looking up to confirm that Grover was still on TV.
After a few minutes he exited the app and looked up to see the icons of all his favorite apps on the TV. He immediately set down the iPad, walked up to the TV, and tried launching an app by touching the TV screen. My wife and I instinctually told him not to touch the TV, but he looked back at us quite puzzled. The thing is, Luke has never used a mouse-like pointing device. Other than using the TV remote to turn the TV on and off, or turning a light switch on and off, he’s never used one object to remotely manipulate another.
I can’t overstate what a fundamental shift this is. In the entire history of computers we have used a keyboard, keypad, mouse, mouse-like pointers and other similar devices to manipulate objects on a remote screen. [The Palm Pilot and other devices that used a stylus for input did foreshadow the iPhone, but I’d still lump a stylus with mouse-like pointing devices, though I’ll save that argument for another time.] The iPhone changed all that.
As my son so clearly demonstrated, a remote screen is much less interesting when you can directly manipulate objects on a touch screen. The two most obvious conclusions to draw from that statement are: 1) all screens should be touch screens, and 2) remote screens will go away because direct manipulation is more compelling. But I think there’s a third, more likely conclusion: TVs and other remote screens will be limited-use extensions of our mobile touch screen devices.
In thinking about how TVs and other remote screens will be used in the future, I think we need to consider how the user interacts with objects on the remote screen and what use-cases are truly more compelling with a remote screen. As my son demonstrated, interacting with the iPad while looking back and forth from the iPad to the TV is onerous; seeing Grover on the TV was fun, but not ultimately compelling.
The current Apple TV user interface isn’t much different from the mouse-like pointing paradigm we’ve grown accustomed to since the introduction of the Macintosh in 1984. The only difference is that you navigate a selected state among objects instead of navigating a pointer over those objects. We can only visually focus on one thing at a time, so when two devices are involved one of them has to be operated without looking, or the user has to look back and forth between the two objects. Someone who doesn’t know how to touch-type is constantly looking down at the keyboard then back at the screen. The mouse doesn’t require us to look at it, but it is a learned skill. Most of us don’t remember, but using a mouse for the first time is disorienting and takes time to master.
Even breakthrough technologies like Microsoft’s Kinect require remnants of the pointing device paradigm. There is a very physical disconnect between what the user is doing in front of the TV and what is happening on-screen. That disconnect must somehow be mitigated. In most situations, Microsoft has chosen to use ghosted body parts to help connect the user to the on-screen action. The body then becomes just another pointing device with a pointer visible on-screen.
In discussing the iPad, Steve Jobs famously said, “If you see a stylus, they blew it.” Similarly, if you see a pointer on-screen, you’re still using a mouse-like device, even if it’s your body that has become the mouse. If the iPad is just a big trackpad that moves a pointer on a remote screen, we’re in for a very boring future. But that’s where things get quite interesting: the touch screen is the iPad’s primary input method in normal use, but the accelerometer, gyroscope, compass, GPS, proximity sensor, light sensor, camera, and even the microphone provide options for input that don’t require looking down. With AirPlay maturing this fall in iOS 5 and Microsoft officially allowing developers to tinker with Kinect, we’re just now starting to explore the ways in which a remote screen can be used in compelling ways apart from the human-computer interaction paradigms of the past.
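To make that concrete, here’s a minimal sketch of the kind of eyes-free input I have in mind, written in modern Swift with CoreMotion. The view controller and the steer(roll:pitch:) hook are hypothetical names of my own, not anything Apple ships: the idea is simply that the player tilts the device like a steering wheel while the action renders on the mirrored TV, never needing to look down at the glass.

```swift
import UIKit
import CoreMotion

// Hypothetical sketch: the iPad as an eyes-free controller while
// AirPlay mirroring sends the game's video to the TV.
class TiltControllerViewController: UIViewController {
    let motionManager = CMMotionManager()

    override func viewDidLoad() {
        super.viewDidLoad()
        guard motionManager.isDeviceMotionAvailable else { return }

        // 60 updates per second is plenty for smooth steering.
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self = self, let attitude = motion?.attitude else { return }
            // Roll steers left/right, pitch accelerates and brakes;
            // no need to look at the touch screen at all.
            self.steer(roll: attitude.roll, pitch: attitude.pitch)
        }
    }

    // Feed sensor input to game logic drawn on the mirrored display.
    func steer(roll: Double, pitch: Double) {
        // Game-specific handling goes here.
    }
}
```

The same pattern extends to the compass, microphone, or camera; the point is that input and display no longer have to live on the same piece of glass.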
Circles within Google+ look really interesting, but I wonder if they’ll lead us further down the rabbit hole of multiple online personas. I’ve often thought that I’d rather not hear about a person’s dog/kid/food/fun/etc. on Twitter. I mostly follow people in the iOS/tech space and it’s often time-consuming to wade through the noise looking for the information that’s applicable to running App Cubby. But the more I think along those lines, the more I appreciate how Twitter seems to encourage people to be themselves.
I mean, we all put our best foot forward to a certain extent, but I feel like I’ve really gotten to know the people I follow on Twitter. When I bump into them in real life I can ask how their kid is faring at that new school or their thoughts on French press vs. siphon coffee.
I wonder if creating a bunch of circles and overthinking which circles get to see what bits of information will end up making us less social rather than more. Though a certain level of control seems beneficial, that control might be an overly logical approach to the very human matter of self-disclosure.
Facebook started very private and is pushing users more public. Though Twitter does allow private accounts, it’s primarily been a public platform since its inception and most use it that way. Google is giving us what looks to be the best of both worlds, but I’m not sure that’s a good thing. Time will tell… I haven’t even gotten to use Google+.
Yesterday I wrote a rather speculative post about something Apple probably won’t do. This morning I woke up and thought for a few minutes about what I’d written. At first I felt a bit embarrassed. Sure I made a half decent case, but seeing Apple release a $49 Thunderbolt cable kind of put their MO into perspective. Apple may release a cheaper Apple TV, but the odds are definitely against it.
After re-reading my post and seeing some of the headlines this morning, I’m actually quite proud of what I wrote. For a post wildly speculating about Apple, I did a great job making it clear that it was just speculation. Even the last sentence of my post clarifies that I’m not suggesting Apple will actually release a cheaper Apple TV: “…but I’m more and more convinced that a cheap Apple TV would be a boon to the entire iOS ecosystem.” I’m not more and more convinced that Apple is going to do it; I’m more and more convinced that it would be a boon to the entire iOS ecosystem. Those are two very distinct assertions.
Maybe it’s always been this way, but it seems as though the tech press has been getting worse and worse at presenting speculation as fact. Headlines like “iPhone 3GS Will Still Be The [sic] Low-End iPhone Even After iPhone 5,” and a million more like it that I won’t even waste my time looking up, have been driving me nuts lately.
In yesterday’s press frenzy about the possibility of Apple introducing a second, brand new iPhone model—or some other sort of stratification—one tweet stood out as a perfect example of restraint and careful wording. “We keep hearing it’s done and just waiting on Apple to pull the trigger. Really solid source.” Rene does have great sources, but he hardly ever publishes information he’s been given. The thing is, if the information is correct it may cost someone their job, and if the information ends up being false, it may still cost someone their job and Rene looks like a tool.
Apple often leaks false information to rat out sources. So, even if you hear something from a solid source who has been right in the past, there’s still no way to guarantee when or if that thing will happen. Hence Rene saying that Apple does have some sort of cheaper iPhone waiting in the wings, but Apple may or may not pull the trigger. At the end of the day, writing a story about this mythical device just doesn’t make sense for someone who respects their readers and sources.
I will say, however, that there are certain writers to whom Apple PR purposely leaks information that they want public, but don’t want to directly confirm. If you go back and look at articles written as fact by respected members of the Apple press, you may find some patterns of accuracy that weren’t obvious amid all the speculation.
Anyhow, I won’t belabor the point further… Rumors and speculation can be interesting, but please stop reporting them as fact; it’s insulting to readers and ultimately embarrassing to the sites that peddle those link-bait tactics.