I keep going back to Engelbart’s assertion that computers shouldn’t be easy to use. Do I agree with it? I know I don’t believe that computers should be difficult to use just for the sake of being difficult, but they shouldn’t be easy either. I think the debate is slightly oversimplified, because the real question isn’t whether computers should be easy or hard to use. The real question is whether the things we do with computers should be simple or complex. It is possible that if computers are too easy to use, we won’t think of doing (or be able to do) more complicated things with them. But do you think that’s really true? Is “easy” synonymous with “simple”?
I think computers need to be complex. And yes, they need to be difficult to use, to a certain extent. It is only when we are constantly challenged that we reach peak innovation. I want my computer to make me think and create. This should be a positive feedback loop: increased difficulty and complexity lead to further mastery and inspiration, which in turn leads to the need for more complexity. We thrive on challenges; why would we want to eliminate that element when it’s an essential component of creation?
As for the concept of computers being more like people, what if computers were able to build off of us just as we’re able to build off of them? Everything we do with them and everything we create lets the computer add new components, new challenges, things that will give us new ideas. A chain of human-computer innovation, of sorts. There has always been an urge to make computers more like us. Why is this? Why do we push so hard to make computers easier to use but also dream of giving computers human characteristics and abilities? (Isn’t a computer, after all, supposed to be a replacement for human activity that is difficult or time-consuming? A substitution that allows us to go further by eliminating things we would otherwise have to do ourselves?) I think we want computers to be more like us because we actually want this exchange of inspiration. We inspire each other, and since a computer is a replacement, doesn’t it follow that at some level we want (expect?) the computer to inspire us? To perform that last function that allows for further breakthroughs?
Are computers one day going to be more intelligent than humans? Dr. Campbell doesn’t think so, but I’m not so sure. If we continue in this direction of expecting computers to be easy to use and nothing else, then perhaps not. Is a computer simply a sum of what we put into it? Does it, and can it, only do what we’ve programmed it to do? What if we program it (as discussed in class) to feel emotion? Or to reason? Would it still have limits? What if we program it to learn? Would it still be functioning on the same basic set of processes and simply building an information base? Or could it be capable of true learning and creation? Clearly such programming wouldn’t be immediately successful, and any possibility of “true learning” would result from several of these processes running and evolving over time. But we shouldn’t dismiss the possibility, because dismissing it is exactly what stops it from happening. We have an excellent track record of accomplishing things that were thought to be impossible. Anything imagined can be created.