
Greg’s bite: the history and future of computer/human interfaces


by Greg Mills

Back in the beginning of computing, putting information in was hard work: banks of switches had to be thrown just the right way to produce a result, and punch cards had to be cut and processed. Then came keyboards (about this time IBM screwed up and helped Microsoft take off).

A Xerox research group conceived the Graphical User Interface (GUI), and when it was shown to a fresh face in the crowd, he immediately saw in it the future Mac interface. Foolishly trusting Microsoft to deliver the software the new Mac computers would need, and not to steal Apple blind, Steve Jobs showed Gates the graphical interface. From that point on, Gates was driven to create a graphical interface that worked as well as the new Mac operating system.

Apple eventually sued, then settled rather than waste money pursuing Gates and company. The Apple/Microsoft relationship has had its ups and downs, but, broadly, Apple has been long on innovation and Microsoft long on marketing. For most of the last 25 years marketing won the money race, yet it always ran a few years behind Apple's innovation.

To some extent, speech control has co-existed for some time with the mouse and, more recently, the touch pad. Interestingly, "Star Trek" used voice control long before the PC could make it work well in real life. Even now, background noise and changes in the voice can confuse computer voice control. Advances in dictation software have produced some voice control applications, but voice control has never really caught on for general use.

Interfaces driven by the hands, the mouse, the touch pad and now the touch screen, have been the main way we tell our computers what to do. The touch screens on the iPhone and iPad are built on Cocoa Touch, a layer of the iPhone OS that converts touches at given points on the screen into something meaningful. One, two or more touch points moving from place to place are translated into information that can control the computer. How long the points remain in contact and the pattern of their movement can all carry meaning. Menus, combined with touch gestures, can already do everything the mouse can do, or soon will.
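To make that concrete, here is a minimal sketch in modern Swift of a view that receives the raw touch data Cocoa Touch delivers: where the fingers are, how many there are, how they move, and how long they stay down. (The APIs of the original iPhone OS era were Objective-C, so this is illustrative rather than period-accurate.)

```swift
import UIKit

// A minimal sketch of receiving multi-touch input in Cocoa Touch.
class GestureView: UIView {

    private var touchStartTimes: [UITouch: TimeInterval] = [:]

    override init(frame: CGRect) {
        super.init(frame: frame)
        // Allow more than one finger at a time.
        isMultipleTouchEnabled = true
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        isMultipleTouchEnabled = true
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            touchStartTimes[touch] = touch.timestamp
            print("Touch down at \(touch.location(in: self)), \(touches.count) finger(s)")
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            let from = touch.previousLocation(in: self)
            let to = touch.location(in: self)
            // The pattern of movement is what gesture recognizers turn
            // into swipes, pinches and drags.
            print("Moved from \(from) to \(to)")
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            if let start = touchStartTimes.removeValue(forKey: touch) {
                // How long the finger stayed down distinguishes a tap
                // from a long press.
                print("Touch up after \(touch.timestamp - start) seconds")
            }
        }
    }
}
```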

Let’s look a ways down the pike. What happens after the touch OS has matured? There have already been successful experiments with both eye-tracking interfaces and reading brain waves.

Imagine a forward-facing camera that scans for eyes looking at the screen. Several times per second, the computer calculates what the eyes are focused on. As the gaze moves around the screen, the computer could turn the page of a book or scroll down to keep the point of interest in the middle of the screen. That, plus some sort of clicking mechanism to mark an event in time, would allow touchless computer control without speech.
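For a sense of what that loop might look like in code, here is a sketch that assumes a hypothetical GazeTracker reporting where on the screen the user is looking. No such interface is implied to exist in iOS; the tracker is purely illustrative, but the sampling-and-recentering logic follows the idea above.

```swift
import UIKit

// Hypothetical gaze source: GazeTracker and currentGazePoint() are assumed
// for illustration only; they are not a real iOS API.
protocol GazeTracker {
    /// Estimated point the user is looking at, in the scroll view's
    /// coordinate space, or nil if no eyes are detected.
    func currentGazePoint() -> CGPoint?
}

/// Samples the gaze several times per second and scrolls so the point
/// of interest stays near the middle of the screen.
final class GazeScroller {
    private let scrollView: UIScrollView
    private let tracker: GazeTracker
    private var timer: Timer?

    init(scrollView: UIScrollView, tracker: GazeTracker) {
        self.scrollView = scrollView
        self.tracker = tracker
    }

    func start(samplesPerSecond: Double = 10) {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0 / samplesPerSecond,
                                     repeats: true) { [weak self] _ in
            self?.recenterOnGaze()
        }
    }

    func stop() {
        timer?.invalidate()
        timer = nil
    }

    private func recenterOnGaze() {
        guard let gaze = tracker.currentGazePoint() else { return }

        // Offset that would place the gaze point in the middle of the
        // visible area, clamped to the scrollable range.
        var target = CGPoint(x: gaze.x - scrollView.bounds.width / 2,
                             y: gaze.y - scrollView.bounds.height / 2)
        target.x = max(0, min(target.x, scrollView.contentSize.width - scrollView.bounds.width))
        target.y = max(0, min(target.y, scrollView.contentSize.height - scrollView.bounds.height))
        scrollView.setContentOffset(target, animated: true)
    }
}
```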

Finally, imagine the equivalent of a mouse on a chip implanted in your brain. You think and it happens on the screen or display device. The advent of 3D displays is closer than one might think, but that is fodder for another article. The old saying, “I think, therefore I am” may become “I think therefore I compute.”

(Greg Mills is currently a Faux Artist in Kansas City. Formerly a new-product R&D man for the paint sundries market, he holds 11 US patents. He is working on a solar energy startup using a patent-pending process for turning waste dual-pane glass into thermal solar panels used to heat water. Married, with one daughter still at home, Greg writes for intellectual web sites and on Mac-related issues. See Greg’s web sites at http://www.gregmills.info . He can be emailed at gregmills.mac.)

