brain nav

a BRIEF HISTORY of GUI

Introduction

 
As one of the last of the Eisenhower babies (Hey Boomer!) I've lived through most of the tumultuous history of Graphic User Interface, or GUI, development. Evolving from GUI design, to interface architecture, to interaction design, to user experience (UX) has been an entertaining ride. This short essay was developed from notes I've used for seminars on UI design; I am surprised and honored that many essays and school papers link to it.

 

a Brief History of GUI

Graphic User Interfaces were considered unnecessary overhead by early computer developers, who were struggling to develop enough CPU horsepower to perform simple calculations. As CPU power increased in the sixties and early seventies, industrial engineers began to study the terminal entry programs of mainframes to optimize entry times and reduce mistyped commands. The earliest mainframe query protocols still in use, e.g., airline reservation systems, were developed during this period to pack as much information as possible into the shortest command.

Essentially, operators were trained to perform computer language interpretation in their heads.

Read this vision of future computing from the popular science fiction novel Inherit the Stars, ©1977 by James P. Hogan:

"What do I do now?"

"Type this: FC comma DACCO seven slash PCH dot P sixty-seven slash HCU dot one.

That means 'functional control mode, data access program subsystem number seven selected, access data file reference "Project Charlie, Book one," page sixty-seven, optical format, output on hard copy unit, one copy.'"

In the middle to late seventies several companies, including IBM and Xerox, began research on the "next generation" of computers, based on the assumption that computing power would drop in price to the point where many more individuals in companies would be able to effectively use them. IBM directed most of its efforts at mainframe development, but also started a small division to design and produce a "personal computer", which, despite its obscure operating system, would recreate the home-built small computer market. Other companies were struggling to produce cost-effective small computers using the CP/M operating system.

The most prolific early interface research program was run at a Xerox facility called the Palo Alto Research Center (PARC).

In 1973 the PARC team began work on the Alto computer system as "an experiment in personal computing, to study how a small, low cost machine could be used to replace facilities then provided only by much larger shared systems."

The Alto project continued into 1979, replaced by the Star computer, which many consider the forerunner of the Macintosh. The Alto had many unique features, and pioneered the use of the mouse, the portrait monitor, WYSIWYG, local area networking, and shared workspaces.

The Alto, and the later Star, derived many of these features from cognitive psychology research. The designers attempted to communicate with users more effectively by making the computer communicate in ways the brain processes more readily; using icons, for instance, because the visual part of the brain can track their presence and state much better than words. They developed ways of organizing information in patterns the eye can track through more easily, drawing attention to the work in progress. They developed the model of WYSIWYG (what you see is what you get) to improve print proofing performance, and found through testing that the digital representation of black text on a sheet of white paper increased legibility and retention. The Star interface added the concept of the desktop metaphor, and overlapping and resizable windows.

PARC discovered along the way that whole new subsystems had to be developed to enable their technology to work; but once demonstrated, testing showed dramatic improvements in productivity, job satisfaction, and reduced training time for users. PARC's research clearly showed that a computer system of sufficient power could be optimized for human use, and that optimization would be paid back with a range of productive (and profitable) behavior and attitude improvements.

In the early eighties the IBM PC running DOS became the runaway best seller among computers. DOS was a cryptic command line interface, a direct descendant of mainframes. The PC had many limitations, including constrained memory access, modest power, and a lack of color or graphics standards; but it delivered enough productivity to justify the purchase of millions of units.

 

1980s: Entrance of Apple

At the same time, a small group of designers at a company called Apple Computer made a deal with Xerox PARC. In exchange for Apple stock, Xerox would allow Apple teams to tour the PARC facility and incorporate some of their research into future products. The Apple team took elements of the Star interface, refined them and produced the Lisa computer.

The Lisa failed, owing to its cost, lack of software availability, and other factors. Apple's next try with an enhanced and friendlier Lisa interface was the Macintosh, which found an early market foothold in the design and publishing markets.

Email from Jef Raskin in 2001:
 
"You might want to see "Holes in the Histories" at www.jefraskin.com, the Stanford University Mac history documents, Owen Linzmayer's quite accurate chronology in his book "Apple Confidential" and a point-by-point comparison of how the mouse worked in the Xerox Star and the early Macintosh in my book 'The Humane Interface'."
 

From one of Raskin's obituaries in 2005:

"Raskin started as manager of Apple's Publications department when he joined the company as employee number 31 in 1978. By 1979, he had started designing a radically new kind of computer, focusing on human factors rather than technical specifications.
 
A frequent visitor to the Xerox PARC research facility, Raskin initiated Apple's famous visit to the labs where innovations such as the graphical user interface, Ethernet, the laser printer and the mouse were lying dormant until adapted for Apple's Lisa and Macintosh computers.
 
Apple co-founder (and now CEO) Steve Jobs initially hated Raskin's proposal for a computer for the 'person in the street'. But after he was kicked off the Lisa project, Jobs joined and then took over Raskin's Macintosh project - turning it into a fully fledged product-development effort. Raskin resigned.
 
As Owen Linzmayer, author of the acclaimed 'Apple Confidential 2.0', wrote in the January 2004 special Mac 20th Anniversary issue of Macworld: 'The Mac that shipped in 1984 differed greatly from Raskin's prototypes, but the underlying goal of elegant simplicity was retained and became the hallmark of all Apple products.'"

Apple was committed to its GUI, spending millions of dollars over the next ten years to research and implement enhancements; that commitment paid off in the late eighties as the desktop publishing market exploded and Apple's interface was widely acclaimed by the artists, writers, and publishers using its computers. Interestingly, one of the most successful Macintosh application developers was the Microsoft Corporation of Redmond, Washington, owner of MS-DOS. Microsoft, following the Apple GUI standards, developed a spreadsheet for the Mac which set new standards for ease of use. This product was, of course, Excel.

Apple worked with artists, psychologists, teachers, and users to craft revisions to its software and developer guidelines. For example, in California the company sponsored an elementary school where every student had an Apple computer. Each year the teachers and Apple programmers spent the summer planning new lessons and making enhancements to the software used to teach them, because Apple believed that children give the truest reactions to basic interface issues. Although a distant second to IBM compatibles in number of systems today, Apple's closed hardware and software implementation at one point made it the largest personal computer manufacturer in the world, eclipsing IBM in 1992.

Apple maintains that a principal contributor to its success has been the consistent implementation of user interfaces across applications. Macintosh users have been able to master multiple applications easily because commands and behavior are the same from one program to the next. (Example: Command-S always means Save.)

In the late 1980s Microsoft Corporation, producer of DOS, DOS applications, and Macintosh applications, began a joint project with IBM to develop a new graphic user interface for IBM compatible computers.

 

1990s: Windows and Explosive Adoption of GUIs

This partnership later dissolved, but Microsoft went on to take user interface lessons learned from their successful Macintosh products, Excel and Word, and created a series of graphic shells running on top of DOS which could mimic many of the Macintosh GUI features.

For GUI designers, Apple set a different tone for their standards and guidelines; Apple explained why the guidelines should be applied. User research, consistency, and strategy were all part of the Macintosh GUI development guides, and Apple developer tools made them the defaults, but designers could override the defaults to follow strategy.
 
Microsoft, on the other hand, published their standards as rules and checklists, with fixed requirements and no explanations. These were not guidelines - Microsoft wrote requirements for Windows certification.

Microsoft and Apple became entrenched in extensive litigation over ownership of many of these features, but the case was eventually dismissed. Later versions of the Windows operating system became increasingly Macintosh-like. Today Microsoft gives little credit to Apple for pioneering and validating many of the ideas it copied.

With increasing desktop power and continued reductions in CPU pricing, another area of GUI development also entered business, that of UNIX. Like DOS, UNIX is a child of the seventies and inherits a powerful but obscure command line interface from mainframes; unlike DOS, it had been used in networked applications and high-end engineering workstations for most of its life. In the eighties UNIX GUI shells were developed by consortiums of workstation manufacturers to make the systems easier to use. The principal GUIs were OPEN LOOK (Sun Microsystems), Motif (Open Software Foundation, or OSF), and later NeXTSTEP (NeXT Computer).

Altogether new graphical operating systems were also developed for the emerging families of RISC desktop computers and portable devices; these included Magic Cap (General Magic), Newton (Apple Computer), People, Places, and Things (Taligent), Windows CE (Microsoft), and the Palm interface (US Robotics Pilot).

 

WWW and Browsers Cause an Online Revolution

The mid 1990s brought two new movements to GUI design - the Internet browser with its limited but highly portable interface, and LINUX, a freeware version of UNIX. Which of these will have the greater long-term impact is open to debate, but the browser has clearly had a widespread effect on GUI design, and on human culture.

The HTML/browser interface comes in a bewildering variety of implementations. With only limited interaction available through forms, designers were forced back to basics, building and testing in quick iterations. Fortunately, HTML is relatively easy to create, though, some would suggest, difficult to master. Newer versions of HTML and descendants like DHTML, XML, WML, and SMIL offer greater potential for truly interactive experiences, but at the cost of increased download times and questionable compatibility with a diverse legacy of installed browsers. Over time the legacy browser problem will be solved as users upgrade their systems, and bandwidth should also improve. But the important lesson GUI designers learned from the Web is that screens do not have to be complicated to be useful - if the form solves a need and is easy to use, then people will use it.

LINUX represented another trend in computing and GUIs: group-developed software based on components. Facilitated by the Web, software developers collaborated and produced startling work in short timeframes. LINUX is small and reliable, yet supports a large base of software. Along with Java, LINUX represented a possible future of portable software running on compatible systems anytime, anywhere.

 

GUI Design - Renamed, and then Renamed Again

GUI design... becomes Interface Architecture... becomes Interaction Design (and Information Architecture)... becomes User Experience. Acronyms change: GUI > IA > UX. All the related professional societies scrambled to adjust.

 

2000s: Y2K, Bandwidth, and Smartphones

The 56k dialup modem never delivered the speed promised in its name: 56 kilobits per second is only about 7 kilobytes per second in theory, and real-world throughput was closer to 4-5 KB per second. So many people were still using dialup that businesses catered to stripped-down, widely supported code with small images for acceptable page-load times... and ignored CSS and JavaScript.

Rising expectations for better experiences and a growing demand for accessibility created massive pressure on website teams to modernize their code.

The Y2K panic gave many teams the budget and executive support for massive site rebuilds, and newer, more consistent generations of browsers, along with the evolution of HTML, made dynamic sites easier to build and maintain. API services using SOAP and XML became a reliable and secure way to share data between domains. The Internet was now business-ready for complex web applications like travel reservations, publishing, and online sales.
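To make the SOAP/XML idea a little more concrete, here is a minimal sketch in Python using only the standard library. The service, operation, and field names are hypothetical, and real integrations of that era relied on WSDL contracts and dedicated SOAP toolkits; the core idea - structured XML envelopes exchanged between domains - is the same.

    # Sketch of a SOAP-style exchange: build an XML envelope for a hypothetical
    # flight-availability service, then parse the kind of XML a server might
    # send back. All names and fields here are illustrative, not a real API.
    import xml.etree.ElementTree as ET

    SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

    def build_request(origin, destination, date):
        # Wrap a hypothetical availability query in a SOAP-style envelope.
        envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
        body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
        query = ET.SubElement(body, "CheckAvailability")  # illustrative operation
        ET.SubElement(query, "Origin").text = origin
        ET.SubElement(query, "Destination").text = destination
        ET.SubElement(query, "Date").text = date
        return ET.tostring(envelope, encoding="utf-8", xml_declaration=True)

    def parse_response(xml_payload):
        # Pull the flight records out of a SOAP-style response document.
        root = ET.fromstring(xml_payload)
        return [
            {"flight": f.findtext("Number"), "seats": int(f.findtext("Seats", "0"))}
            for f in root.iter("Flight")
        ]

    if __name__ == "__main__":
        print(build_request("SFO", "JFK", "2003-06-01").decode("utf-8"))
        sample = b"""<Envelope><Body><Availability>
            <Flight><Number>AA123</Number><Seats>4</Seats></Flight>
            <Flight><Number>AA456</Number><Seats>0</Seats></Flight>
        </Availability></Body></Envelope>"""
        print(parse_response(sample))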

Broadband from TV cable companies became a viable and speedy alternative in cities, allowing streaming (but low-resolution) news and personal videos.

 

2010s: Cloud computing and B2B

All large websites and web applications ran in data centers - with creative coping behaviors to support scaling, but dependencies on process queues and data-access limits caused issues. Whether the data centers were internal or outsourced, buying, installing, configuring, and monitoring hardware made it expensive and slow to add servers as needed. Increasing the bandwidth needed for growth had similar challenges. Successful scaling required careful planning and implementation, and end users were often frustrated during bursts of activity.

A better approach was needed, and multiple areas had to improve. Together, the following technologies would revolutionize how software was deployed and consumed:

Internal API approaches in many architectures, spurred by executive actions like the infamous Bezos memo, forced developers to make more and more functions available to other teams, increasing development speed and also improving security.

Cloud service providers emerged as smaller services consolidated. Very large data users like Amazon realized that selling access to their internal machine-management processes reduced IT costs for other companies > which drove demand and scale > which reduced the cost of processing and storage, in a virtuous circle.

Microservices and event architectures became mainstream approaches. The problem was not increasing the throughput of serial requests; it was building systems that could do massively parallel work (a small sketch of the idea follows this list).

DevOps and test automation reduced time-to-deploy for new code. Test automation and Test Driven Development (TDD) made it possible to test everything whenever new code was checked in - improving reliability and consistency.

True dynamic browser applications appeared, managing state locally and using API calls for data.

Bandwidth improved, and JavaScript-based state managers in the browser reduced how much of it each interaction required.

Business-to-Business (B2B) services took over and injected massive investment that, in turn, drove further cost reductions with scale.
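As a rough sketch of the serial-versus-parallel point above (not any particular company's architecture), the following Python snippet uses asyncio from the standard library to handle a burst of events concurrently; the event handler is a hypothetical stand-in for a call to a downstream service.

    # Sketch of the shift from serial request handling to massively parallel
    # event handling. The "events" and the handler are illustrative only.
    import asyncio
    import random

    async def handle_event(event_id):
        # Stand-in for calling a downstream microservice for one event.
        await asyncio.sleep(random.uniform(0.05, 0.2))  # simulated I/O latency
        return f"event {event_id} processed"

    async def main():
        events = range(1, 101)  # a burst of 100 incoming events
        # Handled serially, total time is the sum of every call;
        # handled concurrently, it is roughly the time of the slowest call.
        results = await asyncio.gather(*(handle_event(e) for e in events))
        print(f"{len(results)} events handled concurrently")

    if __name__ == "__main__":
        asyncio.run(main())

The same shape - many small, independent handlers running side by side - is what microservice and event-driven designs scale out across machines.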

 

Common Concepts in Good User Experiences

Important similarities exist among these GUIs, based on sound principles of cognitive psychology and proven through thousands of hours of testing and billions of hours of use. They are summarized below:

Making things even more complicated, experience designers are aiming for a moving target. Standards evolve, hardware systems get faster, displays get larger, computers are everywhere [and helpfully know where we are]. Our expectations are changing as better ideas emerge.

Good design helps us navigate systems - compatible changes to organization, detailing, color, and patterns give us clues to recognize where we are in the application, much of the time without active logic or inference. Our very competent brains aid and fight us in subtle ways.

As media theorist Marshall McLuhan pointed out in the fifties while studying television, "...the medium is the message." As humans we must reject carrier information to extract real information from the events around us.

For example, when we watch video we make constant evaluations of whether the information we see is important: the speaker's words rather than the color of their green jacket. If we did not evaluate the jacket color as unimportant and "reject" it, we would have great difficulty deciding what was important in the huge flow of information coming at us. In many ways video (or gaming, or podcasts, or...) and this process of rejection are affecting our values and thought patterns.

All presentations of information have many levels that users quickly learn to reject. A consistent interface makes it easier for the user to quickly extract information from the screen. Conversely, changing from learned ways of displaying or manipulating information leads to confusion and doubt. When most of us are forced to build a new internal model of hierarchical information we resist, and we may never notice the new, easier option when it is added later.