It's a dessert topping, it's a floor wax!

Peter Deutsch <> Wed, 05 April 1995 21:22 UTC

From: Peter Deutsch <>
Date: Wed, 5 Apr 1995 16:53:24 -0400
In-Reply-To: "C.K.Work"'s message as of Apr 3, 13:21
X-Mailer: Mail User's Shell (7.2.4 2/2/92)
To: "C.K.Work" <>,
Subject: It's a dessert topping, it's a floor wax!
Reply-To: Peter Deutsch <>
Precedence: list

g'day all,

The original posting has already sparked some comments,
but here are a few additional observations...

[  "C.K.Work" wrote: ]

} .  .  .  Unless I'm misunderstanding things,
} this is not the case. WWW as a technology surely offers much more than
} Gopher can - both now, but perhaps more importantly in terms of scope
} for development. At present Gopher space is effectively a subset of Web
} space, and this will remain the case until Gopher clients can read html
} (in which case won't they be Web clients?).
.  .  .
} Granted, looking at the client end ONLY, it is possible that Gopher
} clients may include features which make them easier/better/faster to
} use, and for those with a serious investment in Gopher based services,
} this is a good thing. But, if you were starting with a clean slate are
} there any reasons to go down the gopher route rather than the Web route?
} Everything I've seen would indicate that the only real future is with
} Web - but I'm only too happy to be corrected on this! :)

When using the current crop of stateless, client/server
based browsing systems (and in particular WWW and Gopher,
which are the most popular) a couple of things should be
kept in mind.

First and foremost is that we need to be very careful to
distinguish between the transport protocol and the
associated display/rendering mechanism. Thus, in WWW we
have (by default) HTTP and HTML. In Gopher, we have (by
default) the Gopher+ protocol and ASCII.

You must also be careful to distinguish between the family
of Mosaic-based browsers (which have been instrumental in
making WWW popular) and WWW itself. In effect, Gopher and
WWW are _both_ subsets of the data space which can be
viewed by Mosaic. Gopher uses a menu-oriented data model,
the Web uses a hypertext-oriented data model. Mosaic
supports both models, and also supports additional models,
including scalable full page description format
(Postscript), various graphic formats, and (soon)
scalable/searchable full page format with hypertext
attachments (Adobe's PDF format).

At the level of basic protocol capability, I claim that it
is possible to accomplish pretty much the same things with
either HTTP or Gopher+. HTTP has MIME capability and thus
content negotiation; Gopher+ has a greater ability to issue
queries about the meta data on a link, so a smart client can
figure out what a menu item is pointing to and take
appropriate action. You can just as easily serve HTML or
ASCII data and choose an appropriate rendering tool based
on content type in either system (this doesn't mean that
any particular gopher browsers support HTML, but this is
_not_ a limitation of the underlying protocol).
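The difference in where the negotiation happens can be sketched in a few lines. The paths, selectors and media types below are made-up illustrations, not anything from the original discussion; the sketch just builds the on-the-wire request strings for each style:

```python
# Sketch of the two request styles discussed above.
# Paths and selectors here are hypothetical examples.

def http_request(path, accept_types):
    """An HTTP/1.0 GET advertising acceptable content types,
    letting the server pick a representation (content negotiation)."""
    return (
        f"GET {path} HTTP/1.0\r\n"
        f"Accept: {', '.join(accept_types)}\r\n"
        "\r\n"
    )

def gopher_plus_attr_request(selector):
    """A Gopher+ attribute query: selector, TAB, '!' asks the server
    for the item's meta data (+INFO, +VIEWS, ...) before fetching."""
    return f"{selector}\t!\r\n"

print(http_request("/docs/paper", ["text/html", "text/plain"]))
print(gopher_plus_attr_request("0/docs/paper"))
```

The HTTP client states up front what it can render and lets the server choose; the Gopher+ client instead asks the server to describe the item first and decides after seeing the meta data.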

At the level of rendering, neither system _requires_ the
associated default content type (and corresponding
rendering format) and both currently support a range of
other data types. In WWW, you use the Content-Type info
sent by the server to determine the appropriate rendering
to perform. In the original gopher, you determined content
type by examining the first byte of the menu item, and
with Gopher+ you can accomplish this by querying about
meta data before fetching the menu object. Again, the
_system_ capabilities are simply not that far apart,
despite the media hype about WWW and the perception of
Gopher as an ASCII-only system.
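For the original Gopher case, "examining the first byte" means reading the item-type character that leads each tab-separated menu line. A minimal sketch of what a client does with it (the type table is illustrative, not exhaustive, and the host name is made up):

```python
# Parse a classic Gopher menu line: a one-character item type,
# then display string, selector, host and port separated by tabs.
GOPHER_TYPES = {
    "0": "text file",
    "1": "submenu",
    "7": "search server",
    "9": "binary file",
    "g": "GIF image",
}

def parse_menu_line(line):
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")
    return {
        "kind": GOPHER_TYPES.get(item_type, "unknown"),
        "display": display,
        "selector": selector,
        "host": host,
        "port": int(port),
    }

item = parse_menu_line("0About this server\t0/about\tgopher.example.com\t70")
print(item["kind"], "->", item["selector"])
```

The client picks a rendering tool from `kind` exactly as a Web client picks one from a Content-Type header; only where the information travels differs.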

Judging from user reaction, hypertext is the more popular
data model of the two (where users have the needed
bandwidth and browser capability), and the use of embedded
graphics does enhance usability (again, bandwidth
permitting), but when comparing the two systems it should
be kept in mind how much of this is determined by current
implementations, _not_ overall technical capability.

Another important issue, and one often ignored in the "WWW
versus Gopher" wars, is the question of maintainability.
This has nothing to do with transport and little to do
with rendering. The question is: how easy is it to set up
and operate a particular server?

Again, we find that the work involved in installing,
configuring and operating a basic server is almost
identical with the two systems. In either case, you need
to obtain the code (and often compile it), edit a
configuration file to choose appropriate data directories,
port number and so on, and you need to decide such things
as whether you want stand-alone or inetd based service.
These decisions are independent of what type of data you
will be serving and thus setting up any of the basic
servers seems to me to be pretty much the same.
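The stand-alone versus inetd decision mentioned above comes down to who owns the listening socket: inetd accepts the connection and hands it to the daemon on stdin/stdout, while a stand-alone daemon opens and manages the socket itself. A toy sketch of the two modes (the port number and the canned reply are made up, and the reply loosely mimics a one-line Gopher menu):

```python
import socket
import sys

# Hypothetical canned reply, shaped like a one-line Gopher menu.
BANNER = b"iToy server: hello\t\terror.host\t1\r\n.\r\n"

def serve_inetd():
    # Under inetd, the accepted connection arrives on stdin/stdout;
    # we just read the request line and write our reply.
    sys.stdin.readline()
    sys.stdout.buffer.write(BANNER)

def serve_standalone(port=7070):
    # Stand-alone mode: we create and own the listening socket
    # and loop accepting connections ourselves.
    with socket.socket() as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", port))
        s.listen(5)
        while True:
            conn, _ = s.accept()
            with conn:
                conn.recv(1024)          # the selector / request line
                conn.sendall(BANNER)
```

Either mode serves the same data; the choice is an operational one (inetd is simpler to administer, stand-alone avoids a fork per connection), which is part of why the setup effort is so similar across the two systems.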

There is definitely more initial work involved in setting
up a Web site, since you would presumably want to convert
your existing data to HTML and the associated tools to
help you do this are primitive, to say the least. The
final effect is usually perceived by the consumer to be
much better, which is a reason to do it, but it doesn't
make the task any easier.

And now a final comment on bandwidth. There are still
_significant_ numbers of Internet users who do not have
access to WWW, primarily because of their limited
bandwidth capabilities. As the net continues to expand
into more countries and more user communities, I think
we're going to continue to have need for a system that not
just tolerates, but supports slow speed links. Lynx makes
the Web a little more accessible to VT100-capable users,
but it's not as usable as some of the best Gopher browsers.

The Web is primarily popular because of the extra info
sent on each page, primarily in the form of graphics, and
this pushes a minimum usable Web connection to something
around 14.4k. In my experience, anything less is
_painful_. In terms of bandwidth used, the Web community
boasted when their total backbone traffic (measured in
bytes transported to port 80) exceeded that served by
gopher (to port 70). What Web enthusiasts failed to point
out was that they were doing this with an order of
magnitude fewer packets (which implied they were still an
order of magnitude below the number of gopher _references_
at the time). I think their reference counts have now
passed Gopher, and I'm _NOT_ questioning the Web's
popularity, but we do need to look beyond the hype and see
what people are actually using. Gopher is still popular.
There are still significant numbers of Gopher users,
because a) there's data they want and b) it's a system
they can use.

A large number of people are still on central mainframes,
ASCII terminals, old/slow pre-386 PCs, and so on. You find
them in middle schools, community libraries, small
companies, large companies that are not yet committed to
the Internet, and in their basements. These people are
still better served by Gopher, which is both bandwidth
friendly and easier to set up. 

In conclusion, I currently look askance at any attempt to
declare a winner in this battle, given how far from being
done I perceive any/each of these tools. 

Yes, _Mosaic_ has attracted tremendous commercial
development and opened the net to a new class of user, but
that says more about the integration of multiple protocols
into a single client than it does about the Web itself.

I agree that the long term future of Internet information
services requires a graphics capability and I also don't
dispute the popularity of the Web, but I don't think we
should disenfranchise those on slow speed links, and we
shouldn't necessarily endorse a "one protocol/data
model/browser model fits all" philosophy. That seems
premature, to say the least.

Neither WWW nor Gopher is yet anywhere near being done
(as but one example, neither is at all good at supporting
reliable services, since neither has any support for
detecting/correcting broken links short of "try it and see
if it works"). Finally, there are still significant moves
to happen in the content type side of things (for example,
this week Adobe announced a deal with Netscape to provide
PDF support. Ask yourself what this means for HTML and
then ask yourself whether you care what protocol serves a
PDF document to you). 

In the long run, users don't care two hoots about any
particular technology (how many people know even the
slightest thing about the engine in their cars, anyways?).
As a developer/integrator I care only insofar as it
allows me to provide my customers with the tools they need
at a price they can afford. I say, cutting off branches of
the development tree at this point seems risky, to say the
least.

I've said it before, and I'll say it again. If we declare
victory today, we risk being accused of endorsing
WorldWideWeb as the DOS of the '90s.  Sure, DOS works, but
it could have been so much better if only they'd kept
working on it, and if only it hadn't spread as quickly as
it did...

					- peterd