Map-Based Horizontal Navigation in Educational Hypertext
This paper was first presented at ACM Hypertext 2002 in June at the University of Maryland, College Park, USA, where it won the SIGWEB Ted Nelson Newcomer Award, an award cosponsored by JoDI. We are pleased to reproduce the paper here.
The paper discusses the problem of horizontal (non-hierarchical) navigation in modern educational courseware. It considers why horizontal links disappear, how to support horizontal navigation in modern hyper-courseware, and looks at our earlier attempts to provide horizontal navigation in Web-based electronic textbooks. Map-based navigation -- a new approach to support horizontal navigation in open corpus educational courseware -- which we are currently investigating, is presented. We describe the mechanism behind this approach, present a system, KnowledgeSea, that implements this approach, and provide some results from a classroom study of this system.
Keywords: Horizontal navigation, electronic textbooks, similarity navigation, concept-based navigation, map-based navigation, Self Organizing Maps
A hierarchically organized hypertext is becoming the dominant model for publishing educational material. The Web has made the idea and the benefits of hypertext clear to almost every developer of educational content. The overwhelming majority of Web-based and CD-ROM-based educational material is no longer plain linear text, but hierarchical hypertext published in either HTML or PDF format. Quite often, the hierarchy is rather simple: a table of contents with a list of links to individual pages with "lectures" or "chapters" that have no further internal structure. However, there are also many well-developed "electronic textbooks" with deep hierarchical structure (chapters, sections, subsections, etc.) and elaborate hierarchical navigation.
This trend was certainly noticed by the producers of tools for developing educational material. Major producers of so-called courseware authoring tools, such as CourseInfo, WebCT and TopClass, have been providing support for creating hierarchically structured hypertext courseware for about four years. A good number of even more elaborate tools have been developed by various research teams.
Thousands of "courses" and "textbooks" have been created and published in the form of hierarchical hypertext. Few of them would be considered real hypertext by traditionalists, however. The only kind of navigation provided by the majority of existing educational courseware is hierarchical navigation via the table of contents: from parent to child and from child to parent, sometimes even to siblings of a node and its parent node. Missing are "classic" associative hypertext links and classic ways of navigation: from a page to associated pages that are similar, that can enhance the material presented on the page, explain it differently, or present an example. In the context of hierarchically organized educational hypertext we call these "horizontal links" and "horizontal navigation" to stress the contrast with vertical hierarchical navigation.
Traditionally, horizontal links and the possibility of horizontal navigation have been considered one of the main benefits of hypertext. Education has often been used as a sample domain by developers of classic hypertext systems to demonstrate these benefits, yet today few educational courseware packages have horizontal links. Why is this clearly beneficial type of link becoming extinct in educational courseware? What can be done to revive these missing links?
This paper discusses the problem of horizontal navigation in modern educational courseware, and starts by discussing the reasons for the disappearance of horizontal links. We then review known ways to support horizontal navigation in modern courseware. Our earlier attempts to support horizontal navigation in Web-based electronic textbooks are reported. The main part of the paper presents map-based navigation - a new approach to support horizontal navigation in open corpus educational courseware that we are currently investigating. We describe the mechanism behind this approach, present a system called KnowledgeSea that implements this approach, and provide some results from a classroom study of this system.
There are a number of reasons for horizontal navigation becoming practically extinct. First, creating horizontal links has always involved a large investment of time. It is quite easy and natural for an author to develop educational material as a hierarchy. It is much harder to provide a well-developed set of horizontal links. Each link requires careful consideration of which pages have to be connected, why, and where to attach the link within a page. Moreover, the whole hyperspace has to be planned and "chunked" in advance so that pages can serve as linked resources for each other. The majority of existing educational materials were created by teachers, who are not professionals in hyperspace organization. They have limited time to invest in developing quality educational material. Naturally, horizontal links are considered "extras" that are sacrificed first on the way to getting educational material published.
Creating horizontal links is also technically hard. None of the commercial tools for developing educational material mentioned above supports the authoring of horizontal links (an interesting fact in itself). In some of these tools, like CourseInfo, creating non-hierarchical links is simply impossible. Creating links manually requires a reasonable knowledge of HTML that many authors do not possess, and their materials cannot be retrofitted by third parties.
An even more serious problem is the very nature of modern educational courseware. A carefully planned hyperspace made by a single author and linked once and forever is no longer the model for creating educational hypermedia. Today, teams of experts create educational courseware, and none of them has the clear picture of the whole hyperspace needed to provide quality links. For example, a developer of educational problems has little knowledge of the problem-solving examples developed by another expert and thus has little chance of providing horizontal links from a problem to examples that may be helpful in solving it. Second, educational courseware is quite volatile: it is updated after every course. New problems, examples and explanations are added all the time. Clones and versions of an original course are created for special needs. Maintenance of horizontal links in such volatile material becomes a headache, both removing links whose target has disappeared and adding new ones.
Finally, the idea of horizontal links conflicts with the most recent approach to developing educational courseware, which is based on educational objects, content re-use and educational metadata. The paradigm here is that courseware is created from reusable content objects that can be produced by different authors. In this context, authoring "hardwired" links between pages and other educational objects becomes simply impossible (and forbidden by authoring guidelines), since every object could be re-used outside its original context, where such links would have no destination.
This section reviews the tools and approaches developed by hypertext researchers to support horizontal linking and horizontal navigation. These tools can be divided into two groups:
- Tools that make the job of creating these links easier, overcoming the first two problems mentioned in the previous section;
- Tools that solve the problem of horizontal navigation in large and volatile educational courseware that may be created by multiple authors with reusable content.
The natural way to support horizontal navigation is to help authors create horizontal links. This traditional support is also used in modern educational hypermedia. Unlike commercial courseware authoring tools, advanced research-level tools provide good support for authoring horizontal links. The best tools go far beyond simple help in creating a particular link. There are special methodologies for developing rich horizontal links and tools to author these links. There are also tools that can help an author identify pairs of pages that can be linked, for example, tools based on Self-Organizing Maps (SOMs). There are even architectures that provide the user with adaptive navigation support to help in traversing horizontal links. Some approaches that support an author in creating horizontal links manually can also be used to create horizontal links automatically. Automatic linking has always been an important research topic in the hypertext field. Known approaches that can be used for designing semi-automatic and automatic horizontal linking are based on semantics, word-level similarity, and ontologies. Automatic horizontal links can also be created "post factum" by processing the navigation traces of real users [4; 20]. The manual, semi-automatic and automatic approaches cited above aim to create "hardwired" links between pages "once and forever". They make authoring easier and can also help in team-based development of educational material, where none of the authors has a mental picture of the whole hyperspace.
Similar solutions exist for providing horizontal navigation in volatile educational courseware. The problem of horizontal linking in the context where both the origin and destination of a link can change or disappear has been on the agenda of hypertext researchers for a long time and produced a stream of research on dynamic linking. Dynamic linking is similar in nature to automatic linking, but the focus is different. Dynamic linking was developed for contexts where pages in the hyperspace are created separately and at different times. While old pages can be removed and new pages added, dynamic linking maintains horizontal connectivity by creating horizontal links "on the fly" during run-time or editing time.
The first dynamic linking solution was suggested by the StrathTutor system. StrathTutor allowed the author of an educational hypermedia to mark both link origins and hypertext nodes with sets of keywords. Every link thus has an origin, but its destination is replaced by a set of keywords. To resolve such a link at run-time, the system selects the page whose set of keywords is most similar to the link's (this target page may actually be added to the hyperspace long after the link itself was created). This indexing-based dynamic linking appeared to fit well with the volatile nature of educational hyper-courseware and was used, with some variations, in several other educational systems. It also fits well with the modern courseware re-use approach, where all educational objects are created with metadata that include content index components. A pioneering attempt to use dynamic linking with courseware re-use was the IDEALS-MTS project. We expect more research in this direction in the coming years.
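A minimal sketch of this kind of run-time link resolution follows; the page names, keyword sets and the simple overlap measure are illustrative assumptions, not StrathTutor's actual data or metric:

```python
def resolve_link(link_keywords, pages):
    """Resolve a dynamic link at run-time: pick the page whose keyword
    set overlaps most with the keywords attached to the link origin."""
    # The destination is chosen when the link is followed, so pages
    # added to the hyperspace after the link was authored can still
    # become its target.
    return max(pages, key=lambda p: len(link_keywords & pages[p]))

# Hypothetical hyperspace: page name -> keyword set
pages = {
    "loops-intro":    {"loop", "while", "iteration"},
    "pointer-basics": {"pointer", "address", "memory"},
    "while-examples": {"loop", "while", "example", "iteration"},
}

target = resolve_link({"while", "loop", "example"}, pages)
print(target)  # -> while-examples
```

Because resolution happens per request, removing `while-examples` from the dictionary would simply make the next-best page win, with no "hanging" link left behind.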
Similarity-based navigation, another traditional approach to dynamic linking, is not specific to educational hypermedia alone. The idea of similarity-based navigation is to use some similarity metric (such as those typically used in information retrieval) and dynamically link each document to the several most similar ones. This approach provides dynamic links in a huge collection of separately authored documents. Since this paper is oriented to a general hypermedia-literate audience, we do not explain this interesting approach in detail. A good analysis of similarity-based navigation is provided in .
In our work on dynamic horizontal navigation in educational hypermedia we were concerned with the existing interfaces to automatic and dynamic linking. In all the approaches listed in the previous section, users are faced with linking decisions made completely by the system. Typically, to every page viewed by the user the system adds one or more automatic/dynamic links to similar or relevant pages. The problem is that the system's decision about similarity may not always be correct from the user's perspective. Pages that are similar to the given page along different dimensions are mixed together, even though some of these dimensions may be irrelevant to the given user at the given moment. The only freedom left to the user is to choose blindly one of these pages.
The goal of our work was to give the user more freedom in selecting a link in the context of dynamic horizontal navigation. Following our earlier work on adaptive navigation support, in which an intelligent hypertext and a user work together in selecting the most relevant among existing links, we wanted to create the same "cooperative" approach for dynamic linking and horizontal navigation. Our first solution to this problem was concept-based navigation, pioneered in the ISIS-Tutor system and later refined in the InterBook system. Concept-based navigation was created for the same context of navigation between educational pages indexed with concepts. Instead of generating similarity links from one page to another for "one-step" navigation, we provided two-step navigation via "concept pages" that, to appeal to a familiar metaphor, InterBook calls "glossary pages". In InterBook, every concept used for indexing educational pages has a dedicated page in the hyperspace, called its glossary page. On every glossary page the system shows a brief description of the concept provided by an author (as in a real glossary) and generates links to all pages that have this concept in any part of their index. As a result, a glossary page becomes a "jump-station" for one-step navigation to every page related to the concept.
Conversely, the system generates links from every content page to the glossary pages of all concepts associated with that page. InterBook can generate two kinds of glossary links. First, InterBook provides a concept bar to the right of every page that shows all concepts related to the page (i.e. concepts from the page index). This mechanism ensures that a student is always able to see the real educational closure of a page. Each concept name on the concept bar is a link to the proper glossary page. Second, additional links from the text of a page to glossary concepts are generated based upon the concept names: each keyword or key phrase on a page becomes a link to the proper glossary page.
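The two directions of glossary linking can be sketched as a simple inversion of the page index; the pages and concepts below are hypothetical, and InterBook's real implementation is of course richer than this:

```python
# Hypothetical page index: content page -> set of concepts it covers
page_index = {
    "while-loops":   {"loop", "condition"},
    "for-loops":     {"loop", "counter"},
    "if-statements": {"condition"},
}

def build_glossary_links(page_index):
    """Invert the page index: each concept's glossary page links to
    every content page indexed with that concept, while the page
    index itself gives the glossary links shown on a content page."""
    glossary = {}
    for page, concepts in page_index.items():
        for c in concepts:
            glossary.setdefault(c, set()).add(page)
    return glossary

glossary = build_glossary_links(page_index)
print(sorted(glossary["loop"]))        # pages linked from the "loop" glossary page
print(sorted(page_index["for-loops"])) # glossary links shown on the "for-loops" page
```

Both directions are derived from the index at run-time, so adding a newly indexed page to `page_index` immediately connects it to the relevant glossary pages with no manual link maintenance.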
With concept-based indexing, InterBook can build a naturally structured and tightly interlinked hyperspace of educational material, which supports advanced navigation. For example, a student can start from a page that describes several concepts, then move to a glossary page that describes one of these concepts. If the student still cannot completely understand the concept, he or she can navigate to one of the pages that provide an example for the concept. Then, the student can select one of the problems related to the concept to test the obtained knowledge. If the problem appears to be hard, the student can analyse the list of concepts in the problem spectrum and move from the problem to another concept that is not yet clear (and that can be far away in the network from the starting concept).
Thus, in two steps the user can navigate horizontally from one page to a related page using a glossary page. Part of the job in the process is done by the system that generates glossary links on content pages and content links on glossary pages. The system also annotates adaptively all generated links to further help the user in navigation decisions. Another part of the job is performed by the user who can choose the most relevant among several possible directions in horizontal navigation by choosing to view one of the glossary pages from the content page.
The concept-based navigation approach satisfied all the goals we set for it. It worked perfectly with volatile educational material, since all links were generated on the fly. Pages that are removed or changed do not create any problems with "hanging" links, and a newly indexed page added to this dynamically generated hyperspace is immediately connected to the glossary pages of all its indexed concepts. The approach also lets the system and the user work together in selecting the relevant direction for horizontal navigation, instead of forcing the user to choose blindly from "one-step" similarity links as in traditional approaches.
We have been happy with the concept-based navigation approach when working with closed corpus indexed educational content. We hit a problem when trying to apply it to open corpus navigation, however. Our goal was to provide horizontal navigation from authored educational pages to similar pages on the Web. The subject we were exploring was C programming.
We asked the students of our C programming courses to use not only our own educational material, but also about ten good C tutorials we had found on the Web, all of them traditionally organized as hierarchical "hyper-textbooks". Originally, we provided links to the root nodes of all these tutorials from one of the top pages in our hypertextual learning material. Analysis of user navigation logs showed a quite obvious thing: the users were not using these tutorials at all. The information was helpful, but located too far away in the hyperspace. For example, a user who was not completely happy with our learning material on while loops might benefit from a different presentation of this concept in one of the Web tutorials. To get to the right page, the user needs to navigate up to the page with the roots of all tutorials, then descend down, and possibly repeat this process several times trying to find the most helpful page in several tutorials. None of the students in our class was able or willing to do it.
We clearly needed to provide horizontal navigation links from our course material pages right to the relevant pages of online tutorials, but it was also clear that we could not use concept-based navigation. Conceptually, concept-based navigation can work perfectly to provide horizontal connections between course pages and open corpus Web pages; it was even demonstrated in . The problem is that concept-based navigation requires manual indexing of every page of every tutorial: hundreds and hundreds of pages. We simply had no resources to do it this way. After several attempts to stay within the concept-based approach using automated indexing, we developed and evaluated a different approach to horizontal navigation that can handle the open corpus educational hyperspace. This approach, which we call map-based navigation, is presented in the following sections.
Map-based navigation is an approach that we have developed to support horizontal navigation in "mixed corpus" educational hypermedia. Such hypermedia includes some traditional closed corpus material that was specially designed for the needs of a particular course, and large portions of open corpus Web material that is relevant to the course but was not designed for it. The challenge is to provide horizontal navigation links from closed corpus pages to relevant open corpus pages, as well as between different open corpus pages. We have investigated the problem in the context of creating a supportive Web resource for a typical university class on C programming. In this context, the most often used closed corpus Web-based resource is simply a set of lecture slides: one stack for every lecture. The most easily available open corpus resources are hypertextual C tutorials (like http://www.cs.cf.ac.uk/Dave/C/CE.html). As we have mentioned, hierarchical navigation in this context does not work: it was useless just to point the users to the roots of these tutorials. We have developed the KnowledgeSea system to help the user navigate from lectures to relevant tutorial pages and between them.
The core of KnowledgeSea is a two-dimensional map of educational resources (Figure 1). Each cell of the map is used to group together a set of educational resources. The map is organized in such a way that resources (Web pages) that are semantically related are close to each other on the map. Resources located in the same cell are considered very similar; resources located in directly connected cells are reasonably similar, and so on. The map is built using a neural network technology described in the next section. Each cell displays a set of keywords that helps the user locate the relevant section of the map. A cell also displays links to "critical" resources located in this cell. "Critical resources" are those under user consideration, which thereby serve as origin points for horizontal navigation.
Figure 1. Interface to the KnowledgeSea system. The most compact of the maps is shown
For lecture-to-tutorial navigation the critical resources are lectures and lecture slides (see the two map cells in the enlarged section in the upper left part of Figure 2). If other educational resources are located in the cell, a red dot is shown. The cell color indicates the "depth of the information sea": the number of resource pages lying "under" the cell. Following the metaphor of an information sea, our map uses several shades of blue in the same way they are used on traditional sea maps to indicate depth. For example, light blue indicates the presence of 1-4 related pages; dark blue indicates more than 10 Web pages. The whole set of resources "under" the cell can be observed by "diving". Clicking on the red dot opens a cell content window (on the right in Figure 2) that provides a list of links to all tutorial pages relevant to this cell. A click on any of these links opens a resource-browsing window with the selected tutorial page. This page is loaded "as is" from its original URL. It is visualized in a separate window in order to allow the user to navigate within the tutorial and easily go back to the map. A user can read this page and use it as a starting point to navigate an area of interest in the tutorial.
Figure 2. Working with the KnowledgeSea system. Each cell on the map contains a list of similar pages from three Web tutorials on the C language. When the user "dives" into the selected cell, a pop-up window shows the list of links to relevant pages in all available C tutorials. The enlarged cells (top left) show typical information from the top of the cell: keywords and links to critical resources (here, slides of lecture 14)
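The sea-depth coloring can be sketched as a simple binning of a cell's page count onto shades of blue. The text fixes only the two extremes (light blue for 1-4 pages, dark blue for more than 10), so the intermediate bin and the particular color values below are our assumptions:

```python
def cell_color(n_pages):
    """Map the number of pages 'under' a map cell to a shade of blue,
    lighter for shallow cells and darker for deep ones."""
    if n_pages == 0:
        return "#ffffff"   # empty cell: white (assumed)
    if n_pages <= 4:
        return "#cce6ff"   # light blue: 1-4 pages (per the text)
    if n_pages <= 10:
        return "#66a3ff"   # medium blue: intermediate bin (assumed)
    return "#003380"       # dark blue: more than 10 pages (per the text)

print(cell_color(3))   # -> #cce6ff
print(cell_color(12))  # -> #003380
```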
The map serves as a mediator to help the user navigate from critical resources to related resources. Links to critical resources work as landmarks on the map and, together with the keywords, give an idea of the material organized by the map. If the user wants to find additional information on the topic of lecture 14 (devoted to pointers), the first place to look is the cell where the material of this lecture is located (shown as the L14 link in the enlarged section of Figure 2). If the user is looking for material that extends the topic of the lecture in some particular direction, the cells close to the original cell provide several possible directions. For example, the material related to memory usage in the context of pointers is located underneath the cell marked L14. Links to other critical resources shown on the map can help in selecting the right direction. For example, a good place to look for material that can connect the content of lectures 14 and 15 is a cell between the cells where the L14 and L15 links are shown. As in the case of concept-based navigation, the map provides mediated horizontal navigation. Instead of navigating from one page directly to another, the user moves from a page to a mediator (the information map) that helps the user select the page related to the original in the "right" sense.
Our knowledge map helps the user navigate just as any regular map does, drawing on spatial orientation and visual memory. The keywords work as a legend and the links to critical resources as landmarks. We have tried to build a stronger connection with real maps by providing several versions of the information space map that differ in their level of detail. The version shown in Figure 1 is the most concise and the smallest: only two keywords are shown for each cell, and only the lecture number identifies the link to lecture slides. A more detailed map contains six keywords and the number of the lecture; the most detailed shows five keywords and the full title of the lecture. The negative side of more detailed maps is that they require progressively larger tables and thus are harder to "grasp".
The core of the map-based navigation approach is a two-dimensional information map that is built using the ideas of spatial hypertext. Spatial hypertexts allow the user to express the relationships and context of information in a more flexible way than traditional linking mechanisms. In spatial hypertexts the relationship between pieces of information is expressed by their relative location in a two-dimensional space. A clear advantage of this kind of hypertext is the possibility of expressing "constructive ambiguity", which allows the user to create "weak links" between two pieces of information by placing them near, but not quite next to, each other. Two nodes placed very close together are, in effect, linked in the strongest way. Another important advantage of spatial hypertext is that user navigation can be supported by visual memory and pattern recognition. Implementation of spatial hypertexts is almost always a manual process, although there are attempts to use automatic techniques. In our case the information map was created automatically using a neural network called a Self-Organizing Feature Map, which is discussed in the following section.
Artificial neural network (ANN) models have particular properties such as the ability to adapt, to learn, or to cluster data. These models are inspired by our present understanding of the biological neural system and are made up of a dense interconnection of simple non-linear computational elements corresponding to biological neurons. Each connection is characterized by a variable weight that is adjusted, together with other parameters of the net, during the so-called "learning stage". Self-organizing networks, and in particular the Self-Organizing Feature Map (SOM, sometimes referred to as a Kohonen map), are ANNs that try to build a representation of some feature of the input vectors used as the "learning input set" during the learning stage. In this neural network, neurons are organized in a lattice, usually a one- or two-dimensional array, that is placed in the input space and is spanned over the input vector distribution. Using a two-dimensional SOM network it is possible to obtain a map of the input space in which closeness between units or clusters on the map represents closeness of the input vectors.
Recently this network has been used to classify information and documents in "information maps". These are two-dimensional graphical representations in which all the documents in a document set are depicted. The documents are grouped in clusters that concern the same topic, and clusters about similar topics are near each other on the map.
The SOM algorithm principle can be explained in an abstract system without reference to any biological structure. The algorithm defines a sort of elastic lattice of simple processing units that are organized to fit a set of input points in a high-dimensional input space and to approximate their density function. Each processing unit is associated with a weight vector of the same dimension as the input vectors. Using the weights of each processing unit as a set of coordinates, the lattice can be positioned in the input space. During the learning stage the units' weight vectors change their position and "move" towards the input points. This "movement" becomes slower and slower, and by the end of the learning stage the network is "frozen" in the input space.
After the learning stage, each input can be associated with the nearest network unit. If the surface is visualized, the inputs are distributed over it like landmarks on a map. The main application of SOMs is the visualization of high-dimensional data in two dimensions, and the creation of abstractions, as in many clustering techniques.
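The training procedure described above can be sketched as follows. This is a deliberately minimal SOM with a Manhattan-distance lattice neighbourhood and linearly decaying parameters, not the SOM-PAK implementation actually used; the data points are toy stand-ins for document vectors:

```python
import random

def best_matching_unit(units, x):
    """Return the lattice coordinates of the unit whose weight vector
    is nearest (in squared Euclidean distance) to input x."""
    return min(units, key=lambda u: sum((w - xi) ** 2
                                        for w, xi in zip(units[u], x)))

def train_som(data, rows=4, cols=4, epochs=100):
    """Minimal SOM sketch: units sit on a rows x cols lattice, each
    holding a weight vector; for every input, the best-matching unit
    and its lattice neighbours are pulled toward the input, while the
    learning rate and neighbourhood radius decay until the map is
    effectively "frozen"."""
    random.seed(0)
    dim = len(data[0])
    units = {(r, c): [random.random() for _ in range(dim)]
             for r in range(rows) for c in range(cols)}
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                   # decaying learning rate
        radius = max(1, round(3 * (1 - t / epochs)))  # shrinking neighbourhood
        for x in data:
            br, bc = best_matching_unit(units, x)
            for (r, c), w in units.items():
                # Manhattan neighbourhood on the lattice
                if abs(r - br) + abs(c - bc) <= radius:
                    for i in range(dim):
                        w[i] += lr * (x[i] - w[i])
    return units

# Two toy "document" clusters in a 3-dimensional feature space
data = [[1.0, 0.0, 0.1], [0.9, 0.1, 0.0],
        [0.0, 1.0, 0.9], [0.1, 0.9, 1.0]]
units = train_som(data)
```

After training, `best_matching_unit` is reused to drop any vector, old or new, onto its cell of the lattice.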
The effectiveness of SOMs as a tool to cluster information in order to produce links and to develop information maps is discussed in many research works. Some studies indicate that the clustering results obtained using SOMs can be meaningful for users. In particular, one study validated the proximity hypothesis, according to which related topics are clustered close together on the map.
In one experiment, the SOM was trained using the nodes of a hypertext. Nodes in the same unit, or in units connected by the rectangular lattice, were considered linked to each other. This organization was compared to the link structure imposed by the author (the number of links in common was compared with the total number of links). The result can be expressed in terms of "link precision": 64.5% of the links in the original hypertext were "covered" by the SOM network, a result that validates the document organization obtained using the SOM.
Another important advantage of using SOMs is that the structure created is a tessellation (i.e. a division or splitting) of the information space in which documents are represented (vector space representation techniques are addressed below). Each unit of the SOM identifies an area of the information space (a set of points in the vector space) and groups together all the documents whose representation vectors belong to that area. Thus, the structure created in the information space after training can be reused to organize other information and documents if they are represented using the same features as the training set. If a new set of documents is submitted to the trained map, it will organize the new information using its "knowledge" of the information domain. This creates a "volatile" link structure that depends on the information space, not on the set of documents, and can continue to exist even if all the documents are removed from the map.
This characteristic allows us to develop a scalable system in which new information can be added and old information erased without losing the spatial organization and the volatile link structure created.
The "volatile" structure created here is different from that addressed in . In that case the structure is always under construction, and documents and links are continuously deleted or added. In our case links are represented by the neighborhood relationship on the map, so it is possible to add or delete documents without affecting the link relationship: the map will put each document in the right position, creating its relationships to the other documents.
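Placing a document on an already-trained map reduces to a nearest-unit search, which is why documents can come and go without retraining. The 2x2 map and two-dimensional vectors below are a toy illustration, not real trained weights:

```python
def place_document(doc_vec, units):
    """Assign a (possibly new) document vector to its nearest map unit.
    The trained lattice is reused as-is, so documents can be added or
    removed without retraining the network."""
    def dist2(w, x):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    return min(units, key=lambda u: dist2(units[u], doc_vec))

# Hypothetical 2x2 trained map: cell coordinates -> weight vector
units = {
    (0, 0): [0.9, 0.1], (0, 1): [0.5, 0.5],
    (1, 0): [0.1, 0.9], (1, 1): [0.2, 0.2],
}
print(place_document([0.85, 0.15], units))  # -> (0, 0)
```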
A mathematical representation of the documents is required to organize them on a map. We use the Vector Space Representation (VSR), a common document encoding based on statistical considerations: each document in a collection is represented by a vector in which each component corresponds to a different word. The component values are calculated by the TF*IDF method. In brief, the value depends on the frequency of occurrence of the word in the document (TF component) weighted by the frequency of occurrence in the whole set of documents (IDF component). The calculation of the TF*IDF representation often also includes a normalization factor, used to obtain a representation vector that is independent of the text length.
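A minimal sketch of the TF*IDF encoding with length normalization follows; the toy documents and vocabulary are illustrative, and the exact TF and IDF weighting variants used in KnowledgeSea may differ:

```python
import math

def tf_idf_vectors(docs, vocab):
    """TF*IDF encoding: term frequency weighted by inverse document
    frequency, then scaled to unit length so the vector is independent
    of the text length."""
    n = len(docs)
    # IDF: down-weight words that occur in many documents
    # (every vocab word is assumed to occur in at least one document)
    idf = {w: math.log(n / sum(1 for d in docs if w in d)) for w in vocab}
    vectors = []
    for d in docs:
        v = [d.count(w) * idf[w] for w in vocab]   # TF * IDF per term
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        vectors.append([x / norm for x in v])      # length normalization
    return vectors

# Toy collection: each document is a list of (already filtered) words
docs = [["pointer", "memory", "pointer"],
        ["loop", "while", "loop"],
        ["pointer", "loop"]]
vocab = ["pointer", "memory", "loop", "while"]
vecs = tf_idf_vectors(docs, vocab)
```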
The document set used for the learning phase of the SOM network included a total of 210 HTML files from three tutorials on the C programming language (Figure 3). All the pages of the chosen resources were downloaded and processed to filter out "noise" (copyright notes, author names, and so on). The C code in the pages was also removed to produce an effective document representation. This might appear strange, but is easy to explain. The TF*IDF representation is a "bag of words" representation, i.e. it is not able to capture the meaning of a sequence of words (or their context), so it cannot capture the meaning of a C code fragment inside a page. Conversely, it is possible to capture the meaning of the whole page by looking only at the plain text and removing the code between the <pre> </pre> tags. Another step in page processing is the extraction of the page title for use as an anchor on the map.
Figure 3. Result of the training phase of the neural network: the pages of three tutorials on C programming are grouped under different map cells
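The page-cleaning step described above (dropping the code between <pre> tags, stripping markup, and extracting the page title) can be sketched as follows. The regex-based approach and the function name are illustrative assumptions of ours, not the original Perl implementation:

```python
import re

def preprocess_page(html):
    """Clean one tutorial page: return (title, plain_text) with C code removed."""
    # Extract the page title to use as an anchor in the map.
    m = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    title = m.group(1).strip() if m else "(untitled)"
    # Drop C code fragments: the bag-of-words model cannot capture their meaning.
    body = re.sub(r"<pre>.*?</pre>", " ", html, flags=re.I | re.S)
    # Drop all remaining tags, keeping only the plain text.
    body = re.sub(r"<[^>]+>", " ", body)
    return title, re.sub(r"\s+", " ", body).strip()
```

A production cleaner would also strip boilerplate such as copyright notices and author names, as the paper describes; that part depends on the individual tutorials and is omitted here.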
The whole set of pages contains 4249 distinct words, but the documents were represented using only the 500 most common words remaining after the removal of stopwords. All the document representations are collected in a file and submitted to the neural network simulator.
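Choosing the representation vocabulary (the 500 most common non-stopword terms across the collection) can be sketched as follows; the stopword list shown is a tiny illustrative subset, not the one actually used:

```python
from collections import Counter

# Illustrative subset only; the real system used a full stopword list.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "is", "in", "it", "for"}

def choose_vocabulary(docs, size=500):
    """Pick the most frequent non-stopword terms across the whole collection."""
    counts = Counter(w for doc in docs for w in doc if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(size)]
```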
The KnowledgeSea system consists of three main components:
- a set of Perl programs that preprocess the documents and create the vector space representation for the learning phase of the SOM;
- the SOM program, the SOM-PAK simulator;
- a CGI script that is used to trace and log user navigation.
Figure 4 shows the interaction between the system components. The first step is preprocessing the source files to separate content information from the "garbage" that includes formatting and other irrelevant information, such as copyright notices and author names. The second step is producing a text surrogate by removing stopwords from the text and choosing the set of words for the vector representation. The third step, described in more detail above, is the calculation of the document representation.
Figure 4. Architecture of the KnowledgeSea system
In the next step, an 8x8 SOM was used to organize the documents. The size of the map is a compromise between the need for fine clustering and the need for compactness for visualization purposes. At the end of the learning phase, the map organizes the pages from the various resources: each cell collects conceptually similar pages from the various tutorials. The course lecture slides were processed in the same way as tutorial pages: each lecture was processed separately to extract text and remove C code, and was finally placed by the SOM network into the appropriate map cell.
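The actual training was done with the SOM-PAK simulator. As a rough illustration of what such training does, here is a minimal self-organizing map in plain Python; the learning-rate and neighborhood-radius schedules and all parameter names are simplifying assumptions of ours, not SOM-PAK's:

```python
import math
import random

def best_matching_cell(grid, v):
    """Return (row, col) of the cell whose weight vector is closest to v."""
    best, best_d = (0, 0), float("inf")
    for r, row in enumerate(grid):
        for c, w in enumerate(row):
            d = sum((wi - vi) ** 2 for wi, vi in zip(w, v))
            if d < best_d:
                best, best_d = (r, c), d
    return best

def train_som(vectors, rows=8, cols=8, epochs=50, lr0=0.5, radius0=4.0, seed=0):
    """Train a small SOM on document vectors; returns the grid of weight vectors."""
    rnd = random.Random(seed)
    dim = len(vectors[0])
    grid = [[[rnd.random() for _ in range(dim)] for _ in range(cols)]
            for _ in range(rows)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)            # learning rate decays over time
        radius = 1 + radius0 * (1 - t / epochs)  # neighborhood shrinks over time
        for v in vectors:
            br, bc = best_matching_cell(grid, v)
            for r in range(rows):
                for c in range(cols):
                    d2 = (r - br) ** 2 + (c - bc) ** 2
                    # Gaussian neighborhood: nearby cells are pulled toward v too,
                    # which is what makes neighboring cells conceptually similar.
                    h = math.exp(-d2 / (2 * radius * radius))
                    w = grid[r][c]
                    for i in range(dim):
                        w[i] += lr * h * (v[i] - w[i])
    return grid
```

The neighborhood update is the key point for navigation: because adjacent cells are trained together, topically related pages end up in the same or neighboring cells, which is exactly the property the map-based interface relies on.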
It is important to note that the KnowledgeSea system is scalable and can handle large portions of volatile or open corpus learning material. New learning resources can be added to the map at any time and will be automatically organized into the proper cells of the HTML-table map. New documents are processed in the same way as the original set of documents (Figure 4), but processing is faster since it is not necessary to train the SOM again. The link space built by the map is not rigid; it is not a set of defined links between information items, as in automatic linking, but an organization of the information space that can be reused many times. In the KnowledgeSea system it is easy to add a new C tutorial to the existing information map. This means that the navigation strategies and the knowledge of the map acquired by the user can also be reused: after several new tutorials have been added, a user still knows where to go to look for material on a particular topic. The KnowledgeSea system shows that the map-based navigation approach can support horizontal navigation in large hyperspaces of educational material that include fragments of open corpus Web resources.
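Placing a new page on the already-trained map, as described above, requires only a nearest-cell search rather than retraining. A minimal self-contained sketch (the function names and the cell-index structure are our illustrative assumptions):

```python
def nearest_cell(grid, vector):
    """Return (row, col) of the trained map cell closest to the document vector."""
    best, best_d = (0, 0), float("inf")
    for r, row in enumerate(grid):
        for c, weights in enumerate(row):
            d = sum((w - v) ** 2 for w, v in zip(weights, vector))
            if d < best_d:
                best, best_d = (r, c), d
    return best

def add_document(grid, cell_contents, doc_vector, title):
    """Add a page to the trained map: no retraining, just find its proper cell."""
    cell = nearest_cell(grid, doc_vector)
    cell_contents.setdefault(cell, []).append(title)  # title serves as the anchor
    return cell
```

Because the trained cell weights stay fixed, existing pages keep their positions when new material arrives, which is why the user's acquired knowledge of the map remains valid.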
The simplicity of adding new material to the system distinguishes map-based navigation from the concept-based approach. Adding new resources to be used with concept-based navigation requires a large investment of time for manual indexing. Adding new resources to the map-based navigation system can be done almost automatically, with very little manual work.
Another difference between the approaches is at the conceptual level: concept-based navigation takes a serialistic approach to horizontal navigation, since one of many concepts is used as a mediator in the process, while map-based navigation takes a holistic approach, in which the whole map of the information space (with its various landmarks) serves as the mediator. Thus, although somewhat similar on the surface (both are mediator-based approaches that enable the user to participate more actively in horizontal navigation), the two approaches are really orthogonal in nature and can be successfully used within the same system. We are currently working on a system that supports both concept-based navigation (within an open corpus hypermedia) and map-based navigation.
The functionality and usefulness of the KnowledgeSea system were evaluated in the context of a real C programming course at the University of Pittsburgh.
The system was available to the students as one of the components of our larger KnowledgeTree system, which provides Web-based access to all learning resources used by the students over the duration of the course. The goal of the KnowledgeSea system was to provide access to three large hyper-tutorials on the C language. As shown in Figures 1 and 2, the information map organized all course lecture slides and all pages from these tutorials. The KnowledgeSea system was available to the students for several weeks during their work with course lectures and their preparation for the final exam. The CGI component mentioned in section 7 was used to log all user navigation within the system.
During the last week of the course the students were asked to fill in a short online questionnaire about the KnowledgeSea system and their experience with it. Participation was not mandatory; moreover, only those students who had spent considerable time with the system were eligible to complete the questionnaire. All students who completed the questionnaire were rewarded with a few extra-credit points. Of the 39 students in the class, 21 chose to participate.
The goal of the questionnaire was twofold. First, we wanted to check how well, from the students' point of view, the map organizes the information. Second, we were interested in how useful the whole system and our particular design decisions were. The results most relevant to the topic of this paper are presented in Table 1.
Table 1. Questionnaire results

| Similarity questions | Strongly related | Reasonably well related | Poorly related | Not related | Cannot judge |
|---|---|---|---|---|---|
| The tutorial pages connected to the same cell were | 2 | 18 | 1 | 0 | 0 |
| For a pair of neighboring cells, the overall topics and the connected tutorial pages were | 2 | 17 | 2 | 0 | 0 |

| Performance question | Completely | Quite well | Not quite well, can sometimes be of help | It does not help at all | No answer |
|---|---|---|---|---|---|
| To what extent has the system achieved the goal of helping the students to access free online tutorials on the C language? | 1 | 12 | 7 | 1 | 0 |
As Table 1 shows, the system performed well in organizing the open corpus learning material. The students agreed that the pages organized under the same and neighboring cells were quite well related by content. When evaluating the usefulness of the whole system, about two-thirds of the students thought that the system had achieved the goal of providing access to the online C tutorials completely or "quite well". We are very encouraged by this result. At the same time, several students thought that the system was helpful only sometimes, and one student thought it was not helpful at all. We are currently performing a deeper analysis of the answers and navigation logs to determine why some students benefited from the system less than others, and to find ways to make the system more useful for everyone.
The system presented is a valuable tool for supporting horizontal navigation in Web-based education contexts. The system is based on a self-organizing neural network technology. We chose this technology because it creates relationships between the resources and, at the same time, produces an information map that is the core of our navigation approach. The approach proposed is less demanding than concept-based navigation. It allows mediated navigation to be developed in a non-indexed open corpus document space. The resulting system is scalable and easy to modify because the navigation structure created is not a fixed link structure but a mapping of the information space defined by the document set. New documents can easily be added to the system and placed by the neural network in the proper positions.
Results from our preliminary study show that map-based navigation is a valuable tool for educational hypermedia applications. We also think that map-based navigation is a promising approach for building dynamic hyperspaces in other application areas. We intend to continue exploring map-based navigation in several different contexts. We are also interested in comparing the concept-based and map-based navigation approaches in the context of a single system, to determine the strong and weak aspects of each approach.
This work was performed when Riccardo Rizzo was a Visiting Professor at the School of Information Science, University of Pittsburgh. He would like to thank the faculty and the staff of the School for their support during the work on KnowledgeSea.
 Blackboard (1999) CourseInfo, Blackboard, Inc. http://www.blackboard.com/
 Brusilovsky, P. and Miller, P. (2001) "Course Delivery Systems for the Virtual University". In Access to Knowledge: New Information Technologies and the Emergence of the Virtual University, edited by Tschang, T. and Della Senta, T. (Elsevier Science: Amsterdam), pp. 167-206
 Brusilovsky, P. and Pesin, L. (1994) "ISIS-Tutor: An intelligent learning environment for CDS/ISIS users". In Proc. Interdisciplinary Workshop on Complex Learning in Computer Environments (CLCE94), edited by Levonen, J. J. and Tukianinen, M. T., Joensuu, Finland (EIC), pp. 29-33 http://cs.joensuu.fi/~mtuki/www_clce.270296/Brusilov.html
 Brusilovsky, P. and Schwarz, E. (1997) "Concept-based navigation in educational hypermedia and its implementation on WWW". In Proc. of ED-MEDIA/ED-TELECOM'97 - World Conference on Educational Multimedia/Hypermedia and World Conference on Educational Telecommunications, edited by Müldner, T. and Reeves, T. C., Calgary, Canada (AACE), pp. 112-117
 Chen, C. and Czerwinsky, M. (1998) "From Latent Semantic to Spatial Hypertext: An Integrated Approach". In Proc. Ninth ACM International Hypertext Conference (Hypertext'98), edited by Grønbæk, K., Mylonas, E. and Shipman III, F. M., Pittsburgh, USA (ACM Press), pp. 77-86
 Crampes, M. and Ranwez, S. (2000) "Ontology-supported and ontology-driven conceptual navigation on the World Wide Web". In Proc. Eleventh ACM Conference on Hypertext and Hypermedia (Hypertext 2000), San Antonio, TX (ACM Press), pp. 191 - 199
 Henze, N. and Nejdl, W. (2000) "Extendible adaptive hypermedia courseware: Integrating different courses and Web material". In Adaptive Hypermedia and Adaptive Web-Based Systems, edited by Brusilovsky, P., Stock, O. and Strapparava, C., Lecture Notes in Computer Science (Springer-Verlag: Berlin), pp. 109-120
 Hornung, C., Encarnação, L. M., and Barton III, R. J. (1998) "PLATINUM: WorldWide distributed courseware production, learning and training using MTS". In Proc. WebNet'98, World Conference of the WWW, Internet, and Intranet, Orlando, FL, edited by Maurer, H. and Olson, R. G. (AACE), pp. 446-451
 IEEE LTCS WG12 (2001) LOM: Working Draft Document v6.1, Learning Object Metadata Working Group of the IEEE Learning Technology Standards Committee http://ltsc.ieee.org/wg12/doc.html
 Marchand, Y. and Guerin, J.-L. (1996) "Nestor: A Trail Blazer for Hypertext". In Proc. ICTAI'96, IEEE International Conference on Tools with Artificial Intelligence, Toulouse, France, pp. 420-425
 Rizzo, R., Allegra, M., and Fulantelli, G. (1999) "Hypertext-like structures through a SOM network". In Proc. Tenth ACM Conference on Hypertext and hypermedia (Hypertext'99), Darmstadt, Germany (ACM Press)
 Rizzo, R., Fulantelli, G., and Allegra, M. (2000) "Browsing a Document Collection as an Hypertext". In Proc. WebNet'2000, World Conference of the WWW and Internet, San Antonio, TX, edited by Davies, G. and Owen, C. (AACE), pp. 454-458
 Tudhope, D., Taylor, C., and Benyon-Davies, P. (1995) "Navigation via similarity in hypermedia and information retrieval". In Proc. HIM'95, Konstanz, Universitätsverlag Konstanz, edited by Kuhlen, R. and Ritterberg, M., pp. 203-218
 Verhoeven, B., Cardinaels, K., Van Durm, R., Duval, E., and Olivié, H. (2001) "Experiences with the ARIADNE pedagogical document repository". In Proc. ED-MEDIA'2001 - World Conference on Educational Multimedia, Hypermedia and Telecommunications, Tampere, Finland (AACE), pp. 1949-1954
 WBT Systems (1999) TopClass, WBT Systems, Dublin, Ireland http://www.wbtsystems.com/
 WebCT (1999) World Wide Web Course Tools, WebCT Educational Technologies, Vancouver, Canada http://www.webct.com
ACM COPYRIGHT NOTICE. Copyright © 2002 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept, ACM Inc., fax +1 (212) 869-0481, or firstname.lastname@example.org.