WEB NAVIGATION
Web navigation refers to the process of navigating a network of information resources in the World Wide Web, which is organized as hypertext or hypermedia. The user interface used to do so is called a web browser. A central theme in web design is the development of a web navigation interface that maximizes usability. A website's overall navigational scheme includes several navigational pieces, such as global, local, supplemental, and contextual navigation; all of these are vital aspects of the broad topic of web navigation. Hierarchical navigation systems are vital as well, since a hierarchy is often the primary navigation system. It allows the user to navigate within the site using levels alone, which is often seen as restrictive and therefore requires additional navigation systems to better structure the website. The global navigation of a website, as another segment of web navigation, serves as an outline and template that makes it easy for users to maneuver through the site, while local navigation helps users within a specific section of the site. All these navigational pieces fall under the various types of web navigation, allowing for further development and more efficient experiences when visiting a webpage.
TYPES OF WEB NAVIGATION
Website navigation tools allow a website's visitors to experience the site efficiently and with minimal confusion. A website navigation system is analogous to a road map: it enables webpage visitors to explore and discover the different areas and information contained within the website.
There are many different types of website navigation:
- Hierarchical website navigation
The structure of the website navigation is built from general to specific. This provides a clear, simple path to all the web pages from anywhere on the website.
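As a rough sketch, a hierarchical scheme can be modeled as a tree and walked from general to specific. The section and page names below are made up for illustration:

# A hypothetical site hierarchy, from general (top) to specific (leaves).
SITE = {
    "Home": {
        "Products": {"Widgets": {}, "Gadgets": {}},
        "Support": {"FAQ": {}, "Contact": {}},
    }
}

def list_paths(tree, trail=()):
    """Yield the general-to-specific path to every page in the hierarchy."""
    for page, children in tree.items():
        path = trail + (page,)
        yield " > ".join(path)
        yield from list_paths(children, path)

for p in list_paths(SITE):
    print(p)  # e.g. "Home > Products > Widgets"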
- Global website navigation
Global website navigation shows the top-level sections/pages of the website. It is available on each page and lists the main content sections/pages of the website.
- Local website navigation
Local navigation consists of the links within the text of a given web page that point to other pages within the website.
STYLES OF WEBSITE NAVIGATION
Styles of website navigation refer to how the navigation system is presented.
- Text Links
Text links are words (text) surrounded by a set of anchor tags to create clickable text that takes the visitor to another web page within your website, to a downloadable document from your website, or to another website on the Internet.
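As a small illustration, Python's built-in html.parser can collect every text link on a page. The page markup below is invented:

from html.parser import HTMLParser

PAGE = ('<p>See our <a href="/about.html">About Us</a> page or '
        '<a href="https://example.com/">another site</a>.</p>')

class LinkCollector(HTMLParser):
    """Collect the href attribute of every anchor (text link) in a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

collector = LinkCollector()
collector.feed(PAGE)
print(collector.links)  # ['/about.html', 'https://example.com/']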
- Breadcrumbs
Breadcrumb navigation shows the website visitor the path
within your website to the page they are currently on.
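One way to sketch this in Python, assuming the breadcrumb trail simply mirrors the URL path (real sites often derive it from the site hierarchy instead):

def breadcrumbs(url_path):
    """Turn a page's URL path into a 'Home > Section > Page' trail."""
    parts = [p for p in url_path.strip("/").split("/") if p]
    trail = ["Home"] + [p.replace("-", " ").title() for p in parts]
    return " > ".join(trail)

print(breadcrumbs("/products/widgets/blue-widget"))
# Home > Products > Widgets > Blue Widget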
- Navigation Bar
A navigation bar is the collection of website navigation
links all grouped together. A navigation bar can be horizontal or vertical.
- Tab Navigation
Tab navigation is where the website navigation links appear as tabs, similar to the tabs you use in a binder to divide the contents into sections.
- Sitemap
A sitemap is a page within your website that lists all the sections and web pages (if you don't have too many) contained within the website. This is different from Google Sitemaps and Yahoo Sitemaps.
A traditional sitemap provides navigation for your website visitors should they get lost, a shorter path to the different areas of the website for those who know exactly what they are looking for, and a means for the search engines to find all the pages within your website.
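A minimal sketch of generating such a sitemap page from a site hierarchy; the section names and URL scheme here are made up:

def sitemap_html(tree, depth=1):
    """Render a site hierarchy as a nested HTML list for a sitemap page."""
    pad = "  " * depth
    items = []
    for page, children in tree.items():
        item = f'{pad}<li><a href="/{page.lower()}.html">{page}</a>'
        if children:
            item += f"\n{pad}<ul>\n{sitemap_html(children, depth + 1)}\n{pad}</ul>\n{pad}"
        item += "</li>"
        items.append(item)
    return "\n".join(items)

SITE = {"Products": {"Widgets": {}, "Gadgets": {}}, "Support": {"FAQ": {}}}
print("<ul>\n" + sitemap_html(SITE) + "\n</ul>")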
- Dropdown Menu
A dropdown menu is a style of website navigation in which, when the visitor places their mouse over a menu item, another menu is revealed. A dropdown menu can include a flyout menu (see next item).
A dropdown menu system can create accessibility issues and can prevent search engines from reading the links in the menu, but if constructed properly, these issues can be overcome.
- Flyout Menu
A flyout menu is constructed similarly to a dropdown menu. When the visitor places their mouse over a link, another menu "flies out", usually to the right of the link where the mouse is placed.
Flyout menus face the same challenges as dropdown menus, but if constructed properly, they can be accessible and readable by the search engines.
- Named Anchors
Named anchors are links that take you directly to a specific spot on the current page or on another web page.
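Named anchors rely on the URL fragment, the part after "#". As a small Python illustration with a made-up URL, the standard library can split the fragment off:

from urllib.parse import urldefrag

# The fragment after '#' targets a named anchor (element id) on the page.
url, fragment = urldefrag("https://example.com/guide.html#installation")
print(url)       # https://example.com/guide.html
print(fragment)  # installation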
WEB NAVIGATION USE
To be effective, the website navigation system needs:
- To be consistent throughout the website.
Website visitors will learn, through repetition, how to get around the website.
- To keep the main navigation links together.
This makes it easier for the visitor to get to the main areas of the website.
- To reduce clutter by grouping links into sections.
If the website navigation links are grouped into sections, and each section has only 5-7 links, the navigation scheme will be easier to read.
- To minimize the clicking needed to get where the visitor wants to go.
If the number of clicks to reach the page the visitor wishes to visit is minimal, this leads to a better experience.
Some visitors become confused or impatient when they must click through many links to get where they want to be. On large websites this can be difficult to reduce. Using breadcrumbs is one way to help visitors see where they are within the website and the path back up the navigation trail they took; a rough way to measure the problem is to compute each page's click depth, as in the sketch after this list.
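Here is a minimal Python sketch of measuring click depth over a made-up link graph: each page's depth is the minimum number of clicks from the home page, found with a breadth-first search.

from collections import deque

# Hypothetical link graph: page -> pages it links to.
LINKS = {
    "home": ["products", "support"],
    "products": ["widgets", "gadgets"],
    "support": ["faq"],
    "widgets": [], "gadgets": [], "faq": [],
}

def click_depths(start="home"):
    """Minimum number of clicks from the home page to each page (BFS)."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in LINKS.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

print(click_depths())  # {'home': 0, 'products': 1, ..., 'faq': 2}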
Creating the website navigation system at the planning stage of the website will affect the overall design of the web page layout and help develop the overall plan for the website.
WEB SEARCH ENGINES
A web search engine is a software system that is designed to search for information on the World Wide Web. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of web pages, images, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler. Internet content that cannot be searched by a web search engine is generally described as the deep web.
Web search engines get their information by web crawling from site to site. The "spider" checks for the standard filename robots.txt, addressed to it, before sending certain information back to be indexed, depending on many factors such as the titles, page content, JavaScript, Cascading Style Sheets (CSS), and headings, as evidenced by the standard HTML markup of the informational content, or its metadata in HTML meta tags. "[N]o web crawler may actually crawl the entire reachable web. Due to infinite websites, spider traps, spam, and other exigencies of the real web, crawlers instead apply a crawl policy to determine when the crawling of a site should be deemed sufficient. Some sites are crawled exhaustively, while others are crawled only partially."
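A polite crawler can perform this robots.txt check with Python's standard library. The URLs and user-agent below are placeholders, and the read() call performs a real network fetch:

from urllib.robotparser import RobotFileParser

# Check a site's robots.txt before fetching a page, as a polite crawler would.
robots = RobotFileParser("https://example.com/robots.txt")
robots.read()  # fetches and parses robots.txt

if robots.can_fetch("MyCrawler", "https://example.com/private/page.html"):
    print("allowed to crawl")
else:
    print("disallowed by robots.txt")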
Indexing means associating words and other definable tokens found on web pages with their domain names and HTML-based fields. The associations are made in a public database, made available for web search queries. A query from a user can be a single word. The index helps find information relating to the query as quickly as possible.
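A minimal sketch of such an index, often called an inverted index: each token maps to the set of pages containing it. The pages and text are made up for illustration:

# Build an inverted index from a handful of hypothetical pages.
PAGES = {
    "a.html": "web navigation and web design",
    "b.html": "search engine indexing",
    "c.html": "web search engine design",
}

index = {}
for page, text in PAGES.items():
    for token in set(text.split()):
        index.setdefault(token, set()).add(page)

print(sorted(index["web"]))     # ['a.html', 'c.html']
print(sorted(index["engine"]))  # ['b.html', 'c.html']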
Some of the techniques for indexing and caching are trade secrets, whereas web crawling is a straightforward process of visiting all sites on a systematic basis.
Between visits by the spider, the cached version of a page (some or all the content needed to render it) stored in the search engine's working memory is quickly sent to an inquirer. If a visit is overdue, the search engine can act as a web proxy instead. In this case the page may differ from the search terms indexed. The cached page holds the appearance of the version whose words were indexed, so a cached version of a page can be useful to the web site when the actual page has been lost; this problem is also considered a mild form of linkrot.
[Figure: High-level architecture of a standard web crawler.]
Typically, when a user enters a query into a search engine, it is a few keywords. The index already has the names of the sites containing the keywords, and these are instantly obtained from the index. The real processing load is in generating the web pages that make up the search results list: every page in the entire list must be weighted according to information in the indexes. The top search result item then requires the lookup, reconstruction, and markup of the snippets showing the context of the keywords matched. These are only part of the processing each search results page requires, and further pages (next to the top) require more of this post-processing.
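A rough sketch of snippet generation, assuming a simple substring match; real engines reconstruct snippets from their indexes and cached pages:

def snippet(text, keyword, radius=20):
    """Return a short excerpt of the page text around the matched keyword."""
    pos = text.lower().find(keyword.lower())
    if pos == -1:
        return ""
    start, end = max(0, pos - radius), pos + len(keyword) + radius
    return (("..." if start > 0 else "") + text[start:end]
            + ("..." if end < len(text) else ""))

page_text = ("Web navigation refers to the process of navigating "
             "a network of information resources.")
print(snippet(page_text, "navigating"))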
Beyond simple keyword lookups, search engines offer their own GUI- or command-driven operators and search parameters to refine the search results. These provide the necessary controls for the user engaged in the feedback loop created by filtering and re-weighting while refining the search results, given the initial pages of the first search results. For example, since 2007 the Google.com search engine has allowed one to filter by date by clicking "Show search tools" in the leftmost column of the initial search results page and then selecting the desired date range. It is also possible to weight by date because each page has a modification time. Most search engines support the use of the Boolean operators AND, OR, and NOT to help end users refine the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search. The engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, where the research involves using statistical analysis on pages containing the words or phrases you search for. As well, natural language queries allow the user to type a question in the same form one would ask it of a human; Ask.com is an example of such a site.
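Over an inverted index, the Boolean operators map naturally onto set operations: AND is intersection, OR is union, NOT is difference. A minimal sketch with made-up index contents:

# Hypothetical inverted index: token -> pages containing it.
index = {
    "web":    {"a.html", "c.html"},
    "search": {"b.html", "c.html"},
    "design": {"a.html"},
}
all_pages = {"a.html", "b.html", "c.html"}

print(index["web"] & index["search"])  # web AND search -> {'c.html'}
print(index["web"] | index["search"])  # web OR search  -> all three pages
print(all_pages - index["design"])     # NOT design     -> {'b.html', 'c.html'}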
The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve. Two main types of search engine have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively; the other is a system that generates an "inverted index" by analyzing the texts it locates. The second form relies much more heavily on the computer itself to do the bulk of the work.
Most web search engines are commercial ventures supported by advertising revenue, and thus some of them allow advertisers to have their listings ranked higher in search results for a fee. Search engines that do not accept money for their search results make money by running search-related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.
Although search engines are programmed to rank websites based on some combination of their popularity and relevance, empirical studies indicate various political, economic, and social biases in the information they provide and in the underlying assumptions about the technology. These biases can be a direct result of economic and commercial processes (e.g., companies that advertise with a search engine can also become more popular in its organic search results) and of political processes (e.g., the removal of search results to comply with local laws). For example, Google will not surface certain neo-Nazi websites in France and Germany, where Holocaust denial is illegal. Biases can also be a result of social processes, as search engine algorithms are frequently designed to exclude non-normative viewpoints in favor of more "popular" results. The indexing algorithms of major search engines skew towards coverage of U.S.-based sites rather than websites from non-U.S. countries. Google bombing is one example of an attempt to manipulate search results for political, social, or commercial reasons. Several scholars have studied the cultural changes triggered by search engines and the representation of certain controversial topics in their results, such as terrorism in Ireland and conspiracy theories.