Monday, April 26, 2010

High Page Rank Social Bookmarking List


Sunday, April 18, 2010

Hyperlink

In computing, a hyperlink (or link) is a reference to a document that the reader can directly follow, or that is followed automatically. The reference points to a whole document or to a specific element within a document. Hypertext is text with hyperlinks. Such text is usually viewed with a computer. A software system for viewing and creating hypertext is a hypertext system. To hyperlink (or simply to link) is to create a hyperlink. A user following hyperlinks is said to navigate or browse the hypertext.

A hyperlink has an anchor, which is a location within a document from which the hyperlink can be followed; that document is known as its source document. The target of a hyperlink is the document, or location within a document, that the hyperlink leads to. The user can follow the link when its anchor is shown by activating it in some way (often, by touching it or clicking on it with a pointing device). Following has the effect of displaying its target, often with its context.

In some hypertext, hyperlinks can be bidirectional: they can be followed in two directions, so both points act as anchors and as targets. More complex arrangements exist, such as many-to-many links.

The most common example of hypertext today is the World Wide Web: webpages contain hyperlinks to webpages. For example, in an online reference work such as Wikipedia, many words and terms in the text are hyperlinked to definitions of those terms. Hyperlinks are often used to implement reference mechanisms that predate the computer, such as tables of contents, footnotes, bibliographies, indexes and glossaries.

The effect of following a hyperlink may vary with the hypertext system and sometimes with the link itself; for instance, on the World Wide Web most hyperlinks cause the target document to replace the document being displayed, but some are marked to cause the target document to open in a new window. Another possibility is transclusion, in which the link target is a document fragment that replaces the link anchor within the source document. Hyperlinks are not only followed by people browsing a document; they may also be followed automatically by programs. A program that traverses the hypertext, following each hyperlink and gathering all the retrieved documents, is known as a Web spider or Web crawler.
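As a rough sketch of that idea (not any particular crawler's implementation), the short Python program below pulls the target of every anchor out of a piece of HTML using the standard library's html.parser; the sample document and URLs are made up for illustration.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the target of every <a href="..."> anchor in an HTML document."""
    def __init__(self):
        super().__init__()
        self.targets = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.targets.append(value)

# Hypothetical source document: two anchors pointing at different targets.
source_document = (
    '<p>See the <a href="https://en.wikipedia.org/wiki/Hypertext">hypertext</a> '
    'article and the <a href="#glossary">glossary</a> below.</p>'
)

parser = LinkExtractor()
parser.feed(source_document)
print(parser.targets)  # ['https://en.wikipedia.org/wiki/Hypertext', '#glossary']
```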

Saturday, April 17, 2010

Keyword stuffing

Keyword stuffing is considered to be an unethical search engine optimization (SEO) technique. Keyword stuffing occurs when a web page is loaded with keywords in the meta tags or in content. The repetition of words in meta tags may explain why many search engines no longer use these tags.

Keyword stuffing was used in the past to obtain maximum search engine ranking and visibility for particular phrases. The method is completely outdated today and adds no value to rankings. In particular, Google no longer gives good rankings to pages employing this technique.
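As a rough illustration of why stuffed pages stand out (this is not how any particular search engine scores pages), a simple keyword-density check already flags unnaturally repetitive copy. The page text and the 10% threshold in the Python sketch below are invented for the example.

```python
import re
from collections import Counter

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of the words in `text` that are exactly `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    return Counter(words)[keyword.lower()] / len(words)

# Hypothetical page copy that repeats "cheap" far more often than natural prose would.
page_text = "Cheap shoes. Buy cheap shoes online. Cheap cheap cheap shoes for cheap prices."
density = keyword_density(page_text, "cheap")
print(f"{density:.0%}")   # roughly 46%
if density > 0.10:        # arbitrary threshold chosen for this sketch
    print("Looks stuffed.")
```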

Hiding text from the visitor is done in many different ways. Text colored to blend with the background, CSS z-index positioning to place text "behind" an image (and therefore out of view of the visitor), and CSS absolute positioning to place text far from the page center are all common techniques. By 2005, many invisible text techniques were easily detected by major search engines.

"Noscript" tags are another way to place hidden content within a page. While they are a valid optimization method for displaying an alternative representation of scripted content, they may be abused, since search engines may index content that is invisible to most visitors.

Sometimes inserted text includes words that are frequently searched (such as "sex"), even if those terms bear little connection to the content of a page, in order to attract traffic to advert-driven pages.

In the past, keyword stuffing was considered to be either a white hat or a black hat tactic, depending on the context of the technique, and the opinion of the person judging it. While a great deal of keyword stuffing was employed to aid in spamdexing, which is of little benefit to the user, keyword stuffing in certain circumstances was not intended to skew results in a deceptive manner. Whether the term carries a pejorative or neutral connotation is dependent on whether the practice is used to pollute the results with pages of little relevance, or to direct traffic to a page of relevance that would have otherwise been de-emphasized due to the search engine's inability to interpret and understand related ideas. This is no longer the case. Search engines now employ themed, related keyword techniques to interpret the intent of the content on a page.

On the subject of keyword stuffing, the major search engines recommend keyword research and keyword use that reflects the quality content you have to offer, so that visitors can actually find your material. To avoid keyword stuffing, use keywords sensibly: they should be relevant and necessary, and it is acceptable to place them deliberately in support of your targeted effort to rank. Placing such words in the appropriate areas of the HTML is perfectly reasonable. Google describes keyword stuffing as randomly repeated keywords.

More info: http://en.wikipedia.org/wiki/Keyword_stuffing

Thursday, April 15, 2010

Free Press Release Sites List

24-7PressRelease.com – Free release distribution with ad support.

1888PressRelease.com – Free distribution; paid services give you better placement and permanent archiving.

ClickPress.com – Distributes to sites like Google News and Topix.net; the Gold level will also get you onto sites like LexisNexis.

EcommWire.com – Focuses on ecommerce and requires you to include an image, three keywords, and links.

Express-Press-Release.com – Free distribution company with offices in 12 states.

Free-Press-Release.com – Easy press release distribution for free, more features for paid accounts.

Free-Press-Release-Center.info – Distributes your release and offers a web page with one keyword link to your site. A Pro upgrade will give you three links, permanent archiving, and more.

I-Newswire.com – Allows for free distribution to sites and search engines; the premium membership differs only slightly, mainly by adding graphics.

NewswireToday.com – All the usual free distribution tools, premium service includes logo, product picture and more.

PR.com – Not only will they distribute your press releases, but you can also set up a full company profile.

PR9.net – Ad-supported press release distribution site.

PR-Inside.com – European-based free press release distribution site.

PRBuzz.com – Completely free distribution to search engines, news sites, and blogs.

PRCompass.com – Distribute your press release with a free or paid version; others can vote it up, Digg-style.

PRUrgent.com – Not only distributes your release, but also attempts to teach you how to write one, and even offers downloadable samples for you to work with.

Press-Base.com – Submit your release for free and get on their front page and the category of your choice.

PressAbout.com – A free press release service formatted as a blog.

PressMethod.com – Free press release distribution no matter what, but extra services based on the size of your contribution.

PRLeap.com – Free distribution to search engines, newswires, and RSS feeds. Fee-based bumps get you better placement.

PRLog.org – Free distribution to Google News and other search engines.

TheOpenPress.com – Gives free distribution for plain formatted releases, fees for HTML-coded releases.

Monday, April 12, 2010

Web crawler

A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. Other terms for Web crawlers are ants, automatic indexers, bots, worms, Web spiders, and Web robots, or, especially in the FOAF community, Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
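A toy version of that loop, using only the Python standard library, might look like the sketch below. It is illustrative only: there is no robots.txt handling, politeness delay, or duplicate-content detection, and the commented-out seed URL is a placeholder.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class AnchorCollector(HTMLParser):
    """Collects href targets from <a> tags on a fetched page."""
    def __init__(self):
        super().__init__()
        self.hrefs = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.hrefs += [v for k, v in attrs if k == "href" and v]

def crawl(seeds, max_pages=20):
    """Breadth-first crawl: visit seed URLs and add discovered links to the frontier."""
    frontier = deque(seeds)          # URLs still to visit (the crawl frontier)
    visited = set()
    pages = {}                       # url -> raw HTML, kept for later indexing
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except Exception:
            continue                 # skip unreachable pages in this sketch
        pages[url] = html
        collector = AnchorCollector()
        collector.feed(html)
        for href in collector.hrefs:
            frontier.append(urljoin(url, href))   # resolve relative links
    return pages

# pages = crawl(["https://example.com/"])   # placeholder seed URL
```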

More info: http://en.wikipedia.org/wiki/Web_crawler

Wednesday, April 7, 2010

PageRank

PageRank is a link analysis algorithm, named after Larry Page and used by the Google Internet search engine, that assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is also called the PageRank of E and is denoted PR(E).
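As a point of reference (the Wikipedia article linked at the end of this post gives the details), the PageRank of a page p_i is commonly written in terms of a damping factor d (typically 0.85) as

\[
PR(p_i) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}
\]

where M(p_i) is the set of pages linking to p_i, L(p_j) is the number of outbound links on page p_j, and N is the total number of pages. Note that some statements of the formula omit the division of (1 - d) by N.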

The name "PageRank" is a trademark of Google, and the PageRank process has been patented (U.S. Patent 6,285,999). However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University. The university received 1.8 million shares of Google in exchange for use of the patent; the shares were sold in 2005 for $336 million.


Mathematical PageRanks (out of 100) for a simple network (the PageRanks reported by Google are rescaled logarithmically): Page C has a higher PageRank than Page E, even though fewer pages link to it, because the one link it receives carries much more value. A web surfer who chooses a random link on every page (but with a 15% likelihood jumps to a random page on the whole web) will be on Page E 8.1% of the time. (The 15% likelihood of jumping to an arbitrary page corresponds to a damping factor of 85%.) Without damping, all web surfers would eventually end up on Pages A, B, or C, and all other pages would have PageRank zero. Page A is assumed to link to all pages in the web, because it has no outgoing links.
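For readers who want to see where such numbers come from, the following Python sketch computes PageRank by straightforward iteration of the formula above. The four-page graph is hypothetical (it is not the network described in the example), and the dangling page D is treated as linking to every page, in the same spirit as Page A above.

```python
def pagerank(links, d=0.85, iterations=100):
    """Iteratively compute PageRank for a graph given as {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}                 # start from a uniform distribution
    for _ in range(iterations):
        new_pr = {p: (1 - d) / n for p in pages}     # random-jump share
        for p in pages:
            outlinks = links[p] or pages             # dangling page: treat as linking everywhere
            share = d * pr[p] / len(outlinks)
            for q in outlinks:
                new_pr[q] += share
        pr = new_pr
    return pr

# Hypothetical 4-page graph; D has no outgoing links (a dangling page).
toy_graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": [],
}
for page, score in sorted(pagerank(toy_graph).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

The scores sum to 1, so each value can be read as the fraction of time the random surfer spends on that page.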

History

PageRank was developed at Stanford University by Larry Page (hence the name Page-Rank) and later Sergey Brin as part of a research project about a new kind of search engine. The first paper about the project, describing PageRank and the initial prototype of the Google search engine, was published in 1998; shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While it is just one of many factors that determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web search tools.

PageRank was influenced by citation analysis, first developed by Eugene Garfield in the 1950s at the University of Pennsylvania, and by Hyper Search, developed by Massimo Marchiori at the University of Padua (Google's founders cite Garfield's and Marchiori's works in their original paper). In the same year PageRank was introduced (1998), Jon Kleinberg published his important work on HITS.

More info: http://en.wikipedia.org/wiki/PageRank

Monday, April 5, 2010

Web Directories List: Free Directory Submission
