
Webmaster Guidelines - Webmaster Tools Help
Following the General Guidelines below will help Google find, index, and rank your site. We strongly encourage you to pay very close attention to the Quality Guidelines below, which outline some of the illicit practices that may lead to a site being removed entirely from the Google index or otherwise affected by an algorithmic or manual spam action. If a site has been affected by a spam action, it may no longer show up in results on Google.com or on any of Google's partner sites. General Guidelines: Ensure that all pages on the site can be reached by a link from another findable page. Ways to help Google find your site: create a useful, information-rich site, and write pages that clearly and accurately describe your content; try to use text instead of images to display important names, content, or links. Quality guidelines: If you believe that another site is abusing Google's quality guidelines, please let us know by filing a spam report. (Video: Matt Cutts talks about manual action on webspam.)

10 Questions to Ask When Hiring an SEO Consultant If your website doesn't show up on the first page of search results on Google, Bing or Yahoo, your potential customers might not even know you exist. Better search engine visibility can be critical to boosting visits to your website, which can lead to increased brand awareness and higher sales and profits. But what if you lack the time and technical expertise to improve your site's search engine ranking? It might make sense to hire an experienced, reliable search engine optimization (SEO) consultant. Here are 10 essential questions to ask when considering prospective SEO consultants: 1. These references can help you gauge how effective the candidate is, as well as verify that the person did indeed work on specific SEO campaigns. 2. Make sure the candidate's proposal includes an initial technical review of your website to weed out any problems that could lower your search engine ranking, including broken links and error pages.

Google Basics - Webmaster Tools Help Crawling. Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index. We use a huge set of computers to fetch (or "crawl") billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site. Google's crawl process begins with a list of web page URLs, generated from previous crawl processes, and augmented with Sitemap data provided by webmasters. How does Google find a page? Google uses many techniques, including following links from other sites or pages and reading sitemaps. How does Google know which pages not to crawl? Pages blocked in robots.txt won't be crawled, but they might still be indexed if another page links to them. Improve your crawling: use these techniques to help Google discover the right pages on your site, starting with submitting a sitemap.
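
As a concrete sketch of the blocking and sitemap-submission points above (not taken from the help page; the example.com paths are assumptions), a minimal robots.txt might look like this:

    User-agent: *
    Disallow: /private/
    Disallow: /search-results/

    Sitemap: https://www.example.com/sitemap.xml

As the excerpt notes, a page disallowed here will not be crawled but can still end up indexed if other pages link to it; blocking crawling is not the same as blocking indexing.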

Adding "OpenInNewWindow" option to SharePoint Links A very common question that pops up in the SharePoint newsgroups, list servers, and blogs surrounds the fact that the SharePoint Links Web Part doesn't provide an option to open links in new windows. Many people have come up with their own work-around solutions. Todd Bleeker has a good solution that involves using the Content Editor Web Part (CEWP) in his Dashboard Web Part series, but you have to add the CEWP to every single page.[2] Wouldn't it be nice if it were part of the solution OOTB on every new site you created? In this article, I'll show you how you can make a few modifications to add a new field to each link that allows contributors to specify whether a link should open in a new window. All files created or modified of any importance are available for download at the end of this article. Tangent: List Templates and Site Definitions For more information on creating your own site definitions, refer to the footnotes.[2] Overview Let's get started! Attributes to take note of:
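
A rough sketch of the kind of Yes/No field the article adds to the Links list schema, written as a CAML field definition; the field name, display name, and default here are illustrative assumptions, not the article's exact markup:

    <!-- Added to the Links list's schema.xml: a Boolean column contributors can set per link -->
    <Field Type="Boolean" Name="OpenInNewWindow" DisplayName="Open link in a new window">
      <Default>0</Default>
    </Field>

The Type, Name, and DisplayName attributes are the ones to watch: Type controls how the column behaves, Name is its internal name, and DisplayName is the label contributors see on the new/edit link forms.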

Search Help | - SLN2245 - Content quality guidelines At Yahoo, we're focused on delivering the Web's best search experience, full of relevant, high-quality content. Yahoo Search Content Quality Guidelines are designed to make sure that poor-quality pages don't degrade the Yahoo Search experience. High-quality content These are the kind of Web pages Yahoo wants to include in its search results: original and unique content of genuine value; pages designed primarily for people, where search engine considerations are a secondary concern; hyperlinks intended to help people find interesting, related content; and metadata (including title and description) that accurately describes the contents of a web page. Low-quality content These are some of the types of content that Yahoo does not want included in search results.
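
For the metadata point above, an accurate title and description are simply the standard head elements; a brief sketch with made-up values:

    <head>
      <title>Vintage Stamp Collecting: Grading and Storage Guide</title>
      <meta name="description"
            content="A beginner's guide to collecting, grading, and storing vintage postage stamps.">
    </head>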

About Sitemaps - Webmaster Tools Help What is a sitemap? A sitemap is a file where you can list the web pages of your site to tell Google and other search engines about the organization of your site content. Search engine web crawlers like Googlebot read this file to more intelligently crawl your site. Your sitemap can also provide valuable metadata associated with the pages you list in it: metadata is information about a webpage, such as when the page was last updated, how often the page changes, and the importance of the page relative to other URLs in the site. You can use a sitemap to provide Google with metadata about specific types of content on your pages, including video, image, and mobile content. A sitemap video entry can specify the video running time, category, and age-appropriateness rating. Do I need a sitemap? If your site's pages are properly linked, our web crawlers can usually discover most of your site. A sitemap helps most when, for example, your site is really large.
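
A minimal sitemap file illustrating the per-URL metadata described above (the URL and values are placeholders, not from the help page):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/</loc>
        <lastmod>2014-06-01</lastmod>
        <changefreq>weekly</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>

Video, image, and mobile entries use the same file with additional namespaced elements.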

Web 2.0: the technological dimension (1) There has been no great revolution on the web since the bubble burst, and yet more and more of us are using it for an ever wider range of activities. Something is happening that we don't quite know how to define and that, for convenience, was christened Web 2.0, the latest buzzword in the San Francisco area. What is it about? The first step toward putting together an answer is to visit sites often covered in this column, sites that offer applications designed to run on the web. Flickr.com, for example, lets tens of millions of people store photos, share them with others, and classify them using tags. Web 2.0 refers to pages that connect their services to one another not only through hypertext links, but also through dynamic interaction via RSS and APIs. Web 2.0 is made of modules, fragments, pieces, and applications that are loosely coupled.

Is Duplicate Content Really a Problem? The short answer is yes. We have written extensively on the subject before: SEO Obviousness: Duplicate content sucks, Duplicate content sin #1: Pagination, Duplicate content sin #2: Default page linking, SEO worst practices: The content duplication toilet bowl of death, and 5 SEO Strategies We Swear Aren't Going Anywhere. So we have done our due diligence warning you about the dangers of duplicate content. But there is another side to the story. Let's take Portent client www.RealTruck.com for example: nofollow, noindex attributes on hundreds of thousands of pages, including filtered category pages; nofollow attributes on footer links and other internal links; canonical tags on several sub-category and product pages that pointed to the main category pages; and canonical tags on categories with multiple pages, instead of rel="prev" and rel="next" pagination link elements. The client was worried about pages competing with each other, or cannibalizing each other. The Moral of the Story Is…
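
For reference, the pagination markup the article contrasts with those canonical tags looks like this on page 2 of a category; this is a generic sketch with example.com URLs, not RealTruck's actual templates:

    <!-- Page 2 of a paginated category: canonical to itself, plus prev/next links -->
    <link rel="canonical" href="https://www.example.com/category/?page=2">
    <link rel="prev" href="https://www.example.com/category/?page=1">
    <link rel="next" href="https://www.example.com/category/?page=3">

    <!-- A filtered variation that should stay out of the index -->
    <meta name="robots" content="noindex, follow">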

Getting Started with Ajax A List Apart is pleased to present the following excerpt from Chapter 27 of Web Design in a Nutshell (O'Reilly Media, Inc., third edition, February 21, 2006). —Ed. The start of 2005 saw the rise of a relatively new technology, dubbed "Ajax" by Jesse James Garrett of Adaptive Path. Ajax stands for Asynchronous JavaScript and XML. In a nutshell, it is the use of the nonstandard XMLHttpRequest() object to communicate with server-side scripts. The DOM plays into Ajax in a number of ways. This probably sounds very confusing, but it is pretty easy once we go over a few simple examples. As with the DOM Scripting examples (above), for a blow-by-blow of what the script is doing, read the JavaScript comments. Example 1: Ajax with innerHTML For a simple innerHTML-based Ajax example, we'll create a quasi-functional address book application. As you can see, we have a simple form with a select, from which to choose a person. And now for the JavaScript. See this script in action.
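
A comparable sketch of the innerHTML-based pattern described above (not the book's actual listing; the element IDs and the address.php URL are assumptions):

    // Assumes a <select id="person"> and an empty <div id="details"> in the form,
    // and a server-side script at address.php that returns an HTML fragment.
    function showPerson(id) {
      var request = new XMLHttpRequest();
      request.open("GET", "address.php?id=" + encodeURIComponent(id), true);
      request.onreadystatechange = function () {
        if (request.readyState === 4 && request.status === 200) {
          // Drop the returned markup straight into the page.
          document.getElementById("details").innerHTML = request.responseText;
        }
      };
      request.send(null);
    }

    document.getElementById("person").onchange = function () {
      showPerson(this.value);
    };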

Case Study: One Site's Recovery from an Ugly SEO Mess This past March, I was contacted by a prospective client: My site has been up since 2004. I had good traffic growth up to 2012 (doubling each year to around a million page views a month), then suffered a 40% drop in mid-February 2012. I've been working on everything that I can think of since, but the traffic has never recovered. Since my primary business is performing strategic site audits, this is something I hear often. It can be devastating when that happens. As the site's traffic chart showed, once the "expected" roller-coaster effect was separated out, Google organic traffic took a nose-dive in early February of 2012. First step: check and correlate with known updates When this happens, the first thing I do is jump to Moz's Google Algorithm Change History charts to see if I can pinpoint a known Google update that correlates to a drop. Expand your timeline: look for other hits

Search Engines and Frames Search engines have a tough time with frames. Using frames can prevent them from finding pages within a Web site or cause them to send visitors into a site without the proper frame "context" being established. Both problems can be corrected with a little foresight by Webmasters. Say More Than Sorry To Search Engines Many sites use frames for navigation, and the fictional "Wonderful World of Stamp Collecting" site in this tutorial is a typical example. View the example, then return to this page (use your back button or click on the big, ugly link that says "BACK TO THE TUTORIAL"). Example 1 You saw a single page with three frames appear. In contrast, most search engine spiders will only see the master page. So what do frame-challenged search engines see in our example? Sorry! Obviously, we need to provide search engines with a much better description of the site than this. Your Friend, The NOFRAMES Tag We can help both search engines and humans with some smart design. Example 2
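
A cut-down sketch of the kind of master page the tutorial describes, with a NOFRAMES block added so search engines and frame-less browsers get real content; the file names and wording are assumptions, not the tutorial's actual example:

    <html>
    <head>
      <title>The Wonderful World of Stamp Collecting</title>
    </head>
    <frameset cols="20%,80%">
      <frame src="nav.html" name="nav">
      <frame src="home.html" name="content">
      <noframes>
        <body>
          <p>The Wonderful World of Stamp Collecting: articles, grading guides,
             and resources for collectors of all levels.</p>
          <p><a href="home.html">Enter the site</a></p>
        </body>
      </noframes>
    </frameset>
    </html>

The body inside NOFRAMES is what a frame-challenged spider indexes, so it should describe the site and link to the inner pages rather than just saying "Sorry."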

Disavow backlinks - Webmaster Tools Help If you have a manual action against your site for unnatural links to your site, or if you think you're about to get such a manual action (because of paid links or other link schemes that violate our quality guidelines), you should try to remove the links from the other site to your site. If you can't remove those links yourself, or get them removed, then you should disavow the URLs of the questionable pages or domains that link to your website. This is an advanced feature and should only be used with caution. Step 0: Decide if this is necessary In most cases, Google can assess which links to trust without additional guidance, so most sites will not need to use this tool. You should disavow backlinks only if: You have a considerable number of spammy, artificial, or low-quality links pointing to your site, AND The links have caused a manual action, or likely will cause a manual action, on your site. The disavow links tool does not support Domain properties. Link file format: Example:
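
The file itself is plain text, one URL or domain per line, with lines starting with # treated as comments; a minimal sketch using placeholder example.com names:

    # Pages we asked the site owner to remove, without success
    http://spam.example.com/stuff/comments.html
    http://spam.example.com/stuff/paid-links.html

    # Disavow every link from this domain
    domain:shadyseo.example.com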

Ajax & XmlHttpRequest Asynchronous JavaScript + XML: creating client-side dynamic Web pages Ajax is only a name given to a set of tools that already existed. The main part is XMLHttpRequest, an object usable from client-side JavaScript to exchange data with the server. It originated as an ActiveX object named XMLHTTP, created by Microsoft and available in Internet Explorer since version 5, and Mozilla later implemented it natively as XMLHttpRequest. Why use Ajax? Ajax can selectively modify a part of a page displayed by the browser, and update it without the need to reload the whole document with all images, menus, etc. Ajax is a set of technologies, supported by a web browser, including these elements: HTML for the interface. The word "asynchronous" means that the response from the server will be processed when it becomes available, without waiting and without freezing the display of the page. Dynamic HTML has the same purpose and is a set of standards: HTML, CSS, JavaScript. How does it work? To get data from the server, XMLHttpRequest provides two methods: open(), to create the request, and send(), to send it. Attributes
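
A minimal sketch of the two methods and the most commonly used attributes (data.xml is an assumed URL):

    // open() prepares the request, send() dispatches it; the onreadystatechange
    // handler runs as the asynchronous response comes in.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "data.xml", true);        // method, URL, asynchronous flag
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) {             // 4 = response fully received
        if (xhr.status === 200) {             // HTTP status code
          alert(xhr.responseText);            // response body as text
          // xhr.responseXML holds the same response parsed as an XML document
        }
      }
    };
    xhr.send(null);                           // GET requests send no body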

Practical Tips for Correcting A Drop In Search Engine Rankings Checking search engine rankings on a daily basis can drive a marketer crazy and is not an indication of whether a marketing campaign is yielding results or not. This is especially the case nowadays, since Google's algorithm seems to be delaying search engine movement. Nonetheless, if you want to fix a drop in your search engine rankings, here are a few ways to do it. Work on getting more shares on social media. Social media shouldn't be ignored, even if it doesn't have a direct effect on your rankings. Fortunately, social media is somewhat of an equalizer. Here are some of the best ways to leverage social media marketing: • Make social sharing buttons a priority. • Just ask. • Build relationships with influencers and grow your own sphere of influence. Just remember not to get hung up on the intrigue of large numbers, such as Facebook's "50,000 People Reached" or Twitter's "100,000 Followers". Make internal linking a practice.
