Although Google does not publish its ranking factors, we can get an idea about them from patents, direct statements by Google and its employees, detailed research, and case studies. To help you develop a better understanding of these factors, we have put together this resource, drawing on Northcutt as a valuable source. It also covers some ranking factors that are considered controversial.

On-page SEO describes factors that you are able to manipulate through your own website. Positive factors are those which help you to rank better. Many of these factors may also be abused, to the point that they become negative factors. We will cover negative ranking factors later in this resource.

Off-page factors, while not something you have full control over, can improve your ranking performance in search results. These methods are mostly about earning backlinks from other sites. Positive off-page factors are generally straightforward: Google tries to gauge natural popularity, weighted by how influential and reliable the linking sources are.

Keyword in URL
Concrete
Keywords and phrases that appear in the page URL, outside of the domain name, aid in establishing the relevance of a piece of content for a particular search query. Diminishing returns are apparently achieved as URLs become lengthier or as keywords are used more than once.
Placement of Keywords in URLs
Probable

Where keywords are placed in a URL matters: keywords that appear earlier in the URL have a bigger impact. According to Matt Cutts, keywords carry little weight after the first "five words".

Keyword Density of Page
Likely
The percentage of times a word/phrase appears in the text. In the early 2000s, practicing SEOs were sculpting content for target words to appear 5.5%-6% of the time. Google has since improved its content analysis methods to the point that those tactics are scarcely relevant anymore. And Keyword Density, although referenced in Google Patents, is almost certainly just a simplified concept behind TF-IDF, covered next.
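To make the concept concrete, keyword density is simply occurrences of a phrase divided by total word count. Here is a minimal sketch of that calculation; the tokenization rules and the sample text are our own illustration, not anything Google has published:

```python
import re

def keyword_density(text: str, phrase: str) -> float:
    """Share of the words in `text` accounted for by occurrences of `phrase`."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    target = phrase.lower().split()
    hits = sum(words[i:i + len(target)] == target
               for i in range(len(words) - len(target) + 1))
    return hits * len(target) / len(words) if words else 0.0

sample = ("Istanbul SEO is a competitive field, so this Istanbul SEO guide "
          "focuses on producing genuinely useful content instead of repetition.")
print(f"{keyword_density(sample, 'Istanbul SEO'):.1%}")
```
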
Keyword in Title Tag
Concrete

Title tags are the titles of pages or pieces of content on your website, usually displayed on search engine results pages and in snippets for social media posts. Depending on the characters used, titles should not exceed roughly 60-70 characters. As with URLs, it can theoretically be said that keywords placed closer to the beginning of the title carry more weight.
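One practical way to act on the 60-70 character guideline is to pull each page's <title> and flag anything longer. A rough sketch using only the Python standard library; the URL list is a placeholder and the regex is a simplification (a real audit would use an HTML parser):

```python
import re
from urllib.request import urlopen

# Placeholder URL list; substitute your own pages.
for url in ["https://example.com/"]:
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    match = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    title = match.group(1).strip() if match else ""
    status = "TOO LONG" if len(title) > 70 else "ok"
    print(f"{len(title):3d}  {status:8s}  {title!r}")
```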

Keyword Groups in a Heading Tag (H1, H2, etc.)
Probable

Keywords in heading tags are a strong weighting parameter for determining the topic of a page. The H1 heading carries most of this weight, while H2 and lower headings carry progressively less. Various studies have also shown that headings improve accessibility by making content easier to read, and that clear, descriptive headings reduce bounce rates.

TF-IDF of a page
Probable

We can think of TF-IDF, the acronym for Term Frequency - Inverse Document Frequency, as situational keyword density. TF-IDF weighs the density of keywords on a page against what is "normal" rather than against a simple, uniform ratio. Common words like 'the' are effectively ignored, and the weighting reflects how often a phrase like 'Google's Ranking Factors' would naturally be used in a discussion of that topic.
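For the curious, the textbook form of TF-IDF (not necessarily the exact formulation Google uses internally) multiplies how often a term appears in a document by the log of how rare it is across a corpus. A small sketch with made-up documents:

```python
import math
from collections import Counter

def tf_idf(term: str, doc: list[str], corpus: list[list[str]]) -> float:
    """Textbook TF-IDF: frequency in this document, damped by how common the term is overall."""
    tf = Counter(doc)[term] / len(doc)
    docs_with_term = sum(term in d for d in corpus)
    idf = math.log(len(corpus) / (1 + docs_with_term))  # +1 avoids division by zero
    return tf * idf

docs = [
    "google ranking factors explained".split(),
    "the ranking of tennis players".split(),
    "google search quality guidelines".split(),
]
print(round(tf_idf("factors", docs[0], docs), 3))  # rarer term -> higher score
print(round(tf_idf("ranking", docs[0], docs), 3))  # common term -> lower score
```

The point of the second term is exactly what the article describes: a word that appears everywhere contributes little, while a word that is dense here but rare elsewhere signals topical relevance.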

Closely positioned keywords
Probable

Keywords that appear close together indicate that they are related; basic grammar supports that assumption. A single paragraph containing the phrase 'Istanbul SEO' about your SEO projects in Istanbul carries more ranking weight than the words "SEO" and "Istanbul" mentioned in separate paragraphs.

Words with different fonts
Concrete

Keywords in bold, italics, underlined, or with a larger font have a higher weighting in determining what a given page is about, but words with these attributes have a lower weighting than words that appear in the title.

Exact Match of Searched Phrases
Probable

Although Google may return results that match only some of the words on your page (and in some cases none of them), one of its patents states that an exact match yields a higher Information Retrieval (IR) score. For example, a document containing all the terms of a query has a higher ranking value than a document matching only one of them.

Keywords in Alt Tags
Concrete

Keywords in an image's ALT attribute help establish what the image, and the page around it, are about. The attribute is also widely used to make images more visible and accessible in Image Search results.
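A simple way to act on this is to audit a page for images with no alt text at all. A minimal standard-library sketch; the URL is a placeholder:

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class ImgAltAudit(HTMLParser):
    """Collect the src of every <img> that lacks a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "<no src>"))

audit = ImgAltAudit()
audit.feed(urlopen("https://example.com/").read().decode("utf-8", errors="ignore"))
print("images without alt text:", audit.missing_alt)
```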

Keywords Ranking Higher on a Page
Likely

How we choose to write says something about us: the earlier a word appears, the more it usually matters to us. This principle applies to sentences, paragraphs, pages, HTML tags, and so on. Google seems to have adopted it as a rule of thumb, giving more weight to content that appears earlier on a page. The Page Layout algorithm, which evaluates what is displayed at the top of your website, points in the same direction.

Partial Search Phrase Match
Probable
A Google patent establishes that content containing an exact match of a search phrase has a more significant impact on ranking. In the process, Google indirectly confirms that a page can still rank for searches it only partially matches. This is easy to verify with a bit of Googling.
Internal Link Anchor Text
Concrete

The anchor text of a link tells the user where the link goes. It is an important component in ensuring that users can move freely around your website, and, if not abused, it helps both users and search engines see how specific pieces of content relate to one another, as opposed to meaningless "click here" links.

Keyword Stemming
Concrete

Keyword stemming is the practice of taking the root or 'stem' of a word and recognizing other words that share that stem (e.g., 'stem-ming', 'stem-med'). Ignoring stemmed variations just to keep keyword density in balance reduces readability and has a negative impact. This parameter was first introduced in 2003 with the Florida update.

Reference to the Keyword in Domain Name
Concrete

A ranking bonus is awarded if a keyword is included in the domain name. It carries less weight than an exact match between the entire domain name and a search query, but more weight than a keyword at the end of the URL.

The keyword as Domain Name
Likely

It is also called exact match domain or EMD. A strong ranking bonus is awarded if a keyword matches the domain exactly and a search query meets Google's definition of a "commercial query". This parameter allows brands to get more visibility in search results, while it loses its power if it is taken beyond its intended use.

TF-IDF inside the Domain Name
Likely

Although the two terms are essentially synonymous, many SEO experts were annoyed when "Term Frequency" began replacing "Keyword Density" around 2015. The important part of "keyword density" factors is the second half of TF-IDF: Inverse Document Frequency. Google uses TF-IDF to judge what a natural intensity for a topic looks like, which is why simple comparative measures of 'how much is natural' fizzled out over time.

Keyword Density in Domain Name
Likely

When Krishna Bharat introduced Hilltop, he also pointed out a problem with PageRank: a website that is an authority in general may not be the authority for a particular search results page. Hilltop improved search results by looking at the relevance of the websites it identified as "experts". Since TF-IDF determines relevance at the page level, we can assume Hilltop built its definition of "expert" with the same tool.

Aged Domain
Likely

The aged domain parameter is confusing because a new domain name also receives a temporary incentive. As Matt Cutts points out, aged domains do not make much of a difference, as only a little additional trust is placed in them. Theoretically, one could say this small reward exists because aged domains do not favor short-term black-hat tactics.

Distribution of Page Authority
Concrete

In general, pages that receive links from all sections of a website are treated as more important than pages that merely link out to other pages. A similar effect can be seen on pages linked from the home page, because on most websites those pages receive the most links. Structuring a website to maximize this factor is commonly referred to as PageRank sculpting.

Hyphenated URL words
Probable

The ideal way to separate words in a URL is with hyphens. Underscores can also be used, but they are less reliable because they are treated like characters inside variable names in programming languages. Running words together means they are not recognized as separate words, which prevents them from making any difference in rankings. Beyond these scenarios, using hyphens does not by itself earn a page a higher ranking.
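In practice this usually means generating URL slugs that lowercase a title and join its words with hyphens. A minimal helper, reflecting our own convention rather than any Google specification:

```python
import re

def slugify(title: str) -> str:
    """Lowercase the title, drop punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

print(slugify("Google's Ranking Factors: Keyword in URL"))
# -> google-s-ranking-factors-keyword-in-url
```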

New Domain
Likely

New domains can get temporary nudge-ups on search results as a sort of incentive. In a patent discussion on methods for determining new content, it was reported that "the age of a domain to which a document is added can be decisive for the start date of the document". According to Matt Cutts, this has a very small impact on the ranking of a page. Think of it as a leg-up for a new or niche website.

Long Domain Registration Term
Likely

In this patent, Google explains that a long domain registration term can be used as a signal of a domain's legitimacy. Speculatively, those chasing short-term volume with webspam methods do not register domains for longer than necessary.

Keyword ranking in titles
Likely

In the 2000s, an SEO theory emerged called the Rule of Three. According to this theory, the way we use language - our sentences, headings, paragraphs, and even whole web pages - is ordered by importance. Although Google has not validated it, our experiments with word order suggest this parameter acts as a factor most of the time.

Schema.org
Likely

Schema.org is a joint project of Google, Yahoo!, Bing, and Yandex to mark up the logical data entities behind keywords. This initiative could take us beyond the traditional "10 blue links" search. Currently, the use of structured data can improve rankings in a variety of scenarios. There are also theories that Schema.org could improve traditional search rankings through clearer entity salience.
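Structured data is most commonly added as a JSON-LD script in the page head. A small, hedged example of generating a schema.org Organization snippet in Python; the organization details are invented for illustration:

```python
import json

# Hypothetical organization details, for illustration only.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example SEO Agency",
    "url": "https://example.com/",
    "logo": "https://example.com/logo.png",
}

snippet = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(org, indent=2)
print(snippet)
```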

HTTPS (SSL) Usage
Concrete

SSL was officially announced in 2014 as a positive ranking factor, regardless of whether the website requires user input or not. Gary Illyes downplayed SSL in 2015, calling it the score used for tiebreakers. However, with an algorithm that performs numerical calculations, we know that small crucial differences that break the tie are very important in competitive search results.

New Content for the Entire Domain
Maybe

There is unconfirmed speculation that adding fresh content across an entire domain improves performance. Again speculatively, Google appears to favor sources that keep content current and relevant rather than letting it go stale, especially when a significant amount of information could be improved with a minor adjustment or addition.

Fresh content
Concrete

Technically, the full name of this parameter is 'new content if the query deserves freshness.' The term QDF, which stands for query deserves freshness, was coined to refer to search queries that require finding new content. Although this parameter does not apply to all search queries, it is particularly important for many informative search queries. These SEO benefits can be cited as another reason for the success of brand-owning publishers.

Old Content for the Entire Domain
Maybe

Theoretically, based on all the information we have from Query Deserves Freshness (QDF), we can say that some content - including news content - is defined as 'Query Deserves Oldness'. Although Google has not made any statements about "QDO" - old content - we can say that old content is preferred in cases where new content is not. Just like with new content for the entire domain. Let's add, however, that we have no evidence that old content is a ranking factor for the entire domain.

Old Content
Concrete

As stated by a Google patent: "For some searches, older content may be preferable to newer content." It is possible that they could be reordered by the average age of the returned results before the search results are displayed.

Related Outbound Links
Maybe

Considering that Google reviews your inbound links for authority, relevance, and content, it seems reasonable that outbound links should be both authoritative and relevant as well. Tying this to the Hilltop algorithm, it is essentially the same logic applied to inbound links, just running in the opposite direction.

Quality Outbound Links
Concrete

While outbound links might cause your website to "go down in PageRanks", websites should not be dead ends either. Google rewards outbound links to 'good, authority sites.' To quote the source: 'Part of our system encourages linking to good sites.'

Reading Level
Iffy

We know Google has analyzed the readability of content, since it once offered a reading-level filter on results pages (later discontinued). We also know that Google generally dismisses poor-quality content and regards well-researched, academic-style writing as highly desirable. What we still lack is a credible source linking reading level directly to rankings.

Grammar and Spelling
Likely

This is a confirmed ranking factor at Bing. According to Amit Singhal, grammar and spelling are 'the kind of questions we ask ourselves' when describing quality content. Matt Cutts disputed this in 2011, but the two are clearly related in some way: the first Panda update hinted that this parameter was very important, and dozens of content-related factors are directly or indirectly influenced by grammar and spelling.

Subdirectories
Likely

Categorical information architecture is a long-standing SEO debate, as Google analyzes site structure across all websites. It is unclear whether subdirectories are a ranking parameter in their own right, but Google now at least treats them as structured data and displays them as breadcrumbs on the results page, which gets more of your pages shown.

Rich Media Content
Likely

Rich media is considered an indicator of "quality, unique content" and drives additional traffic to websites through images and videos. With Panda 2.5, video appeared to be a decisive factor, and Northcutt's own findings also point to a positive association. Currently, however, there is no official public source that confirms all of these factors.

Mobile Optimized
Concrete

Mobile-friendly websites have a significant ranking advantage. We can say this only applies to users making searches on their mobile devices, at least for now. While this was discussed at length at popular SEO meetings, its impact increased even more with the Mobilegeddon update in 2015. However, experts had already started speculating about this almost a decade ago.

Meta Keywords
Myth

It is also merely a rumor that Google considers meta keywords in rankings. 

Google Analytics
Myth

Many have suggested that Google Analytics is (or could be) a Google ranking factor. All the evidence available today, including dozens of statements coming directly from Google's Matt Cutts, suggests that it's all just a rumor. That said, analytics is still an incredibly powerful tool in the hands of a competent marketer.

Meta Description
Iffy

A good meta description will promote your website in search results. Its marketing value should not be underestimated: nearly every Google Ads agency earns its keep A/B testing exactly this kind of copy. Although keywords in meta descriptions were once considered a direct ranking factor, they no longer are, as Matt Cutts pointed out in 2009.

ccTLD in National Ranking
Likely

Country-code TLDs (ccTLDs) such as .uk and .tr receive a ranking bonus for searches made from the corresponding country. This is worth bearing in mind when planning international content. They also perform much better there than a ccTLD tied to another country would.

Google Search Console
Myth

Just like Google Analytics, Google Search Console (formerly Webmasters Tools) has no proven impact on rankings. Search Console is still useful for uncovering issues with other ranking factors in our content, particularly issues with manual penalties and certain crawling errors.

Salience of Entities
Likely

Over time, Google began to analyze ideas and logical units rather than just words and sentence fragments, looking at how we phrase things and which results match those statements. In simple terms, this is what lets a search for "how to cook meat" return rib recipes even if the word "meat" never appears on the page.

XML Sitemaps
Myth

Although sitemaps are not mandatory, they can be useful to get more of your pages into the Google index. It is just a rumor that your ranking will improve if you use XML sitemaps in Google. It should be noted that this information comes directly from Google and has been verified by various studies. Sitemaps can ensure that your pages are crawled faster by Google.

Web Servers Close to Users
Probable

Google works differently for many local searches, such as Google Maps results and modified organic listings.  The same applies to national and international searches. Hosting your website close to your users (e.g., in the same country) leads to better rankings.

Sentence Fragment and Content
Probable

With keyword density no longer a meaningful positive factor, phrase-based indexing shows that fluent, detailed content is a much more important ranking factor than generic content stuffed with keywords. One fairly explicit part of a Google patent describes it as 'defining related sentence fragments and clusters of related sentence fragments.'

Using rel="canonical" tags
Iffy

The rel="canonical" tag suggests the ideal URL for a page. This can prevent duplicate content from being devalued, or penalties from being imposed, when multiple URLs lead to the same content. In our experience, it is merely a recommendation to Google rather than a conclusive instruction, and according to Google it has no direct impact on rankings. Despite all this, using it makes good sense.
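A quick audit is to fetch a page and check whether it declares a canonical URL at all. A rough standard-library sketch; the URL is a placeholder and the regex assumes double-quoted attributes with rel before href, so treat it as illustrative only:

```python
import re
from urllib.request import urlopen

url = "https://example.com/?utm_source=newsletter"  # placeholder URL with tracking parameters
html = urlopen(url).read().decode("utf-8", errors="ignore")

# Simplification: assumes double-quoted attributes with rel appearing before href.
match = re.search(r'<link[^>]+rel="canonical"[^>]*href="([^"]+)"', html, re.I)
print("canonical:", match.group(1) if match else "none declared")
```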

Author Fame
Myth

Authorship was a kind of experiment that Google conducted from 2011 to 2014. In this experiment, bloggers built a reputation for specific authors with the rel="author" tag. The birth and death of authorship were directly confirmed by Google.

using rel=”publisher”
Myth

Just like rel="author", the use of rel="publisher" was widely accepted SEO advice and seen as a positive ranking factor. And just like rel="author", rel="publisher" became irrelevant when Google retired the Authorship experiment.

using rel=”author”
Myth

Using rel="author" used to be a widely accepted SEO recommendation and was considered a positive ranking factor. But Google shelved this factor when the Authorship experiment was retired. It is now only a rumor that rel="author" makes any difference.

Fixed IP Addresses
Myth

The IP address of a web server can be useful for targeting certain regions and demographics. IP addresses can become negative ranking factors if they are used as part of a webspam effort or if the Hilltop algorithm identifies supposedly independent websites as sharing an owner. However, the idea that a fixed (dedicated) IP address confers a direct ranking advantage has been refuted on many occasions.

URLs Using the "www" Subdomain
Myth

One piece of misinformation spread by SEO bloggers is that a website ranks better if its URLs start with "www". The misconception stems from the common practice of forcing all pages of a site to resolve to a single version (with or without "www") in order to avoid serving the same content at multiple addresses, which could be a negative ranking factor.

Number of Subdomains
Maybe

The number of subdomains appears to be the most important signal in deciding whether subdomains are treated as standalone websites rather than parts of one affiliated site, as with hybrid hosting and social publishing platforms like HubPages or free web hosts. Thousands of subdomains hypothetically indicate that they do not belong to a single thematic website, but that each one is its own site.

Using Subdomains
Maybe

Subdomains (something.yoursite.com) can be treated by Google as websites separate from the main site, unlike subfolders (yoursite.com/something/), and they interact with many of the other factors in this resource. Matt Cutts said in 2012 that subfolders and subdomains were 'roughly equivalent.' Yet sites like HubPages recovered after the Panda update partly by moving content onto subdomains, which suggests this parameter may still be an important one.

Keywords in HTML Comments
Myth

This is an old SEO theory that can be refuted in about ten seconds. To test it, we added a made-up word with no competitive value to an HTML comment in our site's source code and linked the page so it would be indexed. If the word had shown up in search results, we would have had proof that Google indexes it; it did not.

Using AdSense
Myth

Although SEOs tend to make too much of AdSense, this parameter has been roundly rejected by Google, and we have found no evidence that directly supports the claim either. We therefore cannot say that using AdSense has any direct impact as a ranking parameter.

Keywords in Classes, NAMEs, and IDs
Myth

Once again, theories about oddly placed words can be tested with made-up words and a check of the search results. Claims like this carry even less weight than Google's statements or the speculation found in patents, and once again the factor turns out to be only a rumor, at least at the time of writing.

Keywords in CSS/JavaScript Comments
Myth

Another old SEO theory was debunked with a little patience and a ten-second experiment. We used the same method we did for HTML comments and using a made-up word with no competitive value, added it to our source code, and gave a link to it. If Google were to assign a value to this word, we would have proof that this claim is true. But that did not happen.

Verifiable Phone Number
Maybe

A phone number is considered proof of legitimacy in search rankings. Although this assumption rests on shaky ground, it is also supported by criteria such as name, address, and telephone (also referred to as Google Maps SEO), which Google takes into account in local SEO. 'Satisfactory contact details' is a parameter that Google quality control reviewers are on the lookout for.

Using Privacy Policy
Maybe

A theory put forward at Webmaster World in 2012 later sparked a major debate: Does having a privacy policy have an impact on ranking? Regardless of whether it is important or not, 30% of household names in the world of search engines said yes. It is also in line with Google's philosophy. But it is still just a theory.

Low Code-to-Text Ratio
Iffy

This SEO theory, which attracted a lot of attention in 2011, suggests that a small amount of code combined with a lot of visible content influences rankings. Consider what we already know: 1) speed is a verified factor, 2) Google's own PageSpeed Insights tool highlights the importance of scaling payloads down (to as little as 5 KB), and 3) certain code errors can lead to devaluations and penalties. We can at least say this parameter is relevant, if only indirectly.
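The ratio itself is easy to approximate: strip scripts, styles, and tags, then compare the remaining visible text to the total HTML size. A rough heuristic of our own, not a formula Google has published:

```python
import re

def text_to_code_ratio(html: str) -> float:
    """Visible-text length divided by total HTML length - a rough heuristic."""
    stripped = re.sub(r"<(script|style)[^>]*>.*?</\1>", " ", html, flags=re.I | re.S)
    text = re.sub(r"<[^>]+>", " ", stripped)   # drop remaining tags
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return len(text) / max(len(html), 1)

sample = "<html><head><style>body{margin:0}</style></head><body><p>Hello world</p></body></html>"
print(f"{text_to_code_ratio(sample):.1%}")
```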

Accessible Contact Page
Maybe

An accessible contact page is thought to be an indicator of legitimacy. The suggestion rests on Google's Quality Rater Guidelines, which instruct Google's quality raters to look for "satisfactory contact details".

Length of the Content
Maybe

SerpIQ conducted an interesting correlation study which found that the highest-ranking content tends to run between 2,000 and 2,500 words. It is unclear whether length is a factor in its own right or a byproduct of other factors: longer pages may simply attract more links and shares, and they may also rank for more long-tail search variations.

Meta Source Tag
Iffy

The meta source tag was introduced in 2010 to identify qualified sources for Google News. It comes in two forms: one for syndicated content (if the story was sold to a third party) and one for the original source (if you are the source). For syndicated news, it could theoretically help avoid duplicate-content penalties; if you are the original source, this tag has since been superseded by rel="canonical".

Previously Displayed Keywords in Display Title
Likely

Studies and correlational research over the last decade suggest that titles starting with a keyword tend (though not always) to rank better than titles ending with one. This is easy to test, and it usually holds: keywords placed at the front perform better. The source we chose for this parameter suggests something further: when a top-performing title does not start with the keyword, Google sometimes rewrites the displayed title so that it does (Google modifies display titles from time to time).

Meta Geo Tag
Iffy

Unlike IP addresses and ccTLDs, Matt Cutts says Google 'barely' looks at this tag. However, Cutts also suggests it could count if you use it on a gTLD site (such as a ".com") that targets a single country. Although the meta geo tag has proven to be almost useless, Google still seems to take it into account as an internationalization signal.

Unique Website Content
Probable

A patent suggests Google does much more than simply devalue identical duplicate content; it also describes ways of determining whether your content adds anything interesting. After identifying a group of related documents, it rewards "snippets" of information with a "Novelty Score" when the content is descriptive, unique, and/or quirky (in a good way).

Previously Used Keywords in the Headings Section
Maybe

When it comes to heading tags, the order of words matters. According to the Rule of Three already mentioned here, words given priority in order have much more weight. In general, our findings confirm this, but it is still necessary to carry out tests specifically with the H1 position.

Content Provides Value and Unique Insights
Probable

According to Google patents, a "Novelty Score" is given for original, unique, and quirky content that provides detailed information. This is done by looking at the quantitative and qualitative characteristics of "information snippets" in content. Today we know that Google's scoring of unusual content, the so-called "novelty scoring", takes place by comparing a large number of documents with each other. As with duplicate content, the novelty score is weighted by reviewing it both internally and externally.

Average Novelty Score for the Entire Website
Likely

Kumar and Bharat's patent, 'Identifying Extraordinary Documents', indicates that individual documents can be used to assess how a website is put together as a whole. Averaging a "novelty score" across pages is also consistent with parameters such as thin content (the Panda algorithm) and a high correlation rate (the Hilltop algorithm), which are likewise factors that affect the rest of the site as a whole.

Quality Content
Iffy

As we know from numerous sources (including certain Search Console messages), Google can distinguish user-generated content from the rest of a page and analyze it differently. One theory is that Google uses the number of comments left on content as a quality signal. There is no written proof of this yet, but speculatively it seems worthwhile to pay attention to it.

Positive Approaches in Comments
Myth

It is assumed that Google gauges content quality by looking at sentiment in comments, and there is even a patent on ranking based on product reviews. But as Amit Singhal put it, if sites with negative comments were downgraded, you would be unable to find any information on the internet about candidates running for election.

using rel=”hreflang”
Likely

There is no direct evidence that this HTML tag alone gives you a ranking boost. However, it is clearly useful to serve the right variation of a website to different regions and languages, and most of the time such signals appear to help.

Authoritative Inbound Links Pointing to Your Page
Concrete

Links from sites that themselves have many inbound links are worth far more than links from sites that have few. The same logic cascades down to the links pointing at those sites in turn. Links act as a sort of currency, hypothetically worth anywhere between $0 and $1,000,000. This is how the PageRank algorithm works.
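The original PageRank paper describes exactly this kind of iterative redistribution of link value. A toy power-iteration sketch over a hypothetical four-page link graph (the damping factor of 0.85 comes from the paper; the graph itself is invented):

```python
# Toy link graph: each page lists the pages it links to (hypothetical site).
links = {
    "home":    ["about", "blog"],
    "about":   ["home"],
    "blog":    ["home", "about", "contact"],
    "contact": ["home"],
}

damping = 0.85
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):  # power iteration until the scores settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page:8s} {score:.3f}")
```

Run it and the home page, which everything links to, ends up with the largest share of the "currency" - the same intuition the article describes.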

More Inbound Links Pointing to Your Page
Concrete

It is hard to overstate how important this is: more links of equal value mean more authority. Of course, this only applies to links that carry some value; quality beats infinite quantity, and most backlinks are worth next to nothing. But as a basic function of the PageRank algorithm, you need a large number of links for your site to compete in search results.

Authoritative Inbound Links Pointing to Domain Names
Concrete

PageRanks obtained through links from other sites are distributed within the domain in the form of internal PageRank. Domains as a whole tend to gain authority: contents posted on an authoritative site will instantly rank much higher than those posted on non-authoritative domains.

More Inbound Links Pointing to Domains
Concrete

Once again, more links of equal value increase the overall authority of that domain. Larry Page, in his article on the PageRank concept, described 'hostname-based clustering', which is a component of PageRank.

Link Stability
Concrete

The value of backlinks increases as they stick around longer. Speculatively, it can be said that this is because spam links are checked and devalued while pay links expire after a while. For this reason, backlinks that last longer become more valuable links. This parameter is also supported by a patent.

Social Network Indicators
Maybe

This refers to Google ascertaining how authoritative a website is depending on its posts on social networks and the reactions it gets. After the launch of Google+ and the ending of the firehose agreement with Twitter, Matt Cutts said that this feature was now tested with Google+ data. Recent studies show that positive social network penetration is directly or indirectly related to having better rankings.

Anchor Text Keyword
Concrete

Anchor texts used in an external link help establish how a page relates to a search query. The target page does not need to have the term in question either (see Google Bombing).

Links From Affiliated Sites
Concrete

Links from sites covering topics similar to yours count for more in rankings. Contrary to common misconceptions, and to link-building practices that overdo it, not every link to your website needs to come from a domain tied to your topic: such a backlink profile would look completely artificial. That does not mean you should ignore your own industry, of course. This parameter arrived with the Hilltop algorithm.

Partly Relevant Anchor Text
Probable

When a backlink portfolio is acquired naturally, as it normally should be, the method for linking to a website differs from person to person. Anchor text that contains part of the keyword phrase or the keyword phrase plus "something" is accepted by Google. A Google patent calls it "partly relevant" anchor text. SEO experts call it “partly matched”.

Keyword Link Title
Myth

For a long time, it was assumed that the title attribute of a link could be interpreted as giving extra weight to certain words, similar to anchor texts. At PubCon 2005, Google said this was not the case, saying that not enough people use this feature. Various studies have shown that title is not a factor.

Partly Relevant ALT Texts
Probable

Just like with partially matching anchor text, the ALT attribute of images is inherently variable and carries a lot of weight for phrases that contain specific keywords. This has not been verified by Google but can be proven in experiments with invented words that do not have a highly competitive value. Google's patent calls it 'partly related', while SEO experts use the term 'partly matched'.

Contents with Links
Concrete

For quite some time, it has been assumed that the content surrounding a link contributes to its value in addition to the anchor text. This theory is supported by a patent and by experiments. For this reason, a link embedded in real content is assumed to be more valuable than a stand-alone link.

Keyword ALT Text
Concrete

Keywords used in the ALT attribute of images are also treated as anchor texts. In short, while quality ALT tags improve overall accessibility, they also make a big impact on the impression of said images in Google Images Search.

Linking from a Site with a Similar Ranking
Probable

Google indicates that a backlink from a website that itself ranks for the same search query carries a higher weighting for that query than a link from one that does not.

Quoting a Brand Name
Myth

Local citations - mentions of a company's name, address, and phone number without a backlink - are an important factor in local SEO and Google Maps SEO. According to Moz's Rand Fishkin, the same holds for "traditional SEO." That study, however, was rebutted by comments that it offered no real evidence, which is why we treat this factor as just a rumor.

Links From Multiple “Class C” IPs
Iffy

Typically, Google scores the value, quality, and relevance of the pages and domains linking to you, not of their IP addresses. An exception appears in the Hilltop work: Krishna Bharat's research paper discusses "detecting host affiliation". Sites sharing the same /24 IP range - the first three octets of the address (A.B.C in A.B.C.D) - are treated as having the same owner and are therefore disqualified from the Hilltop bonus earned through links from third-party experts.

Click Through Rate on the Whole Domain
Maybe

A patent by Navneet Panda (of the Panda algorithm), titled 'Site quality score', proposes determining a website-wide quality score based on CTR values in search results. Its primary method evaluates brand-related searches and the clicks they generate, but it also suggests that, beyond per-query CTR, overall CTR across a website could be a factor.

Click Through Rate per Search/Page (CTR Rate)
Likely

Many theories hold that click-through rate on results pages is a ranking factor. At Bing, it is a confirmed factor. Rand Fishkin has repeatedly recruited his Twitter followers for experiments whose results, surprisingly, were consistent with CTR being a ranking factor.

Backlinks From .GOV Extensions
Myth

Just like with .EDU backlinks, it is not true that .GOV backlinks have a magical effect compared to regular gTLDs with similar characteristics. There are speculations that the links from such websites provide a "more natural balance". However, these claims are untrue, considering the statements and studies on the topic as well as the websites of big brands that do not have such links.

Backlinks From .EDU Extensions
Myth

A popular scam targeting people new to SEO is selling backlinks from ".edu" sites, on the claim that these extensions carry extra value. In Google's words, 'Google does not treat websites with .edu and .gov extensions any differently.' These sites often do carry high authority because they are natural reference sources, but links bought through 'buy .edu link' ads are not naturally earned, and using them can only harm your website.

Low Bounce Rate
Maybe

It is assumed that Google uses the bounce rate as a ranking factor. This can be easily measured with Google Analytics or Chrome data. Matt Cutts dismisses this parameter, saying how much time users spend on a page can easily be exposed to spam activity. Still, SEO Black Hat and Rand Fishkin made statements to the contrary. It should also be noted that Duane Forrester from Bing has confirmed that Bing does indeed use this parameter to measure the 'length of stay.'

Positive Link Speed
Likely

There is speculation that if your website and content gain backlinks faster than they lose them, you should have some advantage, even over big brands, since the opposite trend works against you in off-page SEO. A Google patent gives some insight into how this might be viewed: 'By analyzing the rate of decrease/increase or the number of backlinks to a piece of content (or page) over time, a search engine can identify an important indicator of how recent its content is.'

The ratio of Natural Deep Links
Maybe

As a natural function of the PageRank algorithm, pages that are linked to directly gain more authority than pages that only benefit indirectly, just as links to a home page do. In a natural, non-manipulative link profile, a significant share of inbound links can be expected to point to pages other than the home page (deep links).

Twitter Posts
Maybe

According to Google, links in social media posts are treated like other backlinks. However, there is no indication that posts on Twitter carry any additional value.

Twitter Followers
Iffy

It has been suggested that the number of Twitter followers of a brand could be a ranking factor. Google, though, claims just the opposite. Although Twitter followers can function as brand ambassadors, spreading news of the brand by word of mouth, and providing backlinks to your content, the evidence shows that such data are not brought to bear on rankings by Google.

Facebook Posts
Maybe

According to Google, links in social media posts are treated like other backlinks, but there is no indication that posts on Facebook carry any additional value. In 2010, Danny Sullivan reported Google as saying 'who you are on Twitter matters', while Matt Cutts later said that, 'as far as he knew', there was no such factor.

Facebook Likes
Iffy

It has been suggested that the number of Facebook likes for a brand could be a direct ranking factor. Google, again, claims just the opposite. Although Facebook followers can function as brand ambassadors, spreading news of the brand by word of mouth, and providing backlinks to your content, the evidence shows that such data are not brought to bear on rankings by Google.

Query Deserves Freshness (QDF)
Concrete

Google doesn't rank all search queries the same. For certain search queries, especially news-related ones, the content to be published must be fresh (as only new content will get ranked). Google calls this parameter Query Deserves Freshness (QDF).

Links from Older Domains
Maybe

In 2008, Microsoft filed a patent proposal that gave higher weight to backlinks from older domains, with a distribution of 100% for 10+ years, 75% for 6-10 years, 50% for 3-6 years, 25% for 1-3 years, and 10% for less than one year. It has not been confirmed whether Google makes any such assessment.
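If you wanted to model the weighting quoted above from that Microsoft proposal, it maps link-age brackets to multipliers roughly like this; the function is our own reading of those figures, not a confirmed formula used by any search engine:

```python
def link_age_weight(age_years: float) -> float:
    """Weight brackets as quoted from the Microsoft patent proposal above."""
    if age_years >= 10:
        return 1.00
    if age_years >= 6:
        return 0.75
    if age_years >= 3:
        return 0.50
    if age_years >= 1:
        return 0.25
    return 0.10

for age in (0.5, 2, 4, 8, 12):
    print(f"{age:>4} years -> {link_age_weight(age):.0%}")
```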

Query Deserves Out-Dated Content (QDO)
Likely

This is a term coined to describe a particular situation covered by a Google patent, which notes: 'For some queries, outdated content may be preferable to newer content.' The patent then describes reordering results by age as a function of the average age of the results for that query.

Query Deserves Sources (QDS)
Likely
A phrase that we've coined to cover a scenario described in Google's Quality Rater Guidelines, used when humans conduct quality control on Google search results. The guidelines ask raters to consider whether "this is a topic where expertise and/or authoritative sources are important". Presumably this applies to most informational search queries (as opposed to transactional and navigational ones).
Query Deserves Diversity
Likely

Certain search queries are ranked differently by Google. The Query Deserves Diversity theory rests on the concept of entity salience: the same word can refer to different things. It can be considered part of the same family of ideas as Query Deserves Freshness. As with Wikipedia's disambiguation pages, when a query is ambiguous, several types of results should appear at the top of the page. It is unconfirmed but easy to test.

SafeSearch
Concrete

In some instances of adult content, a website may or may not show up in rankings depending on whether the SafeSearch option is activated or not. By default, the SafeSearch feature is activated.

Not Using Google Ads
Myth

According to some non-scientific sources, the use of Google Ads is a ranking factor, while according to others, not using it is a ranking factor. The idea that Google Ads could have an impact on Google's organic rankings now or in the future has been dispelled by Google, perhaps more severely than other SEO rumors.

Using Google Ads
Myth

It seems SEO paranoia is not letting this rumor go away. So far, we haven't come across any reliable studies showing that Google Ads can improve rankings. The fact that Google Ads influences organic traffic goes against Google's core philosophy, and Google is more outspoken than anyone else in speaking out against this rumor.

Chrome Site Traffic
Maybe

Although Google denies using it this way, the patent titled "Document scoring based on traffic associated with a document" touches on the use of browser traffic data to determine rankings: "Information about a document's traffic allows scores associated with that document to be determined or modified over time."

Chrome Bookmarks
Maybe

Although Matt Cutts refuted it, this was reportedly confirmed during an ex-Googler session at the BrightonSEO conference in 2013. A Google patent also states: 'Over time, the search engine can analyze the number of bookmarks/favorites associated with a document to determine the value of the document.'

Google Toolbar Activity
Maybe

Just as Matt Cutts stated that Google Chrome data is not used to determine Google's organic search results, the same was said of the Google Toolbar. However, SEO experts still point to a Google patent describing a method that could make use of data from this toolbar.

Use of Search History
Likely

If you have not turned this feature off, Google personalizes your search results based on your search history. As of 2009, you no longer even need to be signed in to a Google account for your results to reflect your search history.

High MozRank / Moz Trust Score
Myth

The old "toolbar PageRank" score never matched the actual PageRank data used by Google Search, and today it is so unreliable that many prefer MozRank instead. Even so, Google has always measured link value with its own system: Moz's metrics approximate link value but do not themselves correlate with, or influence, rankings. The same caveat applies to other third-party metrics such as Majestic or Ahrefs.

Low Alexa Score
Myth

While there is speculation that Google theoretically considers a site's traffic as a ranking factor, there is currently no evidence that they do so using Alexa. The documentation we have suggests that they can do this using Chrome data. They have already released several statements on the issue.

High Dwell Time (Long Clicks)
Likely

The 'site quality score' patent describes a scenario based on rewarding searches with brands + clicks as a ranking factor. As part of their method, it further states, 'Depending on the system's settings … a click performed over a period of time, or a click performed over a period of time relative to the length of the resource, may, for example, be treated as users' choices.' It is also used by Bing and Yahoo, as verified by several other sources.

Total Brand Searches + Clicks
Likely

Navneet Panda's patent 'site quality score' lays out a scenario describing how searches for brands on Google contribute to the quality score across the entire domain (like the Northcutt contact page). As described in the patent, "The score is used to detect the activities of the users, determined by taking into consideration certain websites and the characteristics of these websites".

Sitemap Submission Tool
Iffy

This refers to submitting an XML sitemap to Google through Google Search Console. In some cases this gets more pages into the index, but just as with 'submitting a website', it is not ideal: if Google cannot find pages on its own, they are unlikely to rank well anyway. Also, as Rand Fishkin points out, relying on sitemap submission can mask crawl problems you would otherwise catch and debug.

Reconsideration Requests
Likely

When Google detects violations of its quality guidelines, it can take manual action against a website. Through this section of Search Console, site owners can learn whether their website has been penalized. Once the problems are fixed and a reconsideration request is sent, Google decides, with the help of human reviewers, whether the penalty will be lifted. If you are hit with a manual action such as pure spam, your pages can be removed from Google's index entirely.

International Targeting Tool
Iffy

Google Search Console provides an international targeting tool for cases where the other geotargeting methods cannot be applied. It is mainly useful for generic TLDs such as '.com' and for 'gccTLDs' like .co - country-code TLDs originally assigned to specific countries that Google now treats as generic because of their widespread general use. In certain situations it can help rankings in specific countries.

Links from ccTLDs in Target Countries
Likely

Country-code top-level domains (ccTLDs) are used to associate a website with a specific country. It is widely accepted that backlinks from a given country's ccTLDs improve Google rankings in that country.

Google+ Local Verified Address
Iffy
It's often theorized that a Google+ Local page, where businesses verify their address with a postcard to be listed in Google Maps, is a ranking factor in Google's primary web search results. While it is a significant ranking factor for Google Maps searches, and for the local listings box inserted into traditional search results, we've found no evidence that it affects ordinary web rankings.
Links from IP Addresses in the Target Region
Likely

Google says operating a server near your target audience will improve rankings for those users on a broader, international scale. It is also known that several other factors are used to establish geographic relevance: this was also proven by a comparison of Google.com and Google.co.uk search results.

Iframes Links
Likely

Although YouTube embeds themselves are delivered through iframes, Google still does not recommend relying on frames and iframes, because these structures are not handled efficiently for users or for Google itself. Links inside them, however, cannot be disregarded entirely.

Crawl Budget
Concrete

The number of pages that Google will crawl and index on your website is proportional to the average authority value obtained from your inbound links. Sites with low authority have less "crawl budget".

Android Pay
Iffy

As with Chrome, Analytics, and other Google resources, it has been suggested that Google uses Android Pay data as a ranking parameter. This is presumably done by associating a Google account with search results that previously led to purchase. It should be mentioned there is no definitive proof for this parameter (for or against).

Core Web Vitals
Concrete

The Core Web Vitals update announced by Google in May 2020, under Page Experience, became one of Google's ranking factors in the second half of 2021. You can access Core Web Vitals errors for websites via the Google Search Console tool. The 3 most important metrics for Core Web Vitals are:

Largest Contentful Paint (LCP): LCP can be defined as the time between a user opening a web page and the largest piece of main content finishing rendering. It is recommended that this be under 2.5 seconds. With the page experience algorithm updates released in the summer of 2021, the LCP metric became one of the ranking factors.

First Input Delay (FID): FID can be defined as the delay between a user's first interaction with a web page (e.g., clicking a buy button) and the browser being able to respond to it. It is recommended that this be under 100 ms. With the page experience algorithm updates released in the summer of 2021, the FID metric became one of the ranking factors.

Cumulative Layout Shift (CLS): CLS measures unexpected shifting of webpage elements while the page is loading. It is recommended that this value be below 0.1. With the page experience algorithm updates released in the summer of 2021, the CLS metric became one of the ranking factors.
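One way to check these metrics programmatically, apart from Search Console, is Google's public PageSpeed Insights API. A hedged sketch; the field names reflect our reading of the v5 response format and the URL is a placeholder, so verify against the current API documentation before relying on it:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Field names reflect our reading of the public PageSpeed Insights v5 response;
# verify them against the current API documentation.
API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = urlencode({"url": "https://example.com/", "strategy": "mobile"})

with urlopen(f"{API}?{params}") as response:
    data = json.load(response)

metrics = data.get("loadingExperience", {}).get("metrics", {})
for key in ("LARGEST_CONTENTFUL_PAINT_MS",
            "FIRST_INPUT_DELAY_MS",
            "CUMULATIVE_LAYOUT_SHIFT_SCORE"):
    field = metrics.get(key, {})
    print(key, field.get("percentile"), field.get("category"))
```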

Page Experience
Concrete

The key metrics for this update, which Google added to its ranking factors by the end of August 2021 to improve users' page experience, are:

  •   Core Web Vitals
  •   Mobile Usability
  •   Security Issues
  •   HTTPS Usage

You can access Page Experience reports for websites using the Google Search Console tool. 

MFI
Concrete

Before Google switched to mobile-first indexing, both mobile and desktop search results were determined via desktop pages. In January 2018, mobile-first indexing was launched and the position of desktop pages was determined by the content offered by mobile pages. MFI also increased the importance of providing a good mobile experience for users.

Negative on-page factors can hurt your current rankings and fall into three categories: accessibility, devaluation, and penalties. Accessibility issues arise when Google's bots cannot crawl or analyze your website correctly. Devaluation marks a website as second-rate, which may keep it from rising in the rankings. Penalties point to something much more serious and can have a devastating impact on your long-term performance in Google. All of these on-page factors relate to how you manage your own website.

Negative off-page factors usually relate to backlinks pointing at your website unnaturally - often spammy links built deliberately. Until the Penguin algorithm arrived in 2012, such links were devalued rather than penalized: because Google considered them unnatural, you would lose all or nearly all of the value of those links, but your website would not otherwise be harmed. Penguin then introduced off-page penalties that apply in several cases, with safeguards intended to keep harmful activity by competitors within limits. Such attacks are known as negative SEO or Google Bowling.

High Keyword Density
Probable

Keyword stuffing penalties result from abusing keyword density, which was once an effective strategy. In our experience, penalties can kick in at a density of around 6%, although TF-IDF (covered above) also plays a role here depending on the topic, the type of word, and the context.

Keyword Dilution
Probable

This factor makes sense: if a healthy keyword density or TF-IDF is positive, then at some point too low a frequency will also reduce relevance. Since Google now seems more capable than ever of understanding language, it might be more accurate to call this topic dilution: producing content that cannot be related to any theme. The same basic concept applies either way.

A Keyword Rich Title Tag
Probable

Keyword stuffing penalties are more likely to be given because of the title tag than the whole page. An ideal title tag should have fewer than 60 to 70 characters and even at that, it should promote the page properly on Google searches. There is no benefit in using the same keyword five times in the same tag.

Extremely Long Title Tag
Probable

As noted above, an ideal title tag stays under roughly 60 to 70 characters while still promoting the page properly in Google searches. Title tags that run far beyond that length - usually because keywords have been stuffed into them - are more likely to attract a keyword-stuffing penalty than the page copy itself.

Too Many Keywords in Heading Tag
Probable

Heading Tags such as H1, H2, H3, etc. can give added weight to certain words. Anyone who tries to abuse this positive ranking factor will eventually find that they cannot put as many keywords into these tags as they would like. This also applies when tags become larger than they normally would. Keyword stuffing penalties appear as a function of the total area within these tags.

Excessive Use of Heading Tags (H1, H2, etc.)
Probable

As a general rule of thumb, if you want to find out whether an SEO penalty exists somewhere, push a positive factor beyond its reasonable limits. An easily verifiable penalty is what you get by formatting your entire website as one big H1. Too lazy for that? Matt Cutts has given a brief example of content containing too many H1 tags.

URL Keyword Repetition
Likely

There seems to be no penalty for using a word more than once in a URL, but repeating keywords in a URL also makes no difference. You can easily test this by placing the same keyword in a URL five times.

Extremely Long URLs
Probable

Matt Cutts states that words beyond the first five in a URL carry less weight. A penalty for extremely long URLs has not been confirmed by Google, but it is easy to predict one at the extremes. Bing, although it works differently from Google, does confirm that keyword-stuffed URLs are penalized in its search engine.

Keyword Intensive ALT Tags
Probable

Because ALT text is not directly visible on a page, it is a frequent target for keyword stuffing. A few descriptive keywords are fine, even ideal, but going beyond that can lead to penalties.

Long Internal Link Anchors
Likely

Long internal anchor texts do not bring anything to the party - they can even lead to a devaluation. In extreme cases, keyword stuffing webspam penalties are imposed for the use of excessively long anchor texts.

High Link to Text Ratio
Maybe

Theoretically, pages that consist almost entirely of links with no other content are usually low quality. The theory is supported by how rarely such pages appear in search results, though it should be noted that it is not backed by any formal study.

Too Many Lists
Probable

Listing a large number of keywords is also considered keyword stuffing, as suggested by Matt Cutts. E.g.: listing too many things, words, phrases, ideas, feelings, concepts, and phrases is not a natural form of writing. Heavy reliance on lists can lead to lower rankings of websites and even penalties.

JavaScript-Hidden Content
Maybe

Google discourages embedding important text in JavaScript because it may not be read reliably by search engines, although that does not stop Google from crawling JavaScript. In extreme cases, JavaScript can be used to hide text from users that is still served to search engines, which would incur a cloaking penalty.

CSS-Hidden Content
Probable

One of the best-known and best-documented on-page SEO penalties is for deliberately hiding links or text from users; combined with keyword stuffing, it can draw harsh penalties from Google. Some leeway is given for legitimate uses, such as content tucked behind tabs or small suggestion boxes.

Using a Background Similar to the Page
Concrete

One of the most common causes of cloaking penalties is a web page that uses the same color for its text and its background. Google's Page Layout algorithm can render pages visually and weed out offenders. In our experience, this can also happen inadvertently.

Image Link in a Single Pixel
Concrete

This was once a popular webspam technique for hiding links. There is no doubt that Google treats "really small links" as hidden links, whether implemented as a 1px-by-1px image or as extremely small text. If you're trying to pull the wool over Google's eyes with these tricks, chances are you'll get caught eventually.

Empty Link Anchors
Concrete

Links with empty anchors are generally implemented differently from hidden text, but they can incur the same penalties for hiding content. They were once a spamming technique and remain risky, and because empty anchors can also appear by accident, it is a good idea to double-check your code.

Copyright Violations
Concrete

Posting content that violates the Digital Millennium Copyright Act (DMCA) or commits similar copyright violations can result in severe penalties. Google automatically analyzes content for unlicensed material, and users can also chip in by reporting violations.

Doorway Pages
Concrete

Doorway (or gateway) pages are stacks of pages created purely to serve as search-engine landing pages while offering nothing to users; an example would be generating a near-identical product page for every city name in the United States. Such pages amount to spamdexing, spamming Google's index.

Overuse of Bold, Italic, or Other Features
Likely

Text in bold tags is usually given extra weight compared to the rest of the page, but if you wrap all of your website's content in bold, you are not cracking some code that will help you rank better. Google usually categorizes such attempts as "spam activity". For what it is worth, we have tried these spammy tactics on sample websites of ours that are not open to users.

Broken Internal Links
Concrete

Broken internal links make a website more difficult for search engines to access while making it difficult for users to navigate through the site. It is possible to say that this is a sign of a low-quality website. Make sure your internal links are never broken.

Redirected Internal Links
Iffy

PageRank patents and articles, and even Matt Cutts, have treated redirects as a source of "PageRank meltdown", meaning some authority is lost every time one page redirects to another. In 2016, however, Gary Illyes tweeted that this is no longer the case.

Text in Images
Concrete

Google has come a long way in analyzing images, but overall it remains unlikely that text embedded in images will be searchable by Google. Putting text into an image carries no direct penalty or devaluation; your website simply cannot earn rankings through that text.

Text in Video
Concrete

Just like text in images, the words spoken or shown in videos are not reliably accessible to Google. If you publish a video, it is in your best interest to post a description alongside it, as that is what makes the video searchable. This holds regardless of the rich-media format, including HTML5, Flash, Silverlight, and others.

Texts in Rich Media
Concrete

Google has come a long way in reviewing images, videos, and other media formats such as Flash; but overall, it is very unlikely that text embedded in this type of rich media will be searchable in Google. However, there is no penalty or devaluation.

Frames/Iframes
Concrete

In the past, search engines were completely incapable of crawling content contained within frames. Although they have largely overcome this weakness, frames remain an obstacle for search engine spiders. Google tries to assign the content in a frame to a single page, but there is no guarantee this is done properly.

Dynamic Content
Likely

Dynamic content can present several challenges for search engine spiders to understand and rank. Minimizing such content, or at least keeping what remains accessible to Google and applying noindex to low-value dynamic pages, not only provides a more positive user experience overall but can also have a palpable impact on rankings.

Thin Content
Concrete

Although it is always better to create carefully crafted content that covers an entire topic, Google penalizes content that has no unique value under Navneet Panda's "Panda" algorithm. A study of the DaniWeb forum by the industry-renowned Dani Horowitz provides great examples of Panda's most fundamental effects.

Thin Content on the Whole Website
Concrete

For a very long time, Google struggled to identify the uniqueness and quality of content. With the introduction of the Panda algorithm, this stopped being a page-by-page evaluation and began focusing on the entire website, which is very useful for raising the average quality of content in search results. Uninteresting, repetitive pages, "tag" pages and forum user profiles being typical examples, are best left to their fate with "noindex".

Too Many Ads
Concrete

Pages with a lot of ads, especially ads at the top of the page, provide a poor user experience and are rated accordingly in search rankings. Google evaluates this by taking a snapshot of the page. This is a function of the Page Layout algorithm, also known as the Top Heavy update.

Pop-ups
Likely

Asked whether pop-ups hurt rankings, Google's Matt Cutts said no, but John Mueller from Google said the opposite in 2014. After weighing both answers and considering how the Page Layout algorithm works, our answer would be a decisive yes: pop-ups can hurt your search rankings.

Duplicate Content (Third Parties)
Concrete

Content duplicated from another website can result in a significant downgrade, even if it properly links to the source and does not violate copyright. To put it in a nutshell: content that is unique to the web delivers better results overall.

Duplicate Content (Internal)
Concrete

Similar to duplicate content from other sources, repeating the same piece of content across a page or site ultimately leads to a devaluation. This is a fairly common problem; typical examples include heavily indexed tag pages and sites that resolve at both www and non-www URLs.

Links to Penalized Sites
Concrete

This was a parameter that came with the “Bad Neighborhood” algorithm. To quote Matt Cutts: “Google trusts websites less when these sites link to spammy sites or bad neighbors”. Simple as that. Google recommends using the rel="nofollow" feature if you need to link to such a site. To quote Matt again: "Using nofollow is something that can separate you from the bad neighbor".

Slow Website
Concrete

Slow websites do not get ranked as well as fast ones. Google considers your target audience here. So, consider your users' location, devices, and connection speeds. Google regularly reiterates the "under two seconds" rule and generally sets targets below 500ms.

Page NoIndex
Concrete

If a page contains a meta tag for "robots" with a "noindex" value, Google will not include it in its index. If you use this on a page for which you want to get a good ranking, you are making a bad choice. However, you can use this to remove pages that are not good for Google users. In this way, you will also be increasing the average visitor experience from Google.
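For site audits, a minimal sketch along these lines, using only Python's standard library and a placeholder URL, can flag pages that carry the directive:

from html.parser import HTMLParser
import urllib.request

class RobotsMetaParser(HTMLParser):
    """Records whether a robots meta tag containing 'noindex' appears in the HTML."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            if "noindex" in (attrs.get("content") or "").lower():
                self.noindex = True

html = urllib.request.urlopen("https://example.com/page").read().decode("utf-8", "ignore")
parser = RobotsMetaParser()
parser.feed(html)
print("noindex found:", parser.noindex)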

Internal NoFollow
Probable

This can happen in two ways: a page-level "robots" meta tag with the value "nofollow", which effectively assigns rel="nofollow" to every link on the page, or the attribute can be added to individual links. Either way, the message is "I don't trust this", "don't crawl any further", and "don't pass it PageRank". Matt Cutts does not mince his words: never nofollow your own site.
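For illustration, a small sketch of the two forms just described; the markup is generic HTML written for this example, not taken from any real site.

# Page-level: applies to every link on the page.
page_level = '<meta name="robots" content="nofollow">'
# Link-level: applies to this single link only.
link_level = '<a href="/internal-page" rel="nofollow">Internal page</a>'

for snippet in (page_level, link_level):
    print("nofollow" in snippet.lower(), "->", snippet)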

Disallow Robots
Concrete

If the robots.txt file in your site's root folder contains a "User-agent: *" or "User-agent: Googlebot" block followed by "Disallow: /", your site cannot be crawled. This will not remove your site from the index, but it does prevent new content from being picked up, along with any positive ranking factors tied to freshness.
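A quick way to check the effect, using Python's standard robots.txt parser (a sketch; the URLs are placeholders):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# False here means the "*" or "Googlebot" rules block this URL from being crawled.
print(rp.can_fetch("Googlebot", "https://example.com/some-page/"))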

Poor Domain Reputation
Concrete

Domains earn themselves a reputation with Google over time, either good or bad. Even if the domain name changes hands and you now manage a completely different website, you may still be subject to penalties for web spam from the previous owners of the site.

Bad Neighborhood IP Address
Likely

Matt Cutts has largely put this issue to bed with his remark that long-running "SEO web hosting" on a fixed IP address does you no good; still, in rare cases Google penalizes an entire IP range belonging to servers in a private network or a bad neighborhood.

Meta or JavaScript Redirects
Concrete

Now an uncommon source of SEO penalties, meta-refresh and JavaScript-timed redirects are something Google recommends against: they confuse users, inflate the bounce rate, and can look like cloaking. Use server-level 301 (permanent) or 302 (temporary) redirects instead.
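A small audit sketch, assuming the third-party requests library and a placeholder URL: a clean 301/302 with a Location header is a server-level redirect, while a 200 response whose body contains a meta refresh is the pattern to avoid.

import requests

resp = requests.get("https://example.com/old-page", allow_redirects=False, timeout=10)
print(resp.status_code, resp.headers.get("Location"))

# A 200 body carrying a meta refresh means the redirect happens client-side.
if resp.status_code == 200 and 'http-equiv="refresh"' in resp.text.lower():
    print("meta-refresh redirect detected")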

Text in JavaScript
Concrete

While Google keeps improving its ability to crawl JavaScript, text that is only output via JavaScript can still cause problems: Googlebot may fail to render it or may not understand it once rendered. Outputting text with JavaScript does not trigger a penalty, but it is an unnecessary risk and therefore a negative factor.

Poor UpTime
Concrete

When Google cannot access your site, it cannot re-index it, and reason also dictates that an unreliable site makes for a poor Google experience. A single outage is unlikely to have a devastating impact on your rankings as long as you get the site back up within a reasonable time, say a day or two; longer periods of downtime can lead to issues.

Whois Privacy
Maybe

As noted occasionally, Google does not always have access to whois data. Matt Cutts explained at PubCon 2006 that they still look at this data and that Whois privacy data can lead to penalties in combination with other negative signs.

False Whois
Iffy

Similar to Whois privacy, Google is aware of this frequent scam and sees it as a problem, as Google representatives made it clear time and again. If there is a reason other than breaking ICANN rules, for example, if you want to prevent a domain thief from stealing your domain, then you might be forgiven for using false Whois. But other than that, it is not recommended to use false whois for any reason.

Penalized Registrant
Iffy

If you agree that private and false whois data is bad, consider that Matt Cutts also treats it as an indicator of webspam, citing cases where a single domain owner had been banned and flagged across many sites. This is not verified information and is purely speculative.

ccTLD in Global Ranking
Likely

ccTLDs are country-specific domain extensions such as .uk or .tr, as opposed to global gTLDs. They are useful for international SEO, but they can make it harder to rank well outside the country in question. An exception applies to certain ccTLDs, such as .co, which Google treats as generic and calls "gccTLDs".

Too Many Internal Links
Probable

Matt Cutts once stated that there was an upper limit of 100 links per page, only to correct himself later and say you should "keep it at an acceptable level".

Use of Corrupted/Invalid HTML or CSS
Likely

Matt Cutts said this was not a factor, yet our experiments regularly show that it is. Code does not have to be perfect, and any impact is probably indirect, but it is normal for bad markup to have knock-on effects given the other code-related factors: poor coding can lead to numerous, potentially invisible issues with tag usage, page structure, and even accidental cloaking.

Too Many External Links
Concrete

As a function of the PageRank algorithm, "PageRank leakage" from your domain is a real possibility. Keep in mind that the negative factor here is "too many" external links; linking out in reasonable numbers is a positive ranking factor, as Mr. Cutts confirmed in the article where he discussed this factor.

Parked Domain
Probable

Parked domains are domains that do not yet contain a website. Typically, they contain ads generated by a machine and have no user traffic. Such websites don't qualify for other ranking criteria and don't fare well with Google, either. There was a time when they still cut some mustard. But Google has repeatedly made it clear that no type of Parked domain gets ranked anymore.

External Affiliate Links
Likely

Google has taken action in the past against affiliate sites that provide no additional value, and this is covered in its guidelines. There is some SEO paranoia around affiliate links that run through 301 redirects in a folder blocked by robots.txt; despite the block, Google can still see the HTTP headers. Several affiliate marketers have presented reasonably scientific studies on penalties for too many affiliate links, so this one qualifies as a probable factor.

Auto Generated Content
Concrete

Content generated by machines based on users' search queries is severely penalized by Google and considered a violation under Google's Webmaster Guidelines. The guidelines also describe certain methods that are still allowed. An exception applies to machine-generated meta tags.

Search Results Page
Concrete

Generally speaking, Google wants its users to get directly to the content they are looking for, and not to pages with links that may lead to potential content. The same applies to users coming from a page like a search engine results page (SERP). If a website is essentially like a search results page and it contains links, there is only a slim chance it will get any rankings. This also applies to blog posts that function as tag/category pages.

Infected Sites
Likely

Most website owners may be surprised to hear this, but many web servers classified as dangerous have never been taken offline. All too often their operators patch vulnerabilities only to protect their own assets, and whatever is left exposed gets exploited without your knowledge; the result is malicious activity, such as viruses or malware, carried out in your name, and Google takes this very seriously.

Too Many Links in Footer
Probable

It is clear that links in the footer of a site carry less weight than links in the actual content. When Google first made a statement about its attitude toward pay links, website footers stuffed with dozens of paid links were quite common, and for this reason we can say that too many footer links can now lead to a penalty.

Outdated Content
Probable

A Google patent describes different ways of identifying outdated content on your website; in one approach, outdated content is simply treated as very old content. It is not known whether this factor affects rankings for all searches or only applies to Query Deserves Freshness (QDF) searches.

Phishing Activities
Concrete

If Google associates your website with phishing activities (exact copying of another website's home page to steal someone's information), then you are in serious trouble. For the most part, Google classifies such websites as sources of "illegal activity" and "things that may harm our users."

Obscene Content
Likely

Although Google indexes obscene content and shows it on results pages, it is not displayed when the SafeSearch option is activated. Unverified user-generated content can therefore end up blocked by Google's SafeSearch filter.

Orphan Pages
Concrete

Orphan pages are website pages that are not linked to any other page or section of your site. Since they can be treated as doorway pages, they can be considered a potential source of webspam. At the very least, it is possible to say that such pages will not benefit from internal PageRank. Therefore, they have less authority.

Using Subdomains (N)
Maybe

Subdomains (something.yoursite.com) are treated by Google as separate websites, unlike subfolders (yoursite.com/something/). Whether this hurts rankings depends in many ways on other factors, but a site split across too many subdomains cannot enjoy the benefits of factors that affect the whole domain.

Link Sale
Probable

Matt Cutts cites a case study in which a domain's PageRank was dropped to seven over paid external links. Pay links can bring penalties for webmasters both on- and off-page as violations of Google's guidelines.

4XX/5XX HTTP Status Code on a Page
Concrete

If your web server returns anything other than a 200 (OK) or 301/302 (redirect) status code, users are not seeing your content, and this may be the case even if the page loads fine in your own browser. Where the content is genuinely gone, returning a 404 is perfectly appropriate, as Google itself states.
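A rough status-code audit sketch, again assuming the requests library; the URLs are placeholders.

import requests

for url in ("https://example.com/", "https://example.com/missing-page"):
    # allow_redirects=False so 301/302 responses are reported as such.
    code = requests.get(url, allow_redirects=False, timeout=10).status_code
    verdict = "ok" if code in (200, 301, 302) else "problem"
    print(code, verdict, url)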

Number of Subdomains
Maybe

The number of subdomains on a website is the most important factor in determining whether each of the subdomains should be treated as a separate website. Using too many subdomains is also quite easy to do accidentally, causing Google to treat one website as if it were more than one website, or to treat many websites as if they were one.

Code Errors on Pages
Maybe

If a page is riddled with errors thrown by PHP, Java, or other server-side languages, it falls squarely under Google's definition of a poor user experience and low quality. The error messages printed on the page can also get mixed into Google's on-page analysis of your content.

Domain-wide Ratio of Error Pages
Maybe

Hypothetically, users landing on pages that return 4XX and 5XX HTTP errors is a sign of a low-quality website overall. We would expect the same concern to apply to error pages served without the matching HTTP headers and to pages full of broken outbound links.

Outbound Links
Myth

At a certain level there is such a thing as "PageRank leakage": you have a fixed amount of authority to distribute, and the "points" you pass on do not immediately flow back to your site. But as Matt Cutts confirms, other mechanisms reward genuinely relevant, authoritative outbound links. Websites should be interactive tools, not dead ends.

Soft Error Pages
Likely

Google has repeatedly warned site owners about "soft 404" and other soft error pages: error pages that return an HTTP 200 code in the document headers. Because the header says the page is fine, Google cannot handle it properly, and visitors may still be met with error messages. Google treats such pages as low-quality content and factors them into the overall value of your domain.
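A common audit trick, shown here as a sketch with the requests library and a placeholder domain: request a URL that cannot possibly exist and see whether the server still answers 200.

import requests, uuid

# A random path that is effectively guaranteed not to exist on the site.
probe = "https://example.com/" + uuid.uuid4().hex
code = requests.get(probe, allow_redirects=True, timeout=10).status_code
print("soft 404 suspected" if code == 200 else f"returns {code}, as expected")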

Sitemap Priority
Maybe

Many people assume that the "priority" attribute that can be assigned to individual pages in an XML sitemap matters for crawling and ranking. As with other hints you can pass to Google via Search Console, it does not help pages rank higher; at most it is a useful way to signal which content is less important.

HTTP Expires Headers
Maybe

By sending "Expires" headers from your web server, you can control browser caching and so improve performance. Configured carelessly, however, they can interfere with search indexing by telling search engines that the content will not change for a long time, keeping Googlebot away longer than you would like; their real purpose is user-facing caching, not crawler scheduling.
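A quick look at the caching headers a server actually sends (a sketch with the requests library; placeholder URL):

import requests

headers = requests.get("https://example.com/", timeout=10).headers
# A far-future Expires or a very long max-age tells clients, and arguably
# crawlers, not to expect fresh content for a long time.
print("Expires:", headers.get("Expires"))
print("Cache-Control:", headers.get("Cache-Control"))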

Keyword Stuffed Meta Description
Maybe

Although Google tells us that it does not use the meta description section for web rankings, but only for advertising purposes in the search results, it is also theoretically possible that this section sends webspam signals to Google in the event of an abuse attempt.

Sitemap ChangeFreq
Maybe

The ChangeFreq variable in XML sitemaps is meant to indicate how often content changes. The theory is that Google will not re-crawl the content any faster than you say it changes. It is not certain that Google honors this attribute at all, but if it does, the effect would be roughly the same as lowering the crawl rate in Google Search Console. A sketch of a sitemap entry follows below.
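For reference, a minimal sitemap entry showing both of the hints discussed here and under "Sitemap Priority" above, built with Python's standard library; the URL and values are illustrative only.

import xml.etree.ElementTree as ET

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
url = ET.SubElement(urlset, "url")
ET.SubElement(url, "loc").text = "https://example.com/blog/post-1"
ET.SubElement(url, "changefreq").text = "monthly"  # a hint Google may ignore
ET.SubElement(url, "priority").text = "0.3"        # a relative hint, not a ranking boost

print(ET.tostring(urlset, encoding="unicode"))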

User-Generated Spammy Content
Probable

Spammy user-generated content on your website is your problem to fix: Google penalizes it, and it is something that can earn you a warning in Google Search Console. In one instance, we noticed that spam tucked into a hidden DIV on a WordPress site can lead to penalties that go undetected.

Keyword-Stuffed Meta Keywords
Maybe

Since 2009, Google has announced that it does not take meta keywords into account at all. Despite this, tags continue to be abused by site owners who still do not believe or understand this idea. The theory is that for the latter reason, this tag may still be sending web spam signals to Google.

Auto-Translated Content
Probable

Matt Cutts notes that it is common practice, and against Google's Webmaster Guidelines, to "internationalize" a website using Babelfish or Google Translate. Doing so often leads to a drop in rankings and even a penalty; in a Google Webmaster video, Matt says they treat machine translation as "auto-generated content."

Non-Isolation of Foreign Languages
Likely

If you produce content in a language that does not speak to your target audience, you will struggle to benefit from positive on-page factors or to engage your visitors. Matt Cutts says that poorly isolated foreign-language content can be a barrier to both users and search engine spiders: for it not to interfere with positive ranking factors, Google must be able to identify the language of a page and of its individual sections.

All nofollow
Iffy

In one statement, Matt Cutts said he would prefer that websites like Wikipedia carefully allow followed links rather than nofollow everything, but he said nothing about such sites losing value. Websites like Wikipedia that apply "nofollow" to 100% of their outbound links do not appear to have suffered on that account; at most, they forgo the positive effects of good outbound links.

Missing Robots.txt File
Myth

As of 2016, Google Search Console recommended that site owners add a robots.txt file to their websites, which led many people to assume that not having one could hurt rankings. Admittedly, we find that a little odd. John Mueller of Google Search says that if you want Googlebot to reach every nook and cranny of your website, you do not need a robots.txt file at all.

Weak SSL Encryption
Maybe

SSL encryption has been confirmed as a positive ranking factor, which suggests Google rewards websites that adopt good security practices for their users. Would it also punish weak encryption? Measuring the strength of an SSL configuration would arguably be even easier for Google than the malware checks it already runs, but for now we have no evidence that weak SSL acts as a negative parameter.

Non-Thematized Websites
Likely

One of the most discussed case studies after the Panda update was HubPages, which eventually isolated its many unrelated topics onto subdomains and repaired the damage. While the 2004 Hilltop update rewarded domains with a clear central theme, Panda appears to have started penalizing the lack of one back in 2011.

Commercial Queries (YMYL)
Likely

'Commercial queries' are what Google calls searches related to shopping. The Quality Rater Guidelines ask raters to flag "Your Money or Your Life" (YMYL) content, pages that can affect a person's finances or health, for heightened scrutiny of legitimacy. Although its exact impact on the search algorithm is unknown, you can find a few indicators related to the concept by searching "commercial queries" on Google.

X-Robots-Tag HTTP Header
Likely

While the most common ways to block search engine crawlers are HTML meta tags or a separate robots.txt file, it is also possible to do it at the server level with the X-Robots-Tag HTTP header. Used properly, this can be a good way to keep weak content out of the index; used carelessly, even unintentionally, it usually does more harm than good.
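A sketch for spotting the server-level directives described above, assuming the requests library and a placeholder URL; the header can carry noindex or nofollow with nothing visible in the HTML.

import requests

resp = requests.head("https://example.com/report.pdf", timeout=10)
# e.g. "noindex, nofollow" returned at the server level.
print("X-Robots-Tag:", resp.headers.get("X-Robots-Tag"))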

Excessive Cross-Site Linking
Concrete

If you own more than one website, cross-linking those sites to one another to boost inbound link authority is not recommended, and the risk grows as the number of interlinked domains grows. Such schemes can be detected, and penalized, through site ownership records, IP addresses, similarity of content, similarity of design, and, more rarely, manual review. Exceptions are made for internationalization or when "there is another very good reason" for doing it.

Pay Links
Concrete

You cannot buy links from a website owner in order to share in their PageRank. As Matt Cutts points out, this rule takes direct inspiration from the FTC's guidance on paid endorsements: backlinks are treated as promotional material, and genuine promotion happens without direct payment.

Reducing the Effect of Page Authority
Concrete

As a function of the PageRank algorithm, each page distributes its total authority across the pages it links to. For example, a page with a PageRank value of 1.0 and a single outbound link passes roughly that full value along, whereas the same page with 1,000 outbound links passes only about 0.001 per link.
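A back-of-the-envelope illustration of that dilution, under the simplifying assumption that authority is split evenly across outbound links:

page_authority = 1.0
for outbound_links in (1, 10, 1000):
    passed = page_authority / outbound_links  # even split per the simplified model
    print(f"{outbound_links} links -> {passed} passed per link")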

New Anchor Text
Probable

The age of the anchor text used in a link can itself be indicative of a problem, especially if the anchor reads differently from the way other websites naturally refer to the same page. Speculatively, this points to a link that did not really come from a third party and was constructed to manipulate rankings.

Unnatural Anchor Text Ratios
Probable

Anchor text puts linked content in context. As with any SEO tactic, the SEO community abuses it, so checks exist to limit manipulation; the threshold appears to sit around 10% for any single anchor text. This is a feature of the Penguin algorithm.
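A sketch of the kind of ratio check implied here; the backlink list and the 10% threshold are illustrative assumptions, not a published rule.

# Anchor texts collected for one target page (made-up sample data).
anchors = ["Acme Widgets", "acme.com", "click here", "buy cheap widgets",
           "buy cheap widgets", "https://acme.com/", "Acme", "widgets guide"]
target = "buy cheap widgets"

ratio = anchors.count(target) / len(anchors)
print(f"{ratio:.0%} of anchors are exact-match")  # worth a closer look past ~10%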

Reducing Domain Authority
Concrete

While it is possible to dilute the authority a single page passes on, outbound PageRank can also be reduced for an entire domain. Websites that are more selective about who they link to than who links to them are therefore more valuable, while sites that exist mainly to link out to other websites carry almost no value.

Unnatural Variety of Websites Linking to Other Websites
Likely

If you agree with the idea that Google does follow natural trends, and if you acknowledge studies on penalties for websites that contain more than 10% anchor text according to the Penguin algorithm, you will also agree that all kinds of unnatural off-site activities can harm websites. Although there are no case studies yet, we have often seen how black-hat SEO activities become greedy over time and how such activities lead to penalties.

Anchor with a Non-Natural Ratio
Likely

As with heavy use of a single anchor, a Moz study of websites penalized by the Penguin algorithm showed that some sites carry an unnatural anchor-text profile across the entire domain. Analysis of backlinks to popular brands shows large shares of brand-name anchors, "click here" anchors, bare-URL anchors, and banners; overstepping those natural bounds can see a site slip down the rankings, and the Penguin algorithm added outright penalties on top.

Spam Comments
Likely

Spam comments on a blog are repetitive, unnaturally formatted comments, and the links in them are penalized or devalued. This is particularly the case if the comments are automatically generated, contain anchor text with odd keywords, or show signs of being irrelevant or repetitive. Making genuine comments, on the other hand, is perfectly fine; Cutts notes that using your real name in those cases has a positive effect.

Webspam FootPrint
Likely

'Footprint' is an off-page SEO term that describes almost any activity from the same source that Google can use. This may be the username of a forum, the name of a person, a photo, the biography of a guest author, an element of a WordPress theme within a private blog network, or small undertakings that may be associated with webspam activities. As you can see, a footprint isn't always a bad thing. But if the website deviates even slightly from Google's webmaster guidelines, footprints can turn into factors that can subject websites to penalties.

Promotional Advertising (Native Advertising)
Probable

Promotional advertorial content, also known as native advertising, is systematically tracked by Google's webspam team, and the links within it are treated as pay links. Links placed in advertisements exist for that purpose and should carry the rel="nofollow" attribute to avoid penalties. Hidden advertorials can also get a publication removed from Google News entirely.

Spam Content on Forums
Likely

Forum posts, like blog comments, become effective in inbound marketing when they are not made for search engine spiders but for people and turn into two-way communication. John Mueller says they keep an eye on bulk spam link scheme activities within forums.

Inbound Affiliate Links
Iffy

One thing to mention before speculating: inbound affiliate links are often devalued anyway by URL parameters, 301 redirects, and duplicate content rather than by any penalty. There are suggestions that inbound affiliate links cause downgrades for reasons similar to pay-link penalties, whether intentional or not. Matt Cutts recommends focusing on outbound affiliate links when you are worried about pay links.

Forum Signatures & Profile Links
Likely

It seems that Google can tell which links sit in forum signatures and which are part of a natural conversation. Natural conversations are content created to be useful, so they also earn PageRank; the same cannot be said for the webspam tactic of creating fake forum profiles. In context, both tactics add little value and can result in webspam penalties.

Header & Sidebar Links
Likely

Google patents appear to treat links in header and sidebar sections, the fixed elements that repeat across a website, as template elements and to consciously distinguish them from the main content, as is the case with footer links. According to the patent, the page is indexed after the template is stripped out, and the resulting weighting is much more accurate because it depends more on the non-template sections.

Footer Links
Probable

Links placed in the footer of a website carry less weight than links within the main content, a notion also supported by how the Page Layout algorithm works. Moreover, as Google has indicated, stuffing the footer with links is treated far less kindly than links placed higher up the page.

Widget Links
Likely

Widgets were once a genuinely fun and useful feature, but they have not adapted well to a world where links became a source of direct revenue. You can still offer widgets, but since 2015 Google has recommended adding nofollow to widget links and disregards their anchor text; Google being Google, it will come down hard on you if you ignore these rules.

WordPress Sponsored Themes
Likely

On top of the low intrinsic worth of links in website footers, the Google webspam team is well aware of backlinks baked into WordPress themes, a tactic that was once powerful and is now almost useless. Such efforts leave behind spam footprints, as in the case of the GWG widget.

Link Wheel (or Pyramid or Tetrahedron)
Probable

Read Larry Page's original paper on the PageRank algorithm and you can see the appeal of a pyramid or circle structure that keeps recirculating PageRank among the same set of pages. If this were still 2005, you would have been rewarded for your cleverness; today Google has cottoned on, so to speak, and scrutinizes such schemes closely, whether they are built manually or automatically.

Author Biography Links
Likely

If your link-building strategy leans on spam, Google will send you tumbling down the ranks. Author bio links are not necessarily "dead links", but "guest posting" went to extremes around 2010, much as content marketing did around 2005, and such practices have led to reduced weighting of the author bio sections of blogs and articles. Simple as that. According to popular rumor, the New York Times and other truly authoritative outlets were not penalized for contributions made "by the people."

Generic Web Directories
Likely

Generic web directories were one of the earliest link-building structures. Matt Cutts says Google treats paid, generic directories as pay links and penalizes them when they offer nothing of value in terms of content, citing Yahoo!'s paid directory as an example of best practice. Paid or not, the common theme is editorial care about content; directories that will list absolutely anything are bad.

Article Directories
Likely

It is not known exactly how successful Google has been at penalizing thin content across entire swathes of domains with Panda, or unnatural link structures with Penguin, nor whether further refinements will be needed. However, Google still treats article directories as a problem, as a 2014 video by Matt Cutts makes clear, and you may face issues if you are considering such methods.

Private Networks (Link Farms)
Probable

While reciprocal linking can lead to penalties, building or buying into very large private networks created for SEO has a similar effect. Google takes a tough stance against these networks, analyzing webspam footprints with numerous automated methods. Around 2015 such networks were still being used as short-term black-hat plays, but in the time since, private networks have been dealt with wholesale.

Reciprocal Links
Probable

There is a greater tendency for Google to rank down websites trading links, compared to 'downgrades in PageRank' caused by too many outbound links. Too many reciprocal links or pages/sites trading links with each other is proof that most of these links are undeserved and unnatural.

Manual Interventions
Concrete

Alongside all the algorithmic factors, the Google webspam team can manually intervene against specific websites, and after such an intervention it can take six months to a year to fix the problems on your site. You can check Google Search Console notifications to see them. For this reason, it pays to think ahead, ask "What does Google want?", get inside Google's head a little, and make the tweaks that keep your website compliant.

Google Dance
Likely

The term refers to the temporary ranking turbulence that accompanies Google's constant algorithm updates, of which there are more than 500 a year. Technically the effects can be positive or negative, as some sites rise and others fall, but because the Google Dance always brings unexpected change, we count it among the negative influences.

No Content Surrounding Links
Likely

If content surrounding a link enhances its value, then links with no content must be a bad thing. Applied naturally on a large scale, this factor can cause you to fall in rankings. You can expect to get much less value from backlinks that are not embedded in content, if not any at all.

Thin Content Surrounding Links
Probable

Google determines the quality of backlinks by looking at the quality of the content surrounding them, especially after the Panda and Penguin algorithms came into play. It is not known exactly how it works, but it is safe to assume that the methods used by Google to determine off-page quality are not different from methods for determining on-page quality.

Irrelevant Content Around Links
Likely

The Google patent "Ranking by reference content" describes how Google determines the nature of a link by looking at the words surrounding it. A link cannot benefit from surrounding content that has no focus or theme, and if that content is irrelevant enough, the link looks awkward and unnatural and can incur penalties.

Ratio of Links to Content
Maybe

Having too many backlinks without any content to back them up is, beyond a certain rate, an indication of clear webspam activity. The theory combines three observations: Matt Cutts frequently points to webspam footprints of exactly this kind; some content-less links are inherently spammy; and the "Ranking by Reference Content" patent looks for content accompanying a link, which makes quality a practical indicator.

Sudden Loss of Links
Probable

While sudden link gains can trigger scrutiny of a website's backlink portfolio, sudden link losses are also an indication of a problem, or of something more serious. The simple logic: links that suddenly disappear are often the residue of webspam, expired paid placements for instance, whereas links Google values tend to stick around for a very long time.

Very Fast Link Gains
Likely

To quote from a Google Patent, "Although sudden growth in the number of backlinks is a factor in determining how search engines score documents, they can also be an indicator of spam activity in the search engine." Sudden, spontaneous growth can lead to additional reviews by web spam filters, but even better, a piece of content or a page can also "go viral" in a non-manipulative way, as long as it is genuine content.

Links From Unrelated Websites
Iffy

Since the introduction of the Hilltop algorithm, Google gives bonus credit to links coming from topically relevant websites. Widespread SEO rumor, fed by dubious "link building" and "disavowing" services, holds that links from irrelevant sites are outright bad. A flood of irrelevant links does leave an unnatural footprint, but it would be equally unnatural for every one of your links to come from websites doing exactly what yours does.

Links on All Pages
Probable

Links that appear on every page of a website are harmless in themselves, but they carry low value and tend to be treated as a single link in total. Matt Cutts says such sitewide links can appear naturally, while adding that they are often associated with webspam, which is why Google's webspam team goes over them with a fine-tooth comb. Hypothetically, there is also an automated component to this scrutiny, which adds risk overall.

Negative Domain Link Speed
Maybe

One speculation is that something is wrong if your website's backlink portfolio is stable or you have been losing links at a faster rate than you have been gaining them over a long period. We can also say that part of this speculation is supported by a Google patent. This patent suggests that the reduction in the number of incoming links can be construed as the content in question not having been updated for a while. The patent suggests this can be applied to a single page or an entire website.

Negative Page Link Speed
Likely

As stated in a Google patent, "By analyzing changes in the number of backlinks to a document (or page) or the rate of increase/decrease over time, a search engine can discover important signs of how new a piece of content is." The decrease in the number of inbound linking pages over time means these are no longer treated as new pieces of content in search results, having inevitably negative effects.

Disavowed Links
Probable

In 2012, Google Search Console introduced a feature that lets you ask for an inbound link to be ignored entirely. Its effects are permanent and irreversible, and careless use can damage your brand's long-term search reputation, so it should only be applied as a last resort to clean up past webspam.

Penalties for Redirects
Likely

John Mueller confirmed in a Google Hangout that penalties can follow a site through a 301 redirect and affect its organic rankings. That confirmation is what makes this a genuine factor.

Sites Banned From Chrome
Likely

In 2011, Google introduced a tool that allows users to block sites from Google search via Chrome. In this regard, they said, 'we do not currently use domain names that users have blocked as a ranking indicator, but we are looking to see if it can be useful.' So there is no guarantee this factor feeds rankings automatically, but we also do not know whether the webspam team makes decisions without examining this data.

Links From Penalized Sites
Probable

Google has been using the term 'bad neighbors' for a long time now. This term refers to sites that have been penalized. If your site is getting links from penalized websites, remember that this might raise some eyebrows. Consequently, you might even face penalties.

Crawl Rate Modifications
Concrete

Google Search Console allows you to cap Googlebot's maximum crawl rate on your site. It is not possible to speed Googlebot up, but it is possible to slow it to almost nothing, which can cause indexing problems and, in turn, ranking problems, particularly for new pieces of content.

Negative Approaches
Likely

Back in 2010, Google said that negative sentiment toward a brand, coming through in reviews or in the content around links, is a ranking factor. Reviews were already known to be an important part of local and "Google Maps SEO" rankings. The implications are somewhat complex, but Moz's Carson Ward has done an important piece of work unpacking them.

International Target Tool
Maybe

Google Search Console offers a tool for international targeting. Theoretically, targeting one region with this tool could keep your site out of search results in regions that are still part of your market, and that is a process that can genuinely harm your website.

Link Building
Myth

Perhaps one rumor that will never die is that link building is bad. Matt Cutts of Google has offered advice on link building from the beginning, and in its most basic form link building is simply a traditional marketing method adapted to the web. It contradicts Google's philosophy only when the primary audience is the search engine: links are for marketing, and link building should be done primarily for people.

Link Building Services
Myth

Paying for a link-building service is not the same as pay links. The exception is when the service simply gives up and pays someone else to pass PageRank while publishing worthless content with zero common sense. Trustworthy link building works like the job of a publicist: placement cannot be promised, but there is much to be gained. Matt Cutts touches on this common-sense approach to content midway through his video on pay directories.

Having No Content
Myth

Matt Cutts tells us that links and content should form a cohesive presence, backed by content created with common sense. But not every link has to sit in perfect harmony with content; links do not have to appear mid-story or at the start of an article. It takes only a little searching to find high-quality, legitimate links that live outside such harmoniously built content, the Chamber of Commerce or the local Yellow Pages being examples. Reason also dictates that it would look rather odd for a successful company to turn down such perfectly normal links.

Micro Site
Myth

It has been suggested that there are penalties for microsites, small limited-function sites spread over too many pages. Matt Cutts lays out Google's position: microsites are not hunted down and penalized, but they rarely make an effective long-term strategy because their prospects for ranking well remain poor, and they no longer receive any bonus for keyword-focused, exact-match domains.

Click Manipulation
Likely

If you suspect that click-through rate is a positive factor, it is also safe to assume that webspam controls exist around it. Rand Fishkin's Twitter CTR experiments showed a page with an artificially inflated number of clicks rise from 6th to 1st place, only to drop to 12th, unable to hold its position for even a few days.

Brand Search Manipulation
Maybe

Another theory is that if brand searches are a ranking factor (as the patent suggests), webspam controls should also exist here and prevent abuse. Otherwise, manipulating this factor would be possible by repeatedly and automatically searching the names of competitors.

Illegal Activity Report
Probable

Google provides a form for users to report illegal activity connected with its search results, and the page states that such content will be removed from all Google products, including Search. We do not doubt them, and we do not doubt that someone will soon put it to the test.

DMCA Report
Probable

In addition to automated checks that detect stolen content, uncredited sources, and potential copyright infringement, Google lets searchers send DMCA requests directly to Google. The DMCA process works fairly well in the USA, and once a valid request is filed, Google has little choice but to remove the offending URLs from its results.

Low Dwell Time (Short Clicks)
Likely

A Google patent suggests measuring "clicks with a specific duration or clicks with a relative duration depending on the source length". Steven Levy's first-hand account of Google, "In The Plex", suggests this is Google's best method for measuring the quality of search results, and both Bing and Yahoo! have said that dwell time can be a ranking factor in some respects.

High Task Completion Time
Maybe

Although not directly confirmed, we have some evidence that the click-through rate and dwell time can be ranking factors. Also, an article co-authored by Google's David Mease offers insights into the importance of analyzing the time a searcher spends before they are satisfied with a search result. Is it possible for automated A/B tests to be combined with weightings for user satisfaction with search results?

Links from the same /24 (“Class C”)
Likely

You often hear people say that links from the same "Class C" IP range are bad. Krishna Bharat states clearly in his Hilltop paper: 'We consider two hosts to be affiliated if they share the first 3 bytes of the IP address.' Such hosts cannot be counted as separate "experts", which inevitably drags down rankings. Matt Cutts has said that having multiple sites trading links will not in itself result in penalties, but going overboard with it can lead to issues.

GSC URL Parameters
Concrete

The URL parameters tool in Google Search Console is used to keep duplicate content out of search results, and it has merit when used properly. But it amounts to deliberately removing content from Google's index, which is rarely desirable and especially harmful when misused.

Negative SEO (Google Bowling)
Concrete

Negative SEO, sometimes called "Google Bowling", is malicious link spamming carried out by a third party so that it appears to come from, or on behalf of, your site. In the past, off-page signals were devalued rather than penalized, so a competitor's spam simply went to waste; once off-page penalties entered the picture, a competitor could scale such schemes up and genuinely harm a site's authority.

Footprint
Concrete

Anything Google can use to identify activity from a common manipulative source is a "footprint": a forum username, a person's name, a photo, a bio snippet, a specific WordPress theme, and so on. Google wants to reward links that come from websites belonging to other people and brands, not links people place on their own properties, and identifying self-made links is a problem Google has been working on for decades.