Last week, I attended the Search Engine Strategies conference, held at the Moscone Conference Center in San Francisco, California. On Day 2 of the conference, Matt Cutts, a distinguished engineer at Google, joined Mike Grehan from Incisive Media for a discussion. Soon after beginning this keynote discussion, Danny Sullivan from Search Engine Land, and Brett Tabke from WebmasterWorld/Pubcon joined the discussion on stage.
There were several key takeaways from this discussion. I’ve listed each of the important points in a bulleted list below, followed by my full notes from the discussion.
- Always hire people smarter and better than you are.
- Google is using the knowledge graph more and more. Google knows more than 500 million things, and billions of connections between things.
- The knowledge graph is being used to disambiguate between the real world and all the data.
- The primary source of the knowledge graph is Freebase (http://www.freebase.com/) which is open source. If you report an issue or correction, Google will try to get it updated in Freebase. You can use the data or download it.
- Google is testing Gmail search results in web results. You have to request it at http://g.co/searchtrial
- The Search Quality Team at Google has been renamed the Knowledge Team.
- Google has a 10 year plan for relying on social factors as a part of the algorithm.
- In the short term, Google is still relying on links as an important ranking factor.
- The Google Panda update is being updated monthly.
- The Google Penguin update is not being updated as frequently as Google Panda, so any updates will be more “jarring”.
- The Google “Panda” update was named after Navneet Panda, the Google engineer who did the main work on the project. It was also Navneet Panda who chose the name of the animal for the Google “Penguin” update.
- If you have a great site in the real world, Google will reward you with great search results.
- Google can only use social factors as part of an algorithm when they have access to those factors (and the data).
- Anyone can compete with Google if they can crawl better.
- Google crawls 20 billion pages a day.
- Google is being more transparent to the webmaster community because they are getting more confident. They started giving out messages about hidden text, etc. and did not receive any backlash—so they continued with updates and notices to webmasters.
- What is the difference between doing SEO and doing over-optimization or too much SEO? If it is going to help your website and the user experience, such as making your site faster, focusing on keyword research-based content, then that is fine. If you are blog spamming, link spamming, then Google has taken action on those techniques.
- Google doesn’t hate SEO.
- Google is trying to be the best source of information.
- Google’s user expectations are going up every year.
- Google adds features based on whether or not it is good for the user—not whether or not it is going to make money for Google.
- Google believes that it is important for users to understand when there was payment involved.
- Google believes that you should not be able to buy higher search engine rankings.
- Google has been working hard on returning original content in the search results.
- Google tries very hard to remain unbiased, especially when it comes to other Google web properties. There is no boost for payment for Google-owned web properties. Google, for example, bends over backwards to try to show videos other than YouTube in the search results.
- You do not have to worry about duplicate content on other TLDs. For example, if the same content is on domain.co.uk, domain.com, and domain.au, Google will present the user with the most appropriate version of the content.
During the keynote discussion, I took detailed notes. For those interested in learning even more about Matt Cutts’ talk, I transcribed my notes below:
Matt Cutts: Hire people smarter and better than you are. If Matt is on vacation, the spam fighters are still there. There are multiple people all over the world to take care of issues; there is always someone on duty at Google.
Last week, there was a press briefing where Google announced new things in search:
- Google is using knowledge graph more and more
- Google now knows over 500 million things
- Google knows billions of connections between things
New features have recently been added to autocomplete.
Gmail search results can appear in web search. Go to g.co/searchtrial to sign up. You have to request access in order to use it.
The knowledge graph is Google’s attempt to disambiguate between the real world and all the data, and it attempts to make exploration faster. For example, search for: tom cruise movies, or California lighthouses. It is good for finding collections of things, such as roller coasters.
The group that Matt Cutts belongs to at Google has been renamed from “Search Quality” to “Knowledge”.
The Google Panda algorithm was named after the engineer who did the main work on the project. The Panda engineer also picked the name of the animal for the Penguin update.
Questions from the audience:
In the 10-year time frame, Google will use social signals a whole lot more. In the short term, they won’t be leaving links behind; links are still an important ranking factor. Google Panda is being updated monthly. Penguin updates are less frequent, so they will be more jarring. If you have a great site in the real world, Google will reward you with great search results.
Are social signals a ranking factor?
Google has to have access to the data. Some sites, such as Facebook, are blocked. If Google can’t crawl a site, then they won’t have the data.
Regarding real time search:
Twitter is like a private nightclub. Fundamentally they can suspend anyone if they want. Anyone can compete with Google if they can crawl better. Google announced last week that they crawl 20 billion pages a day. When the Twitter firehose deal ended, they were blocked from crawling. When Twitter went away, Google was leery about relying on that data. You can increase your Klout score by having a Wikipedia page about you.
Over the years, Google has become more transparent. Is that trend going to continue?
Any system that attracts traffic, people will try to game, because there is money there. Google has been more transparent because they are getting more confident. They started out slowly with notices about things like hidden text, and nothing really happened; there was no backlash. If people don’t trust Google, that is a harmful perception for Google, and they try to debunk it.
What is the difference between doing SEO and doing too much SEO (over-optimization)?
It’s not about doing too much SEO; you can make your site faster, do keyword research, and so on. Google has taken action on blog spamming and other over-optimization tactics. Matt Cutts says that “Google doesn’t hate SEO”. Look at the spectrum of value that you are adding to the web when you put together a website. Google is trying to be the best source of information. If it’s a small amount of data, like a calculation, they are not going to send the user to a calculator site. Google’s user expectations are going up every year. Whatever you type into the search box, they will try to give you the best answer for it. Google adds features based on whether or not something is good for the user, not on whether or not it will make money.
The litmus test is: what is the value-add for the user?
There are advocates within Google who ask: how do webmasters come into the equation? They realize that the web is made of websites, and it needs to be a good value for everyone.
What are the principles that they operate by?
People should know when there is payment involved. You shouldn’t be able to buy higher search engine rankings. Google has over 3 billion searches a day. They have been working hard on returning original content in the search results, not scrapers.
How much is Google+ helping search results?
Google uses the +1 button to see how much it helps search. You are much less likely to see Google+ in the search results now. The 10-year trend may be that +1s help, but not right now.
How does Google remain unbiased as Google goes into buying Frommers, etc.?
They bend over backwards to show videos from sites other than YouTube. There is no boost for payment for Google-owned web properties. The primary source of the knowledge graph is Freebase, which remains open source; you can download the data and use it. If you report an issue, they try to get the correction back into Freebase.
Regarding Google Being Transparent:
They are trying to give you more information through Google Webmaster Tools, and they are going to keep moving forward with transparency. They try to make incremental changes, about 500 changes a year. People are shaken by the course changes of Google updates, but if you build a site that is a value-add, you won’t really have issues over the long run. If the only text on your page is what everybody else has, then there will be an issue.
Regarding Duplicate Content:
You don’t need to worry about duplicate content on various TLDs, such as .co.uk, .com, and .co.nz sites. Google will choose which version to show based on the location of the user.