Matt Cutts’ keynote from SES San Francisco

    by Jennifer Slegg • August 15, 2012 • Conferences, Google

    Those who have been in the SEO community for a while have probably noticed that Matt Cutts has not keynoted at Search Engine Strategies since Danny Sullivan left Incisive to start up Search Marketing Expo.  Well, today it came full circle, with Matt Cutts back on the SES stage delivering this morning’s keynote at SES San Francisco.  And in a surprising twist, who else ended up on stage alongside Mike Grehan and Matt Cutts?  None other than Danny Sullivan & WebmasterWorld’s Brett Tabke.

    First, Matt and Mike took the stage to talk about some of the recent changes happening with Google search.

    Matt talked about how they have “spam fighters” across multiple time zones, with team members all over the world, so there is always somebody awake fighting spam.  Likewise, he said that just because he goes on holiday doesn’t mean people should consider it a free pass to spam, because his team has grown to a large number of people from the days 10 or 12 years ago when there were only a couple of team members.

    Matt then talked about the recently rolled out Knowledge Graph, where Google provides frequently searched-for information right on the search results page.

    He also talked about the new autocomplete change: as you type in keywords, it will ask you for the context if there are multiple contexts people might search for on a particular topic.  Matt used the example of someone searching for “Rio,” where Google asks whether they are looking for Rio the movie, Rio the place, or Rio the hotel.  You can see it in action when you search for Rio, and you can also see it in the Knowledge Graph, where Google asks which context you are searching for.

    Next, Matt brought up the new Gmail results in search.  The feature was recently added, so if you do a search on Google and you have something relevant in your Gmail, it will show those results too.  He did mention that you need to expand the email in order to see it, so nothing embarrassing from your email will show if someone is looking over your shoulder while you search.  He used the example of how he searched for which Moscone building SES was in, and an email from Mike Grehan with the details about SES being in Moscone West came up in his results.

    Next, Matt talked about how they are trying to bring up collections of results when people do certain types of searches.  An example of this is when you search for Tom Cruise movies: it shows his movies across the top (it is not clear how the specific movie results are sorted; it isn’t chronological and doesn’t seem to be by popularity either), and you can scroll sideways to see more.  California lighthouses is another example Matt used that shows the same type of results across the top.

    Matt also revealed that the Search Quality team is now known as Knowledge.  An interesting name change, although I suspect most people will still think of him as being on the Google Spam Team :)

    Mike Grehan asked why all SEOs need to become zoologists with these updates.  Matt retold the story that the name Panda came about because it was the last name of the engineer behind it.  When it came time to name what would become the Penguin update, Matt says he told the engineer behind it, “here is a list of the top 100 cutest animals,” and the engineer picked Penguin.  When asked about both Panda and Penguin being black and white animals, he said it was a coincidence, but that people are sending suggestions for other black and white animals, so I suspect we will probably see a Zebra update at some point.

    Next came a surprising twist: Danny Sullivan from Search Engine Land and SMX, along with Brett Tabke from WebmasterWorld and PubCon, joined Matt and Mike on the stage for an impromptu panel, one we probably won’t see again – although attendees can always hope!  It brought a really interesting dynamic to the keynote.

    The topic of links was brought up, specifically how much authority links will carry in the future.  Matt said that in the short term he doesn’t see links going away, which is probably a relief for webmasters who are working on white hat link building.

    He said Panda has been updating regularly on a monthly basis, but that Penguin is still being improved, so there will be “a little more jolting and jarring for a while,” which is good or bad depending on which side of the fence you are sitting on.

    Someone raised the question of social signals in search results.  Matt Cutts brought up Twitter and how Google used to have access to the firehose of data, but when that agreement with Twitter ran out, Twitter blocked Google from crawling the site.  If Google cannot see how many people are following or retweeting you, Google is unable to use it as a ranking signal, and there is concern that Twitter could block Google again in the future.

    Matt also said that anyone can compete with Google if they can crawl and rank better than Google can, which is true to an extent.  If users aren’t getting a good experience with the search results they are given, they could be more likely to try a search alternative.

    Matt also brought up some interesting stats behind Google search: Google crawls 20 billion pages per day.

    The question was raised about why Google is more open and transparent with webmasters now.  He said any system with a lot of traffic will be spammed.  He also said that someone commented recently that it is cheaper to do things legitimately than it is to go blackhat and try to stay under the radar.

    He also revealed how the transparency in Google’s communications with webmasters came about.  They first sent a message to webmasters with hidden links on their sites, and there were no problems with it.  Next they sent emails about parked domains, again with no problems.  He said if people are hit with a penalty, they probably know it anyway.

    He also brought up the infamous “over optimized SEO” comment, which had webmasters freaking out several months ago.  He said it was a mistake on his part to say “don’t do too much SEO.”  It more specifically refers to when something has been SEO’d so much that when people land on the page, they are unhappy with what they get.  Google wants webmasters to optimize websites so they load more quickly and are more crawlable.

    One of the most interesting things Matt revealed was the fact that they keep old versions of the algorithm so they can compare old results to new results.  It would be fascinating to see that, so people could see how much search has evolved over the 12 years Matt has been with Google.

    He also said that when the search team meets, they aren’t concerned about whether changes will make money or lose money.  They are only looking at what is good for the user.

    Matt also commented on the complaint that webmasters want traffic, but if Google is providing answers right on the search page, webmasters are losing that traffic.  Matt commented that facts can’t be copyrighted, using the example of how tall the Eiffel Tower is, but again, it is about what is best for the user.  He reiterated that Google has to put the user first; if they don’t, someone else will, and people will start using that search engine instead.

    He said again that they don’t want people to buy rankings and that people should know when payment is involved.

    He also said the lists of Panda/Penguin winners and losers that some sites publish are based on only a small sample, but they are relevant and do give a good idea of the kinds of sites being impacted.

    Matt revealed some more search stats: Google has seen over 30 trillion URLs and handles 100 billion searches a month.

    On the topic of duplicate content, he said some sites are known to have more original content and some sites have more duplicate content, and that Google has handled duplicate content pretty consistently over the years.

    Next up was Google+ and its influence on ranking.  Google+ is a signal they will look at to see how good it is.  He also said not to put a lot of weight on Google +1s just yet.

    Someone asked if Google is giving more influence to Google properties over non-Google ones.  Matt used Google Video search as an example, since it returns a lot of non-YouTube videos.  This is something I have seen a lot myself when I use video search.  Matt said there is no boost for Google properties, just like there is no boost for payment: “We want people to trust Google.”

    He talked about the Knowledge Graph coming from Freebase, which is open and anyone can download for free.  They also use Wikipedia for it.

    Someone asked why Google does not have its own rating system, a la HubSpot.  He said an example of an independent metric is PageRank, but it ended up being the basis for link sales and link valuation on a site-by-site basis.

    Someone else asked why Google won’t tell webmasters what they are doing wrong, and Matt said they are working towards that in Google Webmaster Central.  Then he said, “in an ideal world we’d like to give you URLs when sites are doing enough link buys that it starts to affect the entire site because they are links we don’t trust.”  He’d like them to be able to say “here is an example link we don’t trust.”  He said that is the vision: to keep turning up the knob on transparency.  They don’t want to say “we don’t trust you” without giving something actionable.

    And back on the topic of duplicate content, he said that if you have content similar to many other sites, such as product pages, you should add something original, like product reviews.  Add something of value for the user.

    Unfortunately, the keynote, which really turned into a Google search Q&A, came to an end.  I think people would have been happy to stay for another hour to hear Matt answer more questions from the audience as well as from Mike, Danny and Brett.  It was easily one of the highlights of SES San Francisco, and something I hope repeats itself at future SES conferences.

    About

    Jennifer Slegg is a longtime member of the SEO community and is an expert on social media, content marketing, Google AdSense and search engines.

    4 Responses to Matt Cutts’ keynote from SES San Francisco

    1. August 15, 2012 at 11:21 am

      Great write up Jen, must have been great to be there :)

    2. Jenstar
      August 15, 2012 at 11:57 am

      Yes, it was pretty good, even if it did start bright and early at 8:30AM on the morning after WebmasterRadio’s Search Bash :)

    3. bob
      August 16, 2012 at 12:00 pm

      Thanks, I understand from this very new Matt Cutts speech that Google needs more money and will purge their index again very soon with a new ‘Zebra’ update or something else, with some new penalization – something like a ‘you put keywords in the wrong place on your page’ reason.

      Good luck to him; with more new penalties and Google MFA like this, I think he will lose his job soon.  Anyway, Google is not displaying any relevant organic results for mid/low competition keywords now, just our beloved top 3 AdWords ads at the top.  Bing, with 30%, is the ad revenue leader for me right now!

    4. Matt Cutts
      August 16, 2012 at 12:46 pm

      Thanks Jennifer, you got it right.

      cross-posted from SERT
      I wasn’t saying that people needed to overly stress out about the next Penguin update, but I’m happy to give more details. I was giving context on the fact that lots of people were asking me when the next Penguin update would happen, as if they expected Penguin updates to happen on a monthly basis and as if Penguin would only involve data refreshes.

      If you remember, in the early days of Panda, it took several months for us to iterate on the algorithm, and the Panda impact tended to be somewhat larger (e.g. the April 2011 update incorporated new signals like sites that users block). Later on, the Panda updates had less impact over time as we stabilized the signals/algorithm and Panda moved closer to near-monthly updates.

      Likewise, we’re still in the early stages of Penguin where the engineers are incorporating new signals and iterating to improve the algorithm. Because of that, expect that the next few Penguin updates will take longer, incorporate additional signals, and as a result will have more noticeable impact. It’s not the case that people should just expect data refreshes for Penguin quite yet.
