Blog

Keep up to date with all the latest news

What You Need to Know Now About Paid Searches vs. Organic Searches In 2019

Many people are confused by internet marketing because they don't understand organic/natural search in contrast to paid search. Eighty percent of people who search the web are using an organic search. Below is an illustration of the difference between organic and pa...

July 2019 Google Webmaster Report

Here is that big recap where I go over the past month of changes related to Google search and webmaster topics and sum up the more important bits. This month we had the June 2019 core update and the diversity update roll out. We also had Google say they will probably pre-announce these updates in the future.

Google will change how they process noindex and other directives in the robots.txt file. Google also submitted robots.txt to be a real web standard and open sourced its Googlebot parser for robots.txt so others can use it. Google said they may crack down on leased subdomains on third party sites. Google stopped supporting social profile markup for knowledge panels.

Google Search Console got mobile-first indexing tools and dropped the preferred domain setting this month. But with that, Google may be done with removing features from Google Search Console - I hope. Google is sending out top query change alerts via Google Search Console. They added copy and search code to those tools and expanded the overview reports to 90 days. Google also launched their new icon-based search bar earlier this month. Oh, and Google Maps got some bad press this month and launched some Google My Business features.

It was a busy month to say the least and honestly last month was busy as well. The ongoing WebmasterWorld chatter is pretty slow right now, mostly July 4th traffic related.

Here are the stories you might want to review or scan for the month if you were on vacation and didn't read this site daily:


Google Maps Launches Place Topics

Google has launched what they are calling place topics. Place topics use data based on reviews to provide concise information on what customers highlight about your business. In short, it summarizes the reviews left by your customers into tags.

Here is a screenshot of what it looks like at the top of the reviews section; it comes from Joy Hawkins, who posted it in the Local Search Forums:

Potential customers can use this to see at a glance the main themes of your business. Since Place Topics are based on customer reviews, business owners don't need to manage this feature; it is all automated. Of course, if your Place Topics say something you do not like, that can be an issue.

All businesses with a "sufficient amount of quality reviews" can benefit from Place Topics, Google said. Finally, if your business doesn't have Place Topics, Google can't create them on demand.

Forum discussion at Local Search Forums.


Google My Business Tests New Design For Console

Some folks, when they log in to their Google My Business account and click on a business to manage, will see a more visual, image-friendly interface. I am not seeing it myself, but I am seeing numerous reports of the new design.

Here is a screenshot from Andy Simpson at the Local Search Forums:

Frank Sandtmann also sent me a screen shot from Germany - where he is able to see this as well. Are you able to see this new interface?

Forum discussion at Local Search Forums.


SEO Mythbusting Video On The Future Of The Web (Web Evolution)

Martin Splitt had a chat with his colleague at Google, Dion Almaer, the Director of Web Development Ecosystem, about SEO and the future of the web. It was an interesting high-level chat covering PWA/desktop site/AMP, tools, web components and virtual scroller compatibility, and assistant devices. Oh, and we finally found out that it is iced tea in that pitcher.

Here is the video:

Here is what was covered:

- The same content in multiple versions, PWA/desktop site/AMP/etc. vs. Google Search and SEO (2:19)
- Current & future web & SEO tool integration (Google & third parties) (4:50)
- The 'unknown' variables & search performance (7:32)
- Are web components and virtual scroller compatible with SEO? (10:06)
- The future of assistants, and semantic & structured data (12:38)

Forum discussion at Twitter.


Google Probably Won't Be Turning Off Any Other Search Console Features

Google's John Mueller hinted this morning that no additional legacy Google Search Console features will be turned off. He said, "If we thought it wasn't important, we would have turned it off already." That kind of implies that the remaining features in the old Search Console will somehow, in some way, be migrated to the new Search Console.

He said this on Twitter:

If it's in the old SC, you can just use it there for now -- it doesn't really matter what will be in the future. If we thought it wasn't important, we would have turned it off already :)

'" ð John ð (@JohnMu) July 5, 2019

I mean, Google turned off the preferred domain setting just a couple of weeks ago, so I am not sure the logic in that statement holds. But John is basically saying the remaining features are safe and not going away. Or am I reading too much into this statement?

Forum discussion at Twitter.


Google Gifts Americans Working On July 4th In Zurich US Flag Balloons


How to Make a Technical SEO Recommendation - Whiteboard Friday

Posted by BenjaminEstes

After you've put in the work with technical SEO and made your discoveries, there's one thing left to do: present your findings to the client and agree on next steps. And like many things in our industry, that's easier said than done. In this week's episode of Whiteboard Friday, Benjamin Estes from Distilled presents his framework for making technical recommendations to clients and stakeholders to best position you for success.


Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hi. My name is Ben. I'm a principal consultant at a company called Distilled. Welcome to Whiteboard Friday. Today I'd like to talk to you about something a bit different than most Whiteboard Fridays.


What You Have to Know in 2019 to Identify Your Target Market Online

One of the most difficult, and most important, aspects of marketing is learning about your audience. Once you complete a site you always want to use statistics tools like "Google Analytics" or "Awstats" to be sure you understand the trends of the people visiting your site. F...

The Top 3 Things You Need to Know About Search Engines

Search engines are very complex at the root, but very rudimentary by nature. On March 13, 2009, the World Wide Web celebrated only 20 young years of existence, so it is considered to be a relatively new field. Search engines are designed using complex algorith...

How to Talk to Your Clients in a Language They Understand

Posted by Lindsay_Halsey

A few years ago, while enjoying a day of skiing at Aspen Highlands with a group of girlfriends, a skier crashed into me from above, out of nowhere. He was a professional skier traveling at an exceptionally fast speed, and I felt lucky to get away with a mere leg injury. I couldn’t put weight on my leg, though, so I went to the local emergency room.

After a few hours of various doctors and nurses running scans to diagnose the issue, a new doctor whom I'd never met walked into the room. The first words out of his mouth were, "You have a radial tear in your medial meniscus." I had no idea what he was talking about. He continued speaking in words better suited for a medical peer than a patient.

I wasn’t at all interested in medical-speak. I was a new mom, anxious to return to my family. I wanted to know for how long and to what extent this injury would impact us, and how active I could be at home while caring for our son.

I didn’t get the answers to any of those questions. Instead, my doctor left me feeling overwhelmed, lost, and frustrated.


Google To Drop Any Support For crawl-delay, nofollow, and noindex in robots.txt

Google posted this morning that they are going to stop unofficially supporting the noindex, nofollow, and crawl-delay directives within robots.txt files. Google has actually been saying not to do this for years and hinted this change was coming soon, and now it is here.

Google wrote "While open-sourcing our parser library, we analyzed the usage of robots.txt rules. In particular, we focused on rules unsupported by the internet draft, such as crawl-delay, nofollow, and noindex. Since these rules were never documented by Google, naturally, their usage in relation to Googlebot is very low. Digging further, we saw their usage was contradicted by other rules in all but 0.001% of all robots.txt files on the internet. These mistakes hurt websites' presence in Google's search results in ways we don't think webmasters intended."

In short, if you mention crawl-delay, nofollow, or noindex in your robots.txt file, Google will stop honoring them on September 1, 2019. They currently do honor some of those implementations, even though they are "unsupported and unpublished rules," but will stop doing so on that date.
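If you want to check whether a site still relies on these rules before the deadline, a quick scan of its robots.txt is enough. Here is a minimal Python sketch of such a check; the URL is a placeholder and the rule-name matching is deliberately naive:

```python
import urllib.request

# Rules Google will stop honoring in robots.txt on September 1, 2019.
UNSUPPORTED = ("noindex", "nofollow", "crawl-delay")

def find_unsupported_rules(robots_url):
    """Return (line_number, line) pairs that use the retired rules."""
    with urllib.request.urlopen(robots_url) as response:
        body = response.read().decode("utf-8", errors="replace")
    hits = []
    for number, line in enumerate(body.splitlines(), start=1):
        # A rule's name is everything before the first colon.
        name = line.split(":", 1)[0].strip().lower()
        if name in UNSUPPORTED:
            hits.append((number, line.strip()))
    return hits

# Placeholder URL -- point this at your own robots.txt.
for number, line in find_unsupported_rules("https://example.com/robots.txt"):
    print(f"line {number}: {line}")
```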

Google may send out notifications via Google Search Console if you are using these unsupported commands in your robots.txt files.

That sounds like a good idea. Are you reading our email?
/turns around slowly to scan the room


Google Image Search Tests Sticky Image Preview Box

Google is testing, I believe, another image search design, this time making the image preview box sticky in the right side panel.

Google has been testing many variations of these image preview boxes in search for some time but this one is bolder because it sticks.

This GIF of it in action was shared with me by SEOwner on Twitter, who said, "New @Google image loading split test is pretty cool. Not sure if seen before. At first glance, the right-alignment was weird, but I'm already used to it. Loading speed is 2x as fast because no waiting for down-scrolling."

Here is a GIF of it in action:

I personally do not see this; I see the old, boring user interface in Google Image Search.


List Of All The GoogleBot Robots.txt Specifications Changes

With Google aiming to make the robots.txt exclusion protocol a standard, they proposed some changes and submitted them the other day. Now, Google updated their own developer docs around the robots.txt specification to match. Here is a list of what has changed.

Removed the "Requirements Language" section in this document because the language is Internet draft specific.Robots.txt now accepts all URI-based protocols.Google follows at least five redirect hops. Since there were no rules fetched yet, the redirects are followed for at least five hops and if no robots.txt is found, Google treats it as a 404 for the robots.txt. Handling of logical redirects for the robots.txt file based on HTML content that returns 2xx (frames, JavaScript, or meta refresh-type redirects) is discouraged and the content of the first page is used for finding applicable rules.For 5xx, if the robots.txt is unreachable for more than 30 days, the last cached copy of the robots.txt is used, or if unavailable, Google assumes that there are no crawl restrictions.Google treats unsuccessful requests or incomplete data as a server error."Records" are now called "lines" or "rules", as appropriate.Google doesn't support the handling of elements with simple errors or typos (for example, "useragent" instead of "user-agent").Google currently enforces a size limit of 500 kibibytes (KiB), and ignores content after that limit.Updated formal syntax to be valid Augmented Backus-Naur Form (ABNF) per RFC5234 and to cover for UTF-8 characters in the robots.txt.Updated the definition of "groups" to make it shorter and more to the point. Added an example for an empty group.Removed references to the deprecated Ajax Crawling Scheme.

The big changes are: (1) GoogleBot follows five redirect hops (which we knew in 2014), (2) there are no crawl restrictions if the file is unavailable for more than 30 days, (3) unsuccessful requests are treated as server errors, (4) there is a 500 KiB size limit, and (5) it supports URI-based protocols.
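To make that fetch behavior concrete, here is a rough Python sketch of those rules. It leans on the third-party requests library, simplifies the caching story, and uses a placeholder URL; it is an illustration of the documented behavior, not Google's implementation:

```python
import requests  # third-party: pip install requests

MAX_HOPS = 5                # the spec says at least five redirect hops
SIZE_LIMIT = 500 * 1024     # 500 KiB; content past this limit is ignored

def fetch_robots_txt(url):
    """Fetch a robots.txt roughly the way the updated spec describes."""
    for _ in range(MAX_HOPS):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        if resp.status_code in (301, 302, 303, 307, 308):
            url = resp.headers["Location"]  # assumes an absolute Location
            continue
        if resp.status_code >= 500:
            # 5xx: a real crawler falls back to its cached copy, and assumes
            # no crawl restrictions if unreachable for more than 30 days.
            raise IOError("server error; use cached copy if available")
        if resp.status_code >= 400:
            # 4xx (including 404): treated as if there is no robots.txt,
            # so there are no crawl restrictions.
            return ""
        return resp.content[:SIZE_LIMIT].decode("utf-8", errors="replace")
    # No rules fetched after five hops: treated like a 404 for robots.txt.
    return ""

rules = fetch_robots_txt("https://example.com/robots.txt")  # placeholder URL
```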

Updated Google's Robots.txt spec to match REP draft

- Follows 5 redirect hops
- No crawl restrictions if unavailable >30 days
- Unsuccessful requests = server error
- 500 KiB size limit
- Supports URI-based protocols

Full list of changes: https://t.co/GXd6FWt2D0 #robotstxt25

— Lizzi Harvey (@LizziHarvey) July 1, 2019

Here are some additional answers:

Correct. If there's none in the cache, then full allow is assumed


Google Shares Its Robots.txt Parser Code With Open Source World

Google announced yesterday, as part of its effort to standardize the robots exclusion protocol, that it is open sourcing its robots.txt parser. That means how GoogleBot reads and listens to robots.txt files will be available for any crawler or coder to look at or use.

It is rare for Google to share anything they do in core search with the open source world - it is their secret sauce - but here Google has published it to GitHub for all to access.

Google wrote they "open sourced the C++ library that our production systems use for parsing and matching rules in robots.txt files. This library has been around for 20 years and it contains pieces of code that were written in the 90's. Since then, the library evolved; we learned a lot about how webmasters write robots.txt files and corner cases that we had to cover for, and added what we learned over the years also to the internet draft when it made sense."
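The open-sourced library itself is C++ (published at github.com/google/robotstxt). For readers who want to experiment with the same parse-then-match workflow without building C++, Python's standard-library robotparser is a rough stand-in; a small sketch:

```python
from urllib import robotparser

# A robots.txt body parsed from a string, mirroring the workflow of
# passing a robots body plus a user agent and URL to a matcher.
ROBOTS_BODY = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_BODY.splitlines())

print(parser.can_fetch("Googlebot", "https://example.com/private/page"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/public/page"))   # True
# Python's parser honors crawl-delay; Googlebot does not.
print(parser.crawl_delay("Googlebot"))  # 10
```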

It's been awesome working with @methode and https://t.co/CPJfDQnxn1 on this. I am very happy that it is finally ready to be shared with you all! https://t.co/gyxvzrFLtp

— Edu Pereda (@epere4) July 1, 2019

If you have SERIOUS ideas about what else could be useful as OSS, leave a comment with the idea and an explanation how would you use that OSS https://t.co/cxxqhI9Nzo


Search Google For Fireworks & Get A Fireworks Show

Go to Google.com and search for [fireworks] on desktop or mobile and you will see Google light up the search results page with a fireworks display.

Here is a video of it in action:

Thanks to Epic Fireworks for the hat tip on Twitter about this. Here is a GIF of it in action also:

Forum discussion at Twitter.


Robots.txt Birthday Cake With Martijn Koster At Google


How to Set up a Well-Integrated Effective Link Building Campaign

Posted by AnnSmarty

Link building remains one of the most effective digital marketing tactics, and not just for higher rankings (even though links do still remain the major organic ranking factor). Links drive referral clicks and generate leads, making your site less dependent on search and advertising traffic.

But how do you build links these days, now that most self-serving link acquisition tactics are frowned upon by Google and can result in lost search visibility?

Here's what we know for sure:

- Link building cannot be scaled.
- Link building is not easy or fast.

A new approach to link building integrates all kinds of marketing assets and processes including content marketing, relationship building, and influencer outreach.


A note on unsupported rules in robots.txt

Yesterday we announced that we're open-sourcing Google's production robots.txt parser. It was an exciting moment that paves the road for potential Search open sourcing projects in the future! Feedback is helpful, and we're eagerly collecting questions from developers and webmasters alike. One question stood out, which we'll address in this post:
Why isn't a code handler for other rules like crawl-delay included in the code?
The internet draft we published yesterday provides an extensible architecture for rules that are not part of the standard. This means that if a crawler wanted to support their own line like "unicorns: allowed", they could. To demonstrate how this would look in a parser, we included a very common line, sitemap, in our open-source robots.txt parser.
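To illustrate what such an extensible parser might look like, here is a hedged Python sketch of a tolerant line parser that keeps non-standard rules like "unicorns" around so a crawler that chooses to support them can act on them. The function and names are illustrative, not Google's code:

```python
def parse_robots_lines(body):
    """Split a robots.txt body into (key, value) rules, keeping
    non-standard keys such as "unicorns" for crawlers that want them."""
    rules = []
    for raw in body.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and blanks
        if not line or ":" not in line:
            continue
        key, value = line.split(":", 1)
        rules.append((key.strip().lower(), value.strip()))
    return rules

body = """\
User-agent: *
Disallow: /private/
Unicorns: allowed
Sitemap: https://example.com/sitemap.xml
"""
for key, value in parse_robots_lines(body):
    print(key, "->", value)
```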
While open-sourcing our parser library, we analyzed the usage of robots.txt rules. In particular, we focused on rules unsupported by the internet draft, such as crawl-delay, nofollow, and noindex. Since these rules were never documented by Google, naturally, their usage in relation to Googlebot is very low. Digging further, we saw their usage was contradicted by other rules in all but 0.001% of all robots.txt files on the internet. These mistakes hurt websites' presence in Google's search results in ways we don’t think webmasters intended.
In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we're retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019. For those of you who relied on the noindex indexing directive in the robots.txt file, which controls crawling, there are a number of alternative options:
- Noindex in robots meta tags: Supported both in the HTTP response headers and in HTML, the noindex directive is the most effective way to remove URLs from the index when crawling is allowed.
- 404 and 410 HTTP status codes: Both status codes mean that the page does not exist, which will drop such URLs from Google's index once they're crawled and processed.
- Password protection: Unless markup is used to indicate subscription or paywalled content, hiding a page behind a login will generally remove it from Google's index.
- Disallow in robots.txt: Search engines can only index pages that they know about, so blocking the page from being crawled usually means its content won't be indexed. While the search engine may also index a URL based on links from other pages, without seeing the content itself, we aim to make such pages less visible in the future.
- Search Console Remove URL tool: The tool is a quick and easy method to remove a URL temporarily from Google's search results.

For more guidance about how to remove information from Google's search results, visit our Help Center. If you have questions, you can find us on Twitter and in our Webmaster Community, both offline and online.
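As a quick way to spot-check the first alternative on a live page, here is a small Python sketch that looks for noindex in the X-Robots-Tag response header and, via a naive regex, in a robots meta tag. The URL is a placeholder, and real HTML parsing would be more robust than a regex:

```python
import re
import urllib.request

def noindex_signals(url):
    """Check a URL for noindex in the X-Robots-Tag response header and,
    naively, in an HTML robots meta tag."""
    with urllib.request.urlopen(url) as response:
        header = response.headers.get("X-Robots-Tag", "")
        html = response.read().decode("utf-8", errors="replace")
    in_header = "noindex" in header.lower()
    # Naive regex; assumes name="robots" appears before content="...".
    match = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html,
        re.IGNORECASE,
    )
    in_meta = bool(match and "noindex" in match.group(1).lower())
    return in_header, in_meta

header_flag, meta_flag = noindex_signals("https://example.com/page")  # placeholder
print("X-Robots-Tag noindex:", header_flag)
print("meta robots noindex:", meta_flag)
```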

Posted by Gary

Daily Search Forum Recap: July 1, 2019

Here is a recap of what happened in the search forums today, through the eyes of the Search Engine Roundtable and other search forums on the web.

Search Engine Roundtable Stories:

- Will Google Crack Down On Sites Leasing Out Its Subdomains
  Google's John Mueller addressed the topic of sites leasing or renting out subdomains off of their main domain and letting third parties put their content on those subdomains in an effort to rank better in Google and sell more. John didn't call it spam but did say the search leads at Google are aware of this and have been in discussions about how to handle such efforts.
- Google Says Sometimes Old Posts Are Still Relevant, But Not In That Example
  Google's John Mueller said something very true on Twitter: "Sometimes even posts from 2014 are relevant." "The date is not the only ranking factor," he added. But the issue is, the example given is really not the case and that article is really not all that relevant today.
- Google Works To Make Robots Exclusion Protocol A Real Standard
  Google's webmaster channel is on a series of posts every hour around the Robots Exclusion Protocol - in short, an hour ago, Google announced that after 25 years of being a de facto standard, Google has worked with Martijn Koster, webmasters, and other search engines to make the Robots Exclusion Protocol an official standard.
- Vlog #1: Ryan Clutter of Horizon Media on Google Core Updates & More SEO
  In our first ever vlog as part of the Search Engine Roundtable vlog series, I interviewed Ryan Clutter (@RyanClutt). Ryan is the Senior SEO Analyst at Horizon Media, a full-service marketing agency with estimated billings of $8 billion and over 2,000 employees.
- Google Images Makes It Easier To Share GIFs
  You all know I love GIFs and now Google is making it easier than ever to share GIFs that you find in Google Image search. I personally cannot replicate the feature on my iPhone on any browser but Google said it was here.
- Google: It May Be Worth Looking Into Compressing HTML & CSS
  Google's John Mueller confirmed on Twitter that it may be worth looking into compressing your HTML and CSS. He said "Sometimes minifying HTML & CSS can reduce the size of the files, so it can certainly be worth looking into that," when he was asked if it can help with SEO or rankings. (A toy minification example follows this list.)
- Google NYC Rooftop Connect Four?
  I am not 100% sure but this photo is supposedly from one of the rooftops or balconies at the Google NYC office and there is a massive Connect Four game. I found this photo on Instagram of this person pl...

Other Great Search Forum Threads:

- Simplifying portfolio bid strategies, Google Ads Help
- "Google traffic to https://t.co/O1o5u2UBiz has plummeted by about 99% over the past few weeks." -> Google buries Mercola in their latest search engine update, Part 1 of 2 (From @mercola himself) https://t.co/EbWFeK0IAM..., Glenn Gabe on Twitter
- Dan Shure on Twitter: "About the @mercola 'traffic drop' - I don't think the drop is as severe as 3rd party tools make it seem. Also, there are so many issues with the site, where to begin: - super distracting & low-quality supplementary content - CTA's th..., mobile.twitter.com
- Glenn Gabe on Twitter: "Newsguard, which I've written about before, now has ratings for some health/medical sites. As you can guess, it's not pretty for https://t.co/O1o5u2UBiz. 'Proceed with caution: This website severely violates basic standards of cred..., twitter.com
- Lily Ray on Twitter: "'Now, when you Google a medical term such as "heart disease" or "Type 2 diabetes," you will not find https://t.co/IC1NdTPmIx articles in the search results.' Hmm, maybe because https://t.co/IC1NdTPmIx has been routinely investigated..., twitter.com
- Update: GMB Profile Logos Are Here on Mobile. I am seeing this for myself and for clients ranging across multiple categories. Upload your 250x250 logo today! #localseo #locasearch... https://t.co/oJASWeYLbb, Ben Fisher on Twitter
- We're excited to share some new features rolling out in the coming months: parallel tracking, the final URL suffix, and an expansion of the number of available custom parameters. Get the full details here: https://t.co/..., Microsoft Advertising on Twitter
- This is interesting: - The mobile carousel has image results. - That aren't top image search results. - That look like they're part of the below snippet. - They're not from the page that the below snippet is from., Ric Rodriguez on Twitter
- Creating content that's not keyword-based?, WebmasterWorld
- Not entirely sure what to call these Google Shopping Panels, but they're looking more and more like standard Knowledge Panels lately. I'm now seeing shareable links (same as KPs) and content without clear attribution. See commen..., Brodie Clark on Twitter
- That's fine -- the image sitemap should match what the pages themselves include. Also, think about how users might be searching visually for your content (make an image search strategy!), rather than just "listing" jpg on the s..., John Mueller on Twitter

Search Engine Land Stories:

- Lessons from a search marketer of the year
- Today's customer decision journey is so complex but AI can help
- SMX Replay: How to use data storytelling to earn top-tier media coverage
- What do you do if Google My Business doesn't understand your business?

Other Great Search Stories:

Analytics

- A Perfect Pair: Analytics 360 & Optimize 360, Seer Interactive
- Google Analytics: Getting The Basics Right For Your Business, Contact Online Ltd.
- Six Principles for Designing Any Chart - Google Design, Medium

Industry & Business

- Government hackers reportedly broke into Russian search company Yandex, Engadget
- Ready for takeoff: Meet the Doodle for Google national finalists, Google Blog
- Responsible AI: Putting our principles into action, Google Blog
- After Europe, Google under fire for 'Android dominance' in India, The Next Web
- Microsoft's Bing, MSN Don't Infringe Web Server Patents, Bloomberg Law

Links & Promotion Building

- Everything You Need to Know About Internal Link Building, Search Engine Journal
- 20 Graphics on Google's Valuation of Links, Moz
- Google's John Mueller on Good Links and How to Get Them, Search Engine Journal

Local & Maps

