Since there’s an upcoming session at SMX Advanced titled “Buying Sites for SEO,” I figured the timing is finally right to hit the publish button on a post I wrote long ago about finding quality sites to purchase for SEO benefits.
The first time I published this was about 2 1/2 years ago, and I was quickly asked by a well-known SEO expert and friend (I won’t name any names, but he’s on the panel at the “Buying Sites” SMX session) to remove the post and wait until buying old virgin sites was common knowledge.
I completely understood and agreed with his logic: while he has an abundance mentality about most things, some really effective but not yet widely adopted tactical SEO should remain private. It’s one major reason why posting here has always been so light; some things are better left unsaid until a later date.
So without further delay, here’s the entire article about my quest for the perfect virgin site finding tool.
With Google’s increased emphasis on website age as a factor in earning top rankings, it’s more important than ever to find links from old, established, authoritative websites in your industry. This is especially true for new websites, where building up your TrustRank (while still looking natural) is crucial in escaping or dodging the sandbox. One solution is to purchase well-aged, authoritative websites and/or negotiate advertising space with trusted website owners. So how do you find old websites to purchase outright or make advertising deals with?
At first I really didn’t pay a whole lot of attention to the old website buying game, but I had read of others who had made it their new pastime. Since I actually have some free time these days to sit and develop new businesses (I recently stepped down as VP of Marketing for one of my companies and transitioned over to a strictly consultancy arrangement), it finally occurred to me that there’s some money to be made selling advertising on old websites as well as using them for my own benefit. Yes, I’m slow to catch on to new trends at times, but once I do I give it 110%.
I had dinner with my programmer yesterday, whom I grew up with and have known for over 20 years. We began talking about his current projects and what he’s been up to, which is pretty impressive and involves a major airline company. After a few drinks we started talking about online marketing and all of the new opportunities I’ve recently opened myself up to, as well as the possibility of him working with me full-time. Normally when I talk about SEO and marketing his eyes are quick to glaze over, but for some reason this time was different.
I told him about an immediate need I have for a tool that could find old, non-SEO’d, “virgin” websites. Some immediate requirements I had for him were:
The logic with the last requirement is that if the site is ugly and/or outdated, the owner probably hasn’t been successful with it and will part with it for very little money. The ability to use keywords to find sites within a particular industry was a feature that we also discussed.
So, we moved on to covering the APIs involved, how they would interact, what technologies to use, etc. Though I’m not a programmer, I do have an understanding of programming from my previous experience as a software tester. The tough part actually came when he asked me, “What criteria would you use to find old, poorly-designed websites?” Uhhhh…errrr…that’s a good question. What are the greatest common denominators of poor and/or outdated site designs that we could look for to surface the largest number of results?
My first thought was to search the source of the page for ancient HTML editor footprints, like HotDog or FrontPage (checking the date or version string, e.g. FrontPage 2000), but that seems like a very narrow selection. I would miss the sites created by those who coded HTML by hand in Notepad or vi, as I did when starting out.
Animated GIFs? Nah. Same problem, too narrow. I also thought about searching the body text and footer for “Copyright (year)” and “Last Updated (year)”, but not all old sites included copyright info or last-updated dates. What about hr tags (in the body, not in a stylesheet) and other obvious old-school HTML footprints? Remembering that I had read Google released a report of web authoring statistics, I gave it a quick scan for ideas but didn’t come up with anything substantial.
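To make the brainstorm above concrete, here’s a rough sketch of a footprint-scoring function. The editor strings, year cutoffs, and weights are illustrative guesses of mine, not a tuned or exhaustive list — any real tool would need to refine them against actual crawl data:

```python
import re

# Generator/editor strings that late-90s tools left in page source.
# Illustrative examples only, not an exhaustive list.
OLD_EDITOR_STRINGS = [
    "Microsoft FrontPage",
    "HotDog",
    "Netscape Composer",
    "Claris Home Page",
]

def old_site_score(html: str) -> int:
    """Return a crude 'oldness' score for a page's raw HTML.
    Higher scores mean more old-school footprints were found."""
    score = 0
    lowered = html.lower()
    # 1. Editor footprints in the source (e.g. a FrontPage generator tag).
    for editor in OLD_EDITOR_STRINGS:
        if editor.lower() in lowered:
            score += 3
    # 2. "Copyright 1999" style notices with an old year.
    for _year in re.findall(r"copyright[^0-9]{0,20}(19\d{2}|200[0-4])", lowered):
        score += 2
    # "Last updated <old year>" statements.
    if re.search(r"last\s+updated[^0-9]{0,20}(19\d{2}|200[0-4])", lowered):
        score += 2
    # 3. Old-school presentational HTML: inline <hr>, <font>, <center>, bgcolor.
    for marker in ("<hr", "<font", "<center", "bgcolor="):
        score += lowered.count(marker)
    # 4. GIF references (narrow on its own, but cheap to check).
    score += lowered.count(".gif")
    return score
```

Run something like this over a batch of fetched pages and sort descending; no single footprint is reliable alone, but the combination narrows the haystack.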
Wouldn’t the tool also need to reference the Wayback Machine, to ensure that the site has actually been up for the entire time the domain has been registered (or close to it)? What about using archive.org to look at an old copy of DMOZ for old, trusted sites? I couldn’t go too deep into DMOZ and still have the Wayback Machine return results, but it was worth a try. Should a cache date check be included as well, so that we know the site is still being indexed on a regular basis? What about checking Alexa data, with a higher three-month average ranking being a positive indicator for our purposes?
As you can see, this “little tool” as I referred to it when we started the conversation had turned into a full-blown project. It should be fun, kind of like resurrecting forgotten websites and “recycling” the web.
My plan (after contacting the site owners and buying the sites, of course, which I’m sure will be difficult due to low response rates and undeliverable emails) is to change as little of the existing content and WHOIS info as possible, then slowly add new content to the sites to freshen them up.
Buying and improving old websites shouldn’t be viewed as a bad thing, even if you do plan on benefiting directly or indirectly from them down the road. As long as you’re not converting old sites to new spam and you put real effort into polishing the sites back up to the high-gloss, true-resource finish they once had, I don’t see a problem with it at all.
What are some other common footprints of old school, “virgin” websites designed years ago? Do you have any old website search tools or ideas that you’d like to share?