SEO for AngularJS Ajax Applications



Indexing dynamically loaded Ajax content & JSON data.


Create pre-rendered HTML snapshots

Detect search engine crawlers

Serve the snapshots to crawlers





Why must your website's pages be crawled and indexed by search engine crawlers?
The answer is simple: if your pages are not indexed by search engines, you are nowhere on the Internet; you effectively do not exist on the web, and it will be almost impossible for people to find your site.

Search engine crawler bots can easily crawl and index static HTML pages: all the content is available in a clear, readable format, and the bots can discover pages through plainly declared internal links.

The problem starts when a crawler sees only template placeholders (curly brackets) instead of readable content, because the content loads dynamically only after JavaScript executes and makes Ajax calls to the server.

The Google search engine can now execute the JavaScript on your page in order to load the content, but other search engines' crawlers may not be able to.


SEO for Ajax content & JSON data has never been this simple. Make AngularJS and other dynamic Ajax application content crawlable and indexable by Google and other search engines without modifying your server configuration, installing middleware, or proxying.

A better, easier solution for Ajax-based websites, built with pure JavaScript, that can be used with AngularJS and other JavaScript frameworks. It improves SEO for AngularJS websites that dynamically load pages with Ajax JSON data, and it is especially well suited to eCommerce websites.


Make Ajax content crawled and indexed by Google & Bing

Comparison: Ajax app renderers for AngularJS & other modern JavaScript frameworks

| Features | faceFore SEO Tools | Other rendering service providers |
|----------|--------------------|------------------------------------|
| Detects Google & Bing/Yahoo crawler bots? | Yes | Yes |
| Creates static HTML snapshots? | Yes | Yes |
| Serves HTML snapshots to crawlers? | Yes | Yes |
| Generates an XML sitemap? | Yes | No |
| Generates HTML hyperlinks for the created snapshots? | Yes | No |
| Can set a unique page title & description on the fly for each snapshot? | Yes | Unknown |
| Dependent on a 3rd-party server? | No, everything stays on your server | Yes, and they do not give you the full source code |
| Cost per month | US$ 0 | US$ 100 to 360+ |
| Cost per year | US$ 0 | US$ 1,000 to 2,400+ |
| Monthly page limit | Unlimited | 50,000 to 100,000 (extra US$ 0.50 to 1.50 per 1,000 pages over the limit) |
| Security risk | None: all the code runs on your server and you own the full script | You must give a 3rd party access to your Ajax calls so it can render your pages |
| Requires PhantomJS or other middleware on the server? | No | Yes, most of them |
| Requires .htaccess or other server config file modification? | No | Yes, most of them |
| Requires port monitoring & burdens server memory? | No, because no redirection is needed | Yes, because they proxy/redirect to run the script on their server |
| Will you lose PageRank? | No, everything stays on your server | Maybe, because the snapshot is served from another location |

Secrets of creating pre-rendered static HTML snapshots that the pre-rendering service providers don't want you to know!

Now the easiest and simplest solution to get Ajax content crawled and indexed by Googlebot or Bingbot.

An almost all-in-one solution for your Ajax-driven, dynamic-content website that makes it crawlable and gets it indexed by search engines.

It sets a unique page title & description on the fly for every page, generates an XML sitemap, generates an HTML hyperlink for every pre-rendered page, creates the pre-rendered HTML snapshots, and serves them to search engine bots.


Creates static HTML snapshots and caches them.

It serves the pre-rendered static HTML snapshot to the crawler bot whenever it detects the _escaped_fragment_ parameter in the query string.

It handles this automatically whether you are using hashbang URLs (#!) or HTML5 mode.
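As a rough sketch of how such a handler can work in PHP (the snapshots/ directory and the md5-based file naming are assumptions for illustration, not necessarily the included script's layout):

```php
<?php
// Sketch: serve a cached snapshot when the crawler requests an "ugly" URL.
// Under Google's Ajax crawling scheme, "example.com/#!/page" is fetched
// by the crawler as "example.com/?_escaped_fragment_=/page".
if (isset($_GET['_escaped_fragment_'])) {
    $fragment = $_GET['_escaped_fragment_'];           // e.g. "/products/42"
    // The snapshots/ directory and md5 naming are assumptions for this sketch.
    $file = __DIR__ . '/snapshots/' . md5($fragment) . '.html';

    if (is_file($file)) {
        header('Content-Type: text/html; charset=utf-8');
        readfile($file);                               // serve the static snapshot
        exit;
    }
    // No snapshot cached yet: fall through to the normal Ajax application.
}
```

For HTML5-mode URLs without the #!, the page opts in by including <meta name="fragment" content="!"> in its head, which tells the crawler to re-request the URL with the _escaped_fragment_ query parameter.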

You can pass the page title & description as function parameters, and it will set the title tag and the meta description tag of the pre-rendered static HTML page.
So every one of your pages can have a unique title and description, and search engine crawler bots will love it.
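For illustration, a helper along these lines could stamp the values into a snapshot (setPageMeta is a hypothetical name, not necessarily the script's actual API):

```php
<?php
// Sketch: stamp a unique <title> and meta description into a snapshot.
// setPageMeta() is a hypothetical helper name, for illustration only.
function setPageMeta($html, $title, $description)
{
    $title = htmlspecialchars($title, ENT_QUOTES);
    $description = htmlspecialchars($description, ENT_QUOTES);

    $html = preg_replace('#<title>.*?</title>#si',
                         '<title>' . $title . '</title>', $html);
    return preg_replace('#<meta\s+name="description"\s+content=".*?"\s*/?>#si',
                        '<meta name="description" content="' . $description . '">',
                        $html);
}

// Usage: give each snapshot its own title and description, e.g.
// $snapshot = setPageMeta($snapshot, 'Red Shoes | MyShop', 'Buy red shoes online.');
```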

"Remember that your app will need to produce an HTML snapshot whenever it gets a request for an ugly URL, that is, a URL containing a query parameter with the name _escaped_fragment_." (Google best practices)

Creates a fresh XML sitemap whenever you want

A sitemap is an essential part of SEO (Search Engine Optimization): it presents a roadmap of your site's content to search engines.

A sitemap is a file where you list the web pages of your site to tell Google and other search engines how your site content is organized. Search engine web crawlers like Googlebot or Bingbot read this file to crawl your website more intelligently.

"Submit a Sitemap using Google Webmaster Tools. Google uses your Sitemap to learn about the structure of your site and to increase our coverage of your webpages." (Google support & guidelines)
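A minimal sketch of such a generator in PHP (the $urls array is a placeholder; a real script would collect the URLs from your application's routes or database):

```php
<?php
// Sketch: write sitemap.xml from a list of page URLs.
// $urls is a placeholder; a real generator would collect these
// from your application's routes or database.
$urls = array('http://www.example.com/', 'http://www.example.com/#!/products');

$xml = new SimpleXMLElement(
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"/>'
);
foreach ($urls as $url) {
    $node = $xml->addChild('url');
    $node->addChild('loc', htmlspecialchars($url));   // escape &, ", etc.
    $node->addChild('lastmod', date('Y-m-d'));
}
file_put_contents(__DIR__ . '/sitemap.xml', $xml->asXML());
```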

Creates HTML sitemap hyperlinks that you can include in your website's footer

Matt Cutts from Google answers the question, "Which is better: an HTML site map or XML Sitemap?"

Internal links to the static HTML pages (snapshots), built with HTML anchors, expose those pages to search engines and spread link juice. They establish the site architecture, increase ranking power, and strengthen the overall SEO value of the website. In short, internal links to your site's static pages will boost your website's SEO.

"Make a site with a clear hierarchy and text links. Every page should be reachable from at least one static text link." (Google guidelines)
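Generating such a footer link list can be as simple as the following sketch ($pages and the CSS class name are placeholders):

```php
<?php
// Sketch: build a footer-ready list of plain <a href> links,
// one per pre-rendered page. $pages is a placeholder array.
$pages = array(
    '/#!/products' => 'Products',
    '/#!/about'    => 'About Us',
);

echo "<ul class=\"html-sitemap\">\n";
foreach ($pages as $url => $label) {
    printf("  <li><a href=\"%s\">%s</a></li>\n",
           htmlspecialchars($url, ENT_QUOTES),
           htmlspecialchars($label));
}
echo "</ul>\n";
```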

NO extra script library to install.

You will not need to install an extra JS framework/library or any other third-party middleware to create HTML snapshots. It is not a complex job; a simple PHP or other server-side script can handle it, as you will see when you look at the included script. The whole thing is simply over-hyped.
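One plausible way snapshots can be created without PhantomJS, sketched here purely as an assumption about the approach: the client-side JavaScript waits until the Ajax content has rendered, serializes the DOM, and POSTs it to a small server-side endpoint that caches it. That endpoint might look like this (save_snapshot.php and the field names are illustrative):

```php
<?php
// Sketch (save_snapshot.php): cache a snapshot posted by the page itself.
// Assumption about the approach: once the Ajax content has finished
// loading, the client-side script serializes the rendered DOM
// (document.documentElement.outerHTML) and POSTs it here.
// The endpoint name and field names are illustrative.
$fragment = isset($_POST['fragment']) ? $_POST['fragment'] : '';
$html     = isset($_POST['html'])     ? $_POST['html']     : '';

if ($fragment !== '' && $html !== '') {
    // Same md5 naming as the serving sketch above, so the two halves match up.
    $file = __DIR__ . '/snapshots/' . md5($fragment) . '.html';
    file_put_contents($file, $html);
}
```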

NO modification of server configuration files.

You will not need to modify any server configuration files to serve the static HTML snapshots to search engine crawler bots. The included snippet detects the bot and serves the required HTML snapshot automatically.
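Detection itself can be a simple user-agent check, for example (the token list is illustrative; a production script would keep a fuller list):

```php
<?php
// Sketch: detect common search engine crawlers by user-agent token.
// The token list is illustrative, not exhaustive.
function isSearchBot()
{
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    $bots = array('Googlebot', 'bingbot', 'Slurp', 'DuckDuckBot', 'Baiduspider');
    foreach ($bots as $bot) {
        if (stripos($ua, $bot) !== false) {
            return true;
        }
    }
    return false;
}

// Usage: if (isSearchBot()) { /* serve the cached snapshot and exit */ }
```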

NO proxy calls to outside servers

You do not need to redirect or proxy to our server or any outside server; everything happens on your own server. There is no need to waste server time and bandwidth: some middleware uses a lot of memory and ends up crashing. No need to risk your data and content.

NO recurring monthly or annual payments.

We do not keep it secret from you on our server. With a single one-time payment you get the full code. There are no page limits on pre-rendering static HTML pages and no usage boundaries. It is all yours; use it in as many of your projects as you like.