Verify your domain and increase your revenue now.

Dear Publishers,

Recently we have been getting traffic from unregistered domains, which makes it difficult for us to monetize the traffic properly, so we need to verify each publisher's domain individually as soon as possible.

To do this, upload the HTML file (see the download link below) to the root directory of your domain. After uploading it, open the file in your browser at www.yourdomain.com/ritsverify.html (replace www.yourdomain.com with your own website URL); you should see "RITSPIX" displayed in your browser.
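For reference, the verification page is likely nothing more than a bare HTML file whose body contains the word "RITSPIX" – a minimal sketch might look like the following, though you should upload the actual file from the download link, not a hand-made copy:

<!-- hypothetical sketch of ritsverify.html; use the downloaded file instead -->
<html>
<body>RITSPIX</body>
</html>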

You will see an increase in your revenue after completing the process. 🙂

Please email us at [email protected] as soon as you complete the process.

Click here to download the zip file. [Please unzip it before uploading the file.]

Thanks,

RITS Ads Optimization Team

 

5 easy steps to increase your revenue.


Dear Publishers, here are 5 magical steps to increase your revenue from RITS Ads:

1. Use at most 3–4 ads on a single page.
2. Get traffic from English-speaking countries.
3. Publish fresh content.
4. Place your ad tag near your content, or put it somewhere that is always visible, such as a sticky ad (see the sketch after this list).
5. Do not use the same ad size more than twice on a page.
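For tip 4, a sticky placement can be as simple as a CSS position: sticky wrapper around your ad tag – a minimal sketch, with the offset and markup chosen purely for illustration:

<!-- hypothetical sticky ad wrapper; the offset and structure are examples only -->
<div style="position: sticky; top: 0;">
  <!-- paste your RITS Ads ad tag here -->
</div>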

Enjoy your earnings from RITS Ads.

They Made the Ad for a Baby Stroller, but They Sell Something Else…


Does your baby need a stroller that comes with 4-by-4 suspension, a sandbox, a holographic projector and a bubble machine?

In our tech-crazed society, where everyone grows up to be an astronaut or coder, such a contraption—dubbed the “Latot” in ads from The Martin Agency—seems like a quasi-reasonable proposition. (WeeBabe would sell out of them in 15 minutes.) Alas, it isn’t real. It’s the central gimmick in an amusing parody campaign for an entirely different type of consumer service, one that has nothing to do with babies.

In a somewhat risky move, the identity of the actual client is kept hidden until viewers visit LatotStroller.com (the YouTube channel’s name is a giveaway, though). Even on the landing page, it takes a few seconds before the “Stroll Into Greatness” line and tyke testimonial—”I’m no genius. I’m a baby. But this thing is genius”—make way for the reveal.

Ultimately, visitors are counseled not to be “oversold” but to focus on “everything you need and nothing you don’t” as they make their buying decision—on wireless service, as it turns out. I credit Martin for sustaining the metaphor in fun fashion across diverse channels. It’s certainly succeeded at creating desire, though not so much for the actual product (which, if I understand correctly, doesn’t come with a sandbox or bubble machine).

CREDITS
Client: Total Wireless
Creative: The Martin Agency
Social: Weber Shandwick
Landing Page: PJA

Source: http://www.adweek.com/

Amazon Has Started CPM Ads

Amazon has started serving display ads in the form of CPM ads. It is an invite-only program for members of its online affiliate program, Amazon Associates. Select members of the program have recently been invited to test the new advertising option, which will feature display ads from Amazon as well as from other “high-quality” advertisers.

The program is currently in a beta testing period. Did you see Amazon ads on RITS Ads? If yes, send us a screenshot and WIN a 7-inch tab.*

 

Thrive In The Programmatic Marketplace: Focus On Buyers’ Best Interests

Advertisers, which are traditionally more adept at nimble, fast-paced changes than publishers, are often the driving force behind new trends in the rapidly evolving programmatic advertising industry.

Industry trends can be categorized into two broad buckets: Slow-and-steady trends allow publishers to be more strategic in their approach, providing ample time to consider the resources needed to adopt new methods. Native advertising and private marketplaces are two examples. Fast-and-nimble trends, such as brand safety and viewability, are usually driven by pressure from the buy-side and have a significant impact on monetization.

Native Advertising: Worth it when done right

Large publishers can attract enough advertisers to invest in building custom creative, but smaller publishers must wait until there is a native ad standard that can be extended to run across hundreds, if not thousands, of publishers.

Similar to how the IAB solved early challenges in digital display advertising by creating standard banner ad sizes, native ads will need standardized components that can be assembled in real time to match the publisher’s content. As brands and advertisers continue to test the native-ad waters, there’s still time to consider how to successfully implement native.
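To make “standardized components” concrete, think of a native ad as structured data the publisher renders in its own template, rather than finished creative. A minimal sketch – the field names below are purely illustrative, not an actual IAB standard:

// Hypothetical native ad payload; the field names are illustrative only.
var nativeAd = {
  title: "Headline supplied by the advertiser",
  image: "http://cdn.example.com/product.jpg",
  body: "Description text the publisher renders in its own fonts and layout",
  cta: "Learn More",
  clickUrl: "http://advertiser.example.com/landing"
};
// The publisher's template decides how these components are styled, so the
// same payload can be assembled in real time to match any site's content.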

Recommendation: Don’t be too quick to replace traditional ad units with native, especially if you don’t have robust technology in place. Rather, start with a test placement sold via your direct sales team and learn from it. Next, select another good placement and test in a programmatic environment. This iterative approach will allow you to learn what works best for your systems, sales teams and advertisers, and will let you know when it’s time to scale your native ad inventory.

Private Marketplaces: Make decisions based on inventory value

Publishers are excited when a brand or agency shows interest in setting up a private marketplace (PMP), as the model promises great opportunities for buyers and sellers. Buyers gain access to exclusive inventory or data, while publishers gain access to premium budgets with fewer brand-safety worries. So why are PMPs in the slow-and-steady category? There are too many PMP deals where publishers would have been better off, yield-wise, selling in a more competitive environment.

Recommendation: With every PMP opportunity, do the math to determine if the deal is going to deliver better yields than selling it in an open marketplace. Understand if you’re gaining access to exclusive budgets, rather than budgets accessible in the open auction. The deal on the table should be worth the tradeoff of providing an advertiser with exclusive access to your best inventory.
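As a minimal sketch of that math, with every number below invented for illustration:

// Hypothetical comparison of a PMP offer against open-auction performance.
var pmpCpm = 4.50;            // CPM the PMP buyer is offering
var openAuctionCpm = 3.80;    // what the same inventory clears for in the open market
var budgetIsExclusive = true; // is this budget unreachable through the open auction?

if (pmpCpm > openAuctionCpm || budgetIsExclusive) {
  // the deal adds yield, or unlocks budget you couldn't otherwise access
} else {
  // you would likely earn more letting the open auction compete for these impressions
}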

Viewability: More viewable, more valuable

Advertisers are now demanding 70% to 100% viewability on the inventory they purchase, though in their exchange deals viewability rates are typically much lower, ranging from 30% to 50%. Viewability is problematic for publishers keen to maintain strong yields across their inventory. With all the fear surrounding non-viewable ads, viewability is a fast-and-nimble trend.

Recommendation: Partner with a vendor that can help you understand your viewability rate, and price your inventory accordingly. It’s not necessary for publishers to accept a vCPM model and learn to live with diminished yields.

Understanding your viewability rate makes it easier to identify whether low viewability should be attributed to your entire page or just to a collection of poor ad units. While some ad units on your site may measure consistently high, lumping all inventory together could cause a significant drop in measured viewability.

Consider removing low-viewability ad units from your site entirely over time; those units provide little value to advertisers. Understand the nuances in order to defend prices at the negotiating table. Pricing inventory appropriately is also key: if buyers are able to cherry-pick the best inventory, it should come at a premium rate. Raise and lower your floor prices based on viewability, and consider reserving some inventory for PMP deals.
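A rough sketch of what viewability-based floor pricing might look like – all thresholds and prices here are made up for illustration:

// Hypothetical floor prices keyed to a unit's measured viewability rate.
function floorPriceFor(viewabilityRate) {
  if (viewabilityRate >= 0.7) return 3.00; // premium, cherry-pickable inventory
  if (viewabilityRate >= 0.5) return 1.50; // average inventory
  return 0.50; // low-viewability units: candidates for removal over time
}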

Brand Safety: If it’s not 100% brand safe, it’s not going to work

Stories with shocking content are gaining traction in the publishing world, since they tend to bring a lot of visitors to a site (and generate more ad impressions). But this strategy is risky: viral content may conflict with your brand promise. Worse, advertisers may not appreciate their ads appearing alongside such clickbait, which could easily land you on an advertiser’s block list.

Recommendation: Don’t let short-term benefits of attracting more consumers outweigh long-term growth. Don’t lower your content standards for quick-hitting revenue. As a publisher, your content strategy should focus on creating value for the advertisers because that will return the most value to you in the long run.

While it may seem counterintuitive to prioritize advertisers’ interests so highly in programmatic, doing so yields significant positive returns. By delivering value to advertisers, you train the algorithms that now buy your inventory to pay more for it, earning higher CPMs and deriving more value from your content.

 

Source: mediapost.com

OpenX Overtakes Google at Top of ‘Trust Index’


OpenX has overtaken Google as the most trusted inventory source, according to figures released this week, as the latter of the pairing aims to underline its leadership in the industry’s fight against dishonest players.

The claims were made in the latest update to Pixalate’s Global Seller Trust Index (GSTI), which ranks inventory quality and security risk, with the company further revealing that 70% of media inventory sources are exposed to malware-driven ad fraud, with one in 20 internet users infected by it.

Overall, OpenX ranked as the most highly-rated source of traffic in April, overtaking Google AdExchange, which had led the table the previous month (see chart immediately below).

[Chart: Pixalate Global Seller Trust Index rankings, April]

Meanwhile, OpenX and Rubicon Project lead in five categories each, while Google’s AdExchange leads in two (see chart below).

Commenting on the numbers, Jalal Nasir, CEO of Pixalate, said that addressing cyber security risks, and the concerns they pose to both enterprises and consumers, is crucial if high-level marketers are to place further trust in programmatic media buying.

The complexity of the programmatic advertising sector means it is particularly vulnerable to malicious players in the ecosystem, as it gives them a means to hide their true identity, according to Nasir.

He added: “This is a complex and growing problem, as many buyers, including reputable brands, purchase seemingly legitimate inventory. Unbeknownst to them, some of the inventory has been compromised and ultimately leads to a negative impact on consumer trust and brand integrity.”

Pixalate’s GSTI analyses 100 billion ad impressions across 350 million IP addresses in order to benchmark 400 programmatic media sellers, assessing their vulnerability to darknets owned and operated by malicious organisations. The latest version of the report also breaks sellers down by IAB verticals (see chart below).

[Chart: Pixalate GSTI rankings by IAB vertical]

Google’s efforts to combat fraud

Meanwhile, Google has been at pains recently to demonstrate that it is at the vanguard of the drive to clean up ad tech, following its purchase of Spider.io last year.

The advertising behemoth recently granted industry journal Ad Age access to its 100-strong anti-fraud team in London to offer the industry further insight into its efforts to combat botnets, which are credited with siphoning off $6.3bn from the digital advertising sector each year.

The piece demonstrates the ease with which hackers can hijack machines – through security weaknesses it calls ‘exploits’ – to create botnets that then click on ads en masse, and further profiles Google’s efforts to combat the fraudsters behind them.

More recently, Google used its DoubleClick Advertiser blog to issue a call to action in a post entitled: Stopping Digital Ad Fraud.

Penned by Vegard Johnsen, product manager for ad traffic quality at Google, the piece echoes Google’s backing of the IAB’s efforts to standardise industry jargon around such practices.

He added: “When fraud is identified it should be shared in a clear structured threat disclosure, mirroring how security researchers release security vulnerabilities. By increasing the amount of data we share in a transparent, helpful way, others in the industry will be able to corroborate any claims being made, remove the threat from their systems, removing it from the ecosystem.”

Johnsen goes on to advocate a system whereby any party that purchases non-blind impressions should be passed a chain of unique supplier (and reseller) identifiers – be it an exchange, network, or sell-side platform – and one for the publisher.

He added: “With this full chain of identifiers for each impression, buyers can establish which supply paths for inventory can be trusted and which cannot.”
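In data terms, the proposal amounts to attaching something like the following to every non-blind impression. This is a hypothetical structure sketched from Johnsen’s description, not a published spec:

// Hypothetical supplier chain attached to an impression.
var impression = {
  publisherId: "pub-1234",
  supplyChain: [
    { type: "exchange", id: "exch-001" },
    { type: "network",  id: "net-042" },  // each reseller appends its own identifier
    { type: "ssp",      id: "ssp-007" }
  ]
};
// A buyer can walk the full chain and decide which supply paths to trust.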

Botlab.io

Mikko Kotila, industry veteran and author of last year’s WFA report advising brands on transparency in the programmatic buying sector, recently dropped by the ExchangeWire office to discuss his not-for-profit outfit Botlab.io, aimed at helping clean up the sector.

In the TraderTalk TV episode below, Kotila discusses his view that independent bodies are better placed to combat fraud than security firms or trade bodies, as the latter are incentivised by profit.

Click below to see Kotila explain some of the different types of fraudulent traffic generators on the market at present:

WHAT IS HOLISTIC AD SERVING?

Certainly one of the biggest opportunities in ad tech today is integrating real-time bidding (RTB) systems with core ad serving platforms so that ad serving decisions are made from a single system. This vision of a fully integrated monetization stack is known as holistic ad serving, and it’s going to be big.

Holistic ad serving consolidates what is today a fragmented marketplace, modernizes the publisher ad serving stack, and lays the groundwork for advertisers and publishers to transact guaranteed campaigns over RTB infrastructure. In other words, it provides a way for publishers to transition from a world of manual campaign implementations to accepting and trafficking campaigns programmatically, without having to manage the balance between two systems.

Tactically, holistic ad serving seems like a basic change – instead of filling direct campaigns first and then letting the exchange try to fill whatever is left, the idea is for publishers to call the exchange marketplace and get a bid for every single impression, thereby allowing RTB demand to compete directly with traditionally sold campaigns with guaranteed goals. By getting a bid for every impression, the publisher’s ad server can understand the benefit or cost of filling an impression with a direct campaign – it has all the information. Holistic ad serving also opens the possibility, on an impression-by-impression basis, for an RTB campaign to trump a direct campaign.
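In code terms, the per-impression decision might reduce to something like this simplified sketch – real ad servers also weigh pacing and delivery goals, not just price:

// Simplified per-impression decision under holistic ad serving.
function chooseAd(exchangeBidCpm, directCampaignCpm) {
  // with a live bid for every impression, RTB demand competes head-to-head
  // with the directly sold campaign
  if (exchangeBidCpm > directCampaignCpm) {
    return "exchange"; // the RTB campaign trumps the direct campaign
  }
  return "direct"; // the guaranteed campaign still wins this impression
}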

Yield Overlap Demands System Integration

Holistic ad serving represents a major shift in the way publishers currently integrate with ad exchange demand, though in a good way. Today publishers use a core ad serving technology like DART for Publishers (DFP) or 24/7’s Open Ad Stream to manage their directly sold campaigns, and then redirect to a supply side platform (SSP) like Rubicon Project or PubMatic to manage the exchange and indirect demand on their inventory, tethering the two systems together with a 3rd party tag. The problem is that publishers have made a big assumption with this setup: that the yield from an SSP will never exceed the yield from their direct business, and that it always makes more sense economically to serve a directly sold campaign than an indirectly sold one.

In prior years, before RTB marketplaces existed, this assumption was pretty sound. Ad networks could rarely compete with the rates the publisher charged when dealing direct with advertisers, because their business model was fundamentally different: networks sold efficient reach while publishers sold targeted frequency, and you’d never expect the former to pay more than the latter. In the world of RTB, however, this assumption makes less sense by the day. As advertisers are able to define their own targeting, programmatically optimize to goals, and trade network markups for a DSP’s ‘cost plus’ pricing, there are more and more cases where an impression is worth more to an exchange buyer than to a publisher’s directly sold campaign.

To be clear, premium RTB rates are still the exception, but the overlap is growing and it will soon represent a not insignificant piece of the pie for many publishers. As a publisher today, when you look at a yield histogram from your RTB demand, you see something like the below, with a large block of demand clearing at very low prices, probably on or near your floor price.

But from there it’s a long tail of higher yielding demand, with a small segment at very premium rates that almost certainly earn more than a publisher’s lowest priced direct deals. And if publishers are paying attention to this kind of reporting, the overlap of bids that exceed directly sold eCPM is likely growing each quarter. The opportunity then is that if a publisher were to make a bid request to the exchange for every impression they could take more premium bids overall, and they could also use the inventory the exchanges didn’t want to fill their directly sold campaigns, reducing the number of low yielding RTB bids.

Take the hypothetical scenario below – the publisher has directly sold 30% of their inventory, and is monetizing 70% of the inventory through an SSP. If the publisher has a floor of $1.00, their SSP might only be able to fill half of the inventory, with the rest ending up on AdSense, or another performance network of last resort, which likely monetizes at a very low rate. Given this scenario, the publisher winds up with an effective yield of about $5.00 on their direct business and about $2.00 on their indirect revenue at a fill rate of about 50%.

But – if the publisher were to implement a holistic ad serving solution, they might find their fill in any given tier from the exchanges is more a function of the inventory they make available than an absolute impression level, the way their direct line of business works. In that case, the inventory they were able to monetize at $6.00 on the exchange ends up being about 4% of whatever they make available, which means they fill more impressions at that price level which would otherwise have been used to support a direct campaign. Now that the ad server knows the yield from every source for every impression, though, it can serve all the $6.00 indirect demand ahead of the $4.00 and $5.00 direct demand.

Playing that decision out logically across all impressions means the ad server will likely push all the direct demand into the capacity that had been used for the performance networks. This brings up one of the potential risks of moving toward holistic ad serving: by allowing exchange cherry-picking, publishers risk a system that shifts their direct campaigns onto lower-performing impressions, likely later in a user’s session, and probably in ad slots below the fold. From a monetization perspective, though, the impact is quite positive. Even if the SSP’s yield remains flat, the fill rate is likely to increase, resulting in an increase in overall yield – in this example the gain is equal to about 13% in found money. Not much in absolute terms in our example, but for any business that throws off a considerable amount of money on its indirect line of business, and many publishers do, a double-digit gain in yield is tremendous. The more the systems’ yield histograms overlap, the more opportunity there is for a publisher.

Technical Workflow – No Easy Solutions

It isn’t all that difficult to see why a publisher might want to move toward holistic ad serving, but the next question is how exactly to go about doing it. The SSPs are doubtless all working to devise a solution, but few seem to be out in the market with a product just yet. The two exceptions I’m aware of today are OpenX’s integration of their Enterprise Ad Server with their exchange optimization platform and recent acquisition, LiftDNA, and DFP’s one-click opt-in to ‘dynamic allocation‘ with the DoubleClick Ad Exchange. With OpenX, you have to be on both products to take advantage of the seamless integration, but it does purport to be a working holistic ad serving solution. For publishers not on OpenX and hesitant to change, LiftDNA actually has an interesting standalone product, which cleverly works through ad server APIs to traffic and constantly re-prioritize exchange demand as separate placements in the publisher’s ad server. The standalone LiftDNA product benefits from being a truly open solution that can work with any indirect source of demand, but it doesn’t truly evaluate exchange yield on an impression-by-impression basis as it can when working through OpenX’s ad server. Google’s ad server DFP does pass exchange yield impression by impression, but only from Google’s own exchange, not any others. And while I have no reason to think the product doesn’t serve publisher interests alone, some might say that because DoubleClick is part of Google, and Google also owns advertiser-facing products like AdWords and Invite Media, its publisher-facing yield management product inherently has a major conflict of interest.

While it seems like a simple concept – just make a bid request for every impression – integrating ad servers with SSPs is actually a tremendously complex task.  Holistic ad serving moves the ad serving decision from one system to many, and the key concern is how to go about that without adding huge amounts of latency to the process.  In my mind there are two basic ways to integrate these technologies, and neither is ideal.  The first would be to push the exchange intelligence into the ad server by making a bid request to the exchange in a site’s header code, and then populating the highest bid value into the ad server using a key value parameter.  The downside to this approach is that it adds a race condition to the page, and makes the page wait to load the publisher’s ad tags until the exchange responds.  Any solution in this direction needs a reliable way to abandon or timeout the request to the exchange after a certain amount of time so the page can continue to load.
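A minimal sketch of that first approach – the function names here (requestExchangeBid, loadAdTags) are placeholders standing in for whatever the SSP and ad server actually expose, and the 100ms timeout is an arbitrary example:

<script type="text/javascript">
// Hypothetical header integration: ask the exchange for a bid, but give up
// after 100ms so the page doesn't hang waiting on a slow response.
function loadAdTags(bidCpm) {
  // placeholder: in practice this writes the ad tags, passing bidCpm to the
  // ad server as a key-value parameter when it is not null
}

function requestExchangeBid(callback) {
  // placeholder for the SSP's real bid request API; simulated response here
  setTimeout(function () { callback({ cpm: 2.40 }); }, 50);
}

var timedOut = false;
var timer = setTimeout(function () {
  timedOut = true;
  loadAdTags(null); // no bid arrived in time; load the page's tags anyway
}, 100);

requestExchangeBid(function (bid) {
  if (timedOut) return; // too late – the page has already moved on
  clearTimeout(timer);
  loadAdTags(bid.cpm); // pass the highest bid to the ad server as a key-value
});
</script>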

The second and superior option would be to connect the systems via API, and have the ad server cookie sync with the SSP.  That way, the publisher wouldn’t have to wait to make the call to the ad server, they could do that straightaway and then let the ad server abandon the exchange request from within their own system.  Regardless of the setup, the technical challenge in both cases is that holistic ad serving introduces a race condition and now has to wait on a 3rd party system in order to make a decision.  Even if the ad server has a way to timeout a response from the SSP after a certain amount of time, it’s almost impossible to think an SSP to ad server integration doesn’t add latency to every impression.  This isn’t necessarily the SSP’s fault either, since it has to wait on DSPs and the exchanges to respond to its requests, but it’s a problem nonetheless.

 

Source: adopsinsider.com

WHAT IS A CACHE BUSTER AND HOW DOES IT WORK?

A cache-buster is a unique piece of code that prevents a browser from reusing an ad it has already seen and cached, or saved, to a temporary memory file.

What Does a Cache-Buster Do?

The cache-buster doesn’t stop a browser from caching the file, it just prevents it from reusing it. In most cases, this is accomplished with nothing more than a random number inserted into the ad tag on each page load. The random number makes every ad call look unique to the browser and therefore prevents it from associating the tag with a cached file, forcing a new call to the ad server.

Cache-busting maximizes publisher inventory, keeps the value and meaning of an impression constant, and helps minimize discrepancies between Publisher and Marketer delivery reports.

What Does a Cache-Buster Code Look Like?

Typically, a JavaScript function like the one below powers a cache-buster. An example looks like this:

<script type="text/javascript" language="JavaScript">
// Generate a random number once per page load; each ad tag on the page
// appends it as the "ord" value so every ad call looks unique to the browser.
ord=Math.random()*10000000000000000;
</script>

This code is put toward the top of the page within the site’s <body> tag and creates a random number for the “ord” value in the ad tag. So, when a browser hits a tag, it builds the ad tag like this –

http://ad.doubleclick.net/ABC/publisher/zone;topic=abc;sbtpc=def;cat=ghi;kw=xyz;tile=1;slot=728x90.1;sz=728x90;ord=7268140825331981?

If the browser then returns to the same page later on, the same tag might look like this, where everything remains the same except for the random number.

http://ad.doubleclick.net/ABC/publisher/zone;topic=abc;sbtpc=def;cat=ghi;kw=xyz;tile=1;slot=728x90.1;sz=728x90;ord=6051834582234?
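Putting the pieces together, each ad tag on the page references the ord variable when it writes out the ad call – a sketch along these lines, using the same placeholder path as the examples above and assuming the ord snippet has already run:

<script type="text/javascript">
// Append the random "ord" value generated earlier to the ad call, so the
// browser treats every request as a new, never-before-seen URL.
// (The split '<scr'+'ipt' keeps the HTML parser from closing this script tag.)
document.write('<scr' + 'ipt src="http://ad.doubleclick.net/ABC/publisher/zone;'
  + 'sz=728x90;ord=' + ord + '?"></scr' + 'ipt>');
</script>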

Why Does a Browser Cache in the First Place?

When a browser navigates to a web page today, the Publisher’s Content Server sends it an HTML file with instructions on how to format the page and where to retrieve all the images, text, and other pieces of the page.  Downloading this information all takes time and memory to accomplish for the browser, so it tries to save as much of the information as possible for future use in temporary folders (the cache) on a user’s hard drive.

This technique lets a browser surf through a website much faster.  It’s less important in an age of high speed fiber optic connections, but made a huge difference in the days of 56K modems, when each page took seconds if not minutes to load.  And, since most web pages are built on templates, many elements of a site are used on every page, for example, the site’s logo.  Why fetch the same image again and again when the browser can save it once, and simply reference the same file on every page? The browser is smart enough to read the HTML code for each page and recognize what content it already has and just skip to the next line of code to look for the unique and previously unseen data.

This would certainly work for the ads on the page, too. If a user loaded a publisher’s homepage, for example, then went to an article page, then back to the homepage, the ad tag would be exactly the same, and the browser would just re-use the ad it called the first time if a cache-buster were not implemented. Since Publishers get paid for every impression, though, they don’t want this to happen; they want the browser to call, or consume, another impression so they can charge for it. Advertisers might like the idea of free impressions, but when pressed, most would tell you cache-busting is a good thing for them, too. Recycled ads screw up reports, mess with ROI calculations, and add an uncertainty factor to campaign data, not to mention create tension between Publishers and Advertisers via discrepancies.

In fact, if you are having an issue with 3rd party discrepancies where the publisher numbers are much higher than the advertiser numbers, the first thing you should check is that a cache-buster is in place and working.

Some Interesting Facts About Online Advertising

Here are some really interesting facts about online advertising. Enjoy!

  1. Video ads account for 3% of time spent viewing video online
  2. Video ads accounted for 31% of all videos viewed online
  3. The average US online video ad is 24 seconds long
  4. Display ads account for 0.9% of upstream traffic to department store sites
  5. Brand marketers will account for 27% of online display ad spending by 2018, down from 31% in 2011
  6. Brand marketers account for 33% of all online display ad spending, down from 48% in 2006
  7. 8% of US moviegoers watched film previews online via game consoles in 2012, up from 4% in 2010
  8. 15% of US moviegoers watched film previews online via smartphone in 2012, up from 6% in 2010
  9. 8% of US moviegoers watched film previews online via tablets in 2012, up from 5% in 2011
  10. 76% of consumers in the US and UK say they receive more marketing messages containing customized offers or invitations than they did 5 years ago
  11. 28% of consumers in the US and UK want to receive personalized marketing messages which include recommendations for specific products
  12. The internet accounts for 26% of US consumer interaction with media, and 22% of advertising spending
  13. Mobile devices account for 12% of US leisure time, and 3% of advertising spending

Source: http://www.factbrowser.com/tags/online_advertising/

How Ad Servers Target by Geographic Location

In today’s digital ad market, geotargeting depends on mapping a user’s IP address to a physical location, a task every ad server outsources, to my knowledge. This is because the process of assigning a geographic location to an IP is messy and complex, to say the least. Just because the ad server outsources the functionality, however, doesn’t give Ops an excuse to ignore this important and highly utilized feature.

How is an IP Address Associated with a Geographic Location?

By and large, IP addresses are arbitrary – meaning they could be anywhere, and there isn’t much rhyme or reason to their values from a geographic perspective. It isn’t as though an IP address starting with a 1 is always located in the United States, for example. Instead, companies like Digital Envoy use a multi-layered approach to assign geographic qualities to a user – some methods highly technical, some just common sense, and some a combination of the two.

On the common sense side, a fair number of geolocation companies can leverage Regional Internet Registries, or RIRs, to assign high-level qualities like country or continent. The RIRs each own dedicated ranges of IP values, exist to allocate IP addresses within their regions, and cooperate with each other to ensure that the same IP isn’t being used in more than one place. Placing an IP address within a specific RIR’s range therefore allows the service to identify location at a very high level. Some geolocation services are rumored to work with large registration-based sites as well, and have zip code information that a user might manually enter during a sign-up process.

Pings, Traceroutes, Reverse DNS, and Other Technical Methods of Geolocation

From there though, the heavy lifting is usually done through a combination of three technical processes known as pings, traceroutes, and reverse DNS lookups.  Let’s run through a high level explanation of all three processes, and then explain how they work in concert to geographically locate a single IP address.

A ping is just a small piece of information sent from one computer to another, with a request to call the originating computer back.  Pings can also record the round trip time of the journey, and are used for a variety of administrative network processes.  Think of it like a submarine’s sonar technology, applied to the internet.

Tracerouting is basically a way to record the network routing process of the ping service, or the detail behind how the ping got from one machine to its destination.  Tracerouting records how a ping is routed, who it is routed through, and the time it takes at each step.  When information travels across the internet, be it a ping or just regular surfing, it moves through a series of very high speed fiber optic networks owned by various public and private entities.  Now, when the information gets physically close to a user, it passes down to an Internet Service Provider (ISP), which sells internet access to consumers.  The ISP eventually moves the packet of information to a nearby network router to the user, which connects directly to the user.  By using the traceroute utility, the geolocation service can know every system the information was passed through in order to get to its final destination.  The important piece of information the service gets from a traceroute is the IP address of that final network router, geographically nearest to the user.  You can ping or see the traceroute command in action on your own machine at Network Tools.

With the network router’s IP address in hand, the geolocation service can finally use a technique known as a reverse DNS lookup to identify who owns that network router, which it can use to lock in on the physical location of the user.  Reverse DNS is simply a service to identify the hostname of an IP address, that is, who owns an IP address.  For many home computers, the host ends up being the ISP.  For businesses, the host ends up being the company’s domain. DNSStuff provides a reverse DNS lookup service – just enter an IP address into their ‘IP Information’ tool to try it out.
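If you want to try a reverse DNS lookup yourself outside those web tools, Node.js ships one in its standard library – a minimal sketch:

// Minimal reverse DNS lookup using Node.js's built-in dns module.
var dns = require('dns');

dns.reverse('8.8.8.8', function (err, hostnames) {
  if (err) throw err;
  console.log(hostnames); // prints the hostname(s) that own the IP address
});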

Geolocation in Action

Now that you understand the basic approach, here’s how it all works together at a high level –

When a geolocation service wants to triangulate an IP, it starts by pinging that IP address from a central server it owns, and then looking at the traceroute.  From the traceroute, the service can identify the nearest network router to the user by IP, labeled point A on the diagram below.  Then, using a reverse DNS lookup, the service can find out which ISP owns that router, and then query the location from public data, the ISP itself if the service has a business relationship in place, or failing that, triangulate the location with the process below.

In all likelihood, the geolocation service already knows the location of this network router, either by working with an ISP directly, or through previous triangulation efforts.  With that location in hand, the geolocation service hands off the triangulation process to servers closest to that network router, of which it also knows the exact geographic location.  Now, the service sends a ping from at least three of its own separate servers (1, 2, 3), and records the time it takes to reach the user.  Only time can be recorded from a ping, not distance, but using time as a radius, the geolocation service can draw a circle around each server, and know that the target location must exist at some point on the arc.

[Diagram: Geolocation by ping triangulation]

With three separate locations, the target location should exist at the one point where all the arcs meet, which also gives the service the exact vector to the target from each server.  And, since information runs through fiber optic cable at a known, constant speed (about 2/3 the speed of light), the service can now translate that time into a distance, and with the vector and a known server location, calculate the exact location of the target, within a certain margin of error, depending on the exact method used, and how many points of triangulation are employed. Currently, the most advanced geolocation triangulation methods employ as many as 36 points to eliminate problem data and increase accuracy, and can accurately map an IP address within 700m – but we’ll talk more about that in the final piece in this series.
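The time-to-distance conversion itself is simple arithmetic once you assume signals move through fiber at roughly two-thirds the speed of light – a back-of-the-envelope sketch:

// Convert a measured round-trip ping time into an approximate distance.
var FIBER_SPEED_KM_S = 299792 * 2 / 3; // ~200,000 km/s, 2/3 the speed of light

function distanceKm(roundTripMs) {
  var oneWaySeconds = (roundTripMs / 2) / 1000; // one-way time, in seconds
  return oneWaySeconds * FIBER_SPEED_KM_S;
}

// e.g. distanceKm(10) ≈ 999 km – a 10ms round trip puts the target on an
// arc roughly 1,000 km from the pinging server.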

Network Maps & WHOIS Lookups

Using either piece of information – the ISP or the business domain – the geolocation service can further refine the geographic values of a given IP. Geolocation services may also work directly with ISPs to get the general physical location, when available, of a given IP, since the ISP will know the exact address of the customer using that connection at any given time. It’s important to note that no PII is exchanged in that process – a zip code is just mapped to the IP address – and not all ISPs participate, or they may simply provide the location of the final network router instead of the end-user’s zip.

Some of the more sophisticated geolocation services may be able to deduce the physical location of an ISP’s network routers, also known as the ISP’s network map, by pinging those routers from various servers with known geographic locations, measuring the time it takes to get a response, and using that information to triangulate the router.

Businesses may also have a specific address, available through a WHOIS lookup, which allows country, state, city, and zip to be assigned.  The WHOIS directory is a public registry of who owns what domain, along with their name, and importantly, address.  Through this information, geolocation services can get a better idea of the physical location of each machine.

Where Does Geolocation Data Come From?

In most cases, it comes from a 3rd party table built by a company that specializes in geolocation data. Practically speaking, most of the advertising industry relies on a small company called Digital Envoy, founded in 1999 by a few smart entrepreneurs and acquired by a larger media company called Dominion Enterprises in 2007. Digital Envoy pioneered the process of linking an IP address to a geographic location, and specializes in keeping the information current and accurate.

Effectively, Digital Envoy maintains a massive table of literally billions of IP addresses and their inferred geographic qualities, and then sells access to that table at various levels of granularity to ad servers and lots of other companies with an interest in identifying the location of a user. Those companies then cache the information in their local databases and run queries against it.
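Conceptually, the ad server’s side of this is just a lookup against its cached copy of the table – a toy sketch; real tables map IP ranges rather than individual addresses and hold billions of rows:

// Toy version of a cached IP-to-geo table (real ones map ranges, not single IPs).
var geoTable = {
  '203.0.113.7':  { country: 'US', region: 'GA', zip: '30301' },
  '198.51.100.9': { country: 'GB', city: 'London' }
};

function lookupGeo(ip) {
  return geoTable[ip] || { country: 'unknown' }; // fall back when the IP isn't mapped
}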

Other companies that perform this service include Quova, MaxMind, GeoBytes, Cyscape, IP2Location, and Akamai’s EdgeScape product, though there are also free services out there such as HostIP, IPInfoDB, and Software 77.

[This article was originally published on Run of Network in Dec of 2011]