
How to Scrape Google Search Results

Scrape Google search results with the SERP Search API. Complete guide covering all parameters, pagination, and geo-targeting in Python, JavaScript, and cURL.

Scraping Google directly is a constant maintenance headache. Google updates its HTML structure every few weeks, your Playwright selectors break in production, and you end up fighting CAPTCHAs instead of building the actual product you care about. The SERP Search API removes all of that: you send a query, you get structured JSON back. This guide covers every parameter, the full response shape, and common patterns for real use cases.

What you'll need

  • A SERP Search account (sign up here, takes 30 seconds)
  • An API key from the dashboard
  • cURL, Python, or JavaScript (any of the three works)

Your first request

The only required parameter is query. Everything else has sensible defaults.

cURL

curl -G https://serpsearch.com/api/v1/search \
  -H "Authorization: Bearer YOUR_API_KEY" \
  --data-urlencode "query=best project management tools"

Python

import requests
 
response = requests.get(
    "https://serpsearch.com/api/v1/search",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    params={"query": "best project management tools"},
)
response.raise_for_status()
 
data = response.json()
for result in data["organic_results"]:
    print(f"{result['position']}. {result['title']}")
    print(f"   {result['url']}")

JavaScript

const res = await fetch(
  "https://serpsearch.com/api/v1/search?" +
    new URLSearchParams({ query: "best project management tools" }),
  { headers: { Authorization: "Bearer YOUR_API_KEY" } }
);
const data = await res.json();
data.organic_results.forEach((r) => console.log(r.position, r.title, r.url));

That's the happy path. If you're seeing a 401, your key is wrong or missing. A 429 means you've hit the rate limit. Slow down and retry with backoff.

All request parameters

The web search endpoint (GET /api/v1/search) accepts these parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| query | string | | The search query. Required. |
| page | integer | 1 | Results page (1-based, 10 results per page). |
| lat | number | | Latitude for geo-targeted results. |
| lng | number | | Longitude for geo-targeted results. |
| location | string | | Location string for geo-targeting, e.g. "New York, NY, US". |
| response_type | string | json | Response format: json, html, both, or correlated. |
| js | boolean | false | Keep script tags in HTML responses. Only matters when response_type is html or both. |

For most use cases you'll only use query, page, and occasionally location.
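If you want to see exactly how these parameters end up encoded in the request URL, you can build (but not send) a request with the requests library. This is a local sketch, no API call is made; the endpoint and parameter names are the ones from the table above:

```python
import requests

# Build a request without sending it, to inspect the encoded URL.
req = requests.Request(
    "GET",
    "https://serpsearch.com/api/v1/search",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    params={"query": "best pizza", "page": 2, "location": "New York, NY, US"},
)
prepared = req.prepare()

# requests percent-encodes spaces and commas for you, so you never need
# to URL-encode query or location strings by hand.
print(prepared.url)
```

This is also a quick way to debug 400 errors: print the prepared URL and check that every parameter made it through as you expected.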

Understanding the response

A successful response looks like this:

{
  "search_info": {
    "total_results": "About 21,600,000 results",
    "time_taken": "0.29s"
  },
  "organic_results": [
    {
      "title": "Beautiful Soup: Build a Web Scraper With Python",
      "url": "https://realpython.com/beautiful-soup-web-scraper-python/",
      "website": "Real Python Tutorials",
      "position": 1,
      "description": "Beautiful Soup is a Python library designed for parsing HTML and XML documents. It creates parse trees that make it straightforward to extract data from HTML...",
      "visible_url": "https://realpython.com › beautiful-soup-web-scraper-py...",
      "answer": null,
      "date": null,
      "thumbnail": null
    }
  ],
  "knowledge_graph": { "title": "...", "description": "..." },
  "definition_result": null,
  "people_also_ask": [{ "question": "Is Python good for web scraping?" }],
  "weather_result": null,
  "related_searches": [{ "query": "python web scraping library" }],
  "local_results": null,
  "top_stories": null
}

A few things to know before you start parsing:

Everything outside organic_results can be null. knowledge_graph shows up for branded and entity searches. local_results appears for location-oriented queries like "coffee shops near me." top_stories surfaces for news-heavy topics. people_also_ask is common on informational queries. Don't assume any of them are populated. Always null-check.

The organic_results fields: position is 1-based rank. url is the full link. website is the domain only. description is the snippet text shown in the SERP. The answer, date, and thumbnail fields are only set when Google explicitly shows them in the result (rich snippets, news articles, etc.); they're null otherwise.

related_searches is genuinely useful for keyword research. If you're building a content tool, these are the queries Google itself is recommending.
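The null-checking advice above looks like this in practice. The snippet below parses a trimmed copy of the sample response offline (no API call), falling back to an empty list whenever an optional section is null:

```python
# Trimmed sample mirroring the response shape shown above.
# Any section outside organic_results can be None.
sample = {
    "organic_results": [
        {"position": 1, "title": "Example", "url": "https://example.com",
         "answer": None, "date": None, "thumbnail": None}
    ],
    "knowledge_graph": None,
    "people_also_ask": [{"question": "Is Python good for web scraping?"}],
    "related_searches": [{"query": "python web scraping library"}],
    "local_results": None,
}

# "or []" turns a null section into an empty list, so iteration is safe.
questions = [q["question"] for q in (sample.get("people_also_ask") or [])]
related = [r["query"] for r in (sample.get("related_searches") or [])]

print(questions)
print(related)
```

The same `or []` pattern works for local_results, top_stories, and any other section that can be null.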

Pagination

The API returns 10 results per page, which matches Google's default. Pages are 1-based: page=1 is positions 1–10, page=2 is 11–20.

Here's a Python snippet that collects the first 30 results across 3 pages:

import requests
import time
 
API_KEY = "YOUR_API_KEY"
all_results = []
 
for page in range(1, 4):
    resp = requests.get(
        "https://serpsearch.com/api/v1/search",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"query": "python web scraping", "page": page},
    )
    resp.raise_for_status()
    all_results.extend(resp.json()["organic_results"])
    time.sleep(1)  # stay within rate limits on Starter plan
 
print(f"Collected {len(all_results)} results")

The time.sleep(1) matters. Lower plans are rate-limited to 1 req/s, so fetching 10 pages without a delay will get you a 429. Higher plans have higher limits. Check the pricing page for the specifics.
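One thing worth guarding against when you paginate: SERPs can shift between requests, so the same URL can occasionally show up on two consecutive pages. A simple URL-based dedupe (sketched here on illustrative data, not a live response) keeps your collected list clean:

```python
def dedupe_by_url(results):
    """Keep the first occurrence of each URL, preserving order."""
    seen = set()
    unique = []
    for r in results:
        if r["url"] not in seen:
            seen.add(r["url"])
            unique.append(r)
    return unique

# Illustrative: the same URL appears at the end of one page and the
# start of the next because positions shifted between fetches.
pages = [
    [{"url": "https://a.com", "position": 10}],
    [{"url": "https://a.com", "position": 11},
     {"url": "https://b.com", "position": 12}],
]
flat = [r for page in pages for r in page]
print(len(dedupe_by_url(flat)))
```

Run this over all_results after the pagination loop and you won't double-count shifted entries.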

Geo-targeting

There are two ways to pin results to a location.

Option 1: location string. Easiest to use, readable in logs.

curl -G https://serpsearch.com/api/v1/search \
  -H "Authorization: Bearer YOUR_API_KEY" \
  --data-urlencode "query=coffee shops" \
  --data-urlencode "location=London, England, UK"

Option 2: lat / lng coordinates. More precise, good for hyperlocal queries.

curl -G https://serpsearch.com/api/v1/search \
  -H "Authorization: Bearer YOUR_API_KEY" \
  --data-urlencode "query=coffee shops" \
  --data-urlencode "lat=51.5074" \
  --data-urlencode "lng=-0.1278"

The main gotcha with location: format sensitivity. "New York, NY, US" works reliably. Vague strings like "United States" or "NYC" might not produce meaningfully localized results. If you're building a feature where users type their own location, validate the string against the autocomplete endpoint first rather than passing raw user input.
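Even before hitting the autocomplete endpoint, a cheap local format check can reject obviously bad input. This is only a heuristic for the "City, Region, Country" shape, not a substitute for real validation:

```python
def looks_like_location(s: str) -> bool:
    """Heuristic pre-check: at least two non-empty comma-separated parts.

    Catches single-token strings like "NYC" or "United States" before
    you spend an API call on them. Not authoritative validation.
    """
    parts = [p.strip() for p in s.split(",")]
    return len(parts) >= 2 and all(parts)

print(looks_like_location("New York, NY, US"))  # True
print(looks_like_location("NYC"))               # False
```

Anything that fails the check gets sent to the autocomplete endpoint (or back to the user) instead of straight into a search request.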

Response formats

The default response_type=json is almost always what you want. The other options exist for specific cases:

  • html: returns the raw rendered page HTML. Useful when you need elements the parser doesn't capture, or when you want to do your own extraction.
  • both: returns the parsed JSON and the raw HTML in the same response.
  • correlated: attempts to link parsed JSON fields back to their source HTML elements. Handy for debugging but not needed for typical data extraction.

The js flag only applies when you're requesting HTML. Set js=true to preserve script tags; the default strips them. For nearly every use case, ignore both of these parameters and stick with json.
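To make the js flag concrete: with the default js=false, script tags are removed from the returned HTML. The regex below is only a local illustration of that effect, not the API's actual sanitizer:

```python
import re

def strip_scripts(html: str) -> str:
    """Remove <script>...</script> blocks, roughly what js=false implies."""
    return re.sub(r"<script\b[^>]*>.*?</script>", "", html,
                  flags=re.S | re.I)

page = "<html><head><script>var x = 1;</script></head><body>hi</body></html>"
print(strip_scripts(page))
```

If your extraction depends on inline JSON embedded in script tags (common on modern pages), that's exactly when you'd set js=true instead.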

Handling errors

The API uses standard HTTP status codes:

| Code | Meaning | What to do |
| --- | --- | --- |
| 200 | Success | Parse the JSON response. |
| 400 | Bad request | Check your parameters. query is missing or malformed. |
| 401 | Unauthorized | Your API key is wrong, missing, or revoked. |
| 429 | Rate limit hit | Back off and retry after a delay. Use exponential backoff. |
| 500 | Server error | Retry once or twice. If it persists, contact support. |

A minimal retry wrapper in Python:

import requests
import time
 
def search(query, page=1, retries=3):
    for attempt in range(retries):
        resp = requests.get(
            "https://serpsearch.com/api/v1/search",
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            params={"query": query, "page": page},
        )
        if resp.status_code == 429:
            time.sleep(2 ** attempt)  # 1s, 2s, 4s
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError(f"Failed after {retries} retries")

What you can build with this

A few real-world use cases that work well with this API:

Rank tracker. Query your target keywords daily, store position for each URL, and chart movement over time. 10 keywords × daily = 300 calls/month, well within the Starter plan.

Competitor monitoring. Watch where your competitors rank for shared keywords. When a competitor jumps to position 1 on a keyword you care about, you'll know immediately.

Content research. Use related_searches and people_also_ask to surface what your audience is actually searching for. These are Google's own suggestions, which makes them reliable signals.

SERP feature tracking. Check whether you're winning a featured snippet, knowledge panel, or local pack on a given keyword. The knowledge_graph, definition_result, and local_results fields tell you what's appearing above the fold.
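For the rank tracker and competitor monitoring ideas, the core logic is just "where does my domain first appear in organic_results." A minimal sketch, run here on illustrative data rather than a live response:

```python
from urllib.parse import urlparse

def rank_of(domain: str, organic_results: list):
    """Return the position of the first result on `domain`, else None.

    Matches the bare domain and any subdomain (www, blog, etc.).
    """
    for r in organic_results:
        host = urlparse(r["url"]).netloc
        if host == domain or host.endswith("." + domain):
            return r["position"]
    return None

# Illustrative results; in a real tracker this comes from
# data["organic_results"] for each tracked keyword.
results = [
    {"position": 1, "url": "https://competitor.com/post"},
    {"position": 2, "url": "https://www.example.com/guide"},
]
print(rank_of("example.com", results))     # 2
print(rank_of("competitor.com", results))  # 1
```

Store that number per keyword per day and the charting layer is all that's left to build.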

Next steps

  • Full endpoint reference (news, images, videos, maps, reviews): API docs
  • Language and framework examples: Examples page
  • Pricing: the free tier works for testing, Starter covers most ongoing projects

Ready to get started?

Start scraping Google search results in minutes. Free tier included.