
How to Bulk Export UK Property Data: Python & Node.js Guide

Export thousands of UK property listings to CSV or JSON in minutes. Filter by postcode, price, bedrooms, and property type. Includes pagination, rate limit handling, and resumable large exports with Python and TypeScript examples.

Homedata Team

Why bulk property data matters

Most property data use cases don't involve looking up one property at a time. Portfolio managers need to screen 500 postcodes for high-yield opportunities. PropTech platforms need to backfill their database on launch. Market analysts need to pull every 3-bed house in a city to track median price trends. Estate agents need to find every property in a catchment that's been on the market for 90+ days.

All of these require bulk extraction — not single-property lookups. This guide covers exactly that: how to use the Homedata Bulk Export API to pull thousands of UK property listings to CSV or JSON, filtered by postcode, price, bedrooms, and more.

The Bulk Export endpoint

The Homedata Bulk Export API lives at POST /api/v1/listings/export. It accepts up to 1,000 rows per request on Growth and up to 5,000 on Pro and Scale, and returns either CSV or JSON, your choice.

POST /api/v1/listings/export JSON body
{
  "postcode_prefix": "SW1",
  "min_price": 400000,
  "max_price": 1000000,
  "bedrooms": 3,
  "property_type": "flat",
  "limit": 500,
  "format": "csv",
  "fields": [
    "address",
    "postcode",
    "uprn",
    "price",
    "bedrooms",
    "property_type",
    "status",
    "dom",
    "listed_date",
    "lat",
    "lng"
  ]
}
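
If a one-off dump fits within your per-request row cap, you can skip pagination entirely: ask for format "csv" and write the response body straight to disk. A minimal sketch using Python's requests library (the API key, filters, and output filename are placeholders):

```python
import requests

API_KEY = "hd_live_your_key_here"
URL = "https://homedata.co.uk/api/v1/listings/export"

# Request body: same filters as above, asking for CSV directly
payload = {
    "postcode_prefix": "SW1",
    "min_price": 400_000,
    "max_price": 1_000_000,
    "bedrooms": 3,
    "property_type": "flat",
    "limit": 500,
    "format": "csv",
}

def export_csv(path: str) -> int:
    """POST the export request and write the CSV body to disk.

    Returns the number of bytes written.
    """
    resp = requests.post(
        URL,
        headers={"X-API-Key": API_KEY, "Accept": "text/csv"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    with open(path, "wb") as f:
        return f.write(resp.content)
```

For anything larger than one request, you need pagination; the complete script below handles that.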

Complete Python example: exporting to CSV

Here's a self-contained Python script that pulls all 3-bedroom flats in SW1 priced between £400k and £1m, writes them to a CSV file, and handles pagination for large datasets.

bulk_export.py Python 3.9+
import requests
import csv
import time

API_KEY = "hd_live_your_key_here"
BASE_URL = "https://homedata.co.uk/api/v1"

def bulk_export(filters: dict, output_file: str) -> int:
    """
    Export UK property listings matching filters to a CSV file.
    Handles pagination automatically by batching into 500-row chunks.

    Returns total rows written.
    """
    headers = {
        "X-API-Key": API_KEY,
        "Content-Type": "application/json",
        "Accept": "application/json",  # fetch JSON pages; we write the CSV locally
    }

    offset = 0
    batch_size = 500
    total_rows = 0
    first_batch = True

    with open(output_file, "w", newline="", encoding="utf-8") as f:
        writer = None

        while True:
            payload = {
                **filters,
                "limit": batch_size,
                "offset": offset,
                "format": "json",  # use JSON for easy pagination
            }

            resp = requests.post(
                f"{BASE_URL}/listings/export",
                json=payload,
                headers={**headers, "Accept": "application/json"},
                timeout=30,
            )
            resp.raise_for_status()
            data = resp.json()

            rows = data.get("listings", [])
            if not rows:
                break  # no more results

            if first_batch:
                # Write CSV header from first row's keys
                writer = csv.DictWriter(f, fieldnames=rows[0].keys())
                writer.writeheader()
                first_batch = False

            writer.writerows(rows)
            total_rows += len(rows)
            offset += len(rows)

            print(f"  Fetched {total_rows} rows so far...")

            # Respect rate limits — 0.5s between batches on Growth tier
            if len(rows) == batch_size:
                time.sleep(0.5)
            else:
                break  # last batch (fewer than batch_size = no more pages)

    return total_rows


if __name__ == "__main__":
    filters = {
        "postcode_prefix": "SW1",
        "min_price": 400_000,
        "max_price": 1_000_000,
        "bedrooms": 3,
        "property_type": "flat",
        "fields": [
            "address", "postcode", "uprn", "price",
            "bedrooms", "property_type", "status",
            "dom", "listed_date", "lat", "lng",
            "agent", "has_address_match",
        ],
    }

    print("Starting bulk export...")
    n = bulk_export(filters, "sw1_3bed_flats.csv")
    print(f"Done. {n} properties exported to sw1_3bed_flats.csv")
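
Once the file is on disk, the standard library is enough for quick analysis. A sketch (the inline rows stand in for the real sw1_3bed_flats.csv, with illustrative values, and assume the field names requested above):

```python
import csv
import io
from statistics import median

# In practice: open("sw1_3bed_flats.csv"); inline sample rows keep this runnable
sample = io.StringIO(
    "address,postcode,price,bedrooms,dom\n"
    "14 Acacia Avenue,SW1A 1AA,650000,3,47\n"
    "2 Elm Road,SW1V 2BB,480000,3,112\n"
    "9 Oak Court,SW1P 3CC,725000,3,91\n"
)

rows = list(csv.DictReader(sample))
median_price = median(int(r["price"]) for r in rows)
long_dom = [r for r in rows if int(r["dom"]) >= 90]

print(f"Median asking price: £{median_price:,}")
print(f"Listings 90+ days on market: {len(long_dom)}")
```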

Node.js / TypeScript example

For Node.js applications — useful when you're building a scheduled data pipeline or an Express API that caches bulk data locally.

bulkExport.ts TypeScript / Node.js
import fs from 'fs';
import { stringify } from 'csv-stringify/sync';

const API_KEY = 'hd_live_your_key_here';
const BASE_URL = 'https://homedata.co.uk/api/v1';

interface ExportFilters {
  postcodePrefix?: string;
  minPrice?: number;
  maxPrice?: number;
  bedrooms?: number;
  propertyType?: string;
  fields?: string[];
}

interface Listing {
  address: string;
  postcode: string;
  uprn?: string;
  price?: number;
  bedrooms?: number;
  status?: string;
  dom?: number;
  listed_date?: string;
  lat?: number;
  lng?: number;
}

async function bulkExport(
  filters: ExportFilters,
  outputPath: string
): Promise<number> {
  const batchSize = 500;
  let offset = 0;
  let totalRows = 0;
  let allListings: Listing[] = [];

  while (true) {
    const response = await fetch(`${BASE_URL}/listings/export`, {
      method: 'POST',
      headers: {
        'X-API-Key': API_KEY,
        'Content-Type': 'application/json',
        'Accept': 'application/json',
      },
      body: JSON.stringify({
        postcode_prefix: filters.postcodePrefix,
        min_price: filters.minPrice,
        max_price: filters.maxPrice,
        bedrooms: filters.bedrooms,
        property_type: filters.propertyType,
        fields: filters.fields,
        limit: batchSize,
        offset,
        format: 'json',
      }),
    });

    if (!response.ok) {
      throw new Error(`API error ${response.status}: ${await response.text()}`);
    }

    const data = await response.json();
    const listings: Listing[] = data.listings ?? [];

    if (listings.length === 0) break;

    allListings = allListings.concat(listings);
    offset += listings.length;
    totalRows += listings.length;

    console.log(`  Fetched ${totalRows} rows...`);

    if (listings.length < batchSize) break; // last page
    await new Promise(r => setTimeout(r, 500)); // rate limit
  }

  // Write to CSV
  const csv = stringify(allListings, { header: true });
  fs.writeFileSync(outputPath, csv, 'utf-8');

  return totalRows;
}

// Run
bulkExport(
  {
    postcodePrefix: 'M1',
    minPrice: 100_000,
    maxPrice: 500_000,
    bedrooms: 2,
    fields: ['address', 'postcode', 'uprn', 'price', 'bedrooms', 'dom', 'lat', 'lng'],
  },
  'manchester_2bed.csv'
).then(n => console.log(`Exported ${n} properties.`));

Available filter parameters

The export endpoint accepts the following filters. All parameters are optional — omit them to pull everything within your quota.

Parameter Type Description
postcode_prefix string Filter by outcode or sector (e.g. SW1A, M1)
min_price integer Minimum asking price in GBP
max_price integer Maximum asking price in GBP
bedrooms integer Exact bedroom count (1–10+)
property_type string One of: detached, semi-detached, terraced, flat
transaction_type string One of: sale, let
limit integer Rows per request. Max 1,000 (Growth) or 5,000 (Pro/Scale)
offset integer Pagination offset (default 0)
format string Response format: csv (default) or json
fields array Fields to include in output (omit for all 20 available fields)

Billing: how calls are counted

The export endpoint uses a simple billing model: 1 API call per 10 rows returned, with a minimum of 1 call. A 500-row export costs 50 calls, a tenth of what 500 individual property lookups would use, and it's far faster and simpler to implement.
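
Under that rule, the cost of a job is easy to pre-compute before you run it. A small helper sketching the 1-call-per-10-rows maths described above:

```python
from math import ceil

def export_call_cost(rows: int) -> int:
    """API calls billed for a bulk export: 1 call per 10 rows, minimum 1."""
    return max(1, ceil(rows / 10))

print(export_call_cost(500))     # 50
print(export_call_cost(7))       # 1 (minimum applies)
print(export_call_cost(10_000))  # 1000
```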

Billing by tier:

  • Free / Starter — Bulk export not available
  • Growth (£149/mo, 10,000 calls) — Up to 1,000 rows per request
  • Pro (£349/mo, 50,000 calls) — Up to 5,000 rows per request
  • Scale (£699/mo, 250,000 calls) — Up to 5,000 rows per request

On Growth, exporting 100,000 rows costs 10,000 calls, a full month's quota, so plan accordingly. On Pro, 10,000 rows uses just 1,000 calls out of 50,000, leaving plenty for regular API usage.

Use case: finding long-DOM properties for acquisition leads

One of the most common bulk export patterns: pulling properties that have been on the market for 90+ days. These are the motivated sellers. Here's a Python snippet that post-processes the export:

find_motivated_sellers.py Python 3.9+
import requests

API_KEY = "hd_live_your_key_here"

def find_long_dom_properties(postcode_prefix: str, min_dom: int = 90):
    """
    Find properties that have been on the market for 90+ days.
    High DOM = motivated seller, more likely to accept below asking.
    """
    resp = requests.post(
        "https://homedata.co.uk/api/v1/listings/export",
        headers={"X-API-Key": API_KEY},
        json={
            "postcode_prefix": postcode_prefix,
            "limit": 1000,
            "format": "json",
            "fields": ["address", "postcode", "price", "bedrooms",
                       "property_type", "dom", "listed_date",
                       "lat", "lng", "agent"],
        },
        timeout=30,
    )
    resp.raise_for_status()

    listings = resp.json()["listings"]
    long_dom = [p for p in listings if (p.get("dom") or 0) >= min_dom]
    long_dom.sort(key=lambda x: x.get("dom", 0), reverse=True)

    print(f"Found {len(long_dom)} properties in {postcode_prefix} on market {min_dom}+ days:")
    for p in long_dom[:10]:
        print(f"  {p['address']} — £{p.get('price') or 0:,} — {p['dom']} days on market")

    return long_dom

# Example: find motivated sellers in Manchester M1
results = find_long_dom_properties("M1", min_dom=90)

Use case: rental yield screening across postcodes

For portfolio investors, screening buy-to-let opportunities across multiple postcodes is a classic bulk export use case. Combine sale prices with rental listing prices to estimate yield:

yield_screener.py Python 3.9+
import requests
from statistics import median

API_KEY = "hd_live_your_key_here"
BASE = "https://homedata.co.uk/api/v1"

def estimate_yield(postcode_prefix: str, bedrooms: int = 2) -> dict:
    """Estimate gross rental yield for a postcode by comparing sale vs let prices."""

    common_filters = {
        "postcode_prefix": postcode_prefix,
        "bedrooms": bedrooms,
        "limit": 500,
        "format": "json",
        "fields": ["price", "transaction_type"],
    }

    # Sale prices
    sale_resp = requests.post(f"{BASE}/listings/export",
        headers={"X-API-Key": API_KEY},
        json={**common_filters, "transaction_type": "sale"},
        timeout=30)
    sale_resp.raise_for_status()
    sales = sale_resp.json().get("listings", [])

    # Rental prices
    let_resp = requests.post(f"{BASE}/listings/export",
        headers={"X-API-Key": API_KEY},
        json={**common_filters, "transaction_type": "let"},
        timeout=30)
    let_resp.raise_for_status()
    lets = let_resp.json().get("listings", [])

    if not sales or not lets:
        return {"error": "insufficient data"}

    median_sale = median(p["price"] for p in sales if p.get("price"))
    median_rent = median(p["price"] for p in lets if p.get("price"))

    # Convert monthly rent to annual; calculate gross yield
    annual_rent = median_rent * 12
    gross_yield = (annual_rent / median_sale) * 100

    return {
        "postcode": postcode_prefix,
        "bedrooms": bedrooms,
        "median_sale_price": round(median_sale),
        "median_monthly_rent": round(median_rent),
        "gross_yield_pct": round(gross_yield, 2),
        "sample_sales": len(sales),
        "sample_lets": len(lets),
    }

# Screen multiple postcodes
for pc in ["LS1", "LS2", "LS6", "LS7", "LS8"]:
    result = estimate_yield(pc, bedrooms=2)
    print(f"{pc}: {result.get('gross_yield_pct', 'N/A')}% gross yield "
          f"(median sale £{result.get('median_sale_price', 0):,})")

Handling large exports with progress tracking

For very large exports (50,000+ rows on Pro/Scale), it's worth adding progress tracking and checkpointing so you can resume if the script is interrupted:

large_export.py Python — resumable export
import requests
import csv
import json
import time
from pathlib import Path

API_KEY = "hd_live_your_key_here"

def resumable_export(filters: dict, output_csv: str, checkpoint_file: str):
    """
    Large export with checkpoint/resume.
    Saves offset after each batch — safe to kill and restart.
    """
    # Load checkpoint if exists
    offset = 0
    if Path(checkpoint_file).exists():
        with open(checkpoint_file) as f:
            offset = json.load(f).get("offset", 0)
        print(f"Resuming from offset {offset}...")

    mode = "a" if offset > 0 else "w"
    total = offset

    with open(output_csv, mode, newline="", encoding="utf-8") as f:
        writer = None
        write_header = (offset == 0)

        while True:
            resp = requests.post(
                "https://homedata.co.uk/api/v1/listings/export",
                headers={"X-API-Key": API_KEY, "Accept": "application/json"},
                json={**filters, "limit": 5000, "offset": offset, "format": "json"},
                timeout=60,
            )
            resp.raise_for_status()
            data = resp.json()
            rows = data.get("listings", [])

            if not rows:
                break

            if write_header:
                writer = csv.DictWriter(f, fieldnames=rows[0].keys())
                writer.writeheader()
                write_header = False
            elif writer is None:
                writer = csv.DictWriter(f, fieldnames=rows[0].keys())

            writer.writerows(rows)
            total += len(rows)
            offset += len(rows)

            # Save checkpoint
            with open(checkpoint_file, "w") as cp:
                json.dump({"offset": offset, "total": total}, cp)

            print(f"  {total} rows exported...")

            if len(rows) < 5000:
                break
            time.sleep(0.2)

    # Clean up checkpoint on successful completion
    Path(checkpoint_file).unlink(missing_ok=True)
    print(f"Complete: {total} rows → {output_csv}")
    return total


# Export 2-bed sale listings across a sample of London outcodes
LONDON_OUTCODES = ["E1", "E2", "N1", "N4", "SE1", "SW1", "W1", "EC1", "WC1"]

for outcode in LONDON_OUTCODES:
    print(f"\nExporting {outcode}...")
    resumable_export(
        filters={
            "postcode_prefix": outcode,
            "bedrooms": 2,
            "transaction_type": "sale",
            "fields": ["address", "postcode", "uprn", "price", "dom",
                       "listed_date", "lat", "lng", "has_address_match"],
        },
        output_csv=f"london_{outcode.lower()}_2bed.csv",
        checkpoint_file=f".checkpoint_{outcode.lower()}.json",
    )

Rate limits by plan tier

Plan Monthly calls Max rows/request Rate limit Max rows/month
Free / Starter 100 / 2,000 Not available n/a n/a
Growth 10,000 1,000 rows 10 req/sec 100,000 rows
Pro 50,000 5,000 rows 20 req/sec 500,000 rows
Scale 250,000 5,000 rows 40 req/sec 2,500,000 rows
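
The scripts above simply sleep between batches. For production pipelines it's also worth handling an explicit 429 response with backoff. A hedged sketch: whether the API sends a Retry-After header is an assumption here, so check the response headers your plan actually returns.

```python
import time
import requests

def post_with_backoff(url: str, *, headers: dict, json: dict,
                      max_retries: int = 5) -> requests.Response:
    """POST, retrying with exponential backoff on HTTP 429 (rate limited)."""
    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=json, timeout=60)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        # Honour Retry-After if the server sends one (an assumption),
        # otherwise back off 1s, 2s, 4s, ...
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError(f"Still rate limited after {max_retries} retries")
```

Swap this in for the plain requests.post calls inside the pagination loops above.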

What's in the export response

Every row returned by the bulk export can contain up to 20 fields. You specify which ones you want via the fields array — omit it to get all fields.

Sample JSON response
{
  "count": 1,
  "total_calls_used": 1,
  "listings": [
    {
      "listing_id": "rm_123456789",
      "address": "14 Acacia Avenue",
      "postcode": "SW1A 1AA",
      "uprn": "100023336956",
      "price": 650000,
      "original_price": 695000,
      "bedrooms": 3,
      "bathrooms": 2,
      "property_type": "semi-detached",
      "transaction_type": "sale",
      "status": "For Sale",
      "dom": 47,
      "listed_date": "2026-02-16",
      "agent": "Knight Frank",
      "agent_branch": "Chelsea",
      "reductions": 1,
      "lat": 51.5012,
      "lng": -0.1419,
      "construction_age": "1950s",
      "has_address_match": true
    }
  ]
}

Start bulk exporting today

Bulk export is available on Growth, Pro, and Scale plans. If you're on Free or Starter and need bulk access, upgrading to Growth gets you 100,000 rows/month — enough for most data pipeline use cases.

Get your free API key

Sign up and explore the API in minutes: 100 calls/month, no credit card required. Upgrade to Growth when you're ready to bulk export.