
Pagination

Learn how to efficiently navigate through large collections of recordings using the IdentityCall API’s pagination system.

How Pagination Works

The API uses offset-based pagination with two parameters:

| Parameter | Type | Default | Max | Description |
|-----------|------|---------|-----|-------------|
| `page` | integer | 1 | - | Page number (1-indexed) |
| `per_page` | integer | 20 | 100 | Items per page |

Response Metadata

Every paginated response includes a meta object:

```json
{
  "data": [...],
  "meta": {
    "current_page": 1,
    "per_page": 20,
    "total_pages": 5,
    "total_count": 95
  }
}
```

| Field | Description |
|-------|-------------|
| `current_page` | The page number returned |
| `per_page` | Number of items per page |
| `total_pages` | Total number of pages available |
| `total_count` | Total number of items across all pages |
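The totals relate by a simple formula: `total_pages` is the ceiling of `total_count / per_page`. A one-liner you can use when precomputing page ranges client-side:

```python
import math

def pages_needed(total_count: int, per_page: int) -> int:
    """Number of pages required to hold total_count items, per_page at a time."""
    return math.ceil(total_count / per_page)

# Matches the sample meta above: 95 items at 20 per page -> 5 pages
print(pages_needed(95, 20))
```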

Basic Pagination

```bash
# Get first page with 20 items
curl -X GET "https://api.identitycall.com/api/v1/public/recordings" \
  -H "Authorization: Bearer $IDENTITYCALL_API_KEY"

# Get page 3 with 50 items per page
curl -X GET "https://api.identitycall.com/api/v1/public/recordings?page=3&per_page=50" \
  -H "Authorization: Bearer $IDENTITYCALL_API_KEY"
```
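The Python examples that follow call a `get_recordings()` helper, which this guide does not define. A minimal sketch using the `requests` library, mirroring the URL and environment variable from the curl examples above:

```python
import os

import requests

BASE_URL = "https://api.identitycall.com/api/v1/public"
API_KEY = os.environ.get("IDENTITYCALL_API_KEY", "")

def get_recordings(page=1, per_page=20):
    """Fetch one page of recordings; returns the parsed {data, meta} payload."""
    response = requests.get(
        f"{BASE_URL}/recordings",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"page": page, "per_page": per_page},
    )
    response.raise_for_status()
    return response.json()
```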

Iterating Through All Pages

```python
def get_all_recordings():
    """Fetch all recordings across all pages."""
    all_recordings = []
    page = 1
    per_page = 100  # Max allowed
    while True:
        result = get_recordings(page=page, per_page=per_page)
        all_recordings.extend(result["data"])
        meta = result["meta"]
        print(f"Fetched page {meta['current_page']}/{meta['total_pages']}")
        if meta["current_page"] >= meta["total_pages"]:
            break
        page += 1
    return all_recordings

# Fetch everything
recordings = get_all_recordings()
print(f"Total fetched: {len(recordings)}")
```

Generator/Iterator Pattern

For memory-efficient processing of large datasets:

```python
def iterate_recordings(per_page=100):
    """Generator that yields recordings one at a time."""
    page = 1
    while True:
        result = get_recordings(page=page, per_page=per_page)
        for recording in result["data"]:
            yield recording
        if result["meta"]["current_page"] >= result["meta"]["total_pages"]:
            break
        page += 1

# Process recordings without loading all into memory
for recording in iterate_recordings():
    process_recording(recording)
```

Parallel Fetching

For faster retrieval when you know the total pages:

```python
import asyncio

import aiohttp

async def fetch_page(session, page, per_page=100):
    async with session.get(
        f"{BASE_URL}/recordings",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"page": page, "per_page": per_page},
    ) as response:
        return await response.json()

async def get_all_recordings_parallel():
    async with aiohttp.ClientSession() as session:
        # First, get metadata
        first_page = await fetch_page(session, 1)
        total_pages = first_page["meta"]["total_pages"]

        # Fetch remaining pages in parallel
        tasks = [fetch_page(session, p) for p in range(2, total_pages + 1)]
        results = await asyncio.gather(*tasks)

        # Combine all recordings
        all_recordings = first_page["data"]
        for result in results:
            all_recordings.extend(result["data"])
        return all_recordings

# Run
recordings = asyncio.run(get_all_recordings_parallel())
```

Be mindful of rate limits when making parallel requests. Consider limiting concurrency to 5-10 simultaneous requests.
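One way to apply such a cap is an `asyncio.Semaphore`. A sketch that wraps any page-fetching coroutine (the `fetch` argument here is a hypothetical stand-in for something like a partially applied `fetch_page` from the example above):

```python
import asyncio

async def fetch_limited(sem, fetch, page):
    """Run fetch(page) while holding the semaphore slot."""
    async with sem:
        return await fetch(page)

async def gather_pages(fetch, pages, max_concurrency=5):
    """Fetch the given pages with at most max_concurrency requests in flight."""
    sem = asyncio.Semaphore(max_concurrency)
    return await asyncio.gather(*(fetch_limited(sem, fetch, p) for p in pages))
```

With the aiohttp example above, you would pass `lambda p: fetch_page(session, p)` as `fetch` and `range(2, total_pages + 1)` as `pages`.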

Best Practices

1. Use Appropriate Page Sizes

```python
# For browsing/UI display
result = get_recordings(page=1, per_page=20)

# For bulk processing
result = get_recordings(page=1, per_page=100)  # Max allowed
```

2. Handle Empty Results

```python
result = get_recordings(page=100)
if not result["data"]:
    print("No recordings on this page")
```

3. Check for Last Page

```python
meta = result["meta"]
is_last_page = meta["current_page"] >= meta["total_pages"]
has_more = meta["current_page"] < meta["total_pages"]
```

4. Cache Page Count

```python
# Cache the total count to avoid refetching
class RecordingsPaginator:
    def __init__(self):
        self._total_count = None
        self._total_pages = None

    def get_page(self, page, per_page=20):
        result = get_recordings(page=page, per_page=per_page)
        self._total_count = result["meta"]["total_count"]
        self._total_pages = result["meta"]["total_pages"]
        return result

    @property
    def total_count(self):
        if self._total_count is None:
            self.get_page(1, 1)  # Minimal request to get count
        return self._total_count
```

Common Patterns

Display with Page Navigation

```python
def display_recordings_page(page, per_page=10):
    result = get_recordings(page=page, per_page=per_page)
    meta = result["meta"]

    print(f"Recordings (Page {meta['current_page']} of {meta['total_pages']})")
    print("-" * 50)
    for rec in result["data"]:
        print(f"  [{rec['id']}] {rec['name']} - {rec['status']}")
    print("-" * 50)
    print(f"Total: {meta['total_count']} recordings")

    # Navigation hints
    if meta["current_page"] > 1:
        print(f"  Previous: page={meta['current_page'] - 1}")
    if meta["current_page"] < meta["total_pages"]:
        print(f"  Next: page={meta['current_page'] + 1}")
```

Export All to CSV

```python
import csv

def export_all_recordings_to_csv(filename):
    with open(filename, 'w', newline='') as f:
        writer = None
        for recording in iterate_recordings():
            if writer is None:
                writer = csv.DictWriter(f, fieldnames=recording.keys())
                writer.writeheader()
            writer.writerow(recording)
    print(f"Exported to {filename}")
```

Next Steps