Troubleshooting Guide
This comprehensive guide covers common issues you may encounter when using UltimaScraperAPI and their solutions. Issues are organized by category for easy navigation.
Quick Debugging Tips
- Enable debug logging: `logging.basicConfig(level=logging.DEBUG)`
- Check your Python version: `python --version` (requires 3.10+)
- Verify package installation: `pip show ultima-scraper-api`
- Review configuration: ensure all required credentials are present
📑 Table of Contents
- Installation Issues
- Authentication Problems
- Connection & Network Errors
- Proxy Issues
- Session Management Problems
- API Rate Limiting
- Data Scraping Issues
- Redis Integration Problems
- Performance Issues
- Platform-Specific Issues
Installation Issues
❌ Problem: Package Installation Fails
Symptoms:
ERROR: Could not find a version that satisfies the requirement ultima-scraper-api
ERROR: No matching distribution found for ultima-scraper-api
Solutions:
- Check your Python version (see the sketch after this list)
- Update pip
- Use uv (recommended)
- Install from source
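If you want to confirm the interpreter before digging into pip itself, here is a minimal sketch of the first step; the 3.10+ requirement is from this guide, and the message text is illustrative only:

```python
import sys

# Fail fast if the running interpreter is older than the required 3.10.
if sys.version_info < (3, 10):
    raise SystemExit(
        f"Python {sys.version.split()[0]} detected; "
        "ultima-scraper-api requires Python 3.10 or newer."
    )
print("Python version OK:", sys.version.split()[0])
```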
❌ Problem: Dependency Conflicts
Symptoms:
ERROR: pip's dependency resolver does not currently take into account all the packages
ERROR: Incompatible library versions
Solutions:
- Use a fresh virtual environment
- Check the dependency tree
- Install with constraints
❌ Problem: Import Errors After Installation
Symptoms:
ImportError: No module named 'ultima_scraper_api'
ModuleNotFoundError: No module named 'ultima_scraper_api.apis'
Solutions:
- Verify installation
- Check the Python environment (see the sketch after this list)
- Reinstall the package
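A minimal diagnostic sketch for the first two steps: it prints which interpreter is running and whether that interpreter can see the package at all. A mismatch usually means the package was installed into a different environment.

```python
import importlib.util
import sys

print("Interpreter:", sys.executable)

# Check whether this interpreter can locate the package without importing it.
spec = importlib.util.find_spec("ultima_scraper_api")
if spec is None:
    print("❌ ultima_scraper_api is not visible to this interpreter")
else:
    print("✅ Found at:", spec.origin)
```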
Authentication Problems
❌ Problem: Authentication Fails with "Unauthorized"
Symptoms:
Solutions:
- Verify credentials are current:
    - Cookies expire frequently (24-48 hours)
    - Extract fresh credentials from your browser
    - Ensure the User-Agent matches the browser the cookies came from
- Check cookie format
- Validate User-Agent
- Test authentication:

```python
import asyncio

from ultima_scraper_api.apis.onlyfans.onlyfans import OnlyFansAPI


async def test_auth():
    api = OnlyFansAPI()
    auth_details = {
        "cookie": "your_cookie_here",
        "user-agent": "your_user_agent_here",
        "x-bc": "your_x_bc_token_here",
    }

    authenticator = api.get_authenticator()
    auth = await authenticator.login(auth_details)

    if auth.active:
        print("✅ Authentication successful!")
        print(f"User ID: {auth.id}")
        print(f"Username: {auth.username}")
    else:
        print("❌ Authentication failed")
        print(f"Error: {auth.error_details}")

asyncio.run(test_auth())
```
❌ Problem: x-bc Token Invalid or Missing
Symptoms:
Solutions:
- Extract x-bc from browser:
    - Open DevTools (F12) → Network tab
    - Look for API requests to `onlyfans.com/api2/v2/`
    - Copy `x-bc` from the request headers
- Verify the token format
- Check token expiration:
    - x-bc tokens can expire
    - Re-extract if authentication fails
    - Consider implementing automatic token refresh
❌ Problem: Two-Factor Authentication (2FA) Required
Symptoms:
AuthenticationError: Two-factor authentication required
TwoFactorAuthenticationRequired: Please verify your account
Solutions:
- Disable 2FA temporarily (if possible) for scraping sessions
- Use an account without 2FA for API access
- Manual 2FA verification:
    - Complete 2FA in the browser
    - Extract cookies after verification
    - Use the fresh cookies in the API
- Future support:
    - 2FA automation is planned but not currently supported
    - Monitor project updates for 2FA support
Connection & Network Errors
❌ Problem: Connection Timeout
Symptoms:
TimeoutError: Connection timeout after 30 seconds
asyncio.exceptions.TimeoutError
aiohttp.ClientError: Server disconnected
Solutions:
- Increase the timeout (see the sketch after this list)
- Check your internet connection
- Test with retry logic
- Use connection pooling
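As a sketch of the first and third steps, here is one way to raise the timeout and retry with plain aiohttp; the timeout values and retry counts are illustrative, not defaults used by the library:

```python
import asyncio

import aiohttp


async def fetch_with_timeout(url: str, retries: int = 3) -> str:
    # Generous limits: 120s for the whole request, 30s to connect.
    timeout = aiohttp.ClientTimeout(total=120, connect=30)
    for attempt in range(1, retries + 1):
        try:
            async with aiohttp.ClientSession(timeout=timeout) as session:
                async with session.get(url) as resp:
                    resp.raise_for_status()
                    return await resp.text()
        except (asyncio.TimeoutError, aiohttp.ClientError) as e:
            print(f"Attempt {attempt} failed: {e}")
            await asyncio.sleep(2 * attempt)  # Simple linear backoff
    raise TimeoutError(f"Giving up on {url} after {retries} attempts")
```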
❌ Problem: SSL Certificate Verification Fails
Symptoms:
aiohttp.ClientSSLError: SSL certificate verification failed
ssl.SSLCertVerificationError: certificate verify failed
Solutions:
- Update certificates (see the sketch after the warning below)
- Check system time:
    - SSL certificates are time-sensitive
    - Ensure the system clock is correct
- Temporary workaround (NOT recommended for production)
Security Warning
Disabling SSL verification exposes you to man-in-the-middle attacks. Only use for debugging.
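Instead of disabling verification, a safer sketch of the "update certificates" step is to point aiohttp at certifi's current CA bundle; this assumes certifi is installed and is not a documented requirement of the library:

```python
import ssl

import aiohttp
import certifi


async def fetch_with_certifi(url: str) -> int:
    # Build an SSL context from certifi's up-to-date CA bundle.
    ssl_context = ssl.create_default_context(cafile=certifi.where())
    connector = aiohttp.TCPConnector(ssl=ssl_context)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get(url) as resp:
            return resp.status
```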
❌ Problem: DNS Resolution Errors
Symptoms:
Solutions:
- Check DNS settings
- Use a custom DNS resolver (see the sketch after this list)
- Test with a hosts file entry
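A minimal sketch of the "custom DNS" step using aiohttp's AsyncResolver; it requires the optional aiodns package, and the nameserver addresses are just examples:

```python
import aiohttp
from aiohttp.resolver import AsyncResolver


async def fetch_with_custom_dns(url: str) -> int:
    # Resolve hostnames through explicit public resolvers instead of the system default.
    resolver = AsyncResolver(nameservers=["8.8.8.8", "1.1.1.1"])
    connector = aiohttp.TCPConnector(resolver=resolver)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get(url) as resp:
            return resp.status
```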
Proxy Issues
❌ Problem: Proxy Connection Failed
Symptoms:
ProxyConnectionError: Cannot connect to proxy server
aiohttp_socks.ProxyConnectionError
TimeoutError: Proxy connection timeout
Solutions:
- Verify proxy credentials:

```python
import asyncio

import aiohttp
from aiohttp_socks import ProxyConnector

from ultima_scraper_api.config import Proxy

# ✅ Correct format
proxy = Proxy(
    protocol="http",
    host="proxy.example.com",
    port=8080,
    username="user",
    password="pass123",
)


# Test the proxy directly with aiohttp
async def test_proxy():
    connector = ProxyConnector.from_url(
        "http://user:pass123@proxy.example.com:8080"
    )
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get("http://httpbin.org/ip") as resp:
            print(await resp.json())

asyncio.run(test_proxy())
```

- Check the proxy server status
- Try a different proxy protocol
❌ Problem: Proxy Authentication Fails
Symptoms:
ProxyAuthenticationRequired: 407 Proxy Authentication Required
aiohttp_socks.ProxyError: Authentication failed
Solutions:
- URL-encode special characters in credentials (see the sketch after this list)
- Verify the credentials
- Check the proxy authentication method:
    - Some proxies require specific auth methods
    - Consult your proxy provider's documentation
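A short sketch of the URL-encoding step: characters such as `@`, `:` or `#` in proxy credentials must be percent-encoded before being embedded in a proxy URL. The credentials below are placeholders.

```python
from urllib.parse import quote

username = quote("user@example.com", safe="")
password = quote("p@ss:w0rd#1", safe="")

proxy_url = f"http://{username}:{password}@proxy.example.com:8080"
print(proxy_url)
# http://user%40example.com:p%40ss%3Aw0rd%231@proxy.example.com:8080
```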
❌ Problem: SOCKS5 Proxy Not Working
Symptoms:
Solutions:
- Install the required dependencies
- Use SOCKS5H so DNS resolution happens through the proxy
- Test the SOCKS5 connection:

```python
import asyncio

from python_socks.async_.asyncio import Proxy as PythonSocksProxy


async def test_socks5():
    proxy = PythonSocksProxy.from_url("socks5://proxy.example.com:1080")
    sock = await proxy.connect(dest_host="onlyfans.com", dest_port=443)
    print("✅ SOCKS5 connection successful")

asyncio.run(test_socks5())
```
Session Management Problems
❌ Problem: Memory Leak with Sessions
Symptoms:
Solutions:
- Always use context managers (see the sketch after this list)
- Explicitly close sessions
- Monitor with warnings
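A minimal sketch combining all three steps, reusing the `login_context()` and `close()` calls shown elsewhere in this guide; the auth details are placeholders:

```python
import asyncio
import warnings

from ultima_scraper_api.apis.onlyfans.onlyfans import OnlyFansAPI

# Surface unclosed-session warnings while debugging leaks.
warnings.simplefilter("always", ResourceWarning)


async def main(auth_details: dict):
    api = OnlyFansAPI()
    try:
        # Preferred: the context manager cleans up the session on exit.
        async with api.login_context(auth_details) as auth:
            ...
    finally:
        # Explicitly release anything that is still open.
        await api.close()


# asyncio.run(main({"cookie": "...", "user-agent": "...", "x-bc": "..."}))
```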
❌ Problem: Session Limit Exceeded
Symptoms:
Solutions:
- Configure connection limits (see the sketch after this list)
- Reuse sessions:

```python
# ✅ Reuse a single session
api = OnlyFansAPI()
async with api.login_context(auth_details) as auth:
    # Multiple operations with the same session
    users = await auth.get_users()
    for user in users:
        posts = await user.get_posts()  # Reuses the session

# ❌ Creating new sessions each time
for user in users:
    api = OnlyFansAPI()  # Don't do this
    auth = await api.login(auth_details)
```

- Implement connection pooling
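For the connection-limit and pooling steps, here is a sketch using aiohttp directly; the limits shown are illustrative and the library's own defaults may differ:

```python
import aiohttp


def make_limited_connector() -> aiohttp.TCPConnector:
    return aiohttp.TCPConnector(
        limit=20,           # Total simultaneous connections in the pool
        limit_per_host=5,   # Connections per host
        ttl_dns_cache=300,  # Cache DNS lookups for five minutes
    )


async def example():
    async with aiohttp.ClientSession(connector=make_limited_connector()) as session:
        async with session.get("https://httpbin.org/ip") as resp:
            print(resp.status)
```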
❌ Problem: Redis Connection Issues
Symptoms:
redis.exceptions.ConnectionError: Connection refused
redis.exceptions.TimeoutError: Timeout waiting for Redis
Solutions:
- Verify Redis is running
- Check the Redis configuration
- Use connection pooling (see the sketch after this list)
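A quick connectivity sketch with redis-py's asyncio client, assuming a local Redis on the default port; a shared connection pool avoids opening a new connection per operation:

```python
import asyncio

import redis.asyncio as aioredis
from redis.exceptions import ConnectionError as RedisConnectionError


async def check_redis() -> None:
    pool = aioredis.ConnectionPool(host="localhost", port=6379, db=0, max_connections=10)
    client = aioredis.Redis(connection_pool=pool)
    try:
        if await client.ping():
            print("✅ Redis is reachable")
    except RedisConnectionError as e:
        print(f"❌ Redis connection failed: {e}")
    finally:
        await client.aclose()  # use client.close() on redis-py < 5


asyncio.run(check_redis())
```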
See Redis Integration Problems for more Redis-specific issues.
API Rate Limiting
❌ Problem: Rate Limit Exceeded (429 Error)
Symptoms:
Solutions:
- Implement exponential backoff:

```python
import asyncio
from tenacity import retry, wait_exponential, stop_after_attempt, retry_if_exception_type
from httpx import HTTPStatusError


@retry(
    wait=wait_exponential(multiplier=1, min=4, max=60),
    stop=stop_after_attempt(5),
    retry=retry_if_exception_type(HTTPStatusError),
)
async def fetch_with_retry(api, endpoint):
    return await api.get(endpoint)


# Usage
try:
    result = await fetch_with_retry(api, "/users/me")
except Exception as e:
    print(f"Failed after retries: {e}")
```

- Add delays between requests
- Use request throttling:

```python
import asyncio
from asyncio import Semaphore


async def throttled_fetch(api, endpoints, max_concurrent=5, delay=0.5):
    semaphore = Semaphore(max_concurrent)

    async def fetch_one(endpoint):
        async with semaphore:
            result = await api.get(endpoint)
            await asyncio.sleep(delay)
            return result

    tasks = [fetch_one(ep) for ep in endpoints]
    return await asyncio.gather(*tasks)
```

- Monitor rate limits:

```python
async def fetch_with_monitoring(api, endpoint):
    response = await api.get(endpoint)

    # Check rate limit headers
    remaining = response.headers.get('X-RateLimit-Remaining')
    reset_time = response.headers.get('X-RateLimit-Reset')

    if remaining and int(remaining) < 10:
        print(f"⚠️ Rate limit low: {remaining} requests remaining")
        print(f"Resets at: {reset_time}")

    return response
```
❌ Problem: Concurrent Request Limits
Symptoms:
Solutions:
- Limit concurrency with a semaphore:

```python
import asyncio


async def controlled_concurrency(tasks, max_concurrent=10):
    semaphore = asyncio.Semaphore(max_concurrent)

    async def run_with_semaphore(task):
        async with semaphore:
            return await task

    return await asyncio.gather(*[run_with_semaphore(t) for t in tasks])


# Usage
user_tasks = [api.get_user(uid) for uid in user_ids]
users = await controlled_concurrency(user_tasks, max_concurrent=5)
```

- Batch requests:

```python
async def fetch_in_batches(api, items, batch_size=10):
    results = []
    for i in range(0, len(items), batch_size):
        batch = items[i:i + batch_size]
        batch_results = await asyncio.gather(*[api.get_item(item) for item in batch])
        results.extend(batch_results)
        await asyncio.sleep(1)  # Delay between batches
    return results
```

- Use queue-based processing:

```python
import asyncio


async def queue_processor(api, queue, max_workers=5):
    async def worker():
        while True:
            item = await queue.get()
            if item is None:  # Poison pill
                break
            try:
                result = await api.process(item)
                print(f"Processed: {item}")
            except Exception as e:
                print(f"Error processing {item}: {e}")
            finally:
                queue.task_done()

    workers = [asyncio.create_task(worker()) for _ in range(max_workers)]
    await queue.join()

    # Stop workers
    for _ in workers:
        await queue.put(None)
    await asyncio.gather(*workers)
```
Data Scraping Issues
❌ Problem: Missing or Incomplete Data
Symptoms:
AttributeError: 'NoneType' object has no attribute
KeyError: Expected field missing from response
Empty lists or null values
Solutions:
- Validate data before access
- Check the API response status:

```python
from httpx import HTTPStatusError


async def fetch_user_safe(api, username):
    try:
        user = await api.get_user(username)
        if not user:
            print(f"User {username} not found")
            return None
        return user
    except HTTPStatusError as e:
        if e.response.status_code == 404:
            print(f"User {username} does not exist")
        elif e.response.status_code == 403:
            print(f"Access denied for user {username}")
        else:
            print(f"Error fetching user: {e}")
        return None
```

- Handle pagination properly:

```python
async def get_all_posts(user):
    all_posts = []
    offset = 0
    limit = 100

    while True:
        posts = await user.get_posts(offset=offset, limit=limit)
        if not posts:  # No more posts
            break

        all_posts.extend(posts)
        offset += limit

        # Safety check
        if offset > 10000:  # Reasonable limit
            print("⚠️ Reached maximum offset")
            break

    return all_posts
```
❌ Problem: Media Download Failures
Symptoms:
DownloadError: Failed to download media
ChunkedEncodingError: Connection broken
FileNotFoundError: Media URL expired
Solutions:
- Implement retry logic for downloads:

```python
from tenacity import retry, stop_after_attempt, wait_fixed


@retry(stop=stop_after_attempt(3), wait=wait_fixed(2))
async def download_with_retry(session, url, save_path):
    async with session.get(url) as response:
        response.raise_for_status()
        with open(save_path, 'wb') as f:
            async for chunk in response.content.iter_chunked(8192):
                f.write(chunk)
    return save_path
```

- Validate media URLs:

```python
async def download_media(api, media_item):
    if not media_item.url:
        print(f"⚠️ No URL for media {media_item.id}")
        return None

    # Check URL expiration
    if hasattr(media_item, "expires_at"):
        from datetime import datetime
        if datetime.now() > media_item.expires_at:
            print("⚠️ Media URL expired, refreshing...")
            media_item = await api.refresh_media(media_item.id)

    return await api.download(media_item.url)
```

- Handle DRM-protected content:

```python
from ultima_scraper_api.config import DRM

# Configure DRM settings
config = UltimaScraperAPIConfig()
config.drm = DRM(
    enabled=True,
    # Add Widevine configuration if needed
)

# Check if content is DRM-protected
if media.is_drm_protected:
    print("⚠️ DRM-protected content detected")
    # Handle DRM decryption
```
❌ Problem: Scraping Performance Issues
Symptoms:
Solutions:
- Use async patterns efficiently (see the sketch after this list)
- Stream large responses
- Use caching:

```python
from ultima_scraper_api.config import Redis

# In-memory caching. Note: functools.lru_cache does not work correctly on
# async functions (it would cache the coroutine object), so use a plain
# dict keyed by user_id instead.
_user_cache = {}


async def get_user_cached(api, user_id):
    if user_id not in _user_cache:
        _user_cache[user_id] = await api.get_user(user_id)
    return _user_cache[user_id]


# Redis caching
redis_config = Redis(host="localhost", port=6379)
# API will automatically use Redis for caching
```

- Limit data collection
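As a sketch of the "use async patterns efficiently" step, fetch independent items concurrently instead of awaiting them one at a time; `auth.get_user` follows the usage shown earlier in this guide:

```python
import asyncio


async def fetch_users(auth, usernames: list[str]):
    # ❌ Sequential: total time is the sum of every request
    # users = [await auth.get_user(name) for name in usernames]

    # ✅ Concurrent: requests overlap instead of queuing behind each other
    return await asyncio.gather(*(auth.get_user(name) for name in usernames))
```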
Redis Integration Problems
❌ Problem: Redis Key Errors
Symptoms:
Solutions:
- Verify the key format
- Check the key type
- Handle missing keys (see the sketch after this list)
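A minimal sketch of the last two steps with redis-py's asyncio client: check the key's type before issuing a type-specific command, and treat a missing key (`GET` returns `None`) as a cache miss. The key names are placeholders.

```python
import redis.asyncio as aioredis


async def read_cached_value(client: aioredis.Redis, key: str):
    key_type = await client.type(key)  # b"string", b"hash", b"none", ...
    if key_type == b"none":
        print(f"{key} does not exist; fall back to a fresh fetch")
        return None
    if key_type != b"string":
        print(f"{key} holds a {key_type!r} value; GET would raise WRONGTYPE")
        return None
    return await client.get(key)
```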
❌ Problem: Redis Memory Issues
Symptoms:
Solutions:
- Set expiration on keys
- Configure the Redis memory policy
- Monitor memory usage
- Implement key cleanup:

```python
async def cleanup_expired_keys(redis_client, pattern="session:*"):
    cursor = 0
    while True:
        cursor, keys = await redis_client.scan(
            cursor=cursor, match=pattern, count=100
        )
        for key in keys:
            ttl = await redis_client.ttl(key)
            if ttl == -1:  # No expiration set
                await redis_client.expire(key, 3600)
        if cursor == 0:
            break
```
❌ Problem: Redis Serialization Errors
Symptoms:
TypeError: Object of type datetime is not JSON serializable
pickle.PicklingError: Can't pickle object
Solutions:
- Use proper serialization:

```python
import json
from datetime import datetime


class DateTimeEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime):
            return obj.isoformat()
        return super().default(obj)


# Serialize
data = {"timestamp": datetime.now(), "value": 123}
serialized = json.dumps(data, cls=DateTimeEncoder)
await redis.set("key", serialized)

# Deserialize
cached = await redis.get("key")
data = json.loads(cached)
```

- Use dill for complex objects (see the sketch after this list)
- Implement a custom serializer
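A short sketch of the dill option: dill can serialize objects the standard pickle module refuses (lambdas, nested functions), and the resulting bytes can be stored in Redis as-is. This assumes dill is installed separately.

```python
import dill  # pip install dill

obj = {"parser": lambda raw: raw.strip().lower(), "retries": 3}

payload = dill.dumps(obj)          # bytes, suitable for redis.set(key, payload)
restored = dill.loads(payload)

print(restored["parser"]("  HELLO "))  # -> "hello"
```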
Performance Issues
❌ Problem: Slow API Response Times
Symptoms:
Solutions:
- Enable HTTP/2 (see the sketch after this list)
- Use connection keepalive
- Implement caching:

```python
import hashlib
import json


# Build a deterministic cache key from the endpoint and parameters.
# (functools.lru_cache is not used here because dict params are unhashable.)
def cache_key(endpoint, params):
    key = f"{endpoint}:{json.dumps(params, sort_keys=True)}"
    return hashlib.md5(key.encode()).hexdigest()


# Redis cache with TTL
async def cached_api_call(redis_client, api, endpoint, params, ttl=300):
    key = cache_key(endpoint, params)

    # Check cache
    cached = await redis_client.get(key)
    if cached:
        return json.loads(cached)

    # Fetch fresh data
    data = await api.get(endpoint, params=params)

    # Cache result
    await redis_client.setex(key, ttl, json.dumps(data))
    return data
```

- Profile your code
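For the "enable HTTP/2" step, here is a sketch using httpx directly; it requires the optional extra `httpx[http2]`, and whether HTTP/2 is actually negotiated depends on the remote server:

```python
import httpx


async def fetch_http2(url: str) -> str:
    async with httpx.AsyncClient(http2=True, timeout=30.0) as client:
        resp = await client.get(url)
        print("Negotiated:", resp.http_version)  # e.g. "HTTP/2"
        return resp.text
```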
❌ Problem: High Memory Consumption
Symptoms:
Solutions:
- Process data in chunks
- Use generators
- Monitor memory
- Explicit garbage collection (see the sketch after this list)
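A sketch that combines the steps above: page through posts in chunks, drop each chunk when done, collect garbage between chunks, and watch usage with tracemalloc. `user.get_posts(offset=..., limit=...)` follows the pagination pattern shown earlier; `process_post` is a placeholder for your own handling.

```python
import gc
import tracemalloc


async def process_posts_in_chunks(user, process_post, chunk_size: int = 100):
    tracemalloc.start()
    offset = 0
    while True:
        posts = await user.get_posts(offset=offset, limit=chunk_size)
        if not posts:
            break
        for post in posts:
            await process_post(post)
        offset += chunk_size

        posts = None  # Drop the reference to the processed chunk
        gc.collect()  # Explicit collection between chunks

        current, peak = tracemalloc.get_traced_memory()
        print(f"Memory: current={current / 1e6:.1f} MB, peak={peak / 1e6:.1f} MB")
    tracemalloc.stop()
```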
Platform-Specific Issues
OnlyFans Issues
❌ Problem: "User Not Found" for Valid Usernames
Symptoms:
Solutions:
- Use the user ID instead (see the sketch after this list)
- Search for the user
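A hedged sketch of the fallback order, assuming `get_user` also accepts a numeric user ID the way it accepts a username elsewhere in this guide; check the API reference before relying on this:

```python
async def find_user(auth, username: str, user_id: int):
    user = await auth.get_user(username)
    if user is None:
        print(f"{username} not found by name; retrying with ID {user_id}")
        user = await auth.get_user(user_id)  # Hypothetical: lookup by numeric ID
    return user
```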
❌ Problem: Paid Content Access Issues
Symptoms:
Solutions:
- Check the subscription status
- Verify content access
Fansly Issues (WIP)
Work in Progress
Fansly API integration is currently under development. Some features may be limited or unstable.
❌ Problem: Limited API Coverage
Current Status:

- ✅ Authentication
- ✅ Basic user profiles
- ⚠️ Content fetching (limited)
- ❌ Messaging (not yet implemented)
- ❌ Live streams (not yet implemented)
Solutions:
Monitor the project's GitHub for updates and feature additions.
LoyalFans Issues (WIP)
Work in Progress
LoyalFans API integration is currently under development.
Current Status:

- ✅ Authentication
- ⚠️ User profiles (basic)
- ❌ Most features not yet implemented
General Debugging Tips
Enable Debug Logging
```python
import logging

# Configure logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

# Specific loggers
logging.getLogger('ultima_scraper_api').setLevel(logging.DEBUG)
logging.getLogger('aiohttp').setLevel(logging.INFO)
```
Capture Network Traffic
```python
import aiohttp
import logging

# Enable aiohttp debug logging
aiohttp_logger = logging.getLogger('aiohttp')
aiohttp_logger.setLevel(logging.DEBUG)


# Log all requests
async def log_requests(session, ctx, params):
    print(f"Request: {params.method} {params.url}")
    print(f"Headers: {params.headers}")

trace_config = aiohttp.TraceConfig()
trace_config.on_request_start.append(log_requests)

# Use with session
connector = aiohttp.TCPConnector()
session = aiohttp.ClientSession(
    connector=connector,
    trace_configs=[trace_config]
)
```
Test Individual Components
```python
import asyncio

from ultima_scraper_api.apis.onlyfans.onlyfans import OnlyFansAPI


async def test_component():
    # Test authentication
    print("Testing authentication...")
    api = OnlyFansAPI()
    auth_details = {...}

    try:
        auth = await api.login(auth_details)
        print(f"✅ Auth successful: {auth.username}")
    except Exception as e:
        print(f"❌ Auth failed: {e}")
        return

    # Test user fetching
    print("Testing user fetch...")
    try:
        user = await auth.get_user("username")
        print(f"✅ User fetch successful: {user.id}")
    except Exception as e:
        print(f"❌ User fetch failed: {e}")

    # Cleanup
    await api.close()

asyncio.run(test_component())
```
Getting Help
If you're still experiencing issues:
- Check the documentation:
    - Installation Guide
    - API Reference
- Search existing issues
- Create a detailed bug report. Include:
    - Python version (`python --version`)
    - Package version (`pip show ultima-scraper-api`)
    - Full error traceback
    - Minimal reproducible example
    - Configuration (redact sensitive data)
- Enable debug logging and include the relevant logs
Related Documentation
- Authentication Guide - Detailed authentication setup
- Proxy Support - Comprehensive proxy configuration
- Session Management - Session and Redis setup
- Working with APIs - API usage patterns and best practices
Last Updated: 2025-01-24
Version: 2.2.46