Performance Optimization Guide
Advanced strategies for optimizing StateSet API integrations for maximum performance, scalability, and reliability
This guide covers advanced strategies for optimizing your StateSet API integrations to achieve maximum performance, scalability, and reliability in production environments.
Performance Fundamentals
- Request Optimization: Minimize API calls through intelligent batching and caching
- Connection Management: Optimize HTTP connections and connection pooling
- Data Efficiency: Reduce payload sizes and implement smart filtering
Connection Optimization
HTTP/2 and Connection Pooling
StateSet APIs support HTTP/2 for improved performance. Configure your HTTP client appropriately:
import { StateSetClient } from 'stateset-node';
import { Agent } from 'https';

// Configure connection pooling
const httpsAgent = new Agent({
  keepAlive: true,
  keepAliveMsecs: 30000,
  maxSockets: 50,
  maxFreeSockets: 10,
  timeout: 30000
});

const client = new StateSetClient({
  apiKey: process.env.STATESET_API_KEY,
  httpAgent: httpsAgent,
  timeout: 30000,
  // Enable HTTP/2
  http2: true,
  // Connection reuse
  keepAlive: true
});
import os

import httpx
from stateset import StateSetClient

# Configure connection pooling
limits = httpx.Limits(
    max_keepalive_connections=20,
    max_connections=100,
    keepalive_expiry=30.0
)

# Use HTTP/2 for better performance
transport = httpx.HTTPTransport(
    limits=limits,
    http2=True,
    retries=3
)

client = StateSetClient(
    api_key=os.getenv('STATESET_API_KEY'),
    transport=transport,
    timeout=30.0
)
require 'net/http'
require 'stateset'

# Configure connection pooling
Stateset.configure do |config|
  config.api_key = ENV['STATESET_API_KEY']
  config.timeout = 30
  config.open_timeout = 10

  # Connection pool settings
  config.connection_pool_size = 25
  config.connection_pool_timeout = 5
  config.keep_alive_timeout = 30
end
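HTTP/2 only takes effect when the server negotiates it during the TLS handshake. A minimal sketch for checking what a host actually negotiates via ALPN (the host name is an assumption; substitute your API endpoint):

import tls from 'tls';

// Connect with ALPN offers and log what the server picked:
// 'h2' means HTTP/2; otherwise the client falls back to HTTP/1.1.
const socket = tls.connect(
  {
    host: 'api.stateset.com', // assumed endpoint -- replace with yours
    port: 443,
    ALPNProtocols: ['h2', 'http/1.1']
  },
  () => {
    console.log('Negotiated protocol:', socket.alpnProtocol);
    socket.end();
  }
);

socket.on('error', (err) => console.error('TLS check failed:', err.message));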
DNS Optimization
Optimize DNS resolution for faster connection establishment:
import dns from 'dns';

// Use faster DNS resolvers (note: setServers() affects dns.resolve*(),
// not dns.lookup(), which uses the OS resolver)
dns.setServers([
  '1.1.1.1',        // Cloudflare
  '8.8.8.8',        // Google
  '208.67.222.222'  // OpenDNS
]);

// Enable DNS caching for dns.lookup()
const dnsCache = new Map();
const originalLookup = dns.lookup;

dns.lookup = (hostname, options, callback) => {
  // Support the two-argument form: dns.lookup(hostname, callback)
  if (typeof options === 'function') {
    callback = options;
    options = {};
  }

  const cacheKey = `${hostname}:${JSON.stringify(options)}`;

  if (dnsCache.has(cacheKey)) {
    const cached = dnsCache.get(cacheKey);
    if (Date.now() - cached.timestamp < 300000) { // 5-minute TTL
      return callback(null, cached.address, cached.family);
    }
  }

  originalLookup(hostname, options, (err, address, family) => {
    if (!err) {
      dnsCache.set(cacheKey, {
        address,
        family,
        timestamp: Date.now()
      });
    }
    callback(err, address, family);
  });
};
Request Optimization Strategies
1. Intelligent Batching
Combine multiple operations into single requests:
class BatchProcessor {
  constructor(client, options = {}) {
    this.client = client;
    this.batchSize = options.batchSize || 100;
    this.flushInterval = options.flushInterval || 1000;
    this.batches = new Map();

    // Auto-flush timer
    setInterval(() => this.flushAll(), this.flushInterval);
  }

  async addToBatch(operation, data, priority = 0) {
    if (!this.batches.has(operation)) {
      this.batches.set(operation, { items: [] });
    }

    const batch = this.batches.get(operation);

    return new Promise((resolve, reject) => {
      batch.items.push({
        data,
        priority,
        resolve,
        reject,
        timestamp: Date.now()
      });

      // Auto-flush when batch is full
      if (batch.items.length >= this.batchSize) {
        this.flush(operation);
      }
    });
  }

  async flush(operation) {
    const batch = this.batches.get(operation);
    if (!batch || batch.items.length === 0) return;

    const items = batch.items.splice(0);

    try {
      const results = await this.executeBatch(operation, items);
      items.forEach((item, index) => {
        item.resolve(results[index]);
      });
    } catch (error) {
      items.forEach(item => {
        item.reject(error);
      });
    }
  }

  async executeBatch(operation, items) {
    const data = items.map(item => item.data);

    switch (operation) {
      case 'orders.update':
        return this.client.orders.batchUpdate(data);
      case 'customers.create':
        return this.client.customers.batchCreate(data);
      case 'inventory.update':
        return this.client.inventory.batchUpdate(data);
      default:
        throw new Error(`Unsupported batch operation: ${operation}`);
    }
  }

  async flushAll() {
    const operations = Array.from(this.batches.keys());
    await Promise.all(operations.map(op => this.flush(op)));
  }
}

// Usage
const batcher = new BatchProcessor(client, {
  batchSize: 50,
  flushInterval: 2000
});

// These will be automatically batched
const results = await Promise.all([
  batcher.addToBatch('orders.update', { id: '1', status: 'shipped' }),
  batcher.addToBatch('orders.update', { id: '2', status: 'delivered' }),
  batcher.addToBatch('orders.update', { id: '3', status: 'returned' })
]);
2. Smart Field Selection
Only request the data you need:
// ❌ Inefficient - Downloads unnecessary data
const orders = await client.orders.list();

// ✅ Efficient - Only essential fields
const orders = await client.orders.list({
  fields: ['id', 'status', 'total', 'customer_id', 'created_at'],
  limit: 100
});

// ✅ Even better - Use sparse fieldsets for different views
const orderSummaries = await client.orders.list({
  fields: ['id', 'status', 'total'], // Minimal for dashboard
  limit: 500
});

const orderDetails = await client.orders.get(orderId, {
  expand: ['customer', 'line_items', 'shipping_address'] // Full details when needed
});
3. Parallel Processing with Concurrency Control
import pLimit from 'p-limit';

class ConcurrentProcessor {
  constructor(client, maxConcurrency = 10) {
    this.client = client;
    this.limit = pLimit(maxConcurrency);
    this.metrics = {
      processed: 0,
      errors: 0,
      startTime: Date.now()
    };
  }

  async processOrders(orderIds, processFn) {
    const startTime = Date.now();

    // Process in chunks to avoid memory issues
    const chunkSize = 100;
    const chunks = this.chunk(orderIds, chunkSize);
    const results = [];

    for (const chunk of chunks) {
      const chunkResults = await Promise.allSettled(
        chunk.map(orderId =>
          this.limit(() => this.processWithRetry(orderId, processFn))
        )
      );

      results.push(...chunkResults);

      // Log progress
      const processed = results.length;
      const rate = processed / ((Date.now() - startTime) / 1000);

      logger.info('Processing progress', {
        processed,
        total: orderIds.length,
        rate: `${rate.toFixed(2)}/sec`,
        errors: this.metrics.errors
      });
    }

    return results;
  }

  async processWithRetry(orderId, processFn, maxRetries = 3) {
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        const result = await processFn(orderId);
        this.metrics.processed++;
        return result;
      } catch (error) {
        if (attempt === maxRetries) {
          this.metrics.errors++;
          throw error;
        }

        // Exponential backoff
        await this.sleep(Math.pow(2, attempt) * 1000);
      }
    }
  }

  chunk(array, size) {
    return Array.from({ length: Math.ceil(array.length / size) }, (_, i) =>
      array.slice(i * size, i * size + size)
    );
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  getMetrics() {
    const duration = Date.now() - this.metrics.startTime;
    return {
      ...this.metrics,
      duration,
      rate: this.metrics.processed / (duration / 1000)
    };
  }
}

// Usage
const processor = new ConcurrentProcessor(client, 15);

await processor.processOrders(orderIds, async (orderId) => {
  const order = await client.orders.get(orderId);
  return await client.orders.update(orderId, {
    tags: [...order.tags, 'processed']
  });
});
Advanced Caching Strategies
1. Multi-Level Caching
import Redis from 'ioredis';
import NodeCache from 'node-cache';

class MultiLevelCache {
  constructor(options = {}) {
    // L1: In-memory cache (fastest)
    this.l1Cache = new NodeCache({
      stdTTL: options.l1TTL || 60,
      maxKeys: options.l1MaxKeys || 1000
    });

    // L2: Redis cache (shared across instances)
    this.l2Cache = new Redis({
      host: process.env.REDIS_HOST,
      port: Number(process.env.REDIS_PORT) || 6379,
      retryDelayOnFailover: 100,
      maxRetriesPerRequest: 3
    });

    this.defaultTTL = options.defaultTTL || 300;
  }

  async get(key, fetchFunction, ttl = this.defaultTTL) {
    // Check L1 cache first
    const l1Value = this.l1Cache.get(key);
    if (l1Value !== undefined) {
      logger.debug('L1 cache hit', { key });
      return l1Value;
    }

    // Check L2 cache
    try {
      const l2Value = await this.l2Cache.get(key);
      if (l2Value !== null) {
        const parsed = JSON.parse(l2Value);
        // Populate L1 cache
        this.l1Cache.set(key, parsed, ttl);
        logger.debug('L2 cache hit', { key });
        return parsed;
      }
    } catch (error) {
      logger.warn('L2 cache error', { error: error.message, key });
    }

    // Cache miss - fetch from source
    logger.debug('Cache miss, fetching', { key });
    const value = await fetchFunction();

    // Store in both caches
    this.l1Cache.set(key, value, ttl);
    try {
      await this.l2Cache.setex(key, ttl, JSON.stringify(value));
    } catch (error) {
      logger.warn('L2 cache set error', { error: error.message, key });
    }

    return value;
  }

  async invalidate(pattern) {
    // Invalidate L1 cache
    if (pattern.includes('*')) {
      const keys = this.l1Cache.keys().filter(key =>
        this.matchPattern(key, pattern)
      );
      this.l1Cache.del(keys);
    } else {
      this.l1Cache.del(pattern);
    }

    // Invalidate L2 cache
    try {
      if (pattern.includes('*')) {
        const keys = await this.l2Cache.keys(pattern);
        if (keys.length > 0) {
          await this.l2Cache.del(...keys);
        }
      } else {
        await this.l2Cache.del(pattern);
      }
    } catch (error) {
      logger.warn('L2 cache invalidation error', { error: error.message, pattern });
    }
  }

  matchPattern(str, pattern) {
    return new RegExp('^' + pattern.split('*').map(
      part => part.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
    ).join('.*') + '$').test(str);
  }
}

// Usage with StateSet client
const cache = new MultiLevelCache({
  l1TTL: 60,      // 1 minute in memory
  defaultTTL: 300 // 5 minutes in Redis
});

class CachedStateSetClient {
  constructor(client, cache) {
    this.client = client;
    this.cache = cache;
  }

  async getCustomer(customerId) {
    return this.cache.get(
      `customer:${customerId}`,
      () => this.client.customers.get(customerId),
      600 // 10 minutes TTL for customer data
    );
  }

  async getOrder(orderId) {
    return this.cache.get(
      `order:${orderId}`,
      () => this.client.orders.get(orderId),
      300 // 5 minutes TTL for order data
    );
  }

  async invalidateCustomer(customerId) {
    await this.cache.invalidate(`customer:${customerId}`);
    await this.cache.invalidate(`customer:${customerId}:*`);
  }
}
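A short usage sketch tying the cache layers together, assuming the `client` and `cache` instances above (the customer ID, field, and `customers.update` call are illustrative):

const cachedClient = new CachedStateSetClient(client, cache);

// Reads check L1, then L2, before touching the API
const customer = await cachedClient.getCustomer('cust_123');

// After a write, invalidate so the next read is fresh
await client.customers.update('cust_123', { email: 'new@example.com' });
await cachedClient.invalidateCustomer('cust_123');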
2. Cache Warming and Background Refresh
class CacheWarmer {
  constructor(client, cache) {
    this.client = client;
    this.cache = cache;
    this.warmingSchedules = new Map();
  }

  scheduleWarming(key, fetchFunction, interval = 240000) { // 4 minutes
    if (this.warmingSchedules.has(key)) {
      clearInterval(this.warmingSchedules.get(key));
    }

    const warmCache = async () => {
      try {
        logger.debug('Warming cache', { key });
        await this.cache.get(key, fetchFunction);
      } catch (error) {
        logger.error('Cache warming failed', { key, error: error.message });
      }
    };

    // Initial warm
    warmCache();

    // Schedule periodic warming
    const intervalId = setInterval(warmCache, interval);
    this.warmingSchedules.set(key, intervalId);
  }

  stopWarming(key) {
    if (this.warmingSchedules.has(key)) {
      clearInterval(this.warmingSchedules.get(key));
      this.warmingSchedules.delete(key);
    }
  }

  // Warm frequently accessed data
  async warmFrequentData() {
    // Popular products
    this.scheduleWarming(
      'popular:products',
      () => this.client.products.list({
        sort: 'popularity',
        limit: 100
      }),
      300000 // 5 minutes
    );

    // Active customers
    this.scheduleWarming(
      'active:customers',
      () => this.client.customers.list({
        active: true,
        limit: 500
      }),
      600000 // 10 minutes
    );
  }
}
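A usage sketch, assuming the `client` and `cache` defined earlier. Because `scheduleWarming` registers interval timers, clear them on shutdown so the process can exit cleanly:

const warmer = new CacheWarmer(client, cache);
await warmer.warmFrequentData();

// On shutdown, stop the warming timers so the event loop can drain
process.on('SIGTERM', () => {
  warmer.stopWarming('popular:products');
  warmer.stopWarming('active:customers');
});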
Database and Query Optimization
1. Efficient Pagination
class EfficientPaginator {
  constructor(client) {
    this.client = client;
  }

  async *paginateAll(endpoint, params = {}) {
    let cursor = null;
    let hasMore = true;

    while (hasMore) {
      const response = await endpoint({
        ...params,
        cursor,
        limit: 100
      });

      yield* response.data;

      cursor = response.next_cursor;
      hasMore = response.has_more;

      // Small delay to respect rate limits
      await this.sleep(50);
    }
  }

  async *paginateWindow(endpoint, params = {}, windowSize = 1000) {
    let processed = 0;
    const buffer = [];

    for await (const item of this.paginateAll(endpoint, params)) {
      buffer.push(item);
      processed++;

      if (buffer.length >= windowSize) {
        yield buffer.splice(0);
      }

      // Progress logging
      if (processed % 10000 === 0) {
        logger.info('Pagination progress', { processed });
      }
    }

    // Yield remaining items
    if (buffer.length > 0) {
      yield buffer;
    }
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

// Usage for processing large datasets
const paginator = new EfficientPaginator(client);

for await (const orderBatch of paginator.paginateWindow(
  client.orders.list.bind(client.orders),
  { status: 'pending' },
  500
)) {
  await processBatch(orderBatch);
}
2. Query Optimization
class QueryOptimizer {
  constructor(client) {
    this.client = client;
    this.queryCache = new Map();
  }

  // Optimize list queries with intelligent filtering
  async optimizedList(endpoint, filters = {}, options = {}) {
    const cacheKey = this.generateCacheKey(endpoint.name, filters, options);

    // Check if we have a cached result
    if (this.queryCache.has(cacheKey)) {
      const cached = this.queryCache.get(cacheKey);
      if (Date.now() - cached.timestamp < 60000) { // 1 minute cache
        return cached.data;
      }
    }

    // Optimize filters
    const optimizedFilters = this.optimizeFilters(filters);

    // Use compound queries when beneficial
    const result = await this.executeOptimizedQuery(
      endpoint,
      optimizedFilters,
      options
    );

    // Cache the result
    this.queryCache.set(cacheKey, {
      data: result,
      timestamp: Date.now()
    });

    return result;
  }

  optimizeFilters(filters) {
    const optimized = { ...filters };

    // Convert date ranges to indexed queries
    if (filters.created_after && filters.created_before) {
      optimized.created_range = {
        start: filters.created_after,
        end: filters.created_before
      };
      delete optimized.created_after;
      delete optimized.created_before;
    }

    // Optimize status filters
    if (Array.isArray(filters.status) && filters.status.length > 1) {
      optimized.status_in = filters.status;
      delete optimized.status;
    }

    return optimized;
  }

  async executeOptimizedQuery(endpoint, filters, options) {
    // Add performance hints
    const queryOptions = {
      ...options,
      hint: 'use_index',
      explain: process.env.NODE_ENV === 'development'
    };

    const result = await endpoint(filters, queryOptions);

    // Log slow queries in development
    if (process.env.NODE_ENV === 'development' && result.execution_time > 1000) {
      logger.warn('Slow query detected', {
        endpoint: endpoint.name,
        filters,
        executionTime: result.execution_time
      });
    }

    return result;
  }

  generateCacheKey(endpoint, filters, options) {
    return `${endpoint}:${JSON.stringify(filters)}:${JSON.stringify(options)}`;
  }
}
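A usage sketch showing the filter rewriting in action; the rewritten filter names (`status_in`, `created_range`) follow the optimizer above, and the exact server-side semantics are an assumption:

const optimizer = new QueryOptimizer(client);

const recentOrders = await optimizer.optimizedList(
  client.orders.list.bind(client.orders),
  {
    status: ['shipped', 'delivered'],      // rewritten to status_in
    created_after: '2024-01-01T00:00:00Z', // collapsed into created_range
    created_before: '2024-02-01T00:00:00Z'
  },
  { limit: 100 }
);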
Memory and Resource Management
1. Streaming for Large Datasets
import { Readable } from 'stream';

class DataStream extends Readable {
  constructor(client, endpoint, params = {}) {
    super({ objectMode: true, highWaterMark: 100 });
    this.client = client;
    this.endpoint = endpoint;
    this.params = params;
    this.cursor = null;
    this.hasMore = true;
    this.buffer = [];
    this.fetching = false;
  }

  async _read() {
    if (this.buffer.length > 0) {
      return this.push(this.buffer.shift());
    }

    if (!this.hasMore) {
      return this.push(null);
    }

    if (this.fetching) {
      return;
    }

    this.fetching = true;

    try {
      const response = await this.endpoint({
        ...this.params,
        cursor: this.cursor,
        limit: 100
      });

      this.buffer.push(...response.data);
      this.cursor = response.next_cursor;
      this.hasMore = response.has_more;

      if (this.buffer.length > 0) {
        this.push(this.buffer.shift());
      } else if (!this.hasMore) {
        this.push(null);
      }
    } catch (error) {
      this.emit('error', error);
    } finally {
      this.fetching = false;
    }
  }
}

// Usage for memory-efficient processing
const orderStream = new DataStream(
  client,
  client.orders.list.bind(client.orders),
  { status: 'pending' }
);

orderStream.on('data', async (order) => {
  await processOrder(order);
});

orderStream.on('end', () => {
  logger.info('Finished processing all orders');
});

orderStream.on('error', (error) => {
  logger.error('Stream error', { error: error.message });
});
2. Memory Pool Management
class MemoryPool {
  constructor(maxSize = 100 * 1024 * 1024) { // 100MB default
    this.maxSize = maxSize;
    this.currentSize = 0;
    this.pools = new Map();
    this.lastCleanup = Date.now();
  }

  allocate(key, size) {
    if (this.currentSize + size > this.maxSize) {
      this.cleanup();

      if (this.currentSize + size > this.maxSize) {
        throw new Error(`Memory pool exhausted. Current: ${this.currentSize}, Requested: ${size}, Max: ${this.maxSize}`);
      }
    }

    const buffer = Buffer.allocUnsafe(size);
    this.pools.set(key, {
      buffer,
      size,
      lastAccess: Date.now()
    });

    this.currentSize += size;
    return buffer;
  }

  get(key) {
    const entry = this.pools.get(key);
    if (entry) {
      entry.lastAccess = Date.now();
      return entry.buffer;
    }
    return null;
  }

  release(key) {
    const entry = this.pools.get(key);
    if (entry) {
      this.currentSize -= entry.size;
      this.pools.delete(key);
    }
  }

  cleanup() {
    const now = Date.now();
    const maxAge = 300000; // 5 minutes

    for (const [key, entry] of this.pools) {
      if (now - entry.lastAccess > maxAge) {
        this.release(key);
      }
    }

    // Force garbage collection if available (requires --expose-gc)
    if (global.gc) {
      global.gc();
    }

    this.lastCleanup = now;
  }

  getStats() {
    return {
      currentSize: this.currentSize,
      maxSize: this.maxSize,
      utilizationPct: (this.currentSize / this.maxSize) * 100,
      poolCount: this.pools.size,
      lastCleanup: this.lastCleanup
    };
  }
}
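A usage sketch: allocate from the pool for a bounded task and release in a `finally` block so the budget is returned even if processing throws (sizes are illustrative):

const pool = new MemoryPool(256 * 1024 * 1024); // 256MB budget

const buffer = pool.allocate('export:orders', 10 * 1024 * 1024); // 10MB
try {
  // ... fill `buffer` with serialized export rows ...
} finally {
  pool.release('export:orders'); // always return the allocation
}

logger.info('Memory pool stats', pool.getStats());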
Performance Monitoring
1. Comprehensive Metrics Collection
class PerformanceMonitor {
  constructor() {
    this.metrics = {
      requests: new Map(),
      latency: new Map(),
      errors: new Map(),
      cache: new Map()
    };
    this.startTime = Date.now();
  }

  startTimer(operation) {
    return {
      operation,
      startTime: process.hrtime.bigint(),
      startCpu: process.cpuUsage()
    };
  }

  endTimer(timer) {
    const endTime = process.hrtime.bigint();
    const endCpu = process.cpuUsage(timer.startCpu);

    const duration = Number(endTime - timer.startTime) / 1000000; // ns to ms
    const cpuTime = (endCpu.user + endCpu.system) / 1000; // µs to ms

    this.recordMetric('latency', timer.operation, {
      duration,
      cpuTime,
      timestamp: Date.now()
    });

    return { duration, cpuTime };
  }

  recordMetric(type, key, value) {
    if (!this.metrics[type]) {
      this.metrics[type] = new Map();
    }

    const bucket = this.metrics[type];
    const timeWindow = Math.floor(Date.now() / 60000); // 1-minute buckets
    const bucketKey = `${key}:${timeWindow}`;

    if (!bucket.has(bucketKey)) {
      bucket.set(bucketKey, []);
    }

    bucket.get(bucketKey).push(value);

    // Clean old buckets
    this.cleanOldBuckets(bucket);
  }

  cleanOldBuckets(bucket, maxAge = 3600000) { // 1 hour
    const cutoff = Date.now() - maxAge;

    for (const [key] of bucket) {
      const [, timestamp] = key.split(':');
      if (parseInt(timestamp) * 60000 < cutoff) {
        bucket.delete(key);
      }
    }
  }

  getPerformanceReport() {
    return {
      uptime: Date.now() - this.startTime,
      latency: this.calculateLatencyStats(),
      requests: this.calculateRequestStats(),
      errors: this.calculateErrorStats(),
      cache: this.calculateCacheStats(),
      memory: process.memoryUsage(),
      cpu: process.cpuUsage()
    };
  }

  calculateLatencyStats() {
    const stats = {};

    for (const [key, values] of this.metrics.latency) {
      const [operation] = key.split(':');

      if (!stats[operation]) {
        stats[operation] = { durations: [], cpuTimes: [] };
      }

      values.forEach(v => {
        stats[operation].durations.push(v.duration);
        stats[operation].cpuTimes.push(v.cpuTime);
      });
    }

    // Calculate percentiles
    Object.keys(stats).forEach(operation => {
      const durations = stats[operation].durations.sort((a, b) => a - b);
      const cpuTimes = stats[operation].cpuTimes.sort((a, b) => a - b);

      stats[operation] = {
        count: durations.length,
        latency: {
          min: durations[0] || 0,
          max: durations[durations.length - 1] || 0,
          p50: this.percentile(durations, 0.5),
          p95: this.percentile(durations, 0.95),
          p99: this.percentile(durations, 0.99),
          avg: durations.reduce((a, b) => a + b, 0) / durations.length || 0
        },
        cpu: {
          avg: cpuTimes.reduce((a, b) => a + b, 0) / cpuTimes.length || 0,
          max: cpuTimes[cpuTimes.length - 1] || 0
        }
      };
    });

    return stats;
  }

  // Minimal aggregations for the remaining metric types
  calculateRequestStats() {
    let total = 0;
    for (const values of this.metrics.requests.values()) {
      total += values.length;
    }
    return { total };
  }

  calculateErrorStats() {
    const byOperation = {};
    let total = 0;
    for (const [key, values] of this.metrics.errors) {
      const [operation] = key.split(':');
      byOperation[operation] = (byOperation[operation] || 0) + values.length;
      total += values.length;
    }
    return { total, byOperation };
  }

  calculateCacheStats() {
    let total = 0;
    for (const values of this.metrics.cache.values()) {
      total += values.length;
    }
    return { total };
  }

  percentile(arr, p) {
    if (arr.length === 0) return 0;
    const index = Math.ceil(arr.length * p) - 1;
    return arr[Math.max(0, index)];
  }

  // Real-time alerting
  checkPerformanceAlerts() {
    const report = this.getPerformanceReport();

    Object.entries(report.latency).forEach(([operation, stats]) => {
      // Alert on high latency
      if (stats.latency.p95 > 5000) { // 5 seconds
        logger.warn('High latency detected', {
          operation,
          p95: stats.latency.p95,
          threshold: 5000
        });
      }

      // Alert on high error rate (errors relative to total attempts)
      const errorCount = report.errors.byOperation[operation] || 0;
      const errorRate = errorCount / Math.max(stats.count + errorCount, 1);
      if (errorRate > 0.05) { // 5%
        logger.error('High error rate detected', {
          operation,
          errorRate,
          threshold: 0.05
        });
      }
    });

    // Alert on high memory usage
    const memoryUsage = report.memory.heapUsed / report.memory.heapTotal;
    if (memoryUsage > 0.9) {
      logger.warn('High memory usage detected', {
        usage: memoryUsage,
        heapUsed: report.memory.heapUsed,
        heapTotal: report.memory.heapTotal
      });
    }
  }
}

// Usage with StateSet client
const monitor = new PerformanceMonitor();

const monitoredClient = new Proxy(client, {
  get(target, prop) {
    const original = target[prop];

    if (typeof original === 'object' && original !== null) {
      return new Proxy(original, {
        get(apiTarget, apiProp) {
          const apiMethod = apiTarget[apiProp];

          if (typeof apiMethod === 'function') {
            return async (...args) => {
              const operation = `${prop}.${apiProp}`;
              const timer = monitor.startTimer(operation);

              try {
                const result = await apiMethod.apply(apiTarget, args);
                const timing = monitor.endTimer(timer);

                monitor.recordMetric('requests', operation, {
                  success: true,
                  timestamp: Date.now(),
                  ...timing
                });

                return result;
              } catch (error) {
                monitor.endTimer(timer);
                monitor.recordMetric('errors', operation, {
                  error: error.message,
                  status: error.status,
                  timestamp: Date.now()
                });
                throw error;
              }
            };
          }

          return apiMethod;
        }
      });
    }

    return original;
  }
});

// Schedule performance checks
setInterval(() => {
  monitor.checkPerformanceAlerts();
}, 30000); // Every 30 seconds
Load Testing and Benchmarking
import { performance } from 'perf_hooks';

class LoadTester {
  constructor(client) {
    this.client = client;
  }

  async runLoadTest(config) {
    const {
      operation,
      concurrency = 10,
      duration = 60000, // 1 minute
      rampUp = 5000     // 5 seconds
    } = config;

    logger.info('Starting load test', config);

    const results = {
      requests: 0,
      errors: 0,
      latencies: [],
      startTime: Date.now()
    };

    // Ramp up workers gradually
    const workers = [];
    const workerInterval = rampUp / concurrency;

    for (let i = 0; i < concurrency; i++) {
      setTimeout(() => {
        workers.push(this.createWorker(operation, results));
      }, i * workerInterval);
    }

    // Wait for test duration
    await new Promise(resolve => setTimeout(resolve, duration));

    // Stop all workers
    workers.forEach(worker => worker.stop());

    // Wait for workers to finish
    await Promise.all(workers.map(w => w.promise));

    return this.calculateResults(results);
  }

  createWorker(operation, results) {
    let running = true;
    const worker = {
      stop: () => { running = false; },
      promise: this.runWorker(operation, results, () => running)
    };
    return worker;
  }

  async runWorker(operation, results, isRunning) {
    while (isRunning()) {
      const start = performance.now();

      try {
        await operation();
        const latency = performance.now() - start;
        results.requests++;
        results.latencies.push(latency);
      } catch (error) {
        results.errors++;
        logger.debug('Load test error', { error: error.message });
      }

      // Small delay to prevent overwhelming
      await new Promise(resolve => setTimeout(resolve, 10));
    }
  }

  calculateResults(results) {
    const duration = Date.now() - results.startTime;
    const latencies = results.latencies.sort((a, b) => a - b);
    // `requests` counts successes only, so total attempts = requests + errors
    const attempts = results.requests + results.errors;

    return {
      duration,
      totalRequests: attempts,
      totalErrors: results.errors,
      successRate: attempts > 0 ? results.requests / attempts : 0,
      requestsPerSecond: attempts / (duration / 1000),
      errorRate: attempts > 0 ? results.errors / attempts : 0,
      latency: {
        min: latencies[0] || 0,
        max: latencies[latencies.length - 1] || 0,
        avg: latencies.reduce((a, b) => a + b, 0) / latencies.length || 0,
        p50: this.percentile(latencies, 0.5),
        p95: this.percentile(latencies, 0.95),
        p99: this.percentile(latencies, 0.99)
      }
    };
  }

  percentile(arr, p) {
    if (arr.length === 0) return 0;
    const index = Math.ceil(arr.length * p) - 1;
    return arr[Math.max(0, index)];
  }
}

// Usage
const loadTester = new LoadTester(client);

const results = await loadTester.runLoadTest({
  operation: async () => {
    await client.orders.list({ limit: 10 });
  },
  concurrency: 20,
  duration: 120000, // 2 minutes
  rampUp: 10000     // 10 seconds
});

logger.info('Load test results', results);
Performance Best Practices Summary
Connection Optimization
- Use HTTP/2 and connection pooling
- Configure DNS caching
- Implement keep-alive connections
- Optimize TLS handshakes (see the sketch below)
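The first three items are implemented earlier in this guide. For TLS handshakes, a minimal sketch: Node's `https.Agent` caches TLS sessions, so new connections can resume a prior session instead of performing a full handshake:

import { Agent } from 'https';

// keepAlive reuses sockets (no handshake at all); maxCachedSessions lets
// brand-new connections resume cached TLS sessions. 100 is the Node
// default; size it to the number of distinct hosts you talk to.
const tlsOptimizedAgent = new Agent({
  keepAlive: true,
  maxCachedSessions: 200
});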
Request Patterns
- Batch operations when possible
- Use field selection and sparse responses
- Implement intelligent pagination
- Leverage parallel processing with limits
Caching Strategy
- Multi-level caching (memory + Redis)
- Cache warming and background refresh
- Smart invalidation patterns
- TTL optimization by data type (see the sketch below)
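For TTL optimization, a minimal sketch of a per-type TTL policy that plugs into the `MultiLevelCache` above (the values are illustrative starting points, not prescriptions):

// Volatile data gets short TTLs; slow-changing data gets long ones
const TTL_SECONDS_BY_TYPE = {
  inventory: 30,  // changes constantly
  order: 300,     // changes during fulfillment
  customer: 600,  // mostly stable
  product: 3600   // rarely changes
};

function ttlFor(key) {
  const type = key.split(':')[0]; // keys like 'order:123', as used above
  return TTL_SECONDS_BY_TYPE[type] ?? 300;
}

// e.g. cache.get(`order:${orderId}`, fetchFn, ttlFor(`order:${orderId}`));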
Resource Management
- Stream large datasets
- Implement memory pools
- Monitor and alert on metrics
- Regular performance testing
Next Steps
- Implement monitoring using the tools in this guide
- Establish baselines with load testing
- Set up alerting for performance degradation
- Optimize continuously based on production metrics