Incident History
| Incident | ID | Severity | Status | Created | Duration | ServiceNow | Webex |
|---|---|---|---|---|---|---|---|
| Check Final Fix 5 | 7f3e534f | high | New | | — | — | Not sent |
| Check Final Fix 4 | cea9f9dd | high | In Progress | | — | — | Not sent |
| Check Final Fix 3 | c4f5928a | high | New | | — | — | Not sent |
| Check Final Fix 2 | 3a8aeb27 | high | New | | — | — | Not sent |
| Check Final Fix | 9a452252 | high | New | | — | — | Not sent |
| Check DB Fix 3 | 35b66cd0 | medium | New | | — | — | Not sent |
| Transaction Fix Verification | 8117c5df | high | New | | — | — | Not sent |
| Service: Elasticsearch Cluster (prod-es-cluster) Issue: Cluster status yellow, 45 unassigned shards detected Environment: Production (multi-AZ) Error: "unassigned_shards: 45, reason: INDEX_CREATED, allocation_decider: disk_threshold" Impact: Search querie… | f1e8abee | medium | In Progress | | — | — | Not sent |
| Service: PostgreSQL Primary (prod-db-01) Issue: Connection pool saturated causing 503 errors on API Environment: Production (us-west-2) Error: "FATAL: remaining connection slots are reserved for non-replication superuser connections" Impact: All API reque… | c5af5e2a | medium | In Progress | | — | — | Not sent |
| Service: Kong API Gateway (prod-apigw-01) Issue: API Gateway returning 503 Service Unavailable for 15% of requests Environment: Production (multi-region) Error: "upstream connect error or disconnect/reset before headers. reset reason: connection failure" | 06758fde | medium | In Progress | | — | — | Not sent |
| Service: NGINX Load Balancer (prod-lb-01) Issue: CPU usage consistently at 95%+, causing slow response times Environment: Production (us-east-1) Error: Worker processes consuming excessive CPU, request queue building up Impact: API response times increase… | a00860a4 | medium | In Progress | | — | — | Not sent |
| Service: Kong API Gateway (prod-apigw-01) Issue: API Gateway returning 503 Service Unavailable for 15% of requests Environment: Production (multi-region) Error: "upstream connect error or disconnect/reset before headers. reset reason: connection failure" | 93cbaa39 | medium | In Progress | | — | — | Not sent |
| Service: Elasticsearch Cluster (prod-es-cluster) Issue: Cluster status yellow, 45 unassigned shards detected Environment: Production (multi-AZ) Error: "unassigned_shards: 45, reason: INDEX_CREATED, allocation_decider: disk_threshold" Impact: Search querie… | da58857b | medium | In Progress | | — | — | Not sent |
| Service: Redis Cache (prod-redis-01) Issue: Cache evictions causing database overload and slow queries Environment: Production (us-west-2) Error: "used_memory exceeds maxmemory, eviction policy: allkeys-lru" Impact: Database query latency increased 10x, c… | 17421113 | medium | In Progress | | — | — | Not sent |
| Service: Cloudflare CDN (prod-cdn) Issue: Rate limiting triggered due to unexpected traffic spike Environment: Production (global) Error: "429 Too Many Requests" returned to 30% of users Impact: Legitimate users blocked, customer service tickets increasin… | c1ce7b18 | medium | In Progress | | — | — | Not sent |
| Service: PostgreSQL Primary (prod-db-01) Issue: Connection pool saturated causing 503 errors on API Environment: Production (us-west-2) Error: "FATAL: remaining connection slots are reserved for non-replication superuser connections" Impact: All API reque… | 8abd996f | medium | In Progress | | — | — | Not sent |
| Service: NGINX Load Balancer (prod-lb-01) Issue: CPU usage consistently at 95%+, causing slow response times Environment: Production (us-east-1) Error: Worker processes consuming excessive CPU, request queue building up Impact: API response times increase… | a0d845c1 | medium | In Progress | | — | — | Not sent |
| Service: NGINX Load Balancer (prod-lb-01) Issue: CPU usage consistently at 95%+, causing slow response times Environment: Production (us-east-1) Error: Worker processes consuming excessive CPU, request queue building up Impact: API response times increase… | e4e46778 | medium | In Progress | | — | — | Not sent |
| Service: Redis Cache (prod-redis-01) Issue: Cache evictions causing database overload and slow queries Environment: Production (us-west-2) Error: "used_memory exceeds maxmemory, eviction policy: allkeys-lru" Impact: Database query latency increased 10x, c… | 0188f7f2 | medium | In Progress | | — | — | Not sent |
| Service: Redis Cache (prod-redis-01) Issue: Cache evictions causing database overload and slow queries Environment: Production (us-west-2) Error: "used_memory exceeds maxmemory, eviction policy: allkeys-lru" Impact: Database query latency increased 10x, c… | aeca16de | medium | In Progress | | — | — | Not sent |