# Database Connection Management Improvements for Bot

This document describes the recommended changes to bot.py for solving the "Too many connections" problem.
## Problem Analysis:

1. Bot pool: 30 connections
2. App used direct connections without a pool (now pooled: 15)
3. The new warning functions use many DB connections
4. get_user_warnings() is called frequently and opened a new connection every time
5. Context archiving can process large amounts of data
## Solution Approaches:
### 1. App.py Connection Pool (✅ Implemented):

- Connection pool with 15 connections for the Flask app
- Context manager for safe connection handling
- Automatic connection release
- Fallback for pool problems (sketched below)
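A minimal sketch of what this setup in app.py could look like, assuming mysql-connector-python's pooling module is used; the pool name, credentials, and the `get_db_connection` context manager name are illustrative, not the actual app.py code:

```python
# Minimal sketch (illustrative names and credentials, not the actual app.py code)
import logging
from contextlib import contextmanager

import mysql.connector
from mysql.connector import pooling
from mysql.connector.errors import PoolError

logger = logging.getLogger(__name__)

DB_CONFIG = {
    "host": "localhost",
    "user": "botuser",
    "password": "change-me",
    "database": "botdb",
}

# 15 connections for the Flask app, kept separate from the bot's pool
app_pool = pooling.MySQLConnectionPool(pool_name="app_pool", pool_size=15, **DB_CONFIG)

@contextmanager
def get_db_connection():
    """Borrow a connection from the pool and always give it back."""
    connection = None
    try:
        try:
            connection = app_pool.get_connection()
        except PoolError:
            # Fallback if the pool is exhausted: open a direct connection
            logger.warning("App pool exhausted, falling back to a direct connection")
            connection = mysql.connector.connect(**DB_CONFIG)
        yield connection
    finally:
        if connection:
            connection.close()  # returns pooled connections to the pool
```

Routes would then use `with get_db_connection() as conn:` so that every borrowed connection is released even when an exception is raised.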
### 2. Optimizations for Bot.py (Recommended):

These changes should be implemented in bot.py:
```python
# Improved get_user_warnings function with connection pooling
async def get_user_warnings(user_id, guild_id, active_only=True):
    """Retrieves warning records for a user - OPTIMIZED VERSION"""
    connection = None
    cursor = None
    try:
        connection = connect_to_database()  # Already uses the pool
        cursor = connection.cursor()

        # Single query instead of multiple calls
        select_query = """
            SELECT id, moderator_id, reason, created_at, message_id, message_content,
                   message_attachments, message_author_id, message_channel_id, context_messages, aktiv
            FROM user_warnings
            WHERE user_id = %s AND guild_id = %s {}
            ORDER BY created_at DESC
        """.format("AND aktiv = TRUE" if active_only else "")

        cursor.execute(select_query, (user_id, guild_id))
        results = cursor.fetchall()

        warnings = []
        for row in results:
            warnings.append({
                "id": row[0],
                "moderator_id": row[1],
                "reason": row[2],
                "created_at": row[3],
                "message_id": row[4],
                "message_content": row[5],
                "message_attachments": row[6],
                "message_author_id": row[7],
                "message_channel_id": row[8],
                "context_messages": row[9],
                "aktiv": row[10]
            })

        return warnings

    except Exception as e:
        logger.error(f"Error getting user warnings: {e}")
        return []
    finally:
        if cursor:
            cursor.close()
        if connection:
            close_database_connection(connection)  # Returns the connection to the pool
```
### 3. Connection Caching for Frequent Queries:

Implement caching for warning queries:
```python
import asyncio

# Cache for frequent warning queries (5 minute TTL)
warning_cache = {}
cache_ttl = 300  # 5 minutes

async def get_user_warnings_cached(user_id, guild_id, active_only=True):
    """Cached version of get_user_warnings"""
    cache_key = f"{user_id}_{guild_id}_{active_only}"
    current_time = asyncio.get_event_loop().time()

    # Check cache
    if cache_key in warning_cache:
        cached_data, timestamp = warning_cache[cache_key]
        if current_time - timestamp < cache_ttl:
            return cached_data

    # Fetch fresh data
    warnings = await get_user_warnings(user_id, guild_id, active_only)
    warning_cache[cache_key] = (warnings, current_time)

    # Clean old cache entries
    if len(warning_cache) > 1000:  # Limit cache size
        old_keys = [k for k, (_, ts) in warning_cache.items()
                    if current_time - ts > cache_ttl]
        for k in old_keys:
            del warning_cache[k]

    return warnings
```
### 4. Batch Operations for Context Messages:

Reduce DB calls during context archiving:
```python
async def batch_insert_warnings(warning_data_list):
    """Insert multiple warnings in a single transaction"""
    if not warning_data_list:
        return

    connection = None
    cursor = None
    try:
        connection = connect_to_database()
        cursor = connection.cursor()

        insert_query = """
            INSERT INTO user_warnings (user_id, guild_id, moderator_id, reason, created_at,
                                       message_id, message_content, message_attachments,
                                       message_author_id, message_channel_id, context_messages, aktiv)
            VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
        """

        cursor.executemany(insert_query, warning_data_list)
        connection.commit()

    except Exception as e:
        logger.error(f"Error in batch insert warnings: {e}")
        if connection:
            connection.rollback()
    finally:
        if cursor:
            cursor.close()
        if connection:
            close_database_connection(connection)
```
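To illustrate the expected input, a hypothetical call (inside an async context) could look like this; all IDs and contents are placeholders, and each tuple has to follow the column order of the INSERT statement:

```python
from datetime import datetime, timezone

# Placeholder values, in the column order of the INSERT statement above:
# (user_id, guild_id, moderator_id, reason, created_at, message_id, message_content,
#  message_attachments, message_author_id, message_channel_id, context_messages, aktiv)
warning_rows = [
    (111111111111111111, 222222222222222222, 333333333333333333, "Spam",
     datetime.now(timezone.utc), 444444444444444444, "example message", None,
     111111111111111111, 555555555555555555, "[]", True),
]
await batch_insert_warnings(warning_rows)
```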
### 5. Pool Monitoring:

Monitor the pool status:
```python
def monitor_connection_pool():
    """Monitor connection pool status"""
    try:
        pool_size = pool.pool_size
        # Exact usage is hard to read from the pool object, so log the configured size
        logger.info(f"Connection pool status - Size: {pool_size}")
        return pool_size
    except Exception as e:
        logger.error(f"Error monitoring pool: {e}")
        return 0
```
## Immediate Measures:

1. ✅ App.py equipped with a connection pool (15 connections)
2. 🔄 Reduce the bot pool from 30 to 25 connections (total limit: 40 instead of 50+; see the sketch below)
3. 🔄 Implement the warning cache
4. 🔄 Batch operations for large data sets
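For item 2, assuming bot.py also creates its pool with mysql-connector-python (the connection parameters here are placeholders), the change amounts to lowering `pool_size`:

```python
# Sketch: bot.py pool reduced from 30 to 25 connections (parameters are placeholders)
from mysql.connector import pooling

pool = pooling.MySQLConnectionPool(
    pool_name="bot_pool",
    pool_size=25,  # previously 30; together with the app pool (15) this totals 40
    host="localhost",
    user="botuser",
    password="change-me",
    database="botdb",
)
```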
## Connection Limits:

- MySQL default: 151 concurrent connections (can be verified as sketched below)
- Bot pool: 30 → recommended 25
- App pool: 15
- Reserve for other clients: 111
- Safety margin: should be sufficient
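To check these numbers against the actual server configuration, a quick helper could look like this (reusing the `connect_to_database` / `close_database_connection` helpers assumed in the snippets above):

```python
def log_mysql_connection_limits():
    """Log the server's max_connections limit and the current connection count."""
    connection = connect_to_database()
    cursor = None
    try:
        cursor = connection.cursor()
        cursor.execute("SHOW VARIABLES LIKE 'max_connections'")
        max_connections = cursor.fetchone()[1]
        cursor.execute("SHOW STATUS LIKE 'Threads_connected'")
        threads_connected = cursor.fetchone()[1]
        logger.info(f"MySQL max_connections={max_connections}, "
                    f"Threads_connected={threads_connected}")
    finally:
        if cursor:
            cursor.close()
        close_database_connection(connection)
```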
The problem occurs because:

1. The new warning functions make frequent DB calls
2. Context archiving processes large amounts of data
3. get_user_warnings() is called often (account and viewwarn commands)
4. The app and the bot compete for connections