
Commit d688a4a

Authored by lpcox, Copilot, and pelikhan
fix: align token workflows with gh-aw logs --json schema and add shared log caching (#24395)
fix: align token workflows with gh-aw logs --json schema and add shared log caching (#24395)

* fix: align token workflows with gh-aw logs --json schema

  `gh aw logs --json` returns an object with a `.runs` array (not a bare array), and run objects use snake_case field names. All 4 token workflows (copilot/claude analyzers and optimizers) assumed a bare array with camelCase fields, causing jq errors like:

      Cannot index array with string "workflowName"

  Changes:
  - Extract the `.runs` array from the JSON object before processing
  - Replace camelCase fields with snake_case (`workflow_name`, `token_usage`, `database_id`, `created_at`)
  - Replace the non-existent `estimatedCost` field with a `0` placeholder
  - Update documentation sections with the correct field names

* Update .github/workflows/copilot-token-optimizer.md

* Update .github/workflows/claude-token-optimizer.md

* fix: recompile lock files after sort_by addition

  Recompile the copilot-token-optimizer and claude-token-optimizer lock files to match the `sort_by(.workflow_name)` additions made via the UI.

* feat: extract shared token-logs-fetch workflow and cache logs to avoid rate-limiting

Agent-Logs-Url: https://github.com/github/gh-aw/sessions/37b38e9c-5938-4c6f-a082-9bc64b2a8b7b

Co-authored-by: Copilot <[email protected]>
Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: pelikhan <[email protected]>
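The schema mismatch described in the first bullet can be reproduced in miniature with plain jq. The payload below is a hypothetical sample shaped the way the commit message describes `gh aw logs --json` output (an object wrapping a `.runs` array, snake_case field names); the file path and workflow name are illustrative, not taken from the repository:

```shell
# Hypothetical sample shaped like the gh aw logs --json output described
# in the commit message: an object wrapping a .runs array, snake_case fields.
cat > /tmp/sample-runs.json <<'EOF'
{"runs":[
  {"workflow_name":"daily-plan","token_usage":1200,"database_id":1,"created_at":"2024-01-01T00:00:00Z"},
  {"workflow_name":"daily-plan","token_usage":800,"database_id":2,"created_at":"2024-01-02T00:00:00Z"}
]}
EOF

# The old camelCase / bare-array assumption produced errors like the one
# quoted above. The fixed approach: unwrap .runs first, then aggregate
# using the snake_case field names.
jq '.runs // []
    | group_by(.workflow_name)
    | map({workflow: .[0].workflow_name,
           total_tokens: (map(.token_usage) | add)})' /tmp/sample-runs.json
```

The `// []` fallback mirrors the committed fix: if `.runs` is missing or null, the pipeline degrades to an empty array instead of a jq error.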
Parent commit: 5a19c28

10 files changed (+1526 −122 lines)

.github/workflows/claude-token-optimizer.lock.yml

16 additions & 16 deletions (generated file; diff not rendered by default)

.github/workflows/claude-token-optimizer.md

Lines changed: 50 additions & 17 deletions
```diff
@@ -61,15 +61,47 @@ steps:
       set -euo pipefail
       mkdir -p /tmp/token-optimizer-claude

-      echo "📥 Loading Claude workflow runs from last 24 hours..."
-      gh aw logs \
-        --engine claude \
-        --start-date -1d \
-        --json \
-        -c 300 \
-        > /tmp/token-optimizer-claude/claude-runs.json 2>/dev/null || echo "[]" > /tmp/token-optimizer-claude/claude-runs.json
-
-      RUN_COUNT=$(jq '. | length' /tmp/token-optimizer-claude/claude-runs.json 2>/dev/null || echo 0)
+      # Try to use pre-fetched logs from the Token Logs Fetch workflow to avoid redundant API calls
+      TODAY=$(date -u +%Y-%m-%d)
+      FETCH_RUN_ID=$(gh run list \
+        --workflow "token-logs-fetch.lock.yml" \
+        --status success \
+        --limit 1 \
+        --json databaseId \
+        --jq '.[0].databaseId' 2>/dev/null || echo "")
+      USED_CACHE=false
+      if [ -n "$FETCH_RUN_ID" ]; then
+        CACHE_TMP="/tmp/token-logs-cache-claude-optimizer"
+        mkdir -p "$CACHE_TMP"
+        gh run download "$FETCH_RUN_ID" \
+          --repo "$GITHUB_REPOSITORY" \
+          --name "cache-memory" \
+          --dir "$CACHE_TMP" \
+          2>/dev/null || true
+        CACHE_DATE=$(cat "$CACHE_TMP/token-logs/fetch-date.txt" 2>/dev/null || echo "")
+        if [ "$CACHE_DATE" = "$TODAY" ] && [ -s "$CACHE_TMP/token-logs/claude-runs.json" ]; then
+          echo "✅ Using pre-fetched logs from Token Logs Fetch run $FETCH_RUN_ID (date: $CACHE_DATE)"
+          cp "$CACHE_TMP/token-logs/claude-runs.json" /tmp/token-optimizer-claude/claude-runs.json
+          USED_CACHE=true
+        else
+          echo "ℹ️ No valid cached logs found (cache date: ${CACHE_DATE:-none}, today: $TODAY)"
+        fi
+      fi
+
+      if [ "$USED_CACHE" != "true" ]; then
+        echo "📥 Loading Claude workflow runs from last 24 hours..."
+        gh aw logs \
+          --engine claude \
+          --start-date -1d \
+          --json \
+          -c 300 \
+          > /tmp/token-optimizer-claude/claude-runs-raw.json 2>/dev/null || echo '{"runs":[]}' > /tmp/token-optimizer-claude/claude-runs-raw.json
+
+        # Extract runs array from the JSON output
+        jq '.runs // []' /tmp/token-optimizer-claude/claude-runs-raw.json > /tmp/token-optimizer-claude/claude-runs.json 2>/dev/null || echo "[]" > /tmp/token-optimizer-claude/claude-runs.json
+      fi
+
+      RUN_COUNT=$(jq 'length' /tmp/token-optimizer-claude/claude-runs.json 2>/dev/null || echo 0)
       echo "Found ${RUN_COUNT} Claude runs"

       if [ "$RUN_COUNT" -eq 0 ]; then
@@ -80,16 +112,17 @@ steps:
       # Find the most expensive workflow (by total tokens across all its runs)
       echo "🔍 Identifying most expensive workflow..."
       jq -r '
-        group_by(.workflowName) |
+        sort_by(.workflow_name) |
+        group_by(.workflow_name) |
         map({
-          workflow: .[0].workflowName,
-          total_tokens: (map(.tokenUsage) | add),
-          total_cost: (map(.estimatedCost) | add),
+          workflow: .[0].workflow_name,
+          total_tokens: (map(.token_usage) | add),
+          total_cost: 0,
           run_count: length,
-          avg_tokens: ((map(.tokenUsage) | add) / length),
-          run_ids: map(.databaseId),
-          latest_run_id: (sort_by(.createdAt) | last | .databaseId),
-          latest_run_url: (sort_by(.createdAt) | last | .url)
+          avg_tokens: ((map(.token_usage) | add) / length),
+          run_ids: map(.database_id),
+          latest_run_id: (sort_by(.created_at) | last | .database_id),
+          latest_run_url: (sort_by(.created_at) | last | .url)
         }) |
         sort_by(.total_tokens) | reverse | .[0]
       ' /tmp/token-optimizer-claude/claude-runs.json > /tmp/token-optimizer-claude/top-workflow.json
```
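The cache check in this change reduces to one rule: trust the downloaded artifact only when its recorded fetch date equals today's UTC date and the runs file is non-empty. A standalone sketch of that rule, using a temporary directory and illustrative file contents rather than the workflow's real artifact:

```shell
# Simulate a freshly fetched cache, then apply the same freshness test
# the workflow uses. Paths and payload here are illustrative stand-ins.
CACHE_TMP=$(mktemp -d)
mkdir -p "$CACHE_TMP/token-logs"
date -u +%Y-%m-%d > "$CACHE_TMP/token-logs/fetch-date.txt"
echo '[{"workflow_name":"demo"}]' > "$CACHE_TMP/token-logs/claude-runs.json"

TODAY=$(date -u +%Y-%m-%d)
CACHE_DATE=$(cat "$CACHE_TMP/token-logs/fetch-date.txt" 2>/dev/null || echo "")
USED_CACHE=false
# Fresh = written today (UTC) AND the payload file is non-empty (-s).
if [ "$CACHE_DATE" = "$TODAY" ] && [ -s "$CACHE_TMP/token-logs/claude-runs.json" ]; then
  USED_CACHE=true
fi
echo "USED_CACHE=$USED_CACHE"
```

When the check fails, the workflow falls back to calling `gh aw logs` directly, so a stale or missing artifact degrades to the pre-caching behavior instead of breaking the run.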

.github/workflows/claude-token-usage-analyzer.lock.yml

16 additions & 16 deletions (generated file; diff not rendered by default)
