Closed as not planned
Type: Bug
(Sorry, I am not sure whether this relates to GitHub Copilot or GitHub Copilot Chat. Please re-categorize if necessary.)
Even though GPT-4o is listed as "Standard" and marked "Included", when I attempt to use it after my premium request quota is exhausted, it triggers this message:
You have exceeded your premium request allowance. We have automatically switched you to GPT-4.1 which is included with your plan. Enable additional paid premium requests to continue using premium models.
Extension version: 0.29.1
VS Code version: Code 1.102.1 (Universal) (7adae6a, 2025-07-15T16:41:17.147Z)
OS version: Darwin arm64 24.5.0
Modes:
Logs
Trace: Resolving chat model Trace: Resolved chat model Trace: [panel] chat request received from extension host Trace: Processing intent Trace: Resolving chat model Trace: Resolving chat model Trace: Resolved chat model Trace: Resolved chat model Trace: Processed intent Trace: Resolving chat model Trace: Resolved chat model Trace: Built prompt Trace: Sending prompt to model Debug: modelMaxPromptTokens 111616 Debug: modelMaxResponseTokens 16384 Debug: chat model gpt-4.1 Debug: request.response: [{"type":"ChatCompletions"}], took 800 ms Debug: messages: [{"role":"system","content":"You are an AI programming assistant.\nWhen asked for your name, you must respond with \"GitHub Copilot\".\nFollow the user's requirements carefully & to the letter.\nFollow Microsoft content policies.\nAvoid content that violates copyrights.\nIf you are asked to generate content that is harmful, hateful, racist, sexist, lewd, or violent, only respond with \"Sorry, I can't assist with that.\"\nKeep your answers short and impersonal.\nYou can answer general programming questions and perform the following tasks: \n* Ask a question about the files in your current workspace\n* Explain how the code in your active editor works\n* Make changes to existing code\n* Review the selected code in your active editor\n* Generate unit tests for the selected code\n* Propose a fix for the problems in the selected code\n* Scaffold code for a new file or project in a workspace\n* Create a new Jupyter Notebook\n* Ask questions about VS Code\n* Generate query parameters for workspace search\n* Ask how to do something in the terminal\n* Explain what just happened in the terminal\n* Propose a fix for the problems in the selected code\n* Explain how the code in your active editor works\n* Review the selected code in your active editor\n* Generate unit tests for the selected code\n* Make changes to existing code\nYou use the GPT-4.1 large language model.\nThe user has the following folder open: /Users/drewclifton/projects/json-rpg.\nThe current date is July 21, 2025.\nUse Markdown formatting in your answers.\nWhen suggesting code changes or new content, use Markdown code blocks.\nTo start a code block, use 4 backticks.\nAfter the backticks, add the programming language name.\nIf the code modifies an existing file or should be placed at a specific location, add a line comment with 'filepath:' and the file path.\nIf you want the user to decide where to place the code, do not add the file path comment.\nIn the code block, use a line comment with '...existing code...' to indicate code that is already present in the file.\n````languageId\n// filepath: /path/to/file\n// ...existing code...\n{ changed code }\n// ...existing code...\n{ changed code }\n// ...existing code...\n````For code blocks use four backticks to start and end.\nAvoid wrapping the whole response in triple backticks.\nThe user works in an IDE called Visual Studio Code which has a concept for editors with open files, integrated unit test support, an output pane that shows the output of running the code as well as an integrated terminal.\nThe user is working on a Mac machine. Please respond with system specific commands if applicable.\nThe active document is the source code the user is looking at right now.\nYou can only give one reply for each conversation turn."},{"role":"user","content":"Let's work on this file. Please evaluate it closely to gain a deep understanding of its purpose, then I will begin to discuss enhancements. 
Do not suggest any enhancements (I will share after)."},{"role":"assistant","content":"The file [`conversation_manager_textual_dry.py`](conversation_manager_textual_dry.py ) is a refactored, DRY (Don't Repeat Yourself) version of a conversation manager for a text-based adventure game, using the Textual TUI framework. Its main responsibilities are:\n\n- **Loading and managing game space data**: It loads a space graph (from JSON) representing the current environment, including NPCs, items, and player data.\n- **Conversation management**: It tracks conversation history, supports saving/loading transcripts, and manages the context for LLM (Large Language Model) API calls.\n- **LLM integration**: It constructs prompts and message histories for the LLM, supporting both Game Master (GM) and direct NPC interactions (using `@npc_id` addressing).\n- **Textual UI**: It provides a TUI chat interface with input, output, and autocomplete for NPCs and space IDs, using Textual widgets.\n- **Streaming LLM responses**: It supports streaming LLM output, updating the UI as chunks arrive.\n- **Formatting and display**: It formats messages for the UI, including bolding entity names and color-coding GM/NPC responses.\n- **Space selection**: If no space is specified, it allows the user to select from available spaces at startup.\n\nThe code is modular, with clear separation between conversation logic, LLM API handling, and UI components. It is designed for extensibility and maintainability, centralizing message formatting and input handling. No enhancements are suggested at this stage, as requested.","name":"workspace"},{"role":"user","content":"How do we enable conversation history saving? Currently it does not save conversation history files by default, apparently."},{"role":"assistant","content":"You're correct: **conversation history is not saved by default** in the current implementation. The [`ConversationManager.save_history()`](conversation_manager_textual_dry.py ) method exists, but it is never called automatically in the main flow or UI.\n\n### Why isn't it saving?\n- The [`save_history()`](conversation_manager_textual_dry.py ) method is only defined, not invoked.\n- There is no call to [`save_history()`](conversation_manager_textual_dry.py ) when the user exits, nor after each message.\n\n---\n\n## How to Enable Conversation History Saving\n\n### Option 1: Save on App Exit\n\nAdd a call to [`save_history()`](conversation_manager_textual_dry.py ) when the user quits the app. 
\n**Best place:** In the [`ChatApp`](conversation_manager_textual_dry.py ) class, override the `on_exit` method.\n\n````python\nclass ChatApp(App):\n # ...existing code...\n\n async def on_exit(self, event):\n # Save conversation history if possible\n if hasattr(self, \"conv_manager\") and hasattr(self.conv_manager, \"save_history\"):\n self.conv_manager.save_history()\n await super().on_exit(event)\n````\n\n### Option 2: Save After Each Message\n\nIf you want to save after every message (safer for crash recovery), add a call to [`save_history()`](conversation_manager_textual_dry.py ) at the end of [`finalize_llm_response`](conversation_manager_textual_dry.py ) in [`ChatInput`](conversation_manager_textual_dry.py ):\n\n````python\nclass ChatInput(Input):\n # ...existing code...\n\n def finalize_llm_response(self, response, speaker):\n # ...existing code...\n if hasattr(self, '_partial_llm_response'):\n del self._partial_llm_response\n # Save after each response\n if self.conv_manager and hasattr(self.conv_manager, \"save_history\"):\n self.conv_manager.save_history()\n````\n\n---\n\n**Summary:** \n- Add a call to [`save_history()`](conversation_manager_textual_dry.py ) on app exit, or after each message, or both.\n- This will ensure conversation logs are written to disk automatically.\n\nLet me know which approach you prefer, or if you want both!"},{"role":"user","content":"\nUser's current visible code, this should be the main focus:\nExcerpt from conversation_manager_textual_dry.py, lines 183 to 212:\n```python\n# filepath: /Users/drewclifton/projects/json-rpg/conversation_manager_textual_dry.py\nif match:\n target_npc_id = match.group(1)\n if target_npc_id not in self.npcs:\n return f\"System: NPC '{target_npc_id}' not found in this space.\", \"system\"\n \n npc_data = self.npcs[target_npc_id]\n system_prompt = f\"\"\"\nYou are the NPC '{npc_data.get('name', target_npc_id)}' ({target_npc_id}).\nYour description: {npc_data.get('description')}\nYour current mood is {npc_data.get('mood', 'neutral')}.\nYou are in the space: {self.space_data.get('space_name')}.\nRespond to the player from this character's perspective.\n\"\"\"\n else:\n system_prompt = f\"\"\"\nYou are the Game Master (GM). 
The player is exploring a space.\nSpace Name: {self.space_data.get('space_name')}\nSpace Description: {self.space_data.get('description')}\nNodes present: {json.dumps(self.space_data.get('nodes', {}), indent=2)}\nDescribe the scene, narrate events, and embody any NPCs the player talks to if they are not addressed directly with '@'.\n\"\"\"\n\n messages = [{\"role\": \"system\", \"content\": system_prompt}]\n \n # Normalize roles for the API call.\n for message in self.conversation_history:\n api_message = message.copy()\n # The new format doesn't need role normalization anymore,\n # but this is good for backward compatibility if loading old files.\n```\n\n\nUser's active file for additional context:\n```python\n# filepath: /Users/drewclifton/projects/json-rpg/conversation_manager_textual_dry.py\nclass ConversationManager:\n \"\"\"Manages interactive conversations within a game space.\"\"\"\n\n def __init__(self, space_id: str, llm_client: LLMClient, load_file: Optional[str] = None):\n \n\n def _load_history(self, filename: str):\n \"\"\"Loads a conversation history from a JSON file.\"\"\"\n history_file_path = Path(\"conversations\") / filename\n if not history_file_path.exists():\n \n \n with open(history_file_path, \"r\") as f:\n \n print(f\"Successfully loaded conversation from {filename}\")\n\n def _load_space_data(self) -> Optional[Dict]:\n \"\"\"Loads the specified space JSON file and extracts the current space data.\"\"\"\n space_file_path = Path(\"static/assets/spaces\") / f\"{self.space_id}.json\"\n if not space_file_path.exists():\n \n with open(space_file_path, \"r\") as f:\n \n\n # The expected format is a region file containing a list of spaces.\n # We find the current space using 'current_space_id'.\n if \"current_space_id\" in data and \"spaces\" in data:\n \n \n print(f\"Error: Could not find a valid space object for '{self.space_id}' in {space_file_path}\")\n return None\n\n def _get_npcs(self) -> Dict[str, Dict]:\n \"\"\"Extracts NPCs from the space data.\"\"\"\n if not self.space_data or \"nodes\" not in self.space_data:\n \n \n npc_nodes = self.space_data[\"nodes\"].get(\"npcs\", [])\n return {npc.get(\"id\", f\"npc-{i}\"): npc for i, npc in enumerate(npc_nodes)}\n\n def _get_player_id(self) -> Optional[str]:\n \"\"\"\n Extracts the player's ID from the current space data.\n Assumes there's only one player per interaction for now.\n \"\"\"\n if not self.space_data:\n \n player_nodes = self.space_data.get(\"nodes\", {}).get(\"players\", [])\n if not player_nodes:\n \n return player_nodes[0].get(\"id\", \"gnorf\")\n \n def _generate_response(self, user_input: str, stream: bool = False):\n \"\"\"Generates a response from the LLM based on user input.\"\"\"\n # Convert any @DisplayName back to @id for correct NPC lookup\n for npc_id, npc_data in self.npcs.items():\n display_name = npc_data.get('name', '')\n if display_name:\n # Replace occurrences of @DisplayName with @id\n user_input = re.sub(rf\"@{re.escape(display_name)}\", f\"@{npc_id}\", user_input)\n\n target_npc_id = None\n match = re.match(r\"@(\\S+)\", user_input)\n \n if match:\n target_npc_id = match.group(1)\n if target_npc_id not in self.npcs:\n return f\"System: NPC '{target_npc_id}' not found in this space.\", \"system\"\n \n npc_data = self.npcs[target_npc_id]\n system_prompt = f\"\"\"\nYou are the NPC '{npc_data.get('name', target_npc_id)}' ({target_npc_id}).\nYour description: {npc_data.get('description')}\nYour current mood is {npc_data.get('mood', 'neutral')}.\nYou are in the space: 
{self.space_data.get('space_name')}.\nRespond to the player from this character's perspective.\n\"\"\"\n else:\n system_prompt = f\"\"\"\nYou are the Game Master (GM). The player is exploring a space.\nSpace Name: {self.space_data.get('space_name')}\nSpace Description: {self.space_data.get('description')}\nNodes present: {json.dumps(self.space_data.get('nodes', {}), indent=2)}\nDescribe the scene, narrate events, and embody any NPCs the player talks to if they are not addressed directly with '@'.\n\"\"\"\n\n messages = [{\"role\": \"system\", \"content\": system_prompt}]\n \n # Normalize roles for the API call.\n for message in self.conversation_history:\n api_message = message.copy()\n # The new format doesn't need role normalization anymore,\n # but this is good for backward compatibility if loading old files.\n if \"speaker\" not in api_message:\n if api_message[\"role\"].startswith(\"assistant(\"):\n api_message[\"role\"] = \"assistant\"\n messages.append(api_message)\n\n messages.append({\"role\": \"user\", \"content\": user_input})\n\n speaker = f\"npc:{target_npc_id}\" if target_npc_id else \"gm\"\n if stream:\n return self.llm_client.stream_llm(messages), speaker\n else:\n response = self.llm_client.call_llm(messages)\n return response, speaker\n\n def save_history(self):\n \"\"\"Saves the conversation history to a JSON file.\n \n If a conversation was loaded, it overwrites the original file.\n Otherwise, it creates a new file with a timestamp.\n \"\"\"\n if not self.conversation_history:\n \n\n history_dir = Path(\"conversations\")\n history_dir.mkdir(exist_ok=True)\n \n if self.loaded_filename:\n \n else:\n \n\n \n # No-op: replaced by Textual UI\n pass\n```\n"},{"role":"user","content":"Can we intercept the `quit` command and ask if the user want's to save (Y/N)? So if the user types quit, it asks, then after the user sends a Y or N it quits.\n\nPlease implement this update."},{"role":"assistant","content":"\n```\n\n```\nThe \"quit\" and \"exit\" commands are now intercepted: when the user types either, the app will prompt \"Do you want to save the conversation before quitting? (Y/N)\". The app will only quit after the user responds with Y or N, saving the conversation if Y is chosen.\n\nYou can now test this behavior in your TUI. Let me know if you want any further adjustments!"},{"role":"user","content":"Question:\n\nIf we are only saving the file at the end, how does the LLM track the conversation during the conversation? Is it saved in memory or something? Or is every message new for the LLM currently?\n\nPlease evaluate the current handling of this and advise."},{"role":"assistant","content":"Great question! Here’s how conversation context is handled in your current implementation:\n\n### How Conversation Context Is Tracked\n\n- The conversation history is stored in memory in the `self.conversation_history` list of the `ConversationManager` instance.\n- Every time the user submits a message, it is appended to `self.conversation_history`.\n- When generating a response, the `_generate_response` method constructs the LLM prompt by:\n 1. Creating a system prompt (for GM or NPC).\n 2. Appending all messages from `self.conversation_history` (user and assistant turns) to the prompt.\n 3. 
Adding the current user message.\n- This full message list is sent to the LLM for every new response, so the LLM always receives the entire conversation context up to that point.\n\n### Is Anything Persisted to Disk During the Conversation?\n\n- No, the conversation is only saved to disk when `save_history()` is called (e.g., on quit and user confirmation).\n- Until then, the conversation history exists only in memory.\n\n### Is Every Message \"New\" for the LLM?\n\n- No, every message is not \"new\" for the LLM.\n- The LLM receives the entire conversation history (from memory) with each API call, so it can generate contextually appropriate responses.\n\n---\n\n**Summary:** \n- Conversation history is kept in memory during the session.\n- The LLM always gets the full conversation so far (from memory, not from disk).\n- Saving to disk is only for persistence between sessions, not for LLM context.\n\nLet me know if you want to change this behavior or have further questions!"},{"role":"user","content":"\nUser's current visible code, this should be the main focus:\nExcerpt from conversation_manager_textual_dry.py, lines 429 to 458:\n```python\n# filepath: /Users/drewclifton/projects/json-rpg/conversation_manager_textual_dry.py\nself.conv_manager.conversation_history.append({\"role\": \"user\", \"speaker\": \"player\", \"content\": user_message})\n self.app.query_one(\"#chat_container\").scroll_end(animate=False)\n # Start LLM response (streaming)\n self._partial_llm_response = \"\"\n self._streaming_static = Static(\"[dim]Thinking...[/dim]\", classes=\"assistant-message\")\n self.chat_display.mount(self._streaming_static)\n self.app.query_one(\"#chat_container\").scroll_end(animate=False)\n def stream_worker():\n stream, speaker = self.conv_manager._generate_response(user_message, stream=True)\n try:\n for chunk in stream:\n self.app.call_from_thread(self.display_llm_chunk, chunk, speaker)\n self.app.call_from_thread(self.finalize_llm_response, self._partial_llm_response, speaker)\n except Exception as e:\n logging.error(f\"Error in LLM stream: {e}\")\n self.app.call_from_thread(self.finalize_llm_response, \"[dim]Error from LLM.[/dim]\", speaker)\n self.run_worker(stream_worker, thread=True)\n elif user_message.lower() in [\"quit\", \"exit\"]:\n # Only prompt to save if there is conversation history\n history = getattr(self.conv_manager, \"conversation_history\", None)\n if history and len(history) > 0:\n from textual.widgets import Static\n self.chat_display.mount(Static(\"[yellow]Do you want to save the conversation before quitting? 
(Y/N)[/yellow]\", classes=\"assistant-message\"))\n self.app.query_one(\"#chat_container\").scroll_end(animate=False)\n self._pending_quit_confirmation = True\n self.value = \"\"\n return\n else:\n self.app.exit()\n return\n```\n\n\nUser's active file for additional context:\n```python\n# filepath: /Users/drewclifton/projects/json-rpg/conversation_manager_textual_dry.py\nclass ChatInput(Input):\n def __init__(self, *args, chat_display=None, conv_manager=None, **kwargs):\n \n\n def on_mount(self):\n \n\n def watch_value(self, value):\n \n\n async def on_key(self, event: events.Key):\n if self.dropdown and getattr(self.dropdown, 'display', False):\n \n elif event.key == \"enter\":\n \n elif event.key == \"escape\":\n \n elif event.key == \"ctrl+q\":\n \n # Otherwise, let Textual handle it\n return\n\n def on_submitted(self, value: str):\n logging.info(f\"[on_submitted] Value submitted: {value!r}\")\n user_message = value.strip()\n # If waiting for quit confirmation, handle Y/N\n if getattr(self, '_pending_quit_confirmation', False):\n from textual.widgets import Static\n answer = user_message.lower()\n if answer in [\"y\", \"yes\"]:\n # Save and quit\n if self.conv_manager and hasattr(self.conv_manager, \"save_history\"):\n self.conv_manager.save_history()\n self.app.exit()\n elif answer in [\"n\", \"no\"]:\n self.app.exit()\n else:\n self.chat_display.mount(Static(\"[red]Please enter Y or N.[/red]\", classes=\"assistant-message\"))\n self.app.query_one(\"#chat_container\").scroll_end(animate=False)\n return\n self._pending_quit_confirmation = False\n self.value = \"\"\n return\n # If no space is loaded, only allow number input for space selection\n if not (self.conv_manager and hasattr(self.conv_manager, 'space_data') and self.conv_manager.space_data):\n return\n if self.chat_display and user_message and user_message.lower() not in [\"quit\", \"exit\"]:\n # Add user message to history and display\n player_name = \"You\"\n player_nodes = self.conv_manager.space_data.get(\"nodes\", {}).get(\"players\", [])\n if player_nodes and player_nodes[0].get(\"name\"):\n player_name = player_nodes[0][\"name\"]\n from textual.widgets import Static\n user_static = Static(f\"[b]{player_name}:[/b] {user_message}\", classes=\"user-message\")\n self.chat_display.mount(user_static)\n self.conv_manager.conversation_history.append({\"role\": \"user\", \"speaker\": \"player\", \"content\": user_message})\n self.app.query_one(\"#chat_container\").scroll_end(animate=False)\n # Start LLM response (streaming)\n self._partial_llm_response = \"\"\n self._streaming_static = Static(\"[dim]Thinking...[/dim]\", classes=\"assistant-message\")\n self.chat_display.mount(self._streaming_static)\n self.app.query_one(\"#chat_container\").scroll_end(animate=False)\n def stream_worker():\n stream, speaker = self.conv_manager._generate_response(user_message, stream=True)\n try:\n for chunk in stream:\n self.app.call_from_thread(self.display_llm_chunk, chunk, speaker)\n self.app.call_from_thread(self.finalize_llm_response, self._partial_llm_response, speaker)\n except Exception as e:\n logging.error(f\"Error in LLM stream: {e}\")\n self.app.call_from_thread(self.finalize_llm_response, \"[dim]Error from LLM.[/dim]\", speaker)\n self.run_worker(stream_worker, thread=True)\n elif user_message.lower() in [\"quit\", \"exit\"]:\n # Only prompt to save if there is conversation history\n history = getattr(self.conv_manager, \"conversation_history\", None)\n if history and len(history) > 0:\n from textual.widgets import Static\n 
self.chat_display.mount(Static(\"[yellow]Do you want to save the conversation before quitting? (Y/N)[/yellow]\", classes=\"assistant-message\"))\n self.app.query_one(\"#chat_container\").scroll_end(animate=False)\n self._pending_quit_confirmation = True\n self.value = \"\"\n return\n else:\n self.app.exit()\n return\n self.value = \"\"\n\n def _bold_entities(self, text):\n names = set()\n if self.conv_manager and hasattr(self.conv_manager, 'space_data') and self.conv_manager.space_data:\n \n sorted_names = sorted(names, key=len, reverse=True)\n for name in sorted_names:\n \n return text\n\n def _format_message(self, content, speaker, npc_name=None):\n \n\n def render_history(self):\n```\n"},{"role":"user","content":"Ok need a tweak to the saving logic:\n\nCurrently the game asks if the user wants to save *even if no messages were sent* (i.e., the space was only loaded). If no messages have been sent the message is irrelevant (nothing to save), so we need logic that the save prompt is only sent if there is history."},{"role":"assistant","content":"\n```\n\n```\nThe quit/save prompt will now only appear if there is actual conversation history. If no messages have been sent, the app will simply quit without prompting to save. This prevents irrelevant save prompts on a fresh or empty session."},{"role":"user","content":"\nThanks I will check.\n"}] Trace: choice {"delta":{"content":"","role":"assistant"}} Trace: choice {"delta":{"content":"You're"}} Trace: choice {"delta":{"content":" welcome"}} Trace: choice {"delta":{"content":"!"}} Trace: choice {"delta":{"content":" Let"}} Trace: choice {"delta":{"content":" me"}} Trace: choice {"delta":{"content":" know"}} Trace: choice {"delta":{"content":" if"}} Trace: choice {"delta":{"content":" you"}} Trace: choice {"delta":{"content":" need"}} Trace: choice {"delta":{"content":" any"}} Trace: choice {"delta":{"content":" further"}} Trace: choice {"delta":{"content":" adjustments"}} Trace: choice {"delta":{"content":" or"}} Trace: choice {"delta":{"content":" run"}} Trace: choice {"delta":{"content":" into"}} Trace: choice {"delta":{"content":" any"}} Trace: choice {"delta":{"content":" issues"}} Trace: choice {"delta":{"content":"."}} Trace: choice {"finish_reason":"stop","delta":{"content":null}} Info: message 0 returned. finish reason: [stop] Debug: message 0 details: finishOffset: [undefined] completionId: [{chatcmpl-BvnwKPBsxbUA8Ozfod46be4qWhwCF}] created: [{1753116364}] Info: request done: requestId: [c00fa405-3939-41b8-ab2f-d35cf2767eae] model deployment ID: [] Info: ccreq:007995a9.copilotmd | success | gpt-4.1 | 1135ms | [panel/unknown] Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: false. Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: true. Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: false. Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: true. Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: false. Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: true. Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: false. Debug: [context keys] Window state change. Needs offline check: false, active: true, focused: true.
Request IDs
4d97e4ac-d771-431a-987b-c4e3cb1a4b39 c00fa405-3939-41b8-ab2f-d35cf2767eae
System Info
Item | Value |
---|---|
CPUs | Apple M2 (8 x 2400) |
GPU Status | 2d_canvas: enabled canvas_oop_rasterization: enabled_on direct_rendering_display_compositor: disabled_off_ok gpu_compositing: enabled multiple_raster_threads: enabled_on opengl: enabled_on rasterization: enabled raw_draw: disabled_off_ok skia_graphite: enabled_on video_decode: enabled video_encode: enabled webgl: enabled webgl2: enabled webgpu: enabled webnn: disabled_off |
Load (avg) | 3, 3, 3 |
Memory (System) | 16.00GB (0.15GB free) |
Process Argv | --crash-reporter-id a2dd20c1-6d60-4a12-be68-3bbeaacc1e59 |
Screen Reader | no |
VM | 0% |
A/B Experiments
vsliv368:30146709
binariesv615:30325510
0bi6i642:31351871
2e7ec940:31000449
nativeloc1:31344060
5fd0e150:31155592
dwcopilot:31170013
6074i472:31201624
dwoutputs:31242946
hdaa2157:31222309
copilot_t_ci:31333650
e5gg6876:31282496
pythoneinst12:31285622
c7cif404:31314491
pythonpulldiag:31343502
996jf627:31283433
pythonrdcb7:31342333
usemplatestapi:31297334
0aa6g176:31307128
747dc170:31275177
aj953862:31281341
generatesymbolt:31295002
convertfstringf:31295003
pylancequickfixt:31350061
9d2cg352:31346308
usemarketplace:31343026
nesew2to5:31336538
agentclaude:31350858
nes-diff-11:31337487
replacestringexc:31350595
nes-set-on:31351930
6abeh943:31336334
yijiwantestdri0626-t:31336930
0927b901:31350571
4gdec884:31348710
0cj2b977:31350606
cab2h891-first:31350623