The difference between pip install package_name and pip install -e . (Gemini-generated)


A quick note for the record.

Great question! Installing with pip install -e . really is fundamentally different from installing with pip install numpy. The difference lies in how each handles the package's source code and what each is intended for.

Let's go through it in detail:

1. pip install package_name (standard install)

  • How it works: when you run pip install numpy or pip install requests, pip downloads a pre-built binary package (usually a .whl, i.e. wheel, file) or a source distribution (sdist) from the Python Package Index (PyPI). Once downloaded, pip copies the package's contents (Python modules, data files, etc.) into the site-packages directory of your Python environment.
  • What pip list shows: pip list displays the package name and version, e.g. numpy 1.26.4. Because the package's files have already been copied into site-packages, no extra local path needs to be shown.
  • Characteristics:
    • Independence: the installed copy is self-contained and unaffected by where it was downloaded from. Even if you delete the download cache, the package is still available in site-packages.
    • Stability: once installed, the package's code is fixed. If you want to modify it, you have to uninstall it, change the source code, and reinstall.
    • Use case: mainly for using a library rather than developing it. This is the most common way to install and is what you want when, as a user, you bring a third-party library into your project (a quick example follows this list).
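
A quick way to see this on your own machine; the version number and path in the comments below are illustrative placeholders, not real output from this post:

```bash
# Standard install: pip downloads a wheel/sdist and copies it into site-packages
pip install requests

# "Location" points inside the environment's site-packages directory
pip show requests
# Name: requests
# Version: 2.31.0                                        <- illustrative
# Location: /path/to/venv/lib/python3.11/site-packages   <- illustrative
```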

2. pip install -e . (editable install / development-mode install)

  • How it works: when you run pip install -e . in a project root directory that contains a setup.py or pyproject.toml file, the -e flag stands for "editable". pip does not copy the package's contents into site-packages. Instead, it creates a link in site-packages (typically an .egg-link or .pth file) that points to the local source directory of your project.
    This way, when you import the package, the Python interpreter follows that link and loads the files directly from your local source directory.
  • What pip list shows: pip list displays the package name and version, but, more importantly, it also shows a local path on the right, e.g. faive-gym 0.1.0 (/home/swanchan/faive_gym_oss). That path is the actual source location that the link created by pip points to.
  • Characteristics:
    • Live edits: this is the biggest advantage. Changes you make to files in the local source directory take effect immediately, with no reinstall. Just save the file and rerun your Python script or program, and it will use the modified code.
    • Convenient for development: ideal when you are developing a Python package, library, or framework. You can write, test, and debug in one loop without constantly uninstalling and reinstalling.
    • Debugger-friendly: because you work directly on the source, it is easy to set breakpoints, step through code, and inspect variables in a debugger (e.g. the one in VS Code or PyCharm).
    • Depends on the local path: the package's availability relies on the local source directory still existing. If you move or delete that directory, the "installed" package stops working.
    • Use case: mainly for developing a library or project, or for making and testing local modifications to an existing library (see the sketch after this list).
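
A minimal, end-to-end sketch of the editable workflow. It assumes a reasonably recent pip and setuptools, and mypkg is a made-up package name used purely for illustration:

```bash
# Create a tiny throwaway project ("mypkg" is a placeholder name)
mkdir -p mypkg/mypkg && cd mypkg
cat > pyproject.toml <<'EOF'
[build-system]
requires = ["setuptools>=64"]
build-backend = "setuptools.build_meta"

[project]
name = "mypkg"
version = "0.1.0"
EOF
echo 'GREETING = "hello"' > mypkg/__init__.py

# Editable install: site-packages only gets a link back to this directory
pip install -e .
python -c "import mypkg; print(mypkg.GREETING)"   # -> hello

# Edit the source in place; the change is picked up with no reinstall
echo 'GREETING = "hello, edited"' > mypkg/__init__.py
python -c "import mypkg; print(mypkg.GREETING)"   # -> hello, edited
```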

Summary of the differences:

| Aspect | pip install package_name (standard install) | pip install -e . (editable install) |
| --- | --- | --- |
| Purpose | Use/deploy a stable release of a library | Develop/debug a library or project |
| Code location | Copied into the site-packages directory | Stays in the original source directory; site-packages only holds a link pointing to it |
| When edits take effect | Only after reinstalling | Immediately, no reinstall needed |
| pip list | Shows package name and version | Shows package name, version, and the local source path |
| Independence | Independent of the original download location | Depends on the local source directory |
| Typical scenario | Installing third-party libraries such as NumPy or Requests into your application | Developing your own Python library, or locally modifying and testing an installed one |

So when you see output like faive-gym 0.1.0 (/home/swanchan/faive_gym_oss), it means faive_gym was installed in editable mode and its actual code lives in the /home/swanchan/faive_gym_oss folder.
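
If you ever want to confirm which copy of a package Python is actually importing, printing its __file__ attribute is a quick check (mypkg below is just a placeholder; substitute any package name):

```bash
# Editable install: prints a path inside your local source checkout,
# e.g. /home/you/mypkg/mypkg/__init__.py
# Standard install: prints a path under site-packages
python -c "import mypkg; print(mypkg.__file__)"
```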
