The Thinking of Coding

This post discusses how to break tasks down effectively and manage a project while coding. First, pin down the goal of the task, including when, where, how, and why it is to be done; second, start from the basics rather than rushing in and creating a mess. The post also walks through analyzing the steps needed to finish a project, to help readers understand and practice project management.

Some thoughts on coding from these days of work.

When you come to a task, you should:

1. Get the aim of it: where, how, why, when, and who.

2. Figure out where to build it. Don't rush into the task only to spend a long time getting nothing done, with just a big mess in your head. Start from the basic point.

Learn to finish a project by analyzing it step by step.
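The checklist above can be sketched as a small helper. This is a minimal sketch of my own; the `TaskBrief` name and its fields are illustrative, not from any library:

```python
from dataclasses import dataclass


@dataclass
class TaskBrief:
    """A hypothetical note structure for sizing up a task before coding."""
    aim: str         # what the task should achieve
    where: str = ""  # where the change lives (module, service, repo)
    how: str = ""    # rough approach, starting from the basic point
    why: str = ""    # the reason the task exists
    when: str = ""   # deadline or ordering constraints
    who: str = ""    # stakeholders or owners

    def open_questions(self) -> list:
        # Any empty field is a question to answer before rushing into code.
        return [name for name, value in vars(self).items() if not value]


brief = TaskBrief(aim="add retry logic to the HTTP client", why="flaky upstream API")
print(brief.open_questions())  # fields still to pin down before starting
```

Answering `open_questions()` first is the "don't rush" step: only when the list is empty do you really know the aim point of the task.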


`Qwen-Agent <https://github.com/QwenLM/Qwen-Agent>`__ is a framework for developing LLM applications based on the instruction following, tool usage, planning, and memory capabilities of Qwen. This is a simple tutorial on using Qwen-Agent to quickly experience the agentic capabilities of Qwen3. For more detailed information, please refer to the `Qwen-Agent <https://github.com/QwenLM/Qwen-Agent>`__ repository.

Installation
------------

- Install the stable version from PyPI:

  .. code:: bash

     pip install -U "qwen-agent[gui,rag,code_interpreter,mcp]"
     # Or use `pip install -U qwen-agent` for the minimal requirements.
     # The optional requirements, specified in double brackets, are:
     #   [gui] for Gradio-based GUI support;
     #   [rag] for RAG support;
     #   [code_interpreter] for Code Interpreter support;
     #   [mcp] for MCP support.

Developing Your Own Agent
-------------------------

Qwen3 excels in tool calling capabilities. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools by yourself.

.. code:: python

   import os

   from qwen_agent.agents import Assistant

   # Define LLM
   llm_cfg = {
       # Use a custom endpoint compatible with OpenAI API by vLLM/SGLang:
       'model': 'Qwen/Qwen3-32B',
       'model_server': 'http://localhost:8000/v1',  # api_base
       'api_key': 'EMPTY',

       # 'generate_cfg': {
       #     # When using vLLM/SGLang OAI API, pass the parameter of whether to enable thinking mode in this way
       #     'extra_body': {
       #         'chat_template_kwargs': {'enable_thinking': False}
       #     },
       #
       #     # Add: When the content is `<think>this is the thought</think>this is the answer`
       #     # Do not add: When the response has been separated by reasoning_content and content
       #     # This parameter will affect the parsing strategy of tool call
       #     # 'thought_in_content': True,
       # },
   }

   # llm_cfg = {
   #     # Use the model service provided by DashScope:
   #     'model': 'qwen3-235b-a22b',
   #     'model_type': 'qwen_dashscope',
   #
   #     # 'generate_cfg': {
   #     #     # When using the DashScope API, pass the parameter of whether to enable thinking mode in this way
   #     #     'enable_thinking': False,
   #     # },
   # }

   # llm_cfg = {
   #     # Use the OpenAI-compatible model service provided by DashScope:
   #     'model': 'qwen3-235b-a22b',
   #     'model_server': 'https://dashscope.aliyuncs.com/compatible-mode/v1',
   #     'api_key': os.getenv('DASHSCOPE_API_KEY'),
   #
   #     # 'generate_cfg': {
   #     #     # When using DashScope OAI API, pass the parameter of whether to enable thinking mode in this way
   #     #     'extra_body': {
   #     #         'enable_thinking': False
   #     #     },
   #     # },
   # }

   # Define Tools
   tools = [
       {'mcpServers': {  # You can specify the MCP configuration file
           'time': {
               'command': 'uvx',
               'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
           },
           'fetch': {
               'command': 'uvx',
               'args': ['mcp-server-fetch']
           }
       }},
       'code_interpreter',  # Built-in tools
   ]

   # Define Agent
   bot = Assistant(llm=llm_cfg, function_list=tools)

   # Streaming generation
   messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
   for responses in bot.run(messages=messages):
       pass
   print(responses)

For more detailed examples and MCP cookbooks, please refer to the `Qwen-Agent <https://github.com/QwenLM/Qwen-Agent>`__ repository.
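Note the streaming pattern at the end of the example: each yield from `bot.run` is the full response list so far, not a delta, so looping to exhaustion and keeping the last yield gives the final answer. A stdlib-only sketch of that pattern, where the `fake_run` generator is my own stand-in and not part of Qwen-Agent:

```python
def fake_run(chunks):
    """Stand-in for Assistant.run: yields the accumulated response list after each chunk."""
    responses = []
    text = ""
    for chunk in chunks:
        text += chunk
        responses = [{'role': 'assistant', 'content': text}]
        yield responses  # each yield is the full response so far, not a delta


# Consume the stream the same way the tutorial does: loop, then keep the last yield.
for responses in fake_run(["Qwen3 ", "supports ", "tool calling."]):
    pass  # a UI could render each intermediate `responses` here
print(responses[-1]['content'])  # -> Qwen3 supports tool calling.
```

This is why the tutorial's loop body is just `pass`: intermediate yields only matter if you want to render partial output, and the variable still holds the complete messages once the loop ends.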
09-30