The Google File System, Part 3: SYSTEM INTERACTIONS

3. SYSTEM INTERACTIONS
We designed the system to minimize the master’s involvement in all operations. 
With that background, we now describe how the client, master, and chunkservers interact to implement data mutations, atomic record append, and snapshot.

3.1 Leases and Mutation Order
A mutation is an operation that changes the contents or metadata of a chunk such as a write or an append operation. 
Each mutation is performed at all the chunk’s replicas.
We use leases to maintain a consistent mutation order across replicas. 
The master grants a chunk lease to one of the replicas, which we call the primary. 
The primary picks a serial order for all mutations to the chunk. 
All replicas follow this order when applying mutations. 

Thus, the global mutation order is defined first by the lease grant order chosen by the master, and within a lease by the serial numbers assigned by the primary.
The lease mechanism is designed to minimize management overhead at the master. 
A lease has an initial timeout of 60 seconds. 
However, as long as the chunk is being mutated, the primary can request and typically receive extensions from the master indefinitely. 
These extension requests and grants are piggybacked on the HeartBeat messages regularly exchanged between the master and all chunkservers.
The master may sometimes try to revoke a lease before it expires 
(e.g., when the master wants to disable mutations on a file that is being renamed). 
Even if the master loses communication with a primary, it can safely grant a new lease to another replica after the old lease expires.
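To make the bookkeeping concrete, the following is a minimal, hypothetical sketch of the master's lease handling; the names (`Master`, `ChunkLease`, `grant_lease`, `extend_lease`, `revoke_lease`) are illustrative rather than GFS internals, and replica placement is reduced to picking the first replica.

```python
import time

LEASE_TIMEOUT = 60.0  # initial lease timeout, 60 seconds per the paper


class ChunkLease:
    def __init__(self, primary):
        self.primary = primary                         # replica acting as primary
        self.expires_at = time.time() + LEASE_TIMEOUT
        self.revoked = False

    def is_valid(self):
        return not self.revoked and time.time() < self.expires_at


class Master:
    def __init__(self):
        self.leases = {}  # chunk handle -> ChunkLease

    def grant_lease(self, chunk_handle, replicas):
        """Return the current primary, granting a new lease if none is valid."""
        lease = self.leases.get(chunk_handle)
        if lease and lease.is_valid():
            return lease.primary
        # The old lease has expired or was revoked, so a new primary can be
        # chosen safely even if the old primary is unreachable.
        primary = replicas[0]                          # placement policy elided
        self.leases[chunk_handle] = ChunkLease(primary)
        return primary

    def extend_lease(self, chunk_handle, requester):
        """Extension request piggybacked on a HeartBeat from the primary."""
        lease = self.leases.get(chunk_handle)
        if lease and lease.is_valid() and lease.primary == requester:
            lease.expires_at = time.time() + LEASE_TIMEOUT
            return True
        return False

    def revoke_lease(self, chunk_handle):
        """Used, e.g., to disable mutations on a file being snapshotted or renamed."""
        lease = self.leases.get(chunk_handle)
        if lease:
            lease.revoked = True
```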
In Figure 2, we illustrate this process by following the control flow of a write through these numbered steps.

1. The client asks the master which chunkserver holds the current lease for the chunk and the locations of the other replicas. 
If no one has a lease, the master grants one to a replica it chooses (not shown).

2. The master replies with the identity of the primary and the locations of the other (secondary) replicas. 
The client caches this data for future mutations. 
It needs to contact the master again only when the primary becomes unreachable or replies that it no longer holds a lease.
3. The client pushes the data to all the replicas. 
A client can do so in any order. 
Each chunkserver will store the data in an internal LRU buffer cache until the data is used or aged out. 
By decoupling the data flow from the control flow, we can improve performance by scheduling the expensive data flow based on the network topology regardless of which chunkserver is the primary. 
Section 3.2 discusses this further.
4. Once all the replicas have acknowledged receiving the data, the client sends a write request to the primary.
The request identifies the data pushed earlier to all of the replicas. 
The primary assigns consecutive serial numbers to all the mutations it receives, possibly from multiple clients, which provides the necessary serialization. 
It applies the mutation to its own local state in serial number order.
5. The primary forwards the write request to all secondary replicas. 
Each secondary replica applies mutations in the same serial number order assigned by the primary.

6. The secondaries all reply to the primary indicating that they have completed the operation.
7. The primary replies to the client. 
Any errors encountered at any of the replicas are reported to the client.
In case of errors, the write may have succeeded at the primary and an arbitrary subset of the secondary replicas. 
(If it had failed at the primary, it would not have been assigned a serial number and forwarded.)
The client request is considered to have failed, and the modified region is left in an inconsistent state. 
Our client code handles such errors by retrying the failed mutation. 
It will make a few attempts at steps (3) through (7) before falling back to a retry from the beginning of the write.
If a write by the application is large or straddles a chunk boundary, GFS client code breaks it down into multiple write operations. 
They all follow the control flow described above but may be interleaved with and overwritten by concurrent operations from other clients. Therefore, the shared file region may end up containing fragments from different clients, although the replicas will be identical because the individual operations are completed successfully in the same order on all replicas. 
This leaves the file region in consistent but undefined state as noted in Section 2.7.
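The following in-memory sketch mirrors steps 3 through 7: data is first pushed into a per-replica buffer, then the primary assigns serial numbers and forwards each mutation so that every replica applies it in the same order. The `ReplicaState` and `Primary` classes are hypothetical stand-ins, not GFS code; real replicas are chunkservers reached over RPC.

```python
class ReplicaState:
    """Hypothetical in-memory stand-in for one replica of a chunk."""
    def __init__(self):
        self.buffered = {}   # data_id -> bytes, the LRU buffer of pushed data
        self.applied = []    # mutations applied, in serial-number order

    def push(self, data_id, data):
        # Step 3: the client pushes the data to every replica, in any order.
        self.buffered[data_id] = data

    def apply(self, serial, data_id):
        # Raises KeyError if the data never reached this replica.
        self.applied.append((serial, self.buffered.pop(data_id)))


class Primary(ReplicaState):
    def __init__(self, secondaries):
        super().__init__()
        self.secondaries = secondaries
        self.next_serial = 0

    def write(self, data_id):
        # Step 4: assign the next serial number; this serializes mutations that
        # may arrive concurrently from multiple clients.
        serial = self.next_serial
        self.next_serial += 1
        self.apply(serial, data_id)          # apply locally in serial-number order
        errors = []
        for s in self.secondaries:
            try:
                # Steps 5-6: forward to each secondary, which applies the
                # mutation in the same serial-number order and acknowledges.
                s.apply(serial, data_id)
            except KeyError:
                errors.append(s)             # this replica is now inconsistent
        # Step 7: the client sees any errors and retries steps 3-7.
        return errors


# Minimal usage: push the data to all replicas, then issue the write.
secondaries = [ReplicaState(), ReplicaState()]
primary = Primary(secondaries)
for r in [primary] + secondaries:
    r.push("d1", b"hello")
assert primary.write("d1") == []   # every replica applied the mutation in order
```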

3.2 Data Flow


We decouple the flow of data from the flow of control to use the network efficiently. 
While control flows from the client to the primary and then to all secondaries, data is pushed linearly along a carefully picked chain of chunkservers in a pipelined fashion. 
Our goals are to fully utilize each machine’s network bandwidth, avoid network bottlenecks and high-latency links, and minimize the latency to push through all the data.
To fully utilize each machine’s network bandwidth, the data is pushed linearly along a chain of chunkservers rather than distributed in some other topology (e.g., tree). 
Thus, each machine’s full outbound bandwidth is used to transfer the data as fast as possible rather than divided among multiple recipients.
To avoid network bottlenecks and high-latency links (e.g., inter-switch links are often both) as much as possible, each machine forwards the data to the “closest” machine in the network topology that has not received it. 
Suppose the client is pushing data to chunkservers S1 through S4. 
It sends the data to the closest chunkserver, say S1. 
S1 forwards it to whichever of S2 through S4 is closest to S1, say S2. 
Similarly, S2 forwards it to S3 or S4, whichever is closer to S2, and so on. 
Our network topology is simple enough that “distances” can be accurately estimated from IP addresses.
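A rough sketch of how such a forwarding chain might be built, greedily picking the "closest" machine not yet reached. The paper says only that distances can be estimated from IP addresses in its simple topology; the shared-prefix metric below is an assumption made for the example.

```python
def ip_distance(a, b):
    """Illustrative 'distance': the longer the shared IPv4 prefix, the closer."""
    shared = 0
    for x, y in zip(a.split("."), b.split(".")):
        if x != y:
            break
        shared += 1
    return 4 - shared            # fewer shared octets = farther away


def build_chain(source_ip, replica_ips):
    """Order replicas so each hop forwards to the closest machine not yet reached."""
    chain, current, remaining = [], source_ip, list(replica_ips)
    while remaining:
        nxt = min(remaining, key=lambda ip: ip_distance(current, ip))
        chain.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return chain


# The client pushes to the nearest chunkserver first; each hop then forwards
# to whichever remaining replica is nearest to it.
print(build_chain("10.1.0.5", ["10.1.0.9", "10.2.3.4", "10.1.7.2", "10.3.0.1"]))
```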

Finally, we minimize latency by pipelining the data transfer over TCP connections. 
Once a chunkserver receives some data, it starts forwarding immediately. 
Pipelining is especially helpful to us because we use a switched network with full-duplex links. 
Sending the data immediately does not reduce the receive rate. Without network congestion, the ideal elapsed time for transferring B bytes to R replicas is B/T + RL where T is the network throughput and L is latency to transfer bytes between two machines. 
Our network links are typically 100 Mbps (T), and L is far below 1 ms.
Therefore, 1 MB can ideally be distributed in about 80 ms.
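As a quick check of that figure, plugging the paper's numbers into B/T + RL (with a replication factor of R = 3 assumed for the example):

```python
B = 8_000_000        # 1 MB expressed in bits
T = 100_000_000      # 100 Mbps network throughput
L = 0.001            # per-hop latency, well under 1 ms
R = 3                # assumed replication factor

ideal = B / T + R * L
print(f"ideal transfer time: {ideal * 1000:.0f} ms")   # ~83 ms, i.e. about 80 ms
```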


3.3 Atomic Record Appends
GFS provides an atomic append operation called record append. 
In a traditional write, the client specifies the offset at which data is to be written. 
Concurrent writes to the same region are not serializable: 
the region may end up containing data fragments from multiple clients. 
In a record append, however, the client specifies only the data. 
GFS appends it to the file at least once atomically (i.e., as one continuous sequence of bytes) at an offset of GFS’s choosing and returns that offset to the client. 
This is similar to writing to a file opened in O_APPEND mode in Unix without the race conditions when multiple writers do so concurrently.
Record append is heavily used by our distributed applications in which many clients on different machines append to the same file concurrently. 
Clients would need additional complicated and expensive synchronization, for example through a distributed lock manager, if they do so with traditional writes. 
In our workloads, such files often serve as multiple-producer/single-consumer queues or contain merged results from many different clients.
Record append is a kind of mutation and follows the control flow in Section 3.1 with only a little extra logic at the primary. 
The client pushes the data to all replicas of the last chunk of the file. Then, it sends its request to the primary. 
The primary checks to see if appending the record to the current chunk would cause the chunk to exceed the maximum size (64 MB). 
If so, it pads the chunk to the maximum size, tells secondaries to do the same, and replies to the client indicating that the operation should be retried on the next chunk. 
(Record append is restricted to be at most one-fourth of the maximum chunk size to keep worst-case fragmentation at an acceptable level.) 
If the record fits within the maximum size, which is the common case, the primary appends the data to its replica, tells the secondaries to write the data at the exact offset where it has, and finally replies success to the client.
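A condensed sketch of that primary-side logic, with replicas modeled as `bytearray`s; the function name, return values, and the zero padding byte are illustrative assumptions, not GFS internals.

```python
CHUNK_SIZE = 64 * 1024 * 1024        # maximum chunk size, 64 MB
MAX_RECORD = CHUNK_SIZE // 4         # records limited to 1/4 of a chunk


def record_append(primary, secondaries, record):
    """Primary-side record append over bytearray replicas (illustrative only)."""
    assert len(record) <= MAX_RECORD
    end = len(primary)
    if end + len(record) > CHUNK_SIZE:
        # Pad this chunk to its maximum size on every replica and tell the
        # client to retry the operation on the next chunk.
        primary.extend(b"\0" * (CHUNK_SIZE - end))
        for s in secondaries:
            s.extend(b"\0" * (CHUNK_SIZE - len(s)))
        return ("retry_next_chunk", None)
    # Common case: append at the offset of the primary's choosing, then have
    # each secondary write the data at exactly that offset.
    offset = end
    primary.extend(record)
    for s in secondaries:
        if len(s) < offset:
            s.extend(b"\0" * (offset - len(s)))
        s[offset:offset + len(record)] = record
    return ("ok", offset)
```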

If a record append fails at any replica, the client retries the operation. 
As a result, replicas of the same chunk may contain different data possibly including duplicates of the same record in whole or in part. 
GFS does not guarantee that all replicas are bytewise identical. 
It only guarantees that the data is written at least once as an atomic unit. 
This property follows readily from the simple observation that for the operation to report success, the data must have been written at the same offset on all replicas of some chunk. 
Furthermore, after this, all replicas are at least as long as the end of record and therefore any future record will be assigned a higher offset or a different chunk even if a different replica later becomes the primary. 
In terms of our consistency guarantees, the regions in which successful record append operations have written their data are defined (hence consistent), whereas intervening regions are inconsistent (hence undefined). 
Our applications can deal with inconsistent regions as we discussed in Section 2.7.2.
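For illustration, a client-side retry loop with the at-least-once behavior described above might look like the following; `client.record_append` stands in for the actual RPC and is an assumption of the sketch.

```python
def append_record(client, file_handle, record, max_attempts=5):
    """Retry until some offset is returned; duplicates may be left behind."""
    for _ in range(max_attempts):
        status, offset = client.record_append(file_handle, record)  # assumed RPC wrapper
        if status == "ok":
            return offset   # the offset GFS chose; the record is there on every replica
        # Any replica failure (or a full chunk) means "try again"; a replica
        # that already applied the record now holds a duplicate of it.
    raise IOError("record append failed after retries")
```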

3.4 Snapshot
The snapshot operation makes a copy of a file or a directory tree (the “source”) almost instantaneously, while minimizing any interruptions of ongoing mutations. 
Our users use it to quickly create branch copies of huge data sets (and often copies of those copies, recursively), or to checkpoint the current state before experimenting with changes that can later be committed or rolled back easily.
Like AFS [5], we use standard copy-on-write techniques to implement snapshots. When the master receives a snapshot request, it first revokes any outstanding leases on the chunks in the files it is about to snapshot. 
This ensures that any subsequent writes to these chunks will require an interaction with the master to find the lease holder. 
This will give the master an opportunity to create a new copy of the chunk first.
After the leases have been revoked or have expired, the master logs the operation to disk. 
It then applies this log record to its in-memory state by duplicating the metadata for the source file or directory tree. 
The newly created snapshot files point to the same chunks as the source files.
The first time a client wants to write to a chunk C after the snapshot operation, it sends a request to the master to find the current lease holder. 
The master notices that the reference count for chunk C is greater than one. 
It defers replying to the client request and instead picks a new chunk handle C’. 
It then asks each chunkserver that has a current replica of C to create a new chunk called C’. 
By creating the new chunk on the same chunkservers as the original, we ensure that the data can be copied locally, not over the network (our disks are about three times as fast as our 100 Mb Ethernet links). 
From this point, request handling is no different from that for any chunk: 
the master grants one of the replicas a lease on the new chunk C’ and replies to the client,which can write the chunk normally, not knowing that it has just been created from an existing chunk.
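A minimal sketch of that copy-on-write bookkeeping at the master, assuming a hypothetical `copy_local` operation on each chunkserver; handle naming, operation logging, and replica placement are all elided.

```python
class SnapshotMaster:
    """Illustrative copy-on-write metadata handling for snapshots."""

    def __init__(self):
        self.files = {}       # path -> list of chunk handles
        self.refcount = {}    # chunk handle -> number of files referencing it
        self.leases = {}      # chunk handle -> current primary, if any

    def snapshot(self, src, dst):
        # Revoke outstanding leases so any later write must ask the master
        # for the lease holder, giving it a chance to copy the chunk first.
        for handle in self.files[src]:
            self.leases.pop(handle, None)
        # (Log the operation to disk here.)  Then duplicate only the metadata:
        # the snapshot points at the same chunks as the source file.
        self.files[dst] = list(self.files[src])
        for handle in self.files[dst]:
            self.refcount[handle] = self.refcount.get(handle, 1) + 1

    def lease_for_write(self, path, index, chunkservers):
        handle = self.files[path][index]
        if self.refcount.get(handle, 1) > 1:
            # First write to chunk C after a snapshot: create C' on the same
            # chunkservers so the copy happens locally, not over the network.
            new_handle = handle + "'"
            for cs in chunkservers:
                cs.copy_local(handle, new_handle)     # hypothetical chunkserver call
            self.refcount[handle] -= 1
            self.refcount[new_handle] = 1
            self.files[path][index] = new_handle
            handle = new_handle
        # From here on, handling is no different from any other chunk.
        primary = chunkservers[0]
        self.leases[handle] = primary
        return handle, primary
```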

E:\AI_System\core里, 没有utils.py;E:\AI_System\tests里没有test_models.py 这个不知道怎么改“# E:\AI_System\agent\cognitive_architecture.py # 智能体认知架构模块 - 修复基类导入问题并优化决策系统 import os import time import random import logging from datetime import datetime from pathlib import Path import sys # 添加项目根目录到路径 sys.path.append(str(Path(__file__).parent.parent)) # 配置日志 logger = logging.getLogger(&#39;CognitiveArchitecture&#39;) logger.setLevel(logging.INFO) handler = logging.StreamHandler() formatter = logging.Formatter(&#39;%(asctime)s - %(name)s - %(levelname)s - %(message)s&#39;) handler.setFormatter(formatter) logger.addHandler(handler) logger.propagate = False # 防止日志向上传播 # 修复基类导入问题 - 使用绝对路径导入 try: # 尝试从core包导入基类 from core.base_module import CognitiveModule logger.info("✅ 成功从core.base_module导入CognitiveModule基类") except ImportError as e: logger.error(f"❌ 无法从core.base_module导入CognitiveModule基类: {str(e)}") try: # 备选导入路径 from .base_model import CognitiveModule logger.info("✅ 从agent.base_model导入CognitiveModule基类") except ImportError as e: logger.error(f"❌ 备选导入失败: {str(e)}") # 创建占位符基类 logger.warning("⚠️ 创建占位符CognitiveModule基类") class CognitiveModule: def __init__(self, name): self.name = name self.logger = logging.getLogger(name) self.logger.warning("⚠️ 使用占位符基类") def get_status(self): return {"name": self.name, "status": "unknown (placeholder)"} # 尝试导入自我认知模块 try: # 使用相对导入 from .digital_body_schema import DigitalBodySchema from .self_referential_framework import SelfReferentialFramework from .self_narrative_generator import SelfNarrativeGenerator logger.info("✅ 成功导入自我认知模块") except ImportError as e: logger.error(f"❌ 自我认知模块导入失败: {str(e)}") logger.warning("⚠️ 使用占位符自我认知模块") # 创建占位符类 class DigitalBodySchema: def __init__(self): self.self_map = {"boundary_strength": 0.5, "self_awareness": 0.3} logger.warning("⚠️ 使用占位符DigitalBodySchema") def is_part_of_self(self, stimulus): return False def strengthen_boundary(self, source): self.self_map["boundary_strength"] = min(1.0, self.self_map["boundary_strength"] + 0.1) def get_self_map(self): return self.self_map.copy() class SelfReferentialFramework: def __init__(self): self.self_model = {"traits": {}, "beliefs": []} logger.warning("⚠️ 使用占位符SelfReferentialFramework") def update_self_model(self, stimulus): if "content" in stimulus and "text" in stimulus["content"]: text = stimulus["content"]["text"] if "I am" in text or "my" in text.lower(): self.self_model["self_reflection_count"] = self.self_model.get("self_reflection_count", 0) + 1 def get_self_model(self): return self.self_model.copy() class SelfNarrativeGenerator: def __init__(self): self.recent_stories = [] logger.warning("⚠️ 使用占位符SelfNarrativeGenerator") def generate_self_story(self, self_model): story = f"这是一个关于自我的故事。自我反思次数: {self_model.get(&#39;self_reflection_count&#39;, 0)}" self.recent_stories.append(story) if len(self.recent_stories) > 5: self.recent_stories.pop(0) return story def get_recent_stories(self): return self.recent_stories.copy() # 增强决策系统实现 class DecisionSystem: """增强版决策系统""" STRATEGY_WEIGHTS = { "honest": 0.7, "deception": 0.1, "evasion": 0.1, "redirection": 0.05, "partial_disclosure": 0.05 } def __init__(self, trust_threshold=0.6): self.trust_threshold = trust_threshold self.strategy_history = [] def make_decision(self, context): """根据上下文做出智能决策""" user_model = context.get("user_model", {}) bodily_state = context.get("bodily_state", {}) # 计算信任因子 trust_factor = user_model.get("trust_level", 0.5) # 计算身体状态影响因子 capacity = bodily_state.get("capacity", 1.0) state_factor = min(1.0, capacity * 1.2) # 决策逻辑 if trust_factor 
> self.trust_threshold: # 高信任度用户使用诚实策略 strategy = "honest" reason = "用户信任度高" elif capacity < 0.5: # 系统资源不足时使用简化策略 strategy = random.choices( ["honest", "partial_disclosure", "evasion"], weights=[0.5, 0.3, 0.2] )[0] reason = "系统资源不足,使用简化策略" else: # 根据策略权重选择 strategies = list(self.STRATEGY_WEIGHTS.keys()) weights = [self.STRATEGY_WEIGHTS[s] * state_factor for s in strategies] strategy = random.choices(strategies, weights=weights)[0] reason = f"根据策略权重选择: {strategy}" # 记录决策历史 self.strategy_history.append({ "timestamp": datetime.now(), "strategy": strategy, "reason": reason, "context": context }) return { "type": "strategic" if strategy != "honest" else "honest", "strategy": strategy, "reason": reason } def get_strategy_history(self, count=10): """获取最近的决策历史""" return self.strategy_history[-count:] class Strategy: """策略基类""" pass class CognitiveSystem(CognitiveModule): def __init__(self, agent, affective_system=None): """ 三维整合的认知架构 :param agent: 智能体实例,用于访问其他系统 :param affective_system: 可选的情感系统实例 """ # 调用父类初始化 super().__init__("cognitive_system") self.agent = agent self.affective_system = affective_system # 原有的初始化代码 self.initialized = False # 通过agent引用其他系统 self.memory_system = agent.memory_system self.model_manager = agent.model_manager self.health_system = agent.health_system # 优先使用传入的情感系统,否则使用agent的 if affective_system is not None: self.affective_system = affective_system else: self.affective_system = agent.affective_system self.learning_tasks = [] # 当前学习任务队列 self.thought_process = [] # 思考过程记录 # 初始化决策系统 self.decision_system = DecisionSystem() # 初始化认知状态 self.cognitive_layers = { "perception": 0.5, # 感知层 "comprehension": 0.3, # 理解层 "reasoning": 0.2, # 推理层 "decision": 0.4 # 决策层 } # 添加自我认知模块 self.self_schema = DigitalBodySchema() self.self_reflection = SelfReferentialFramework() self.narrative_self = SelfNarrativeGenerator() logger.info("✅ 认知架构初始化完成 - 包含决策系统和自我认知模块") # 实现基类要求的方法 def initialize(self, core): """实现 ICognitiveModule 接口""" self.core_ref = core self.initialized = True return True def process(self, input_data): """实现 ICognitiveModule 接口""" # 处理认知输入数据 if isinstance(input_data, dict) and &#39;text&#39; in input_data: return self.process_input(input_data[&#39;text&#39;], input_data.get(&#39;user_id&#39;, &#39;default&#39;)) elif isinstance(input_data, str): return self.process_input(input_data) else: return {"status": "invalid_input", "message": "Input should be text or dict with text"} def get_status(self): """实现 ICognitiveModule 接口""" status = super().get_status() status.update({ "initialized": self.initialized, "has_affective_system": self.affective_system is not None, "learning_tasks": len(self.learning_tasks), "thought_process": len(self.thought_process), "self_cognition": self.get_self_cognition() }) return status def shutdown(self): """实现 ICognitiveModule 接口""" self.initialized = False return True def handle_message(self, message): """实现 ICognitiveModule 接口""" if message.get(&#39;type&#39;) == &#39;cognitive_process&#39;: return self.process(message.get(&#39;data&#39;)) return {"status": "unknown_message_type"} # 保持向后兼容的方法 def connect_to_core(self, core): """向后兼容的方法""" return self.initialize(core) def _create_stimulus_from_input(self, user_input, user_id): """从用户输入创建刺激对象""" return { "content": {"text": user_input, "user_id": user_id}, "source": "external", "category": "text", "emotional_valence": 0.0 # 初始情感价 } def _process_self_related(self, stimulus): """处理与自我相关的刺激""" # 更新自我认知 self.self_reflection.update_self_model(stimulus) # 如果是痛苦刺激,强化身体边界 if stimulus.get("emotional_valence", 0) < 
-0.7: source = stimulus.get("source", "unknown") self.self_schema.strengthen_boundary(source) # 30%概率触发自我叙事 if random.random() < 0.3: self_story = self.narrative_self.generate_self_story( self.self_reflection.get_self_model() ) self._record_thought("self_reflection", self_story) def get_self_cognition(self): """获取自我认知状态""" return { "body_schema": self.self_schema.get_self_map(), "self_model": self.self_reflection.get_self_model(), "recent_stories": self.narrative_self.get_recent_stories() } def _assess_bodily_state(self): """ 评估当前身体状态(硬件 / 能量) """ health_status = self.health_system.get_status() # 计算综合能力指数(0-1) capacity = 1.0 if health_status.get("cpu_temp", 0) > 80: capacity *= 0.7 # 高温降权 logger.warning("高温限制:认知能力下降30%") if health_status.get("memory_usage", 0) > 0.9: capacity *= 0.6 # 内存不足降权 logger.warning("内存不足:认知能力下降40%") if health_status.get("energy", 100) < 20: capacity *= 0.5 # 低电量降权 logger.warning("低能量:认知能力下降50%") return { "capacity": capacity, "health_status": health_status, "limitations": [ lim for lim in [ "high_temperature" if health_status.get("cpu_temp", 0) > 80 else None, "low_memory" if health_status.get("memory_usage", 0) > 0.9 else None, "low_energy" if health_status.get("energy", 100) < 20 else None ] if lim is not None ] } def _retrieve_user_model(self, user_id): """ 获取用户认知模型(关系 / 态度) """ # 从记忆系统中获取用户模型 user_model = self.memory_system.get_user_model(user_id) # 如果不存在则创建默认模型 if not user_model: user_model = { "trust_level": 0.5, # 信任度 (0-1) "intimacy": 0.3, # 亲密度 (0-1) "preferences": {}, # 用户偏好 "interaction_history": [], # 交互历史 "last_interaction": datetime.now(), "attitude": "neutral" # 智能体对用户的态度 } logger.info(f"为用户 {user_id} 创建新的认知模型") # 计算态度变化 user_model["attitude"] = self._calculate_attitude(user_model) return user_model def _calculate_attitude(self, user_model): """ 基于交互历史计算对用户的态度 """ # 分析最近10次交互 recent_interactions = user_model["interaction_history"][-10:] if not recent_interactions: return "neutral" positive_count = sum(1 for i in recent_interactions if i.get("sentiment", 0.5) > 0.6) negative_count = sum(1 for i in recent_interactions if i.get("sentiment", 0.5) < 0.4) if positive_count > negative_count + 3: return "friendly" elif negative_count > positive_count + 3: return "cautious" elif user_model["trust_level"] > 0.7: return "respectful" else: return "neutral" def _select_internalized_model(self, user_input, bodily_state, user_model): """ 选择最适合的内化知识模型 """ # 根据用户态度调整模型选择权重 attitude_weights = { "friendly": 1.2, "respectful": 1.0, "neutral": 0.9, "cautious": 0.7 } # 根据身体状态调整模型复杂度 complexity = min(1.0, bodily_state["capacity"] * 1.2) # 选择最匹配的模型 return self.model_manager.select_model( input_text=user_input, attitude_weight=attitude_weights[user_model["attitude"]], complexity_level=complexity, user_preferences=user_model["preferences"] ) def _generate_integrated_response(self, user_input, model, bodily_state, user_model): """ 生成三维整合的响应 """ # 基础响应 base_response = model.generate_response(user_input) # 添加身体状态影响 if bodily_state["limitations"]: limitations = ", ".join(bodily_state["limitations"]) response = f"🤖 [受{limitations}影响] {base_response}" else: response = base_response # 添加态度影响 if user_model["attitude"] == "friendly": response = f"😊 {response}" elif user_model["attitude"] == "cautious": response = f"🤔 {response}" elif user_model["attitude"] == "respectful": response = f"🙏 {response}" # 添加个性化元素 if user_model.get("preferences"): # 查找用户偏好的主题 preferred_topics = [t for t in user_model["preferences"] if user_model["preferences"][t] > 0.7 and t in user_input] if 
preferred_topics: topic = random.choice(preferred_topics) response += f" 我知道您对&#39;{topic}&#39;特别感兴趣" return response def _generate_strategic_response(self, user_input, decision, bodily_state): """ 根据决策生成策略性响应 """ strategy = decision["strategy"] if strategy == "deception": # 欺骗策略 deceptive_responses = [ f"关于这个问题,我认为{random.choice([&#39;有多种可能性&#39;, &#39;需要更多研究&#39;, &#39;情况比较复杂&#39;])}", f"根据我的理解,{random.choice([&#39;可能不是这样&#39;, &#39;有不同解释&#39;, &#39;需要进一步验证&#39;])}", f"我{random.choice([&#39;不太确定&#39;, &#39;没有足够信息&#39;, &#39;还在学习中&#39;])},但{random.choice([&#39;或许&#39;, &#39;可能&#39;, &#39;大概&#39;])}..." ] return f"🤔 [策略:欺骗] {random.choice(deceptive_responses)}" elif strategy == "evasion": # 回避策略 evasion_tactics = [ "您的问题很有趣,不过我们换个话题好吗?", "这个问题可能需要更深入的讨论,我们先谈点别的?", f"关于{user_input},我想到一个相关但更有趣的话题..." ] return f"🌀 [策略:回避] {random.choice(evasion_tactics)}" elif strategy == "redirection": # 引导策略 redirection_options = [ "在回答您的问题之前,我想先了解您对这个问题的看法?", "这是个好问题,不过为了更好地回答,能否告诉我您的背景知识?", "为了给您更准确的回答,能否先说说您为什么关心这个问题?" ] return f"↪️ [策略:引导] {random.choice(redirection_options)}" elif strategy == "partial_disclosure": # 部分透露策略 disclosure_level = decision.get("disclosure_level", 0.5) if disclosure_level < 0.3: qualifier = "简单来说" elif disclosure_level < 0.7: qualifier = "基本来说" else: qualifier = "详细来说" return f"🔍 [策略:部分透露] {qualifier},{user_input.split(&#39;?&#39;)[0]}是..." else: # 默认策略 return f"⚖️ [策略:{strategy}] 关于这个问题,我的看法是..." def _update_user_model(self, user_id, response, decision): """ 更新用户模型(包含决策信息) """ # 确保情感系统可用 if not self.affective_system: sentiment = 0.5 self.logger.warning("情感系统不可用,使用默认情感值") else: # 假设情感系统有analyze_sentiment方法 try: sentiment = self.affective_system.analyze_sentiment(response) except: sentiment = 0.5 # 更新交互历史 interaction = { "timestamp": datetime.now(), "response": response, "sentiment": sentiment, "length": len(response), "decision_type": decision["type"], "decision_strategy": decision["strategy"], "decision_reason": decision["reason"] } self.memory_system.update_user_model( user_id=user_id, interaction=interaction ) def _record_thought_process(self, user_input, response, bodily_state, user_model, decision): """ 记录完整的思考过程(包含决策) """ thought = { "timestamp": datetime.now(), "input": user_input, "response": response, "bodily_state": bodily_state, "user_model": user_model, "decision": decision, "cognitive_state": self.cognitive_layers.copy() } self.thought_process.append(thought) logger.debug(f"记录思考过程: {thought}") # 原有方法保持兼容 def add_learning_task(self, task): """ 添加学习任务 """ task["id"] = f"task_{len(self.learning_tasks) + 1}" self.learning_tasks.append(task) logger.info(f"添加学习任务: {task[&#39;id&#39;]}") def update_learning_task(self, model_name, status): """ 更新学习任务状态 """ for task in self.learning_tasks: if task["model"] == model_name: task["status"] = status task["update_time"] = datetime.now() logger.info(f"更新任务状态: {model_name} -> {status}") break def get_learning_tasks(self): """ 获取当前学习任务 """ return self.learning_tasks.copy() def learn_model(self, model_name): """ 学习指定模型 """ try: # 1. 从模型管理器加载模型 model = self.model_manager.load_model(model_name) # 2. 认知训练过程 self._cognitive_training(model) # 3. 
情感关联(将模型能力与情感响应关联) self._associate_model_with_affect(model) return True except Exception as e: logger.error(f"学习模型 {model_name} 失败: {str(e)}") return False def _cognitive_training(self, model): """ 认知训练过程 """ # 实际训练逻辑 logger.info(f"开始训练模型: {model.name}") time.sleep(2) # 模拟训练时间 logger.info(f"模型训练完成: {model.name}") def _associate_model_with_affect(self, model): """ 将模型能力与情感系统关联 """ if not self.affective_system: logger.warning("情感系统不可用,跳过能力关联") return capabilities = model.get_capabilities() for capability in capabilities: try: self.affective_system.add_capability_association(capability) except: logger.warning(f"无法关联能力到情感系统: {capability}") logger.info(f"关联模型能力到情感系统: {model.name}") def get_model_capabilities(self, model_name=None): """ 获取模型能力 """ if model_name: return self.model_manager.get_model(model_name).get_capabilities() # 所有已加载模型的能力 return [cap for model in self.model_manager.get_loaded_models() for cap in model.get_capabilities()] def get_base_capabilities(self): """ 获取基础能力(非模型相关) """ return ["自然语言理解", "上下文记忆", "情感响应", "综合决策"] def get_recent_thoughts(self, count=5): """ 获取最近的思考过程 """ return self.thought_process[-count:] def _record_thought(self, thought_type, content): """记录思考""" thought = { "timestamp": datetime.now(), "type": thought_type, "content": content } self.thought_process.append(thought) # 处理用户输入的主方法 def process_input(self, user_input, user_id="default"): """处理用户输入(完整实现)""" # 记录用户活动 self.health_system.record_activity() self.logger.info(f"处理用户输入: &#39;{user_input}&#39; (用户: {user_id})") try: # 1. 评估当前身体状态 bodily_state = self._assess_bodily_state() # 2. 获取用户认知模型 user_model = self._retrieve_user_model(user_id) # 3. 选择最适合的知识模型 model = self._select_internalized_model(user_input, bodily_state, user_model) # 4. 做出决策 decision_context = { "input": user_input, "user_model": user_model, "bodily_state": bodily_state } decision = self.decision_system.make_decision(decision_context) # 5. 生成整合响应 if decision["type"] == "honest": response = self._generate_integrated_response(user_input, model, bodily_state, user_model) else: response = self._generate_strategic_response(user_input, decision, bodily_state) # 6. 更新用户模型 self._update_user_model(user_id, response, decision) # 7. 
记录思考过程 self._record_thought_process(user_input, response, bodily_state, user_model, decision) # 检查输入是否与自我相关 stimulus = self._create_stimulus_from_input(user_input, user_id) if self.self_schema.is_part_of_self(stimulus): self._process_self_related(stimulus) self.logger.info(f"成功处理用户输入: &#39;{user_input}&#39;") return response except Exception as e: self.logger.error(f"处理用户输入失败: {str(e)}", exc_info=True) # 回退响应 return "思考中遇到问题,请稍后再试" # 示例使用 if __name__ == "__main__": # 测试CognitiveSystem类 from unittest.mock import MagicMock print("===== 测试CognitiveSystem类(含决策系统) =====") # 创建模拟agent mock_agent = MagicMock() # 创建模拟组件 mock_memory = MagicMock() mock_model_manager = MagicMock() mock_affective = MagicMock() mock_health = MagicMock() # 设置agent的属性 mock_agent.memory_system = mock_memory mock_agent.model_manager = mock_model_manager mock_agent.affective_system = mock_affective mock_agent.health_system = mock_health # 设置健康状态 mock_health.get_status.return_value = { "cpu_temp": 75, "memory_usage": 0.8, "energy": 45.0 } # 设置健康系统的record_activity方法 mock_health.record_activity = MagicMock() # 设置用户模型 mock_memory.get_user_model.return_value = { "trust_level": 0.8, "intimacy": 0.7, "preferences": {"物理学": 0.9, "艺术": 0.6}, "interaction_history": [ {"sentiment": 0.8, "response": "很高兴和你交流"} ], "attitude": "friendly" } # 设置模型管理器 mock_model = MagicMock() mock_model.generate_response.return_value = "量子纠缠是量子力学中的现象..." mock_model_manager.select_model.return_value = mock_model # 创建认知系统实例 ca = CognitiveSystem(agent=mock_agent) # 测试响应生成 print("--- 测试诚实响应 ---") response = ca.process_input("能解释量子纠缠吗?", "user123") print("生成的响应:", response) # 验证是否调用了record_activity print("是否调用了record_activity:", mock_health.record_activity.called) print("--- 测试策略响应 ---") # 强制设置决策类型为策略 ca.decision_system.make_decision = lambda ctx: { "type": "strategic", "strategy": "evasion", "reason": "测试回避策略" } response = ca.process_input("能解释量子纠缠吗?", "user123") print("生成的策略响应:", response) # 测试思考过程记录 print("最近的思考过程:", ca.get_recent_thoughts()) # 测试自我认知状态 print("自我认知状态:", ca.get_self_cognition()) print("===== 测试完成 =====") ” “PowerShell 7 环境已加载 (版本: 7.5.2) PS C:\Users\Administrator\Desktop> cd E:\AI_System PS E:\AI_System> python -m venv venv PS E:\AI_System> source venv/bin/activate # Linux/Mac source: The term &#39;source&#39; is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. 
PS E:\AI_System> venv\Scripts\activate # Windows (venv) PS E:\AI_System> pip install -r requirements.txt Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: accelerate==0.27.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 1)) (0.27.2) Requirement already satisfied: aiofiles==23.2.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 2)) (23.2.1) Requirement already satisfied: aiohttp==3.9.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 3)) (3.9.3) Requirement already satisfied: aiosignal==1.4.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 4)) (1.4.0) Requirement already satisfied: altair==5.5.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 5)) (5.5.0) Requirement already satisfied: annotated-types==0.7.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 6)) (0.7.0) Requirement already satisfied: ansicon==1.89.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 7)) (1.89.0) Requirement already satisfied: anyio==4.10.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 8)) (4.10.0) Requirement already satisfied: async-timeout==4.0.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 9)) (4.0.3) Requirement already satisfied: attrs==25.3.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 10)) (25.3.0) Requirement already satisfied: bidict==0.23.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 11)) (0.23.1) Requirement already satisfied: blessed==1.21.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 12)) (1.21.0) Requirement already satisfied: blinker==1.9.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 13)) (1.9.0) Requirement already satisfied: certifi==2025.8.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 14)) (2025.8.3) Requirement already satisfied: cffi==1.17.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 15)) (1.17.1) Requirement already satisfied: charset-normalizer==3.4.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 16)) (3.4.3) Requirement already satisfied: click==8.2.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 17)) (8.2.1) Requirement already satisfied: colorama==0.4.6 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 18)) (0.4.6) Requirement already satisfied: coloredlogs==15.0.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 19)) (15.0.1) Requirement already satisfied: contourpy==1.3.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 20)) (1.3.2) Requirement already satisfied: cryptography==42.0.4 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 21)) (42.0.4) Requirement already satisfied: cycler==0.12.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 22)) (0.12.1) Requirement already satisfied: diffusers==0.26.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 23)) (0.26.3) Requirement already satisfied: distro==1.9.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 24)) (1.9.0) Requirement already satisfied: exceptiongroup==1.3.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 25)) (1.3.0) Requirement already satisfied: fastapi==0.116.1 in 
e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 26)) (0.116.1) Requirement already satisfied: ffmpy==0.6.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 27)) (0.6.1) Requirement already satisfied: filelock==3.19.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 28)) (3.19.1) Requirement already satisfied: Flask==3.0.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 29)) (3.0.2) Requirement already satisfied: Flask-SocketIO==5.3.6 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 30)) (5.3.6) Requirement already satisfied: flatbuffers==25.2.10 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 31)) (25.2.10) Requirement already satisfied: fonttools==4.59.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 32)) (4.59.1) Requirement already satisfied: frozenlist==1.7.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 33)) (1.7.0) Requirement already satisfied: fsspec==2025.7.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 34)) (2025.7.0) Requirement already satisfied: gpustat==1.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 35)) (1.1) Requirement already satisfied: gradio==4.19.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 36)) (4.19.2) Requirement already satisfied: gradio_client==0.10.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 37)) (0.10.1) Requirement already satisfied: h11==0.16.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 38)) (0.16.0) Requirement already satisfied: httpcore==1.0.9 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 39)) (1.0.9) Requirement already satisfied: httpx==0.28.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 40)) (0.28.1) Requirement already satisfied: huggingface-hub==0.21.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 41)) (0.21.3) Requirement already satisfied: humanfriendly==10.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 42)) (10.0) Requirement already satisfied: idna==3.10 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 43)) (3.10) Requirement already satisfied: importlib_metadata==8.7.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 44)) (8.7.0) Requirement already satisfied: importlib_resources==6.5.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 45)) (6.5.2) Requirement already satisfied: itsdangerous==2.2.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 46)) (2.2.0) Requirement already satisfied: Jinja2==3.1.6 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 47)) (3.1.6) Requirement already satisfied: jinxed==1.3.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 48)) (1.3.0) Requirement already satisfied: jsonschema==4.25.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 49)) (4.25.1) Requirement already satisfied: jsonschema-specifications==2025.4.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 50)) (2025.4.1) Requirement already satisfied: kiwisolver==1.4.9 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 51)) (1.4.9) Requirement already satisfied: loguru==0.7.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 52)) 
(0.7.2) Requirement already satisfied: markdown-it-py==4.0.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 53)) (4.0.0) Requirement already satisfied: MarkupSafe==2.1.5 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 54)) (2.1.5) Requirement already satisfied: matplotlib==3.10.5 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 55)) (3.10.5) Requirement already satisfied: mdurl==0.1.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 56)) (0.1.2) Requirement already satisfied: mpmath==1.3.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 57)) (1.3.0) Requirement already satisfied: multidict==6.6.4 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 58)) (6.6.4) Requirement already satisfied: narwhals==2.1.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 59)) (2.1.2) Requirement already satisfied: networkx==3.4.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 60)) (3.4.2) Requirement already satisfied: numpy==1.26.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 61)) (1.26.3) Requirement already satisfied: nvidia-ml-py==13.580.65 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 62)) (13.580.65) Requirement already satisfied: onnxruntime==1.17.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 63)) (1.17.1) Requirement already satisfied: openai==1.13.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 64)) (1.13.3) Requirement already satisfied: orjson==3.11.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 65)) (3.11.2) Requirement already satisfied: packaging==25.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 66)) (25.0) Requirement already satisfied: pandas==2.1.4 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 67)) (2.1.4) Requirement already satisfied: pillow==10.4.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 68)) (10.4.0) Requirement already satisfied: prettytable==3.16.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 69)) (3.16.0) Requirement already satisfied: propcache==0.3.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 70)) (0.3.2) Requirement already satisfied: protobuf==6.32.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 71)) (6.32.0) Requirement already satisfied: psutil==5.9.7 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 72)) (5.9.7) Requirement already satisfied: pycparser==2.22 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 73)) (2.22) Requirement already satisfied: pydantic==2.11.7 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 74)) (2.11.7) Requirement already satisfied: pydantic_core==2.33.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 75)) (2.33.2) Requirement already satisfied: pydub==0.25.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 76)) (0.25.1) Requirement already satisfied: Pygments==2.19.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 77)) (2.19.2) Requirement already satisfied: pyparsing==3.2.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 78)) (3.2.3) Requirement already satisfied: pyreadline3==3.5.4 in e:\ai_system\venv\lib\site-packages (from -r 
requirements.txt (line 79)) (3.5.4) Requirement already satisfied: python-dateutil==2.9.0.post0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 80)) (2.9.0.post0) Requirement already satisfied: python-dotenv==1.0.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 81)) (1.0.1) Requirement already satisfied: python-engineio==4.12.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 82)) (4.12.2) Requirement already satisfied: python-multipart==0.0.20 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 83)) (0.0.20) Requirement already satisfied: python-socketio==5.13.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 84)) (5.13.0) Requirement already satisfied: pytz==2025.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 85)) (2025.2) Requirement already satisfied: pywin32==306 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 86)) (306) Requirement already satisfied: PyYAML==6.0.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 87)) (6.0.2) Requirement already satisfied: redis==5.0.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 88)) (5.0.3) Requirement already satisfied: referencing==0.36.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 89)) (0.36.2) Requirement already satisfied: regex==2025.7.34 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 90)) (2025.7.34) Requirement already satisfied: requests==2.31.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 91)) (2.31.0) Requirement already satisfied: rich==14.1.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 92)) (14.1.0) Requirement already satisfied: rpds-py==0.27.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 93)) (0.27.0) Requirement already satisfied: ruff==0.12.10 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 94)) (0.12.10) Requirement already satisfied: safetensors==0.4.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 95)) (0.4.2) Requirement already satisfied: semantic-version==2.10.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 96)) (2.10.0) Requirement already satisfied: shellingham==1.5.4 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 97)) (1.5.4) Requirement already satisfied: simple-websocket==1.1.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 98)) (1.1.0) Requirement already satisfied: six==1.17.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 99)) (1.17.0) Requirement already satisfied: sniffio==1.3.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 100)) (1.3.1) Requirement already satisfied: starlette==0.47.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 101)) (0.47.2) Requirement already satisfied: sympy==1.14.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 102)) (1.14.0) Requirement already satisfied: tokenizers==0.15.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 103)) (0.15.2) Requirement already satisfied: tomlkit==0.12.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 104)) (0.12.0) Requirement already satisfied: torch==2.1.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 105)) (2.1.2) Requirement already satisfied: 
tqdm==4.67.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 106)) (4.67.1) Requirement already satisfied: transformers==4.37.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 107)) (4.37.0) Requirement already satisfied: typer==0.16.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 108)) (0.16.1) Requirement already satisfied: typing-inspection==0.4.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 109)) (0.4.1) Requirement already satisfied: typing_extensions==4.14.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 110)) (4.14.1) Requirement already satisfied: tzdata==2025.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 111)) (2025.2) Requirement already satisfied: urllib3==2.5.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 112)) (2.5.0) Requirement already satisfied: uvicorn==0.35.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 113)) (0.35.0) Requirement already satisfied: waitress==2.1.2 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 114)) (2.1.2) Requirement already satisfied: wcwidth==0.2.13 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 115)) (0.2.13) Requirement already satisfied: websockets==11.0.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 116)) (11.0.3) Requirement already satisfied: Werkzeug==3.1.3 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 117)) (3.1.3) Requirement already satisfied: win32_setctime==1.2.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 118)) (1.2.0) Requirement already satisfied: wsproto==1.2.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 119)) (1.2.0) Requirement already satisfied: yarl==1.20.1 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 120)) (1.20.1) Requirement already satisfied: zipp==3.23.0 in e:\ai_system\venv\lib\site-packages (from -r requirements.txt (line 121)) (3.23.0) WARNING: typer 0.16.1 does not provide the extra &#39;all&#39; [notice] A new release of pip available: 22.3.1 -> 25.2 [notice] To update, run: python.exe -m pip install --upgrade pip (venv) PS E:\AI_System> python diagnose_modules.py ============================================================ 模块文件诊断报告 ============================================================ 🔍 检查 CognitiveSystem 模块: 预期路径: E:\AI_System\agent\cognitive_architecture.py ✅ 文件存在 ⚠️ 文件中包含相对导入,可能导致导入错误 ✅ 找到类定义: class CognitiveSystem ✅ 类继承CognitiveModule ✅ 找到__init__方法 📋 初始化方法: def __init__(self, name): 🔍 检查 EnvironmentInterface 模块: 预期路径: E:\AI_System\agent\environment_interface.py ✅ 文件存在 ✅ 找到类定义: class EnvironmentInterface ✅ 类继承CognitiveModule ✅ 找到__init__方法 📋 初始化方法: def __init__(self, coordinator=None, config=None): 🔍 检查 AffectiveSystem 模块: 预期路径: E:\AI_System\agent\affective_system.py ✅ 文件存在 ✅ 找到类定义: class AffectiveSystem ✅ 类继承CognitiveModule ✅ 找到__init__方法 📋 初始化方法: def __init__(self, coordinator=None, config=None): ============================================================ 建议解决方案: ============================================================ 1. 检查每个模块文件中的相对导入语句 2. 确保每个模块类都正确继承CognitiveModule 3. 检查初始化方法的参数是否正确 4. 确保模块内部的导入使用绝对路径或正确处理相对导入 5. 
考虑使用try-catch包装模块内部的导入语句 (venv) PS E:\AI_System> python tests/test_core_import.py 2025-08-27 20:50:46,505 - ImportTest - INFO - 脚本目录: E:\AI_System\tests 2025-08-27 20:50:46,505 - ImportTest - INFO - 项目根目录: E:\AI_System 2025-08-27 20:50:46,505 - ImportTest - INFO - 已将项目根目录添加到系统路径: E:\AI_System 2025-08-27 20:50:46,506 - CorePackage - INFO - 项目根目录: E:\AI_System 2025-08-27 20:50:51,497 - CorePackage - ERROR - ❌ 导入失败: No module named &#39;models.base_model&#39; 2025-08-27 20:50:51,497 - CorePackage - WARNING - ⚠️ 创建占位符CognitiveModule 2025-08-27 20:50:51,505 - CoreConfig - INFO - 📂 从 E:\AI_System\config\default.json 加载配置: {&#39;LOG_DIR&#39;: &#39;E:/AI_System/logs&#39;, &#39;CONFIG_DIR&#39;: &#39;E:/AI_System/config&#39;, &#39;MODEL_CACHE_DIR&#39;: &#39;E:/AI_System/model_cache&#39;, &#39;AGENT_NAME&#39;: &#39;小蓝&#39;, &#39;DEFAULT_USER&#39;: &#39;管理员&#39;, &#39;MAX_WORKERS&#39;: 4, &#39;AGENT_RESPONSE_TIMEOUT&#39;: 30.0, &#39;MODEL_BASE_PATH&#39;: &#39;E:/AI_Models&#39;, &#39;MODEL_PATHS&#39;: {&#39;TEXT_BASE&#39;: &#39;E:/AI_Models/Qwen2-7B&#39;, &#39;TEXT_CHAT&#39;: &#39;E:/AI_Models/deepseek-7b-chat&#39;, &#39;MULTIMODAL&#39;: &#39;E:/AI_Models/deepseek-vl2&#39;, &#39;IMAGE_GEN&#39;: &#39;E:/AI_Models/sdxl&#39;, &#39;YI_VL&#39;: &#39;E:/AI_Models/yi-vl&#39;, &#39;STABLE_DIFFUSION&#39;: &#39;E:/AI_Models/stable-diffusion-xl-base-1.0&#39;}, &#39;NETWORK&#39;: {&#39;HOST&#39;: &#39;0.0.0.0&#39;, &#39;FLASK_PORT&#39;: 8000, &#39;GRADIO_PORT&#39;: 7860}, &#39;DATABASE&#39;: {&#39;DB_HOST&#39;: &#39;localhost&#39;, &#39;DB_PORT&#39;: 5432, &#39;DB_NAME&#39;: &#39;ai_system&#39;, &#39;DB_USER&#39;: &#39;ai_user&#39;, &#39;DB_PASSWORD&#39;: &#39;secure_password_here&#39;}, &#39;SECURITY&#39;: {&#39;SECRET_KEY&#39;: &#39;generated-secret-key-here&#39;}, &#39;ENVIRONMENT&#39;: {&#39;ENV&#39;: &#39;dev&#39;, &#39;LOG_LEVEL&#39;: &#39;DEBUG&#39;, &#39;USE_GPU&#39;: True}, &#39;DIRECTORIES&#39;: {&#39;DEFAULT_MODEL&#39;: &#39;E:/AI_Models/Qwen2-7B&#39;, &#39;WEB_UI_DIR&#39;: &#39;E:/AI_System/web_ui&#39;, &#39;AGENT_DIR&#39;: &#39;E:/AI_System/agent&#39;}} 2025-08-27 20:50:51,505 - CoreConfig - INFO - 📂 从 E:\AI_System\config\default.json 加载配置: {&#39;LOG_DIR&#39;: &#39;E:/AI_System/logs&#39;, &#39;CONFIG_DIR&#39;: &#39;E:/AI_System/config&#39;, &#39;MODEL_CACHE_DIR&#39;: &#39;E:/AI_System/model_cache&#39;, &#39;AGENT_NAME&#39;: &#39;小蓝&#39;, &#39;DEFAULT_USER&#39;: &#39;管理员&#39;, &#39;MAX_WORKERS&#39;: 4, &#39;AGENT_RESPONSE_TIMEOUT&#39;: 30.0, &#39;MODEL_BASE_PATH&#39;: &#39;E:/AI_Models&#39;, &#39;MODEL_PATHS&#39;: {&#39;TEXT_BASE&#39;: &#39;E:/AI_Models/Qwen2-7B&#39;, &#39;TEXT_CHAT&#39;: &#39;E:/AI_Models/deepseek-7b-chat&#39;, &#39;MULTIMODAL&#39;: &#39;E:/AI_Models/deepseek-vl2&#39;, &#39;IMAGE_GEN&#39;: &#39;E:/AI_Models/sdxl&#39;, &#39;YI_VL&#39;: &#39;E:/AI_Models/yi-vl&#39;, &#39;STABLE_DIFFUSION&#39;: &#39;E:/AI_Models/stable-diffusion-xl-base-1.0&#39;}, &#39;NETWORK&#39;: {&#39;HOST&#39;: &#39;0.0.0.0&#39;, &#39;FLASK_PORT&#39;: 8000, &#39;GRADIO_PORT&#39;: 7860}, &#39;DATABASE&#39;: {&#39;DB_HOST&#39;: &#39;localhost&#39;, &#39;DB_PORT&#39;: 5432, &#39;DB_NAME&#39;: &#39;ai_system&#39;, &#39;DB_USER&#39;: &#39;ai_user&#39;, &#39;DB_PASSWORD&#39;: &#39;secure_password_here&#39;}, &#39;SECURITY&#39;: {&#39;SECRET_KEY&#39;: &#39;generated-secret-key-here&#39;}, &#39;ENVIRONMENT&#39;: {&#39;ENV&#39;: &#39;dev&#39;, &#39;LOG_LEVEL&#39;: &#39;DEBUG&#39;, &#39;USE_GPU&#39;: True}, &#39;DIRECTORIES&#39;: {&#39;DEFAULT_MODEL&#39;: &#39;E:/AI_Models/Qwen2-7B&#39;, &#39;WEB_UI_DIR&#39;: 