
AI记忆体 (AIMemory)

AIMemory

Bases: BaseMemory

AI Memory (人工智能内存) 是机器人的主内存模块,负责存储和管理所有与知识图谱、文档存储和搜索相关的信息。

AI Memory is the main memory module of the robot, responsible for storing and managing all information related to the knowledge graph, document storage, and search operations.

Attributes:

Name Type Description
docs Optional[BaseDocStore]

文档存储模块,用于存储和检索文档。 | The document store module for storing and retrieving documents.

top_k int

在向量检索时返回的最高k个结果的数量。The number of top results to return in vector searches.

recall_strict bool

指定在回忆(检索)内容时是否采用严格模式。Specifies whether to use strict mode when recalling (retrieving) content. 在严格模式下,将严格遵守Token限制。| In strict mode, token limitations are strictly adhered to.

kg_format Literal[...]

定义知识图谱节点输出的格式。 | Defines the format for the output of knowledge graph nodes.

enable_kg_query_expansion bool

指定是否在进行知识图谱检索时使用当前会话中的历史记录来扩展检索。 | Specifies whether to use the history of the current session to expand searches when querying the knowledge graph. 这可以提高检索的相关性,但可能增加执行时间。| This can improve the relevance of the retrieval but may increase execution time.

使用此类可以有效地管理和查询与机器人操作相关的各种信息资源,包括但不限于知识图谱数据和文档。| Using this class, various information resources related to robot operations, including but not limited to knowledge graph data and documents, can be managed and queried effectively.

Notes

When using this module, the different data sources must be managed manually. For example, you must ensure that the DocStore and the Memos share the same underlying storage, because the EleIds recalled through Memos are looked up in the DocStore; if this consistency is not guaranteed, recall may fail. AIMemory deliberately uses a decoupled storage design, mainly to ease debugging and to make it easy to detach and recombine different stores in order to observe their effect. In practice, however, you need a thorough understanding of what each module does and where its boundaries lie.
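A minimal wiring sketch of the consistency requirement above. The store factories are assumptions (placeholders, not part of this API); only the AIMemory fields shown here (docs, memos, knowledge, conversation_manager, top_k, recall_strict) come from this reference.

# Hypothetical wiring sketch: make_doc_store / make_memo_store are assumed factories.
from tfrobot.brain.memory.ai_memory import AIMemory  # module path as listed below

doc_store = make_doc_store()    # assumed: returns a concrete BaseDocStore
memo_store = make_memo_store()  # assumed: a memo store whose EleIds exist in doc_store

memory = AIMemory(
    docs=doc_store,             # EleIds recalled via memos are looked up here
    memos=[memo_store],
    knowledge=[],               # knowledge-graph stores are optional
    conversation_manager=None,  # or a DictBase-/PGBase-ConversationManager
    top_k=20,
    recall_strict=True,
)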

validate_docs

validate_docs() -> Self

Validate the docs field.

验证 docs 字段。

Source code in tfrobot/brain/memory/ai_memory.py
@model_validator(mode="after")
def validate_docs(self) -> Self:
    """
    Validate the docs field.

    验证 docs 字段。
    """
    if self.memos:
        if not self.docs:
            raise ValueError("Docs is required when memos is not empty.")
    return self

recall

recall(current_input: UserAndAssMsg, chunk_size: int | Annotated[list[int], Len(4, 4)], length_function: Callable[[str], int], exclude_str: Optional[str] = None) -> Tuple[Optional[list[UserAndAssMsg]], Optional[list[DocElement]], Optional[str]]

Recalls the content from memory based on the given natural language input.

根据给定的自然语言输入从内存中检索内容。

Parameters:

Name Type Description Default
current_input UserAndAssMsg

The message the user input. It is used to recall content from memory and is usually formatted from the chat input history.

用户输入的消息。用这个来从内存中回忆内容。通常由聊天输入历史记录格式化。

required
chunk_size Union[int, List[int]]

The size of the content to recall. It can be a single number or a list of numbers. If it is a list, the first number is the size of the conversation content to recall, the second number is the size of the memo content to recall, the third number is the size of the knowledge content to recall, and the fourth number is the size of the keyword content to recall. If it is a single number, it will be converted into four equal numbers.

要回忆的内容的大小。可以是单个数字或数字列表。如果是列表,第一个数字是要回忆的对话内容的大小,第二个数字是要回忆的备忘内容的大小, 第三个数字是要回忆的知识内容的大小,第四个数字是要回忆的关键字内容的大小。如果是单个数字,将被转换为四个相等的数字。

required
length_function Callable

A function to calculate the length of the content to recall.

用于计算所回忆内容长度的函数。

required
exclude_str Optional[str]

The string to exclude from the recall. Defaults to None.

要从回忆中排除的字符串。默认为 None。

None

Returns:

Type Description
Tuple[Optional[list[UserAndAssMsg]], Optional[list[DocElement]], Optional[str]]

Tuple[Optional[list[UserAndAssMsg]], Optional[list[DocElement]], Optional[str]]: The content recalled from memory.

分别是代表对话、文档元素和知识的内容。| The three items represent the conversation messages, the document elements, and the knowledge text, respectively.
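A brief usage sketch for recall, assuming the AIMemory instance (memory) from the earlier sketch. How UserAndAssMsg is constructed is an assumption and the literal values are placeholders; the arguments follow the signature documented above.

# Sketch of recall() calls; UserAndAssMsg(content=...) is an assumed constructor.
msg = UserAndAssMsg(content="How do I calibrate the arm?")

# A single int budget is split evenly across the active stores
# (conversation / memos / knowledge).
conv, doc_eles, knowledge = memory.recall(
    current_input=msg,
    chunk_size=2000,
    length_function=len,   # character count; a token counter works as well
)

# A 4-item list gives per-store budgets: [conversation, memos, knowledge, keyword];
# this AIMemory only uses the first three.
conv, doc_eles, knowledge = memory.recall(
    current_input=msg,
    chunk_size=[1000, 2000, 1000, 0],
    length_function=len,
    exclude_str="CONFIDENTIAL",  # placeholder: drop hits containing this string
)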

Source code in tfrobot/brain/memory/ai_memory.py
@validate_call(validate_return=True)
def recall(
    self,
    current_input: UserAndAssMsg,
    chunk_size: int | Annotated[list[int], annotated_types.Len(4, 4)],  # type: ignore
    length_function: Callable[[str], int],
    exclude_str: Optional[str] = None,
) -> Tuple[Optional[list[UserAndAssMsg]], Optional[list[DocElement]], Optional[str]]:
    """
    Recalls the content from memory based on the given natural language input.

    根据给定的自然语言输入从内存中检索内容。

    Args:
        current_input (str): The message user input. This is used to recall the content from memory. It is usually

            用户输入的消息。用这个来从内存中回忆内容。通常由聊天输入历史记录格式化。


        chunk_size (Union[int, List[int]]): The size of the content to recall. It can be a single number or a list
            of numbers. If it is a list, the first number is the size of the conversation content to recall, the
            second number is the size of the memo content to recall, the third number is the size of the knowledge
            content to recall, and the fourth number is the size of the keyword content to recall. If it is a single
            number, it will be converted into four equal numbers.

            要回忆的内容的大小。可以是单个数字或数字列表。如果是列表,第一个数字是要回忆的对话内容的大小,第二个数字是要回忆的备忘内容的大小,
            第三个数字是要回忆的知识内容的大小,第四个数字是要回忆的关键字内容的大小。如果是单个数字,将被转换为四个相等的数字。

        length_function (Callable): A function to calculate the length of the content to recall.

            用于计算所回忆内容长度的函数。

        exclude_str (Optional[str]): The string to exclude from the recall. Defaults to None.

            要从回忆中排除的字符串。默认为 None。

    Returns:
        Tuple[Optional[list[BaseMessage]], Optional[list[DocElement]], Optional[str]]: The content recalled from
            memory.

            分别是代表对话、文档元素和知识的内容。
    """
    query = str(current_input.content)
    conversion_res: Optional[list[UserAndAssMsg]] = None
    memos_res: Optional[list[DocElement]] = None
    knowledge_res: Optional[str] = None
    conversation = (
        self.conversation_manager.get_conversation_by_msg(current_input) if self.conversation_manager else None
    )
    stores = [conversation, self.memos, self.knowledge]
    if isinstance(chunk_size, int):
        store_count = sum(1 for store in stores if store)
        chunk_size = [
            chunk_size // store_count if store else 0 for store in stores
        ]  # // 双斜线表示整除,其效果返回商的整数部分
    else:
        chunk_size = chunk_size[:3]  # AIMemory版本较旧,仅支持 会话/向量/图。不支持关键字模式。
    for mc_size, store in zip(chunk_size, ["conversations", "memos", "knowledge"]):
        if not mc_size:
            # 表示不需要从这个存储器中获取数据
            continue
        match store:
            case "conversations":
                if conversation and chunk_size[0]:
                    # 如果存在激活的会话,则从会话中获取数据,规则为从最后一条消息开始获取,直到获取到指定大小的数据
                    msg_source = (
                        reversed(conversation)
                        if current_input.msg_id is None
                        else conversation.get_msgs_backward(current_input)
                    )
                    msg_expander = MsgExpander[UserAndAssMsg](
                        chunk_size=mc_size,
                        length_function=length_function,
                        expand_source=(EChunk(ele=msg, direction="backward") for msg in msg_source),
                        strict=self.recall_strict,
                    )
                    try:
                        conversion_res = msg_expander.expand_eles()
                    except ValueError as e:  # pragma: no cover
                        warnings.warn("Failed to expand message: " + str(e))  # pragma: no cover
            case "memos":
                if not self.docs:
                    warnings.warn("Docs is required when memos is not empty.")
                    continue
                if self.memos and chunk_size[1]:
                    memos_res = []
                    ele_ids: list[int] = []
                    top_k = self.top_k // len(self.memos)  # 在所有的memo中,总计返回20条数据
                    for memo in self.memos:
                        ele_ids.extend(memo.query(query_texts=query, top_k=top_k, exclude_str=exclude_str))
                    # 统计有效ID数量,因为比如Faiss为了保证返回的数为top_k,会填充-1
                    ele_ids = list(filter(lambda x: x != -1, ele_ids))
                    if ele_ids:
                        expander_target_size = chunk_size[1] // len(
                            list(set(ele_ids))
                        )  # 每个ele需要扩展至的目标值大小
                        ele_has_expanded: list[int] = []  # 已经扩展的元素
                        # 准备进行元素扩展
                        # Step.1 找到元素所处的文档
                        for ele_id in ele_ids:
                            if (
                                ele_id in ele_has_expanded or ele_id == -1
                            ):  # 对于Faiss,为了保持返回ID数量填充Numpy数组,会以-1占位
                                continue
                            ele = self.docs.select_element(ele_id)
                            if ele and ele.page_id and (page := self.docs.select_page(ele.page_id)):
                                if page and page.doc_id and (doc := self.docs.select_doc(page.doc_id)):
                                    if doc:
                                        # Step.2 构建元素迭代器
                                        ele_source = _doc_element_iterator_constructor(
                                            doc, ele_id, ele_has_expanded
                                        )
                                        ele_expander = DocEleExpander(
                                            chunk_size=expander_target_size,
                                            length_function=length_function,
                                            expand_source=ele_source,
                                            strict=self.recall_strict,
                                            ignore_list=ele_has_expanded,
                                        )
                                        # Step.3 扩展元素
                                        try:
                                            memos_res.extend(ele_expander.expand_eles(ele))
                                        except ValueError as e:
                                            warnings.warn("Failed to expand element: " + str(e))  # pragma: no cover
            case "knowledge":
                if self.knowledge and chunk_size[2]:
                    knowledge_res = ""
                    unit_size = chunk_size[2] // len(self.knowledge)
                    for kg in self.knowledge:
                        additional_tags = None
                        if self.enable_kg_query_expansion and hasattr(kg, "pos_tagger") and conversion_res:
                            conversion_texts: list[str] = [
                                c.content for c in conversion_res if isinstance(c.content, str)
                            ]
                            additional_tags = kg.pos_tagger.tag("\n".join(conversion_texts))
                        unit_res = kg.query(
                            query, self.kg_format, additional_tags=additional_tags, exclude_str=exclude_str
                        )
                        if unit_size and unit_res:
                            splitter = RecursiveCharacterTextSplitter(
                                separators=["\n\n", "\n", "\t", " ", ""],
                                chunk_size=unit_size - 1,
                                chunk_overlap=0,
                                strict=self.recall_strict,
                                length_function=length_function,
                            )
                            unit_res = splitter.split_text(unit_res)[0]
                            knowledge_res += unit_res + "\n"

    return conversion_res, memos_res, knowledge_res

async_recall async

async_recall(current_input: UserAndAssMsg, chunk_size: int | Annotated[list[int], Len(4, 4)], length_function: Callable[[str], int], exclude_str: Optional[str] = None) -> Tuple[Optional[list[UserAndAssMsg]], Optional[list[DocElement]], Optional[str]]

Recall方法的异步(Async)版本 | The asynchronous version of the recall method.
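A minimal asynchronous sketch, assuming memory and msg have been set up as in the earlier examples. Per the listing below, the memo stores are queried through their async_query method; the rest mirrors recall.

import asyncio

async def main() -> None:
    conv, doc_eles, knowledge = await memory.async_recall(
        current_input=msg,
        chunk_size=2000,
        length_function=len,
    )
    print(knowledge)

asyncio.run(main())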

Source code in tfrobot/brain/memory/ai_memory.py
async def async_recall(
    self,
    current_input: UserAndAssMsg,
    chunk_size: int | Annotated[list[int], annotated_types.Len(4, 4)],  # type: ignore
    length_function: Callable[[str], int],
    exclude_str: Optional[str] = None,
) -> Tuple[Optional[list[UserAndAssMsg]], Optional[list[DocElement]], Optional[str]]:
    """Recall方法的Async异步版本"""
    query = str(current_input.content)
    conversion_res: Optional[list[UserAndAssMsg]] = None
    memos_res: Optional[list[DocElement]] = None
    knowledge_res: Optional[str] = None
    conversation = (
        self.conversation_manager.get_conversation_by_msg(current_input) if self.conversation_manager else None
    )
    stores = [conversation, self.memos, self.knowledge]
    if isinstance(chunk_size, int):
        store_count = sum(1 for store in stores if store)
        chunk_size = [
            chunk_size // store_count if store else 0 for store in stores
        ]  # // 双斜线表示整除,其效果返回商的整数部分
    else:
        chunk_size = chunk_size[:3]  # AIMemory版本较旧,仅支持 会话/向量/图。不支持关键字模式。
    for mc_size, store in zip(chunk_size, ["conversations", "memos", "knowledge"]):
        if not mc_size:
            # 表示不需要从这个存储器中获取数据
            continue
        match store:
            case "conversations":
                if conversation and chunk_size[0]:
                    # 如果存在激活的会话,则从会话中获取数据,规则为从最后一条消息开始获取,直到获取到指定大小的数据
                    msg_source = (
                        reversed(conversation)
                        if current_input.msg_id is None
                        else (conversation.get_msgs_backward(current_input))
                    )
                    msg_expander = MsgExpander[UserAndAssMsg](
                        chunk_size=mc_size,
                        length_function=length_function,
                        expand_source=(EChunk(ele=msg, direction="backward") for msg in msg_source),
                        strict=self.recall_strict,
                    )
                    try:
                        conversion_res = msg_expander.expand_eles()
                    except ValueError as e:  # pragma: no cover
                        warnings.warn("Failed to expand message: " + str(e))  # pragma: no cover
            case "memos":
                if not self.docs:
                    warnings.warn("Docs is required when memos is not empty.")
                    continue
                if self.memos and chunk_size[1]:
                    memos_res = []
                    ele_ids: list[int] = []
                    top_k = self.top_k // len(self.memos)  # 在所有的memo中,总计返回20条数据
                    for memo in self.memos:
                        ele_ids.extend(
                            await memo.async_query(query_texts=query, top_k=top_k, exclude_str=exclude_str)
                        )
                    # 统计有效ID数量,因为比如Faiss为了保证返回的数为top_k,会填充-1
                    ele_ids = list(filter(lambda x: x != -1, ele_ids))
                    if ele_ids:
                        expander_target_size = chunk_size[1] // len(
                            list(set(ele_ids))
                        )  # 每个ele需要扩展至的目标值大小
                        ele_has_expanded: list[int] = []  # 已经扩展的元素
                        # 准备进行元素扩展
                        # Step.1 找到元素所处的文档
                        for ele_id in ele_ids:
                            if (
                                ele_id in ele_has_expanded or ele_id == -1
                            ):  # 对于Faiss,为了保持返回ID数量填充Numpy数组,会以-1占位
                                continue
                            ele = self.docs.select_element(ele_id)
                            if ele and ele.page_id and (page := self.docs.select_page(ele.page_id)):
                                if page and page.doc_id and (doc := self.docs.select_doc(page.doc_id)):
                                    if doc:
                                        # Step.2 构建元素迭代器
                                        ele_source = _doc_element_iterator_constructor(
                                            doc, ele_id, ele_has_expanded
                                        )
                                        ele_expander = DocEleExpander(
                                            chunk_size=expander_target_size,
                                            length_function=length_function,
                                            expand_source=ele_source,
                                            strict=self.recall_strict,
                                            ignore_list=ele_has_expanded,
                                        )
                                        # Step.3 扩展元素
                                        try:
                                            memos_res.extend(ele_expander.expand_eles(ele))
                                        except ValueError as e:
                                            warnings.warn("Failed to expand element: " + str(e))  # pragma: no cover
            case "knowledge":
                if self.knowledge and chunk_size[2]:
                    knowledge_res = ""
                    unit_size = chunk_size[2] // len(self.knowledge)
                    for kg in self.knowledge:
                        additional_tags = None
                        if self.enable_kg_query_expansion and hasattr(kg, "pos_tagger") and conversion_res:
                            conversion_texts: list[str] = [
                                c.content for c in conversion_res if isinstance(c.content, str)
                            ]
                            additional_tags = kg.pos_tagger.tag("\n".join(conversion_texts))
                        unit_res = kg.query(
                            query, self.kg_format, additional_tags=additional_tags, exclude_str=exclude_str
                        )
                        if unit_size and unit_res:
                            splitter = RecursiveCharacterTextSplitter(
                                separators=["\n\n", "\n", "\t", " ", ""],
                                chunk_size=unit_size - 1,
                                chunk_overlap=0,
                                strict=self.recall_strict,
                                length_function=length_function,
                            )
                            unit_res = splitter.split_text(unit_res)[0]
                            knowledge_res += unit_res + "\n"

    return conversion_res, memos_res, knowledge_res

commit

commit(msg: UserAndAssMsg) -> None

Commits the chain result to memory.

将链结果提交到记忆中。

Parameters:

Name Type Description Default
msg UserAndAssMsg

The chain result to commit.

要提交的链结果。

required
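A short sketch of committing a reply back into the active conversation; the UserAndAssMsg constructor is an assumption.

# Write a produced reply back so that later recalls can see it.
reply = UserAndAssMsg(content="Calibration finished.")  # assumed constructor
memory.commit(reply)  # silently does nothing if no conversation matches the message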
Source code in tfrobot/brain/memory/ai_memory.py
def commit(self, msg: UserAndAssMsg) -> None:
    """
    Commits the chain result to memory.

    将链结果提交到记忆中。

    Args:
        msg (UserAndAssMsg): The chain result to commit.

            要提交的链结果。
    """
    conversation = self.conversation_manager.get_conversation_by_msg(msg) if self.conversation_manager else None
    if (
        conversation is not None
    ):  # 注意这里不能使用if conversation,因为conversation是一个BaseBufferStore对象,如果当前没有会话,其bool()值为False
        conversation.add_msg(msg)

acommit async

acommit(msg: UserAndAssMsg) -> None

Commits the chain result to memory asynchronously.

Parameters:

Name Type Description Default
msg UserAndAssMsg

The chain result to commit.

required
Source code in tfrobot/brain/memory/ai_memory.py
async def acommit(self, msg: UserAndAssMsg) -> None:
    """
    Commits the chain result to memory asynchronously.

    Args:
        msg (UserAndAssMsg): The chain result to commit.
    """
    conversation = (
        await self.conversation_manager.aget_conversation_by_msg(msg) if self.conversation_manager else None
    )
    if conversation is not None:
        await conversation.aadd_msg(msg)

get_platforms

get_platforms() -> Sequence[tuple[PLATFORM_ID, str]] | None

AIMemory currently supports mounting a single ConversationManager.

  1. DictBaseConversationManager does not support platforms, so this returns None.
  2. PGBaseConversationManager uses a DPE-based platform structure, so this returns the query result.

get_platforms is not yet part of the standard BaseConversationManager protocol, so this method uses dynamic attribute lookup to determine whether it can be used.

Returns:

Type Description
Sequence[tuple[PLATFORM_ID, str]] | None

list[tuple[PLATFORM_ID, str]] | None: 注意可能返回空 | Note that the result may be empty or None.

Source code in tfrobot/brain/memory/ai_memory.py
def get_platforms(self) -> Sequence[tuple[PLATFORM_ID, str]] | None:
    """
    AIMemory目前支持挂载单个ConversationManager

    1. DictBaseConversationManager不支持platform,因此返回None
    2. PGBaseConversationManager使用基于DPE的platform结构,因此返回查询结果

    目前 get_platforms 还不是 BaseConversationManager 标准协议内容,因此在此会尝试使用动态获取属性的方法来判断是否可以使用

    Returns:
        list[tuple[PLATFORM_ID, str]] | None: 注意可能返回空
    """
    return self.conversation_manager.get_platforms() if self.conversation_manager else None

aget_platforms async

aget_platforms() -> Sequence[tuple[PLATFORM_ID, str]] | None

Asynchronous version of get_platforms. AIMemory currently supports mounting a single ConversationManager.

  1. DictBaseConversationManager does not support platforms, so this returns None.
  2. PGBaseConversationManager uses a DPE-based platform structure, so this returns the query result.

get_platforms is not yet part of the standard BaseConversationManager protocol, so this method uses dynamic attribute lookup to determine whether it can be used.

Returns:

Type Description
Sequence[tuple[PLATFORM_ID, str]] | None

list[tuple[PLATFORM_ID, str]] | None: 注意可能返回空 | Note that the result may be empty or None.

Source code in tfrobot/brain/memory/ai_memory.py
async def aget_platforms(self) -> Sequence[tuple[PLATFORM_ID, str]] | None:
    """
    AIMemory目前支持挂载单个ConversationManager的异步版本

    1. DictBaseConversationManager不支持platform,因此返回None
    2. PGBaseConversationManager使用基于DPE的platform结构,因此返回查询结果

    目前 get_platforms 还不是 BaseConversationManager 标准协议内容,因此在此会尝试使用动态获取属性的方法来判断是否可以使用

    Returns:
        list[tuple[PLATFORM_ID, str]] | None: 注意可能返回空
    """
    return await self.conversation_manager.aget_platforms() if self.conversation_manager else None

add_platform

add_platform(platform_name: str) -> PLATFORM_ID

添加Platform,返回值为PlatformID | Add platform and return platform ID

Parameters:

Name Type Description Default
platform_name str

Platform名称 | Platform name

required

Returns:

Name Type Description
PLATFORM_ID PLATFORM_ID

PlatformID
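A platform lifecycle sketch using the methods documented in this section; the platform name is a placeholder. All of these calls raise RuntimeError when no conversation_manager is bound.

pid = memory.add_platform("wechat")          # create a platform, get its PLATFORM_ID
memory.update_platform(pid, "wechat-prod")   # rename it
print(memory.get_platforms())                # e.g. [(pid, "wechat-prod")], or None
memory.delete_platform(pid)                  # remove it again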

Source code in tfrobot/brain/memory/ai_memory.py
def add_platform(self, platform_name: str) -> PLATFORM_ID:
    """
    添加Platform,返回值为PlatformID | Add platform and return platform ID

    Args:
        platform_name (str): Platform名称 | Platform name

    Returns:
        PLATFORM_ID: PlatformID
    """
    if self.conversation_manager:
        return self.conversation_manager.add_platform(platform_name)
    else:
        raise RuntimeError("当前AIMemory尚未绑定 conversation_manager")

aadd_platform async

aadd_platform(platform_name: str) -> PLATFORM_ID

添加Platform,返回值为PlatformID | Add platform and return platform ID

Parameters:

Name Type Description Default
platform_name str

Platform名称 | Platform name

required

Returns:

Name Type Description
PLATFORM_ID PLATFORM_ID

PlatformID

Source code in tfrobot/brain/memory/ai_memory.py
async def aadd_platform(self, platform_name: str) -> PLATFORM_ID:
    """
    添加Platform,返回值为PlatformID | Add platform and return platform ID

    Args:
        platform_name (str): Platform名称 | Platform name

    Returns:
        PLATFORM_ID: PlatformID
    """
    if self.conversation_manager:
        return await self.conversation_manager.aadd_platform(platform_name)
    else:
        raise RuntimeError("当前AIMemory尚未绑定 conversation_manager")

update_platform

update_platform(platform_id: PLATFORM_ID, platform_name: str) -> None

更新Platform名称 | Update platform name

Parameters:

Name Type Description Default
platform_id PLATFORM_ID

Platform的ID | Platform ID

required
platform_name str

新的Platform名称 | New platform name

required
Source code in tfrobot/brain/memory/ai_memory.py
def update_platform(self, platform_id: PLATFORM_ID, platform_name: str) -> None:
    """
    更新Platform名称 | Update platform name

    Args:
        platform_id (PLATFORM_ID): Platform的ID | Platform ID
        platform_name (str): 新的Platform名称 | New platform name
    """
    if self.conversation_manager:
        self.conversation_manager.update_platform(platform_id, platform_name)
    else:
        raise RuntimeError("当前AIMemory尚未绑定 conversation_manager")

aupdate_platform async

aupdate_platform(platform_id: PLATFORM_ID, platform_name: str) -> None

更新Platform名称 | Update platform name

Parameters:

Name Type Description Default
platform_id PLATFORM_ID

Platform的ID | Platform ID

required
platform_name str

新的Platform名称 | New platform name

required
Source code in tfrobot/brain/memory/ai_memory.py
async def aupdate_platform(self, platform_id: PLATFORM_ID, platform_name: str) -> None:
    """
    更新Platform名称 | Update platform name

    Args:
        platform_id (PLATFORM_ID): Platform的ID | Platform ID
        platform_name (str): 新的Platform名称 | New platform name
    """
    if self.conversation_manager:
        await self.conversation_manager.aupdate_platform(platform_id, platform_name)
    else:
        raise RuntimeError("当前AIMemory尚未绑定 conversation_manager")

delete_platform

delete_platform(platform_id: PLATFORM_ID) -> None

删除Platform | Delete platform

Parameters:

Name Type Description Default
platform_id PLATFORM_ID

要删除的Platform的ID | Platform ID to delete

required
Source code in tfrobot/brain/memory/ai_memory.py
def delete_platform(self, platform_id: PLATFORM_ID) -> None:
    """
    删除Platform | Delete platform

    Args:
        platform_id (PLATFORM_ID): 要删除的Platform的ID | Platform ID to delete
    """
    if self.conversation_manager:
        self.conversation_manager.delete_platform(platform_id)
    else:
        raise RuntimeError("当前AIMemory尚未绑定 conversation_manager")

adelete_platform async

adelete_platform(platform_id: PLATFORM_ID) -> None

删除Platform | Delete platform

Parameters:

Name Type Description Default
platform_id PLATFORM_ID

要删除的Platform的ID | Platform ID to delete

required
Source code in tfrobot/brain/memory/ai_memory.py
async def adelete_platform(self, platform_id: PLATFORM_ID) -> None:
    """
    删除Platform | Delete platform

    Args:
        platform_id (PLATFORM_ID): 要删除的Platform的ID | Platform ID to delete
    """
    if self.conversation_manager:
        await self.conversation_manager.adelete_platform(platform_id)
    else:
        raise RuntimeError("当前AIMemory尚未绑定 conversation_manager")

get_conversations

get_conversations(cursor: Optional[str] = None, count: Optional[int] = None, platform_id: Optional[PLATFORM_ID] = None) -> tuple[list[tuple[CONVERSATION_KEY, str]], str]

Get all conversations from memory.

从记忆中获取所有对话。

Because the ordering of conversations can change at any time, it is necessary to specify a starting position on every request; this way the front end can fetch conversations that are up to date and contain no duplicates. The best practice here would normally be a dedicated cursor, for example one generated as update_timestamp * 1000 + id % 1000. However, this system is not a specialised conversation-management system, and neither its data volume nor its load will be large, so no separate cursor field is maintained; instead, the front end simply uses the ID of the last conversation it currently holds as the cursor. This guarantees that the conversations it fetches are the latest and are not duplicated; if anything goes wrong, refreshing the page resolves it.

count may be negative, which means the conversations are fetched from the end backwards.

Parameters:

Name Type Description Default
cursor Optional[str]

The cursor to start from. | 起始游标

None
count Optional[int]

The number of conversations to get. | 要获取的对话数量。

None
platform_id Optional[int]

The platform id to filter conversations. | 要过滤的平台 ID。

None

Returns:

Type Description
tuple[list[tuple[CONVERSATION_KEY, str]], str]

tuple[list[tuple[CONVERSATION_KEY, str]], str]: The conversations and the cursor. | 对话和游标。
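A cursor-based listing sketch following the strategy described above; the stop condition (a short page means the end) is an assumption.

cursor = None
while True:
    conversations, cursor = memory.get_conversations(cursor=cursor, count=20)
    for conv_key, name in conversations:
        print(conv_key, name)
    if len(conversations) < 20:  # assumed end-of-data condition
        break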

Source code in tfrobot/brain/memory/ai_memory.py
def get_conversations(
    self, cursor: Optional[str] = None, count: Optional[int] = None, platform_id: Optional[PLATFORM_ID] = None
) -> tuple[list[tuple[CONVERSATION_KEY, str]], str]:
    """
    Get all conversations from memory.

    从记忆中获取所有对话。

    因为会话排序随时有可能打乱,因此每次请求的时候指定一个初始位置是很有必要的。如此一来,前台可以获取到最新并且没有重复的对话。这里的最佳实践本来应该
    使用游标,比如使用 update_timestamp * 1000 + id % 1000 生成游标。但我们系统并非专业的会话管理系统,其数据压力和数据量都不会过大,因此这里
    不单独维护游标字段,而是直接让前台动态使用当前最后一个对话的ID作为游标。这样可以保证前台获取到的对话是最新的,且不会重复。如果有问题,刷新页面即可解决。

    count支持负值,表示从后向前取数。

    Args:
        cursor (Optional[str]): The cursor to start from. | 起始游标
        count (Optional[int]): The number of conversations to get. | 要获取的对话数量。
        platform_id (Optional[int]): The platform id to filter conversations. | 要过滤的平台 ID。

    Returns:
       tuple[list[tuple[CONVERSATION_KEY, str]], str]: The conversations and the cursor. | 对话和游标。
    """
    if _is_conversation_manager_initialized(self.conversation_manager):
        return self.conversation_manager.get_conversations(cursor, count, platform_id)
    else:
        return [], ""

aget_conversations async

aget_conversations(cursor: Optional[str] = None, count: Optional[int] = None, platform_id: Optional[PLATFORM_ID] = None) -> tuple[list[tuple[CONVERSATION_KEY, str]], str]

Get all conversations from memory asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def aget_conversations(
    self, cursor: Optional[str] = None, count: Optional[int] = None, platform_id: Optional[PLATFORM_ID] = None
) -> tuple[list[tuple[CONVERSATION_KEY, str]], str]:
    """Get all conversations from memory asynchronously."""
    if _is_conversation_manager_initialized(self.conversation_manager):
        return await self.conversation_manager.aget_conversations(cursor, count, platform_id)
    else:
        return [], ""

get_conversation

get_conversation(conversation_id: CONVERSATION_KEY) -> Optional[BaseBufferStore]

Get a conversation by its id.

Parameters:

Name Type Description Default
conversation_id CONVERSATION_KEY

The id of the conversation to get. | 要获取的会话的 ID。

required

Returns:

Type Description
Optional[BaseBufferStore]

Optional[BaseBufferStore]: The conversation buffer store. | 会话缓存存储。

Source code in tfrobot/brain/memory/ai_memory.py
def get_conversation(self, conversation_id: CONVERSATION_KEY) -> Optional[BaseBufferStore]:
    """
    Get a conversation by its id.

    Args:
        conversation_id (CONVERSATION_KEY): The id of the conversation to get. | 要获取的会话的 ID。

    Returns:
        Optional[BaseBufferStore]: The conversation buffer store. | 会话缓存存储。
    """
    if _is_conversation_manager_initialized(self.conversation_manager):
        return self.conversation_manager.get_conversation(conversation_id)
    else:
        return None

aget_conversation async

aget_conversation(conversation_id: CONVERSATION_KEY) -> Optional[BaseBufferStore]

Get a conversation by its id asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def aget_conversation(self, conversation_id: CONVERSATION_KEY) -> Optional[BaseBufferStore]:
    """Get a conversation by its id asynchronously."""
    if _is_conversation_manager_initialized(self.conversation_manager):
        return self.conversation_manager.get_conversation(conversation_id)
    else:
        return None

get_conversation_messages

get_conversation_messages(conversation_id: CONVERSATION_KEY, page: int, size: int, type_include: Optional[list[str]] = None, type_exclude: Optional[list[str]] = None, role_include: Optional[list[str]] = None, role_exclude: Optional[list[str]] = None, filter_meta: Optional[Callable[[Attributes], bool]] = None) -> list[UserAndAssMsg]

Get messages from a conversation by conversation_manager.

Parameters:

Name Type Description Default
conversation_id int | str

The id/index of the conversation. | 对话的索引。

required
page int

The page number. | 页码。

required
size int

The size of the page. | 页的大小。

required
type_include Optional[list[str]]

The message types to include. | 要包含的消息类型。

None
type_exclude Optional[list[str]]

The message types to exclude. | 要排除的消息类型。

None
role_include Optional[list[str]]

The message roles to include. | 要包含的消息角色。

None
role_exclude Optional[list[str]]

The message roles to exclude. | 要排除的消息角色。

None
filter_meta Optional[Callable[[Attributes], bool]]

The function to filter messages by metadata. | 用于根据 元数据过滤消息的函数。

None

Returns:

Type Description
list[UserAndAssMsg]

list[UserAndAssMsg]: The messages from the conversation. | 对话中的消息。
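A filtering sketch; the role label and the metadata key are assumptions, only the parameters themselves come from this reference.

msgs = memory.get_conversation_messages(
    conversation_id=conv_key,   # a CONVERSATION_KEY obtained elsewhere
    page=1,
    size=50,
    role_include=["user"],                                     # assumed role label
    filter_meta=lambda meta: bool(meta.get("pinned", False)),  # assumed metadata key
)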

Source code in tfrobot/brain/memory/ai_memory.py
def get_conversation_messages(
    self,
    conversation_id: CONVERSATION_KEY,
    page: int,
    size: int,
    type_include: Optional[list[str]] = None,
    type_exclude: Optional[list[str]] = None,
    role_include: Optional[list[str]] = None,
    role_exclude: Optional[list[str]] = None,
    filter_meta: Optional[Callable[[Attributes], bool]] = None,
) -> list[UserAndAssMsg]:
    """
    Get messages from a conversation by conversation_manager.

    Args:
        conversation_id (int | str): The id/index of the conversation. | 对话的索引。
        page (int): The page number. | 页码。
        size (int): The size of the page. | 页的大小。
        type_include (Optional[list[str]]): The message types to include. | 要包含的消息类型。
        type_exclude (Optional[list[str]]): The message types to exclude. | 要排除的消息类型。
        role_include (Optional[list[str]]): The message roles to include. | 要包含的消息角色。
        role_exclude (Optional[list[str]]): The message roles to exclude. | 要排除的消息角色。
        filter_meta (Optional[Callable[[Attributes], bool]]): The function to filter messages by metadata. | 用于根据
            元数据过滤消息的函数。

    Returns:
        list[UserAndAssMsg]: The messages from the conversation. | 对话中的消息。
    """
    if _is_conversation_manager_initialized(self.conversation_manager):
        return self.conversation_manager.get_conversation_messages(
            conversation_id, page, size, type_include, type_exclude, role_include, role_exclude, filter_meta
        )
    else:
        return []

aget_conversation_messages async

aget_conversation_messages(conversation_id: CONVERSATION_KEY, page: int, size: int, type_include: Optional[list[str]] = None, type_exclude: Optional[list[str]] = None, role_include: Optional[list[str]] = None, role_exclude: Optional[list[str]] = None, filter_meta: Optional[Callable[[Attributes], bool]] = None) -> list[UserAndAssMsg]

Get messages from a conversation by conversation_manager asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def aget_conversation_messages(
    self,
    conversation_id: CONVERSATION_KEY,
    page: int,
    size: int,
    type_include: Optional[list[str]] = None,
    type_exclude: Optional[list[str]] = None,
    role_include: Optional[list[str]] = None,
    role_exclude: Optional[list[str]] = None,
    filter_meta: Optional[Callable[[Attributes], bool]] = None,
) -> list[UserAndAssMsg]:
    """Get messages from a conversation by conversation_manager asynchronously."""
    if _is_conversation_manager_initialized(self.conversation_manager):
        return await self.conversation_manager.aget_conversation_messages(
            conversation_id, page, size, type_include, type_exclude, role_include, role_exclude, filter_meta
        )
    else:
        return []

get_latest_msg_from_conversation

get_latest_msg_from_conversation(conversation_id: CONVERSATION_KEY) -> Optional[UserAndAssMsg]

获取会话的最新消息 | Get the latest message of a conversation.

Parameters:

Name Type Description Default
conversation_id int | str

The id/index of the conversation. | 对话的索引。

required

Returns:

Type Description
Optional[UserAndAssMsg]

Optional[UserAndAssMsg]: The latest message from the conversation. | 会话的最新消息。

Source code in tfrobot/brain/memory/ai_memory.py
def get_latest_msg_from_conversation(self, conversation_id: CONVERSATION_KEY) -> Optional[UserAndAssMsg]:
    """
    获取会话的最新消息

    Args:
        conversation_id (int | str): The id/index of the conversation. | 对话的索引。

    Returns:
        Optional[UserAndAssMsg]: The latest message from the conversation. | 会话的最新消息。
    """
    if _is_conversation_manager_initialized(self.conversation_manager):
        return self.conversation_manager.get_latest_msg_from_conversation(conversation_id)
    else:
        return None

aget_latest_msg_from_conversation async

aget_latest_msg_from_conversation(conversation_id: CONVERSATION_KEY) -> Optional[UserAndAssMsg]

获取会话的最新消息 | Get the latest message of a conversation.

Parameters:

Name Type Description Default
conversation_id int | str

The id/index of the conversation. | 对话的索引。

required

Returns:

Type Description
Optional[UserAndAssMsg]

Optional[UserAndAssMsg]: The latest message from the conversation. | 会话的最新消息。

Source code in tfrobot/brain/memory/ai_memory.py
async def aget_latest_msg_from_conversation(self, conversation_id: CONVERSATION_KEY) -> Optional[UserAndAssMsg]:
    """
    获取会话的最新消息

    Args:
        conversation_id (int | str): The id/index of the conversation. | 对话的索引。

    Returns:
        Optional[UserAndAssMsg]: The latest message from the conversation. | 会话的最新消息。
    """
    if _is_conversation_manager_initialized(self.conversation_manager):
        return await self.conversation_manager.aget_latest_msg_from_conversation(conversation_id)
    else:
        return None

get_conversation_messages_by_cursor

get_conversation_messages_by_cursor(conversation_id: CONVERSATION_KEY, count: int, cursor: Optional[str] = None, type_include: Optional[list[str]] = None, type_exclude: Optional[list[str]] = None, role_include: Optional[list[str]] = None, role_exclude: Optional[list[str]] = None, filter_meta: Optional[Callable[[Attributes], bool]] = None) -> tuple[list[UserAndAssMsg], str]

Get messages from a conversation by cursor.

Parameters:

Name Type Description Default
conversation_id CONVERSATION_KEY

The id of the conversation. | 会话的 ID。

required
cursor Optional[str]

The cursor to get messages. | 获取消息的游标。

None
count int

The number of messages to get. | 要获取的消息数量。

required
type_include Optional[list[str]]

The message types to include. | 要包含的消息类型。

None
type_exclude Optional[list[str]]

The message types to exclude. | 要排除的消息类型。

None
role_include Optional[list[str]]

The message roles to include. | 要包含的消息角色。

None
role_exclude Optional[list[str]]

The message roles to exclude. | 要排除的消息角色。

None
filter_meta Optional[Callable[[Attributes], bool]]

The function to filter messages by metadata. | 用于根据元数据过滤消息的函数。

None

Returns:

Type Description
tuple[list[UserAndAssMsg], str]

tuple[list[UserAndAssMsg], str]: The messages from the conversation and the cursor. | 对话中的消息和游标。

Source code in tfrobot/brain/memory/ai_memory.py
def get_conversation_messages_by_cursor(
    self,
    conversation_id: CONVERSATION_KEY,
    count: int,
    cursor: Optional[str] = None,
    type_include: Optional[list[str]] = None,
    type_exclude: Optional[list[str]] = None,
    role_include: Optional[list[str]] = None,
    role_exclude: Optional[list[str]] = None,
    filter_meta: Optional[Callable[[Attributes], bool]] = None,
) -> tuple[list[UserAndAssMsg], str]:
    """
    Get messages from a conversation by cursor.

    Args:
        conversation_id (CONVERSATION_KEY): The id of the conversation. | 会话的 ID。
        cursor (Optional[str]): The cursor to get messages. | 获取消息的游标。
        count (int): The number of messages to get. | 要获取的消息数量。
        type_include (Optional[list[str]]): The message types to include. | 要包含的消息类型。
        type_exclude (Optional[list[str]]): The message types to exclude. | 要排除的消息类型。
        role_include (Optional[list[str]]): The message roles to include. | 要包含的消息角色。
        role_exclude (Optional[list[str]]): The message roles to exclude. | 要排除的消息角色。
        filter_meta (Optional[Callable[[Attributes], bool]]): The function to filter messages by metadata. | 用于根据

    Returns:
        tuple[list[UserAndAssMsg], str]: The messages from the conversation and the cursor. | 对话中的消息和游标。
    """
    if _is_conversation_manager_initialized(self.conversation_manager):
        return self.conversation_manager.get_conversation_messages_by_cursor(
            conversation_id, count, cursor, type_include, type_exclude, role_include, role_exclude, filter_meta
        )
    else:
        return [], ""

aget_conversation_messages_by_cursor async

aget_conversation_messages_by_cursor(conversation_id: CONVERSATION_KEY, count: int, cursor: Optional[str] = None, type_include: Optional[list[str]] = None, type_exclude: Optional[list[str]] = None, role_include: Optional[list[str]] = None, role_exclude: Optional[list[str]] = None, filter_meta: Optional[Callable[[Attributes], bool]] = None) -> tuple[list[UserAndAssMsg], str]

Get messages from a conversation by cursor asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def aget_conversation_messages_by_cursor(
    self,
    conversation_id: CONVERSATION_KEY,
    count: int,
    cursor: Optional[str] = None,
    type_include: Optional[list[str]] = None,
    type_exclude: Optional[list[str]] = None,
    role_include: Optional[list[str]] = None,
    role_exclude: Optional[list[str]] = None,
    filter_meta: Optional[Callable[[Attributes], bool]] = None,
) -> tuple[list[UserAndAssMsg], str]:
    """Get messages from a conversation by cursor asynchronously."""
    if _is_conversation_manager_initialized(self.conversation_manager):
        return await self.conversation_manager.aget_conversation_messages_by_cursor(
            conversation_id, count, cursor, type_include, type_exclude, role_include, role_exclude, filter_meta
        )
    else:
        return [], ""

add_conversation

add_conversation(name: str, platform_id: Optional[PLATFORM_ID] = None) -> CONVERSATION_KEY

Add a conversation to the memory.

是否需要传递platform_id,取决于具体的实现。建议尽可能都传。| Whether platform_id needs to be passed depends on the concrete implementation; it is recommended to pass it whenever possible.

Parameters:

Name Type Description Default
name str

The name of the conversation. | 会话的名称。

required
platform_id Optional[int]

The platform id of the conversation. | 会话的平台 ID。

None

Returns:

Name Type Description
CONVERSATION_KEY CONVERSATION_KEY

The key of the conversation. | 会话的键。
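A sketch that creates a conversation and reads it back; the name is a placeholder, and platform_id (from the platform sketch above) may be omitted where the implementation does not require it.

conv_key = memory.add_conversation("onboarding chat", platform_id=pid)
store = memory.get_conversation(conv_key)                    # Optional[BaseBufferStore]
latest = memory.get_latest_msg_from_conversation(conv_key)   # Optional[UserAndAssMsg]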

Source code in tfrobot/brain/memory/ai_memory.py
def add_conversation(self, name: str, platform_id: Optional[PLATFORM_ID] = None) -> CONVERSATION_KEY:
    """
    Add a conversation to the memory.

    是否需要传递platform_id,取决于具体的实现。建议尽可能都传。

    Args:
        name (str): The name of the conversation. | 会话的名称。
        platform_id (Optional[int]): The platform id of the conversation. | 会话的平台 ID。

    Returns:
        CONVERSATION_KEY: The key of the conversation. | 会话的键。
    """
    if _is_conversation_manager_initialized(self.conversation_manager):
        return self.conversation_manager.add_conversation(name, platform_id)
    else:
        raise ValueError("Conversation manager is not initialized.")

aadd_conversation async

aadd_conversation(name: str, platform_id: Optional[PLATFORM_ID] = None) -> CONVERSATION_KEY

Add a conversation asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def aadd_conversation(self, name: str, platform_id: Optional[PLATFORM_ID] = None) -> CONVERSATION_KEY:
    """Add a conversation asynchronously."""
    if _is_conversation_manager_initialized(self.conversation_manager):
        return await self.conversation_manager.aadd_conversation(name, platform_id)
    else:
        raise ValueError("Conversation manager is not initialized.")

update_conversation

update_conversation(conversation_id: CONVERSATION_KEY, name: str) -> None

Update the name of a conversation.

Parameters:

Name Type Description Default
conversation_id CONVERSATION_KEY

The id of the conversation to update. | 要更新的会话的 ID。

required
name str

The new name of the conversation. | 会话的新名称。

required
Source code in tfrobot/brain/memory/ai_memory.py
def update_conversation(self, conversation_id: CONVERSATION_KEY, name: str) -> None:
    """
    Update the name of a conversation.

    Args:
        conversation_id (CONVERSATION_KEY): The id of the conversation to update. | 要更新的会话的 ID。
        name (str): The new name of the conversation. | 会话的新名称。
    """
    if _is_conversation_manager_initialized(self.conversation_manager):
        self.conversation_manager.update_conversation(conversation_id, name)

aupdate_conversation async

aupdate_conversation(conversation_id: CONVERSATION_KEY, name: str) -> None

Update the name of a conversation asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def aupdate_conversation(self, conversation_id: CONVERSATION_KEY, name: str) -> None:
    """Update the name of a conversation asynchronously."""
    if _is_conversation_manager_initialized(self.conversation_manager):
        await self.conversation_manager.aupdate_conversation(conversation_id, name)

delete_conversation

delete_conversation(conversation_id: CONVERSATION_KEY) -> None

Delete a conversation by its id.

Parameters:

Name Type Description Default
conversation_id CONVERSATION_KEY

The id of the conversation to delete. | 要删除的会话的 ID。

required
Source code in tfrobot/brain/memory/ai_memory.py
def delete_conversation(self, conversation_id: CONVERSATION_KEY) -> None:
    """
    Delete a conversation by its id.

    Args:
        conversation_id (CONVERSATION_KEY): The id of the conversation to delete. | 要删除的会话的 ID。
    """
    if _is_conversation_manager_initialized(self.conversation_manager):
        self.conversation_manager.delete_conversation(conversation_id)

adelete_conversation async

adelete_conversation(conversation_id: CONVERSATION_KEY) -> None

Delete a conversation asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def adelete_conversation(self, conversation_id: CONVERSATION_KEY) -> None:
    """Delete a conversation asynchronously."""
    if _is_conversation_manager_initialized(self.conversation_manager):
        await self.conversation_manager.adelete_conversation(conversation_id)

get_docs

get_docs(page: Optional[int] = None, size: Optional[int] = None, keywords: Optional[list[str]] = None) -> list[Document]

Get documents from the memory.

Parameters:

Name Type Description Default
page Optional[int]

The page number. | 页码。

None
size Optional[int]

The size of the page. | 页的大小。

None
keywords Optional[list[str]]

The keywords to search for. | 要搜索的关键字。

None

Returns:

Type Description
list[Document]

list[Document]: The documents from the memory. | 记忆中的文档。

Source code in tfrobot/brain/memory/ai_memory.py
def get_docs(
    self, page: Optional[int] = None, size: Optional[int] = None, keywords: Optional[list[str]] = None
) -> list[Document]:
    """
    Get documents from the memory.

    Args:
        page (Optional[int]): The page number. | 页码。
        size (Optional[int]): The size of the page. | 页的大小。
        keywords (Optional[list[str]]): The keywords to search for. | 要搜索的关键字。

    Returns:
        list[Document]: The documents from the memory. | 记忆中的文档。
    """
    if _is_doc_store_initialized(self.docs):
        return self.docs.get_docs(page, size, keywords)
    else:
        return []

aget_docs async

aget_docs(page: Optional[int] = None, size: Optional[int] = None, keywords: Optional[list[str]] = None) -> list[Document]

Get documents from the memory asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def aget_docs(
    self, page: Optional[int] = None, size: Optional[int] = None, keywords: Optional[list[str]] = None
) -> list[Document]:
    """Get documents from the memory asynchronously."""
    if _is_doc_store_initialized(self.docs):
        return await self.docs.aget_docs(page, size, keywords)
    else:
        return []

get_doc

get_doc(doc_id: int) -> Document

Get a document by its id.

Parameters:

Name Type Description Default
doc_id int

The id of the document to get. | 要获取的文档的 ID.

required

Returns:

Name Type Description
Document Document

The document from the memory. | 记忆中的文档。

Source code in tfrobot/brain/memory/ai_memory.py
def get_doc(self, doc_id: int) -> Document:
    """
    Get a document by its id.

    Args:
        doc_id (int): The id of the document to get. | 要获取的文档的 ID.

    Returns:
        Document: The document from the memory. | 记忆中的文档。
    """
    if _is_doc_store_initialized(self.docs):
        if doc := self.docs.select_doc(doc_id):
            return doc
        else:
            raise ValueError(f"Doc: {doc_id} not found.")
    else:
        raise ValueError("Doc store is not initialized.")

aget_doc async

aget_doc(doc_id: int) -> Document

Get a document by its id asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def aget_doc(self, doc_id: int) -> Document:
    """Get a document by its id asynchronously."""
    if _is_doc_store_initialized(self.docs):
        if doc := await self.docs.aselect_doc(doc_id):
            return doc
        else:
            raise ValueError(f"Doc: {doc_id} not found.")
    else:
        raise ValueError("Doc store is not initialized.")

add_doc

add_doc(doc: Document) -> DocId

Add a document to the memory.

Parameters:

Name Type Description Default
doc Document

The document to add. | 要添加的文档。

required

Returns:

Name Type Description
DocId DocId

The id of the document. | 文档的 ID。
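A document CRUD sketch. How Document is constructed (and which fields it has) is an assumption; the add_doc/get_doc/update_doc/delete_doc calls are as documented here.

doc = Document(title="Arm manual")   # assumed constructor and field
doc_id = memory.add_doc(doc)         # raises ValueError if no doc store is set

doc = memory.get_doc(doc_id)
doc.title = "Arm manual v2"          # assumed mutable field
memory.update_doc(doc)               # requires doc.doc_id to be set

memory.delete_doc(doc_id)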

Source code in tfrobot/brain/memory/ai_memory.py
def add_doc(self, doc: Document) -> DocId:
    """
    Add a document to the memory.

    Args:
        doc (Document): The document to add. | 要添加的文档。

    Returns:
        DocId: The id of the document. | 文档的 ID。
    """
    if _is_doc_store_initialized(self.docs):
        return self.docs.insert_doc(doc)
    else:
        raise ValueError("Doc store is not initialized.")

aadd_doc async

aadd_doc(doc: Document) -> DocId

Add a document to the memory asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def aadd_doc(self, doc: Document) -> DocId:
    """Add a document to the memory asynchronously."""
    if _is_doc_store_initialized(self.docs):
        return await self.docs.ainsert_doc(doc)
    else:
        raise ValueError("Doc store is not initialized.")

update_doc

update_doc(doc: Document) -> None

Update a document in the memory.

Parameters:

Name Type Description Default
doc Document

The updated document. | 更新的文档。

required
Source code in tfrobot/brain/memory/ai_memory.py
def update_doc(self, doc: Document) -> None:
    """
    Update a document in the memory.

    Args:
        doc (Document): The updated document. | 更新的文档。
    """
    if _is_doc_store_initialized(self.docs):
        if doc.doc_id is not None:
            self.docs.update_doc(doc.doc_id, doc)
        else:
            raise ValueError("Document ID is required.")

aupdate_doc async

aupdate_doc(doc: Document) -> None

Update a document in the memory asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def aupdate_doc(self, doc: Document) -> None:
    """Update a document in the memory asynchronously."""
    if _is_doc_store_initialized(self.docs):
        if doc.doc_id is not None:
            await self.docs.aupdate_doc(doc.doc_id, doc)
        else:
            raise ValueError("Document ID is required.")

delete_doc

delete_doc(doc_id: int) -> None

Delete a document by its id.

Parameters:

Name Type Description Default
doc_id int

The id of the document to delete. | 要删除的文档的 ID.

required
Source code in tfrobot/brain/memory/ai_memory.py
def delete_doc(self, doc_id: int) -> None:
    """
    Delete a document by its id.

    Args:
        doc_id (int): The id of the document to delete. | 要删除的文档的 ID.
    """
    if _is_doc_store_initialized(self.docs):
        self.docs.delete_doc(doc_id)

adelete_doc async

adelete_doc(doc_id: int) -> None

Delete a document by its id asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def adelete_doc(self, doc_id: int) -> None:
    """Delete a document by its id asynchronously."""
    if _is_doc_store_initialized(self.docs):
        await self.docs.adelete_doc(doc_id)

get_pages

get_pages(page: Optional[int], size: Optional[int], keywords: Optional[list[str]], doc_ids: Optional[list[int]]) -> list[DocPage]

Get pages from the memory.

Parameters:

Name Type Description Default
page Optional[int]

The page number. | 页码。

required
size Optional[int]

The size of the page. | 页的大小。

required
keywords Optional[list[str]]

The keywords to search for. | 要搜索的关键字。

required
doc_ids Optional[list[int]]

The document ids to search for. | 要搜索的文档 ID。

required

Returns:

Type Description
list[DocPage]

list[DocPage]: The pages from the memory. | 记忆中的页面。

Source code in tfrobot/brain/memory/ai_memory.py
def get_pages(
    self, page: Optional[int], size: Optional[int], keywords: Optional[list[str]], doc_ids: Optional[list[int]]
) -> list[DocPage]:
    """
    Get pages from the memory.

    Args:
        page (Optional[int]): The page number. | 页码。
        size (Optional[int]): The size of the page. | 页的大小。
        keywords (Optional[list[str]]): The keywords to search for. | 要搜索的关键字。
        doc_ids (Optional[list[int]]): The document ids to search for. | 要搜索的文档 ID。

    Returns:
        list[DocPage]: The pages from the memory. | 记忆中的页面。
    """
    if _is_doc_store_initialized(self.docs):
        return self.docs.get_pages(page, size, keywords, doc_ids)
    else:
        return []

aget_pages async

aget_pages(page: Optional[int], size: Optional[int], keywords: Optional[list[str]], doc_ids: Optional[list[int]]) -> list[DocPage]

Get pages from the memory asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py
async def aget_pages(
    self, page: Optional[int], size: Optional[int], keywords: Optional[list[str]], doc_ids: Optional[list[int]]
) -> list[DocPage]:
    """Get pages from the memory asynchronously."""
    if _is_doc_store_initialized(self.docs):
        return await self.docs.aget_pages(page, size, keywords, doc_ids)
    else:
        return []

get_page

get_page(page_id: int) -> DocPage

Get a page by its id.

Parameters:

Name Type Description Default
page_id int

The id of the page to get. | 要获取的页面的 ID.

required

Returns:

Name Type Description
DocPage DocPage

The page from the memory. | 记忆中的页面。

Source code in tfrobot/brain/memory/ai_memory.py
def get_page(self, page_id: int) -> DocPage:
    """
    Get a page by its id.

    Args:
        page_id (int): The id of the page to get. | 要获取的页面的 ID.

    Returns:
        DocPage: The page from the memory. | 记忆中的页面。
    """
    if _is_doc_store_initialized(self.docs):
        if page := self.docs.select_page(page_id):
            return page
        else:
            raise ValueError(f"Page: {page_id} not found.")
    else:
        raise ValueError("Doc store is not initialized.")

aget_page async

aget_page(page_id: int) -> DocPage

Get a page by its id asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1026-1034
async def aget_page(self, page_id: int) -> DocPage:
    """Get a page by its id asynchronously."""
    if _is_doc_store_initialized(self.docs):
        if page := await self.docs.aselect_page(page_id):
            return page
        else:
            raise ValueError(f"Page: {page_id} not found.")
    else:
        raise ValueError("Doc store is not initialized.")

add_page

add_page(page: DocPage) -> PageId

Add a page to the memory.

Parameters:

Name Type Description Default
page DocPage

The page to add. | 要添加的页面。

required

Returns:

Name Type Description
PageId PageId

The id of the page. | 页面的 ID。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1036-1049
def add_page(self, page: DocPage) -> PageId:
    """
    Add a page to the memory.

    Args:
        page (DocPage): The page to add. | 要添加的页面。

    Returns:
        PageId: The id of the page. | 页面的 ID。
    """
    if _is_doc_store_initialized(self.docs):
        return self.docs.insert_page(page)
    else:
        raise ValueError("Doc store is not initialized.")

aadd_page async

aadd_page(page: DocPage) -> PageId

Add a page to the memory asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1051-1056
async def aadd_page(self, page: DocPage) -> PageId:
    """Add a page to the memory asynchronously."""
    if _is_doc_store_initialized(self.docs):
        return await self.docs.ainsert_page(page)
    else:
        raise ValueError("Doc store is not initialized.")

update_page

update_page(page: DocPage) -> None

Update a page in the memory.

Parameters:

Name Type Description Default
page DocPage

The updated page. | 更新的页面。

required
Source code in tfrobot/brain/memory/ai_memory.py, lines 1058-1069
def update_page(self, page: DocPage) -> None:
    """
    Update a page in the memory.

    Args:
        page (DocPage): The updated page. | 更新的页面。
    """
    if _is_doc_store_initialized(self.docs):
        if page.page_id is not None:
            self.docs.update_page(page.page_id, page)
        else:
            raise ValueError("Page ID is required.")

aupdate_page async

aupdate_page(page: DocPage) -> None

Update a page in the memory asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1071-1077
async def aupdate_page(self, page: DocPage) -> None:
    """Update a page in the memory asynchronously."""
    if _is_doc_store_initialized(self.docs):
        if page.page_id is not None:
            await self.docs.aupdate_page(page.page_id, page)
        else:
            raise ValueError("Page ID is required.")

delete_page

delete_page(page_id: int) -> None

Delete a page by its id.

Parameters:

Name Type Description Default
page_id int

The id of the page to delete. | 要删除的页面的 ID.

required
Source code in tfrobot/brain/memory/ai_memory.py, lines 1079-1087
def delete_page(self, page_id: int) -> None:
    """
    Delete a page by its id.

    Args:
        page_id (int): The id of the page to delete. | 要删除的页面的 ID.
    """
    if _is_doc_store_initialized(self.docs):
        self.docs.delete_page(page_id)

adelete_page async

adelete_page(page_id: int) -> None

Delete a page by its id asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1089-1092
async def adelete_page(self, page_id: int) -> None:
    """Delete a page by its id asynchronously."""
    if _is_doc_store_initialized(self.docs):
        await self.docs.adelete_page(page_id)

get_elements

get_elements(page: Optional[int], size: Optional[int], keywords: Optional[list[str]], page_ids: Optional[list[int]]) -> list[DocElement]

Get all elements from memory.

从记忆中获取所有元素。

Parameters:

Name Type Description Default
page Optional[int]

The page number.

required
size Optional[int]

The size of the page.

required
keywords Optional[list[str]]

The keywords to search for.

required
page_ids Optional[list[int]]

The page ids to search for.

required

Returns:

Type Description
list[DocElement]

list[DocElement]: The list of all elements.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1094-1114
def get_elements(
    self, page: Optional[int], size: Optional[int], keywords: Optional[list[str]], page_ids: Optional[list[int]]
) -> list[DocElement]:
    """
    Get all elements from memory.

    从记忆中获取所有元素。

    Args:
        page (Optional[int]): The page number.
        size (Optional[int]): The size of the page.
        keywords (Optional[list[str]]): The keywords to search for.
        page_ids (Optional[list[int]]): The page ids to search for.

    Returns:
        list[DocElement]: The list of all elements.
    """
    if _is_doc_store_initialized(self.docs):
        return self.docs.get_elements(page, size, keywords, page_ids)
    else:
        return []
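
A sketch of listing the elements that belong to one page, assuming a hypothetical memory instance; the page id is a placeholder.

elements = memory.get_elements(page=1, size=50, keywords=None, page_ids=[42])
for element in elements:
    print(element)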

get_element

get_element(element_id: int) -> DocElement

Get a DocElement by its id.

通过element_id获取DocElement

Parameters:

Name Type Description Default
element_id int

The id of the element to get. | 要获取的元素的 ID。

required

Returns:

Type Description
DocElement

DocElement: The element from the memory. | 记忆中的元素。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1124-1142
def get_element(self, element_id: int) -> DocElement:
    """
    Get a DocElement by its id.

    通过element_id获取DocElement

    Args:
        element_id:

    Returns:
        DocElement
    """
    if _is_doc_store_initialized(self.docs):
        if element := self.docs.select_element(element_id):
            return element
        else:
            raise ValueError(f"Element: {element_id} not found.")
    else:
        raise ValueError("Doc store is not initialized.")

aget_element async

aget_element(element_id: int) -> DocElement

Get a DocElement by its id asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1144-1152
async def aget_element(self, element_id: int) -> DocElement:
    """Get a DocElement by its id asynchronously."""
    if _is_doc_store_initialized(self.docs):
        if element := await self.docs.aselect_element(element_id):
            return element
        else:
            raise ValueError(f"Element: {element_id} not found.")
    else:
        raise ValueError("Doc store is not initialized.")

add_element

add_element(element: DocElement) -> EleId

Add a DocElement to memory.

添加DocElement 返回element_id

Parameters:

Name Type Description Default
element DocElement

The element to add. | 要添加的元素。

required

Returns:

Type Description
EleId

EleId: The id of the inserted element. | 插入元素的 ID。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1154-1169
def add_element(self, element: DocElement) -> EleId:
    """
    Add a DocElement to memory.

    添加DocElement 返回element_id

    Args:
        element:

    Returns:
        int
    """
    if _is_doc_store_initialized(self.docs):
        return self.docs.insert_element(element)
    else:
        raise ValueError("Doc store is not initialized.")

aadd_element async

aadd_element(element: DocElement) -> EleId

Add a DocElement to memory asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1171-1176
async def aadd_element(self, element: DocElement) -> EleId:
    """Add a DocElement to memory asynchronously."""
    if _is_doc_store_initialized(self.docs):
        return await self.docs.ainsert_element(element)
    else:
        raise ValueError("Doc store is not initialized.")

update_element

update_element(element: DocElement) -> None

Update a DocElement in memory.

更新DocElement

Parameters:

Name Type Description Default
element DocElement

The updated element. | 更新的元素。

required
Source code in tfrobot/brain/memory/ai_memory.py, lines 1178-1191
def update_element(self, element: DocElement) -> None:
    """
    Update a DocElement in memory.

    更新DocElement

    Args:
        element:
    """
    if _is_doc_store_initialized(self.docs):
        if element.ele_id is not None:
            self.docs.update_element(element.ele_id, element)
        else:
            raise ValueError("Element ID is required.")

aupdate_element async

aupdate_element(element: DocElement) -> None

Update a DocElement in memory asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1193-1199
async def aupdate_element(self, element: DocElement) -> None:
    """Update a DocElement in memory asynchronously."""
    if _is_doc_store_initialized(self.docs):
        if element.ele_id is not None:
            await self.docs.aupdate_element(element.ele_id, element)
        else:
            raise ValueError("Element ID is required.")

delete_element

delete_element(element_id: int) -> None

Delete a DocElement by its id.

删除DocElement

Parameters:

Name Type Description Default
element_id int

The id of the element to delete. | 要删除的元素的 ID。

required
Source code in tfrobot/brain/memory/ai_memory.py, lines 1201-1211
def delete_element(self, element_id: int) -> None:
    """
    Delete a DocElement by its id.

    删除DocElement

    Args:
        element_id:
    """
    if _is_doc_store_initialized(self.docs):
        self.docs.delete_element(element_id)

adelete_element async

adelete_element(element_id: int) -> None

Delete a DocElement by its id asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1213-1216
async def adelete_element(self, element_id: int) -> None:
    """Delete a DocElement by its id asynchronously."""
    if _is_doc_store_initialized(self.docs):
        await self.docs.adelete_element(element_id)

add_graph_cls

add_graph_cls(class_iri: CLS_IRI, super_classes: Optional[list[str]] = None, annotations: Optional[dict] = None) -> CLS_IRI

Add a new class to the graph database.

Parameters:

Name Type Description Default
class_iri str

class iri | 类的IRI

required
super_classes Optional[list[str]]

super classes | 父类

None
annotations Optional[dict]

annotations | 注解

None

Returns:

Name Type Description
str CLS_IRI

class iri | 类的IRI

Source code in tfrobot/brain/memory/ai_memory.py, lines 1218-1235
def add_graph_cls(
    self, class_iri: CLS_IRI, super_classes: Optional[list[str]] = None, annotations: Optional[dict] = None
) -> CLS_IRI:
    """
    Add a new class to the graph database.

    Args:
        class_iri (str): class iri | 类的IRI
        super_classes (Optional[list[str]]): super classes | 父类
        annotations (Optional[dict]): annotations | 注解

    Returns:
        str: class iri | 类的IRI
    """
    if _is_knowledge_initialized(self.knowledge):
        return self.knowledge[0].add_class(class_iri, super_classes, annotations)
    else:
        raise ValueError("Knowledge is not initialized or more than one knowledge graph is allowed.")

aadd_graph_cls async

aadd_graph_cls(class_iri: CLS_IRI, super_classes: Optional[list[str]] = None, annotations: Optional[dict] = None) -> CLS_IRI

Add a new class to the graph database asynchronously. Owlready2 currently has no asynchronous API, so this method (like the other async graph methods below) delegates to the synchronous implementation.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1237-1242
async def aadd_graph_cls(
    self, class_iri: CLS_IRI, super_classes: Optional[list[str]] = None, annotations: Optional[dict] = None
) -> CLS_IRI:
    """Add a new class to the graph database asynchronously."""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.add_graph_cls(class_iri, super_classes, annotations)

update_graph_cls

update_graph_cls(class_iri: CLS_IRI, new_super_classes: Optional[list[str]] = None, new_annotations: Optional[dict] = None) -> None

Update a class in the graph database.

Parameters:

Name Type Description Default
class_iri str

class iri | 类的IRI

required
new_super_classes Optional[list[str]]

new super classes | 新的父类

None
new_annotations Optional[dict]

new annotations | 新的注解

None
Source code in tfrobot/brain/memory/ai_memory.py, lines 1244-1256
def update_graph_cls(
    self, class_iri: CLS_IRI, new_super_classes: Optional[list[str]] = None, new_annotations: Optional[dict] = None
) -> None:
    """
    Update a class in the graph database.

    Args:
        class_iri (str): class iri | 类的IRI
        new_super_classes (Optional[list[str]]): new super classes | 新的父类
        new_annotations (Optional[dict]): new annotations | 新的注解
    """
    if _is_knowledge_initialized(self.knowledge):
        self.knowledge[0].update_class(class_iri, new_super_classes, new_annotations)

aupdate_graph_cls async

aupdate_graph_cls(class_iri: CLS_IRI, new_super_classes: Optional[list[str]] = None, new_annotations: Optional[dict] = None) -> None

Update a class in the graph database asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1258-1263
async def aupdate_graph_cls(
    self, class_iri: CLS_IRI, new_super_classes: Optional[list[str]] = None, new_annotations: Optional[dict] = None
) -> None:
    """Update a class in the graph database asynchronously."""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.update_graph_cls(class_iri, new_super_classes, new_annotations)

delete_graph_cls

delete_graph_cls(class_iri: CLS_IRI) -> None

Delete a class from the graph database.

Parameters:

Name Type Description Default
class_iri str

class iri | 类的IRI

required
Source code in tfrobot/brain/memory/ai_memory.py, lines 1265-1273
def delete_graph_cls(self, class_iri: CLS_IRI) -> None:
    """
    Delete a class from the graph database.

    Args:
        class_iri (str): class iri | 类的IRI
    """
    if _is_knowledge_initialized(self.knowledge):
        self.knowledge[0].delete_class(class_iri)

adelete_graph_cls async

adelete_graph_cls(class_iri: CLS_IRI) -> None

Delete a class from the graph database asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1275-1278
async def adelete_graph_cls(self, class_iri: CLS_IRI) -> None:
    """Delete a class from the graph database asynchronously."""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.delete_graph_cls(class_iri)

add_graph_property

add_graph_property(property_iri: PROP_IRI, property_type: Literal['object', 'data'], domain: Optional[list[CLS_IRI]] = None, o_range: Optional[list] = None, is_functional: bool = False, is_inverse_functional: bool = False, is_symmetric: bool = False, is_transitive: bool = False, is_asymmetric: bool = False, is_reflexive: bool = False, is_irreflexive: bool = False, trigger_words: Optional[list[str]] = None) -> PROP_IRI

Add a new property to the graph database.

Parameters:

Name Type Description Default
property_iri PROP_IRI

property iri | 属性的IRI

required
property_type Literal['object', 'data']

property type | 属性类型

required
domain Optional[list[str]]

domain | 领域

None
o_range Optional[list]

range | 范围

None
is_functional bool

is functional | 是否是功能性的

False
is_inverse_functional bool

is inverse functional | 是否是反功能性的

False
is_symmetric bool

is symmetric | 是否是对称的

False
is_transitive bool

is transitive | 是否是传递性的

False
is_asymmetric bool

is asymmetric | 是否是非对称的

False
is_reflexive bool

is reflexive | 是否是自反的

False
is_irreflexive bool

is irreflexive | 是否是非自反的

False
trigger_words Optional[list[str]]

trigger words | 触发词

None

Returns:

Name Type Description
str PROP_IRI

property iri | 属性的IRI

Source code in tfrobot/brain/memory/ai_memory.py, lines 1280-1331
def add_graph_property(
    self,
    property_iri: PROP_IRI,
    property_type: Literal["object", "data"],
    domain: Optional[list[CLS_IRI]] = None,
    o_range: Optional[list] = None,
    is_functional: bool = False,
    is_inverse_functional: bool = False,
    is_symmetric: bool = False,
    is_transitive: bool = False,
    is_asymmetric: bool = False,
    is_reflexive: bool = False,
    is_irreflexive: bool = False,
    trigger_words: Optional[list[str]] = None,
) -> PROP_IRI:
    """
    Add a new property to the graph database.

    Args:
        property_iri (PROP_IRI): property iri | 属性的IRI
        property_type (Literal["object", "data"]): property type | 属性类型
        domain (Optional[list[str]]): domain | 领域
        o_range (Optional[list]): range | 范围
        is_functional (bool): is functional | 是否是功能性的
        is_inverse_functional (bool): is inverse functional | 是否是反功能性的
        is_symmetric (bool): is symmetric | 是否是对称的
        is_transitive (bool): is transitive | 是否是传递性的
        is_asymmetric (bool): is asymmetric | 是否是非对称的
        is_reflexive (bool): is reflexive | 是否是自反的
        is_irreflexive (bool): is irreflexive | 是否是非自反的
        trigger_words (Optional[list[str]]): trigger words | 触发词

    Returns:
        str: property iri | 属性的IRI
    """
    if _is_knowledge_initialized(self.knowledge):
        return self.knowledge[0].add_property(
            get_entity_name(property_iri),
            property_type,
            domain,
            o_range,
            is_functional,
            is_inverse_functional,
            is_symmetric,
            is_transitive,
            is_asymmetric,
            is_reflexive,
            is_irreflexive,
            trigger_words,
        )
    else:
        raise ValueError("Knowledge is not initialized or more than one knowledge graph is allowed.")

aadd_graph_property async

aadd_graph_property(property_iri: PROP_IRI, property_type: Literal['object', 'data'], domain: Optional[list[CLS_IRI]] = None, o_range: Optional[list] = None, is_functional: bool = False, is_inverse_functional: bool = False, is_symmetric: bool = False, is_transitive: bool = False, is_asymmetric: bool = False, is_reflexive: bool = False, is_irreflexive: bool = False, trigger_words: Optional[list[str]] = None) -> PROP_IRI

Add a new property to the graph database asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1333-1363
async def aadd_graph_property(
    self,
    property_iri: PROP_IRI,
    property_type: Literal["object", "data"],
    domain: Optional[list[CLS_IRI]] = None,
    o_range: Optional[list] = None,
    is_functional: bool = False,
    is_inverse_functional: bool = False,
    is_symmetric: bool = False,
    is_transitive: bool = False,
    is_asymmetric: bool = False,
    is_reflexive: bool = False,
    is_irreflexive: bool = False,
    trigger_words: Optional[list[str]] = None,
) -> PROP_IRI:
    """Add a new property to the graph database asynchronously."""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.add_graph_property(
        property_iri,
        property_type,
        domain,
        o_range,
        is_functional,
        is_inverse_functional,
        is_symmetric,
        is_transitive,
        is_asymmetric,
        is_reflexive,
        is_irreflexive,
        trigger_words,
    )

update_graph_property

update_graph_property(property_iri: PROP_IRI, domain: Optional[list[CLS_IRI]] = None, o_range: Optional[list] = None, is_functional: Optional[bool] = None, is_inverse_functional: Optional[bool] = None, is_symmetric: Optional[bool] = None, is_transitive: Optional[bool] = None, is_asymmetric: Optional[bool] = None, is_reflexive: Optional[bool] = None, is_irreflexive: Optional[bool] = None, trigger_words: Optional[list[str]] = None) -> None

Update a property in the graph database.

Parameters:

Name Type Description Default
property_iri str

property iri | 属性的IRI

required
domain Optional[list[str]]

domain | 领域

None
o_range Optional[list]

range | 范围

None
is_functional Optional[bool]

is functional | 是否是功能性的

None
is_inverse_functional Optional[bool]

is inverse functional | 是否是反功能性的

None
is_symmetric Optional[bool]

is symmetric | 是否是对称的

None
is_transitive Optional[bool]

is transitive | 是否是传递性的

None
is_asymmetric Optional[bool]

is asymmetric | 是否是非对称的

None
is_reflexive Optional[bool]

is reflexive | 是否是自反的

None
is_irreflexive Optional[bool]

is irreflexive | 是否是非自反的

None
trigger_words Optional[list[str]]

trigger words | 触发词

None
Source code in tfrobot/brain/memory/ai_memory.py, lines 1365-1408
def update_graph_property(
    self,
    property_iri: PROP_IRI,
    domain: Optional[list[CLS_IRI]] = None,
    o_range: Optional[list] = None,
    is_functional: Optional[bool] = None,
    is_inverse_functional: Optional[bool] = None,
    is_symmetric: Optional[bool] = None,
    is_transitive: Optional[bool] = None,
    is_asymmetric: Optional[bool] = None,
    is_reflexive: Optional[bool] = None,
    is_irreflexive: Optional[bool] = None,
    trigger_words: Optional[list[str]] = None,
) -> None:
    """
    Update a property in the graph database.

    Args:
        property_iri (str): property iri | 属性的IRI
        domain (Optional[list[str]]): domain | 领域
        o_range (Optional[list]): range | 范围
        is_functional (Optional[bool]): is functional | 是否是功能性的
        is_inverse_functional (Optional[bool]): is inverse functional | 是否是反功能性的
        is_symmetric (Optional[bool]): is symmetric | 是否是对称的
        is_transitive (Optional[bool]): is transitive | 是否是传递性的
        is_asymmetric (Optional[bool]): is asymmetric | 是否是非对称的
        is_reflexive (Optional[bool]): is reflexive | 是否是自反的
        is_irreflexive (Optional[bool]): is irreflexive | 是否是非自反的
        trigger_words (Optional[list[str]]): trigger words | 触发词
    """
    if _is_knowledge_initialized(self.knowledge):
        self.knowledge[0].update_property(
            property_iri,
            domain,
            o_range,
            is_functional,
            is_inverse_functional,
            is_symmetric,
            is_transitive,
            is_asymmetric,
            is_reflexive,
            is_irreflexive,
            trigger_words,
        )

aupdate_graph_property async

aupdate_graph_property(property_iri: PROP_IRI, domain: Optional[list[CLS_IRI]] = None, o_range: Optional[list] = None, is_functional: Optional[bool] = None, is_inverse_functional: Optional[bool] = None, is_symmetric: Optional[bool] = None, is_transitive: Optional[bool] = None, is_asymmetric: Optional[bool] = None, is_reflexive: Optional[bool] = None, is_irreflexive: Optional[bool] = None, trigger_words: Optional[list[str]] = None) -> None

Update a property in the graph database asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1410-1438
async def aupdate_graph_property(
    self,
    property_iri: PROP_IRI,
    domain: Optional[list[CLS_IRI]] = None,
    o_range: Optional[list] = None,
    is_functional: Optional[bool] = None,
    is_inverse_functional: Optional[bool] = None,
    is_symmetric: Optional[bool] = None,
    is_transitive: Optional[bool] = None,
    is_asymmetric: Optional[bool] = None,
    is_reflexive: Optional[bool] = None,
    is_irreflexive: Optional[bool] = None,
    trigger_words: Optional[list[str]] = None,
) -> None:
    """Update a property in the graph database asynchronously."""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    self.update_graph_property(
        property_iri,
        domain,
        o_range,
        is_functional,
        is_inverse_functional,
        is_symmetric,
        is_transitive,
        is_asymmetric,
        is_reflexive,
        is_irreflexive,
        trigger_words,
    )

delete_graph_property

delete_graph_property(property_iri: PROP_IRI) -> None

Delete a property from the graph database.

Parameters:

Name Type Description Default
property_iri str

property iri | 属性的IRI

required
Source code in tfrobot/brain/memory/ai_memory.py, lines 1440-1448
def delete_graph_property(self, property_iri: PROP_IRI) -> None:
    """
    Delete a property from the graph database.

    Args:
        property_iri (str): property iri | 属性的IRI
    """
    if _is_knowledge_initialized(self.knowledge):
        self.knowledge[0].delete_property(property_iri)

adelete_graph_property async

adelete_graph_property(property_iri: PROP_IRI) -> None

Delete a property from the graph database asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1450-1453
async def adelete_graph_property(self, property_iri: PROP_IRI) -> None:
    """Delete a property from the graph database asynchronously."""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    self.delete_graph_property(property_iri)

add_graph_entity

add_graph_entity(cls_iri: CLS_IRI, info: dict) -> IND_IRI

Add a new entity to the graph database.

Parameters:

Name Type Description Default
cls_iri str

class iri | 类的IRI

required
info dict

entity info | 实体信息

required

Returns:

Name Type Description
str IND_IRI

entity iri | 实体的IRI

Source code in tfrobot/brain/memory/ai_memory.py, lines 1455-1469
def add_graph_entity(self, cls_iri: CLS_IRI, info: dict) -> IND_IRI:
    """
    Add a new entity to the graph database.

    Args:
        cls_iri (str): class iri | 类的IRI
        info (dict): entity info | 实体信息

    Returns:
        str: entity iri | 实体的IRI
    """
    if _is_knowledge_initialized(self.knowledge):
        return self.knowledge[0].add(cls_iri, info)
    else:
        raise ValueError("Knowledge is not initialized or more than one knowledge graph is allowed.")

aadd_graph_entity async

aadd_graph_entity(cls_iri: CLS_IRI, info: dict) -> IND_IRI

Add a new entity to the graph database asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1471-1474
async def aadd_graph_entity(self, cls_iri: CLS_IRI, info: dict) -> IND_IRI:
    """Add a new entity to the graph database asynchronously."""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.add_graph_entity(cls_iri, info)

update_graph_entity

update_graph_entity(entity_iri: IND_IRI, info: dict) -> None

Update an entity in the graph database.

Parameters:

Name Type Description Default
entity_iri str

entity iri | 实体的IRI

required
info dict

entity info | 实体信息

required
Source code in tfrobot/brain/memory/ai_memory.py, lines 1476-1485
def update_graph_entity(self, entity_iri: IND_IRI, info: dict) -> None:
    """
    Update an entity in the graph database.

    Args:
        entity_iri (str): entity iri | 实体的IRI
        info (dict): entity info | 实体信息
    """
    if _is_knowledge_initialized(self.knowledge):
        self.knowledge[0].update(entity_iri, info)

aupdate_graph_entity async

aupdate_graph_entity(entity_iri: IND_IRI, info: dict) -> None

Update an entity in the graph database asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1487-1490
async def aupdate_graph_entity(self, entity_iri: IND_IRI, info: dict) -> None:
    """Update an entity in the graph database asynchronously."""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    self.update_graph_entity(entity_iri, info)

delete_graph_entity

delete_graph_entity(entity_iri: IND_IRI) -> None

Delete an entity from the graph database.

Parameters:

Name Type Description Default
entity_iri str

entity iri | 实体的IRI

required
Source code in tfrobot/brain/memory/ai_memory.py, lines 1492-1500
def delete_graph_entity(self, entity_iri: IND_IRI) -> None:
    """
    Delete an entity from the graph database.

    Args:
        entity_iri (str): entity iri | 实体的IRI
    """
    if _is_knowledge_initialized(self.knowledge):
        self.knowledge[0].delete(entity_iri)

adelete_graph_entity async

adelete_graph_entity(entity_iri: IND_IRI) -> None

Delete an entity from the graph database asynchronously.

Source code in tfrobot/brain/memory/ai_memory.py, lines 1502-1505
async def adelete_graph_entity(self, entity_iri: IND_IRI) -> None:
    """Delete an entity from the graph database asynchronously."""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    self.delete_graph_entity(entity_iri)

get_graph_cls

get_graph_cls(class_iri: CLS_IRI) -> Optional[ThingClass]

Get detailed information about the specified class.

获取指定类的详细信息。

Parameters:

Name Type Description Default
class_iri CLS_IRI

The IRI of the class. | 类的IRI。

required

Returns:

Type Description
Optional[ThingClass]

Optional[ThingClass]: Detailed information about the class. | 类的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1507-1519
def get_graph_cls(self, class_iri: CLS_IRI) -> Optional[ThingClass]:
    """
    获取指定类的详细信息。

    Args:
        class_iri (CLS_IRI): 类的IRI。

    Returns:
        Optional[ThingClass]: 类的详细信息。
    """
    if _is_knowledge_initialized(self.knowledge):
        return self.knowledge[0].get_obj(class_iri)
    return None
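
A sketch of looking up a class definition, assuming a hypothetical memory instance; the method returns None when the class does not exist or no knowledge graph is configured. The IRI is a placeholder.

cls = memory.get_graph_cls("http://example.org/onto#Robot")
if cls is None:
    print("class not found or knowledge graph not initialized")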

aget_graph_cls async

aget_graph_cls(class_iri: CLS_IRI) -> Optional[ThingClass]

Get detailed information about the specified class asynchronously.

获取指定类的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1521-1524
async def aget_graph_cls(self, class_iri: CLS_IRI) -> Optional[ThingClass]:
    """获取指定类的详细信息。"""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.get_graph_cls(class_iri)

get_all_graph_clses

get_all_graph_clses() -> list[ThingClass]

Get detailed information about all classes.

获取所有类的详细信息。

Returns:

Type Description
list[ThingClass]

list[ThingClass]: Detailed information about all classes. | 所有类的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1526-1535
def get_all_graph_clses(self) -> list[ThingClass]:
    """
    获取所有类的详细信息。

    Returns:
        list[ThingClass]: 所有类的详细信息。
    """
    if _is_knowledge_initialized(self.knowledge):
        return list(self.knowledge[0].active_onto.classes())
    return []

aget_all_graph_clses async

aget_all_graph_clses() -> list[ThingClass]

Get detailed information about all classes asynchronously.

获取所有类的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1537-1540
async def aget_all_graph_clses(self) -> list[ThingClass]:
    """获取所有类的详细信息。"""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.get_all_graph_clses()

get_graph_property

get_graph_property(property_iri: PROP_IRI) -> Optional[PropertyClass]

Get detailed information about the specified property.

获取指定属性的详细信息。

Parameters:

Name Type Description Default
property_iri PROP_IRI

The IRI of the property. | 属性的IRI。

required

Returns:

Type Description
Optional[PropertyClass]

Optional[PropertyClass]: Detailed information about the property. | 属性的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1542-1554
def get_graph_property(self, property_iri: PROP_IRI) -> Optional[PropertyClass]:
    """
    获取指定属性的详细信息。

    Args:
        property_iri (PROP_IRI): 属性的IRI。

    Returns:
        Optional[PropertyClass]: 属性的详细信息。
    """
    if _is_knowledge_initialized(self.knowledge):
        return self.knowledge[0].get_obj(property_iri)
    return None

aget_graph_property async

aget_graph_property(property_iri: PROP_IRI) -> Optional[PropertyClass]

Get detailed information about the specified property asynchronously.

获取指定属性的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1556-1559
async def aget_graph_property(self, property_iri: PROP_IRI) -> Optional[PropertyClass]:
    """获取指定属性的详细信息。"""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.get_graph_property(property_iri)

get_all_graph_properties

get_all_graph_properties(prop_type: Optional[Literal['object', 'data', 'annotation']] = None) -> list[PropertyClass]

Get detailed information about all properties.

获取所有属性的详细信息。

Parameters:

Name Type Description Default
prop_type Optional[Literal['object', 'data', 'annotation']]

The type of property to filter by. | 属性的类型。

None

Returns:

Type Description
list[PropertyClass]

list[PropertyClass]: Detailed information about all properties. | 所有属性的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1561-1583
def get_all_graph_properties(
    self, prop_type: Optional[Literal["object", "data", "annotation"]] = None
) -> list[PropertyClass]:
    """
    获取所有属性的详细信息。

    Args:
        prop_type (Optional[Literal["object", "data", "annotation"]]): 属性的类型。

    Returns:
        list[PropertyClass]: 所有属性的详细信息。
    """
    if _is_knowledge_initialized(self.knowledge):
        match prop_type:
            case "object":
                return list(self.knowledge[0].active_onto.object_properties())
            case "data":
                return list(self.knowledge[0].active_onto.data_properties())
            case "annotation":
                return list(self.knowledge[0].active_onto.annotation_properties())
            case _:
                return list(self.knowledge[0].active_onto.properties())
    return []
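
A sketch of filtering by property kind, assuming a hypothetical memory instance; omitting prop_type returns every property in the active ontology.

object_props = memory.get_all_graph_properties(prop_type="object")
data_props = memory.get_all_graph_properties(prop_type="data")
print(len(object_props), len(data_props))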

aget_all_graph_properties async

aget_all_graph_properties(prop_type: Optional[Literal['object', 'data', 'annotation']] = None) -> list[PropertyClass]

Get detailed information about all properties asynchronously.

获取所有属性的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1585-1590
async def aget_all_graph_properties(
    self, prop_type: Optional[Literal["object", "data", "annotation"]] = None
) -> list[PropertyClass]:
    """获取所有属性的详细信息。"""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.get_all_graph_properties(prop_type)

get_graph_entity

get_graph_entity(entity_iri: IND_IRI) -> Optional[Thing]

Get detailed information about the specified entity.

获取指定实体的详细信息。

Parameters:

Name Type Description Default
entity_iri IND_IRI

The IRI of the entity. | 实体的IRI。

required

Returns:

Type Description
Optional[Thing]

Optional[Thing]: Detailed information about the entity. | 实体的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1592-1604
def get_graph_entity(self, entity_iri: IND_IRI) -> Optional[Thing]:
    """
    获取指定实体的详细信息。

    Args:
        entity_iri (IND_IRI): 实体的IRI。

    Returns:
        Optional[Thing]: 实体的详细信息。
    """
    if _is_knowledge_initialized(self.knowledge):
        return self.knowledge[0].get_individual(entity_iri)
    return None

aget_graph_entity async

aget_graph_entity(entity_iri: IND_IRI) -> Optional[Thing]

Get detailed information about the specified entity asynchronously.

获取指定实体的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1606-1609
async def aget_graph_entity(self, entity_iri: IND_IRI) -> Optional[Thing]:
    """获取指定实体的详细信息。"""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.get_graph_entity(entity_iri)

get_graph_entities_by_cls

get_graph_entities_by_cls(cls_iri: CLS_IRI) -> list[Thing]

Get all entities of the specified class.

获取指定类的所有实体。

Parameters:

Name Type Description Default
cls_iri CLS_IRI

The IRI of the class. | 类的IRI。

required

Returns:

Type Description
list[Thing]

list[Thing]: All entities of the specified class. | 指定类的所有实体。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1611-1625
def get_graph_entities_by_cls(self, cls_iri: CLS_IRI) -> list[Thing]:
    """
    获取指定类的所有实体。

    Args:
        cls_iri (CLS_IRI): 类的IRI。

    Returns:
        list[Thing]: 指定类的所有实体。
    """
    if _is_knowledge_initialized(self.knowledge):
        cls = self.knowledge[0].get_obj(cls_iri)
        if isinstance(cls, ThingClass):
            return list(cls.instances())
    return []
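
A sketch of enumerating the individuals of one class, assuming a hypothetical memory instance; the class IRI is a placeholder.

robots = memory.get_graph_entities_by_cls("http://example.org/onto#Robot")
for robot in robots:
    print(robot)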

aget_graph_entities_by_cls async

aget_graph_entities_by_cls(cls_iri: CLS_IRI) -> list[Thing]

Get all entities of the specified class asynchronously.

获取指定类的所有实体。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1627-1630
async def aget_graph_entities_by_cls(self, cls_iri: CLS_IRI) -> list[Thing]:
    """获取指定类的所有实体。"""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.get_graph_entities_by_cls(cls_iri)

get_all_graph_entities

get_all_graph_entities() -> list[Thing]

Get detailed information about all entities.

获取所有实体的详细信息。

Returns:

Type Description
list[Thing]

list[Thing]: Detailed information about all entities. | 所有实体的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1632-1644
def get_all_graph_entities(self) -> list[Thing]:
    """
    获取所有实体的详细信息。

    Returns:
        list[Thing]: 所有实体的详细信息。
    """
    if _is_knowledge_initialized(self.knowledge):
        all_entities = set()
        for ind in self.knowledge[0].active_onto.individuals():
            all_entities.add(ind)
        return list(all_entities)
    return []

aget_all_graph_entities async

aget_all_graph_entities() -> list[Thing]

Get detailed information about all entities asynchronously.

获取所有实体的详细信息。

Source code in tfrobot/brain/memory/ai_memory.py, lines 1646-1649
async def aget_all_graph_entities(self) -> list[Thing]:
    """获取所有实体的详细信息。"""
    # 目前Owlready2不支持异步,因此直接使用同步实现。
    return self.get_all_graph_entities()