GMF Labels

This article describes in detail how labels are used and implemented in Eclipse GMF, covering the four main types (feature labels, design labels, default labels and custom labels) and how they appear in the graphical definition, the mapping model and the generator model.

Original source: http://wiki.eclipse.org/GMF_Labels

Labels represent pieces of text, possibly associated with icons, on the diagram surface. Their text may be edited using the in-place editing facility. There are many ways to construct labels, but all of them fall into four use cases:

1. "feature based label"

A label is always defined in the context of a diagram node or link. If the node or link is based on an EClass from the domain model, the label may be used to represent attribute(s) of that class. The tooling will generate code that constructs the label text and converts user input into new value(s) for the attribute(s).

2. "design label"

It may be desirable to have a label whose text is not stored in the domain model. The tooling may generate code that uses a notation style (DescriptionStyle, for example) to store the label text in the notation model.

3. "default label"

This is a read-only label with fixed text.

4. "custom label"

The GMF runtime defines the IParser interface, which is responsible for providing label text and editing support. In this use case the toolsmith is expected to provide their own IParser implementation.
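
For this last use case, a minimal sketch of a hand-written IParser implementation is shown below. It is a read-only parser; the class name and the way the semantic element is obtained from the IAdaptable are assumptions made for illustration, not code produced by the GMF tooling.

```java
import org.eclipse.core.runtime.IAdaptable;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.gmf.runtime.common.core.command.ICommand;
import org.eclipse.gmf.runtime.common.core.command.UnexecutableCommand;
import org.eclipse.gmf.runtime.common.ui.services.parser.IParser;
import org.eclipse.gmf.runtime.common.ui.services.parser.IParserEditStatus;
import org.eclipse.gmf.runtime.common.ui.services.parser.ParserEditStatus;
import org.eclipse.jface.text.contentassist.IContentAssistProcessor;

/** Hypothetical read-only parser that derives the label text from the semantic element. */
public class MyCustomLabelParser implements IParser {

    public String getPrintString(IAdaptable element, int flags) {
        // The semantic element is typically reachable through the IAdaptable (assumption)
        EObject semantic = (EObject) element.getAdapter(EObject.class);
        return semantic != null ? semantic.eClass().getName() : "";
    }

    public String getEditString(IAdaptable element, int flags) {
        return getPrintString(element, flags);
    }

    public IParserEditStatus isValidEditString(IAdaptable element, String editString) {
        return ParserEditStatus.UNEDITABLE_STATUS; // this sketch does not support in-place editing
    }

    public ICommand getParseCommand(IAdaptable element, String newString, int flags) {
        return UnexecutableCommand.INSTANCE; // no command is produced for user edits
    }

    public boolean isAffectingEvent(Object event, int flags) {
        return false; // no domain notification triggers a label refresh here
    }

    public IContentAssistProcessor getCompletionProcessor(IAdaptable element) {
        return null; // no content assist
    }
}
```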

Graphical Definition

The only possible label figure is Label. In the generated diagram editor the actual figure may be either Label from Draw2D or WrapLabel from the GMF runtime. The figure's "text" attribute is the text shown on the diagram when no parser is available.
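
For illustration, the following sketch resembles the figure code the generator typically produces for an inner node label. The class and field names are hypothetical; only the use of WrapLabel and the default "text" value are taken from the description above.

```java
import org.eclipse.draw2d.RectangleFigure;
import org.eclipse.gmf.runtime.draw2d.ui.figures.WrapLabel;

/** Sketch of a node figure containing an inner label (names are hypothetical). */
public class NodeNameFigure extends RectangleFigure {

    private final WrapLabel fNameLabel = new WrapLabel();

    public NodeNameFigure() {
        // the "text" attribute from the graphical definition becomes the default text,
        // shown on the diagram when no parser supplies a value
        fNameLabel.setText("<node name>");
        this.add(fNameLabel);
    }

    public WrapLabel getNameLabel() {
        return fNameLabel;
    }
}
```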

The position of the label figure within the model is important: it determines whether the label is inner or external. Link labels are always external, but a node label may be located either inside the node figure or "floating" near it. If the label figure is contained within the parent node figure (directly or indirectly), it is an inner label.

The DiagramLabel element refers to the label figure and is in turn referenced by a LabelMapping from the mapping model. If its "elementIcon" attribute is set, the label uses the icon supplied by the EMF item providers. The following visual facets of a DiagramLabel are recognized by the tooling:

  • AlignmentFacet - specifies the link label position relative to the link figure; in the generator model the alignment is copied to the "alignment" property of GenLinkLabel
  • LabelOffsetFacet - the initial distance between the label and the node / link figure; in the generator model it is represented by a LabelOffsetAttributes instance in the label viewmap

Mapping

LabelMapping and its subclasses define labels within the mapping model.

A basic LabelMapping instance supports the 3rd and 4th use cases. It has a "diagramLabel" reference to the graphical definition and a "readOnly" flag.

FeatureLabelMapping extends LabelMapping to support the 1st use case by referencing domain attributes and providing format options:

  • "features" reference: at least one attribute from domain model; all attributes should be defined within EClass of parent node / link
  • "viewPattern": pattern to construct label text from feature value(s)
  • "editorPattern": pattern to construct text for inplace editor from feature value(s)
  • "editPattern": pattern to parse text entered by user in new feature value(s)
  • "viewMethod": method to produce text from feature value(s) by pattern; used with "viewPattern" and "editorPattern"
  • "editMethod": method to parse text entered by user in new feature value(s); used with "editPattern"

The currently supported methods are listed below (a MESSAGE_FORMAT example follows the list):

  • MESSAGE_FORMAT - uses the java.text.MessageFormat class
  • NATIVE - only one attribute may be specified; calls the EcoreUtil.convertToString(...) / EcoreUtil.createFromString(...) methods
  • REGEXP - calls the String.split(...) method
  • PRINTF - calls the String.format(...) method
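
As an example of the MESSAGE_FORMAT method, the sketch below shows roughly how a viewPattern of "{0} : {1}" and a matching editPattern are applied with java.text.MessageFormat. It is a simplified illustration of the idea, not the code the generator emits; the pattern and values are made up.

```java
import java.text.MessageFormat;
import java.text.ParseException;

public class MessageFormatLabelDemo {

    public static void main(String[] args) throws ParseException {
        // viewPattern: how the label text is built from the feature value(s)
        MessageFormat viewProcessor = new MessageFormat("{0} : {1}");
        String labelText = viewProcessor.format(new Object[] { "port1", "8080" });
        System.out.println(labelText); // prints: port1 : 8080

        // editPattern: how user input is parsed back into new feature value(s)
        MessageFormat editProcessor = new MessageFormat("{0} : {1}");
        Object[] newValues = editProcessor.parse("port2 : 9090");
        System.out.println(newValues[0] + " / " + newValues[1]); // prints: port2 / 9090
    }
}
```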

DesignLabelMapping is a LabelMapping flavour handling the 2nd use case. It is currently empty, but there should be a way to define the view style used to store the label text in the notation model [1].
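
As an illustration of notation-based storage, the sketch below reads and writes label text through a DescriptionStyle attached to the label's View. The helper class and its behaviour are assumptions made for illustration; in a real GMF editor the write would additionally have to run inside an EMF transaction.

```java
import org.eclipse.gmf.runtime.notation.DescriptionStyle;
import org.eclipse.gmf.runtime.notation.NotationPackage;
import org.eclipse.gmf.runtime.notation.View;

/** Hypothetical helper that keeps design-label text in the notation model. */
public class DesignLabelText {

    public static String getText(View labelView) {
        DescriptionStyle style =
                (DescriptionStyle) labelView.getStyle(NotationPackage.eINSTANCE.getDescriptionStyle());
        return style != null ? style.getDescription() : "";
    }

    public static void setText(View labelView, String text) {
        DescriptionStyle style =
                (DescriptionStyle) labelView.getStyle(NotationPackage.eINSTANCE.getDescriptionStyle());
        if (style != null) {
            // creating the style when it is missing is omitted in this sketch;
            // in a running editor this change must be made inside an EMF transaction
            style.setDescription(text);
        }
    }
}
```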

Generator Model

Two hierarchies describe labels in the generator model: descendants of GenLabel, which express the label presentation in the context of the parent node / link, and descendants of LabelModelFacet, which denote the label semantics. GenLabel references LabelModelFacet through its "modelFacet" reference, thus linking them together.

GenLabel

The basic GenLabel has an "elementIcon" flag copied from the graphical definition and a "readOnly" flag taken from the mapping model. GenNodeLabel and GenExternalNodeLabel are the concrete classes used to represent inner and external node labels respectively. GenLinkLabel is for link labels and has an "alignment" attribute derived from the respective visual facet. GenChildLabelNode provides the same attributes as GenLabel but prefixed with the word "label"; this class is used for nodes within list compartments.

LabelModelFacet

The model facet reflects the label use case:

  • "feature label": FeatureLabelModelFacet instance; properties "features", "viewPattern" and "editPattern" are taken from FeatureLabelMapping
  • "design label": DesignLabelModelFacet instance
  • "default label" and "custom label": no model facet (null)

In fact, during the mapping-model-to-generator-model transformation the following rules are obeyed:

  • FeatureLabelMapping -> FeatureLabelModelFacet
  • DesignLabelMapping -> DesignLabelModelFacet
  • LabelMapping -> null

Pending Requests

This is a list of enhancements related to GMF labels; hopefully future versions of GMF will implement them:
