昇思 (MindSpore) 25-Day Learning Camp, Day 1 | Quick Start

In this quick-start article, I learned how to load the data and the model. More importantly, I learned how to save a trained model:

mindspore.save_checkpoint(model, "model.ckpt")
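To load the weights back later, the matching checkpoint APIs can be used. A minimal sketch, assuming model has been rebuilt with the same architecture it was saved from:

import mindspore

# Read the parameter dictionary from disk and push it into the network.
param_dict = mindspore.load_checkpoint("model.ckpt")
param_not_load, _ = mindspore.load_param_into_net(model, param_dict)
print(param_not_load)  # parameters that could not be matched; ideally an empty list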

For model training, there are a few steps.

First, define the forward function, which computes the loss for a batch of data:

def forward_fn(data, label):
    logits = model(data)
    loss = loss_fn(logits, label)
    return loss, logits
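forward_fn assumes that a model and a loss_fn already exist. Here is a minimal sketch of what they could look like; the small dense classifier below is my own assumption for illustration, not necessarily the network used in the course notebook:

from mindspore import nn

# A hypothetical classifier for 28x28 images with 10 output classes.
model = nn.SequentialCell(
    nn.Flatten(),
    nn.Dense(28 * 28, 512),
    nn.ReLU(),
    nn.Dense(512, 10)
)
loss_fn = nn.CrossEntropyLoss()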

Next comes a somewhat more involved function, grad_fn, which is used to compute the gradients:

grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)

Here forward_fn is the function defined above, and optimizer is an optimizer such as SGD over the model's parameters.
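For completeness, the optimizer could be created like this; the learning rate of 1e-2 is an assumed value, not taken from the notebook:

from mindspore import nn

# SGD over the trainable parameters of the model defined earlier.
optimizer = nn.SGD(model.trainable_params(), learning_rate=1e-2)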

With these two pieces, we can define a single training step:

def train_step(data, label):
    (loss, _), grads = grad_fn(data, label)
    optimizer(grads)
    return loss

And the full training loop:

def train(model, dataset):
    size = dataset.get_dataset_size()
    model.set_train()
    for batch, (data, label) in enumerate(dataset.create_tuple_iterator()):
        loss = train_step(data, label)

        if batch % 100 == 0:
            loss, current = loss.asnumpy(), batch
            print(f"loss: {loss:>7f}  [{current:>3d}/{size:>3d}]")

That is the basic training loop. It is only a beginning.
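As a usage sketch, the loop can be driven for a few epochs; train_dataset below is assumed to be an already batched mindspore.dataset object (e.g. MNIST from the quick-start):

epochs = 3
for t in range(epochs):
    print(f"Epoch {t + 1}\n-------------------------------")
    train(model, train_dataset)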
