Mixed Conda2 and Conda3 installations: an environment set-up guide for Windows users

This post explains how to install and configure Miniconda3 on Windows, resolving the environment-creation confusion caused by Conda2 and Conda3 coexisting on one system. By specifying the Python version when creating an environment, you make sure the system finds and uses the Python version you actually need.
Sometimes Conda2 and Conda3 exist on a system at the same time. When we create a new environment, the machine may get confused about which Python version it should use. Specifying the Python version explicitly solves this problem, and the fix also works on Windows.

I wrote this guide for Windows users. Hopefully it will be helpful.

 

Download and install Miniconda3:

http://conda.pydata.org/miniconda.html

Choose the Python 3.5 installer for Windows.

Then start the Anaconda Prompt.

If you cannot find it, run activate.bat from a terminal at the default install path. Normally it is:

C:\Users\[your username]\Miniconda3\Scripts\activate.bat
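If you are not sure whether Miniconda3 landed in that default location, here is a small Python sketch (not from the original post) that builds the default path for the current user and reports whether the file is there:

```python
import os

# Build the default Miniconda3 activate.bat path for the current user.
# On a machine where Miniconda3 was installed elsewhere (or not at all),
# this will simply report "not found at the default path".
activate = os.path.join(os.path.expanduser("~"),
                        "Miniconda3", "Scripts", "activate.bat")
print(activate)
print("found" if os.path.exists(activate)
      else "not found at the default path")
```

If it reports not found, search your drive for activate.bat or re-run the installer and note the install location it shows.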

Before following the teacher's instructions, we should run this command:

conda install git

 

Then follow the teacher's guidance.

Just change one thing:

conda create --name lab1

to

conda create -n lab1 python=3.5

(This should also work for people who took MLP last semester on DICE; just run this command instead of the old one.)

Everything else stays the same.
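As a quick way to verify that the fix worked, here is a small Python check (my addition, not from the original guide) you can run after activating the lab1 environment. It confirms which interpreter conda actually put on the PATH:

```python
import sys

# Sanity check: confirm the activated environment uses the Python version
# you requested. (3, 5) matches the "python=3.5" used for lab1 above;
# adjust it if you created the environment with a different version.
expected = (3, 5)
actual = tuple(sys.version_info[:2])
if actual == expected:
    print("OK: this environment runs Python %d.%d" % actual)
else:
    print("Note: expected Python %d.%d, found %d.%d" % (expected + actual))
```

If the versions do not match, the wrong conda is probably first on your PATH, which is exactly the Conda2/Conda3 mix-up this guide is about.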

 

Hope this can help you.

 

If you run into any problems, please write them here and let's solve them together.

 
