My Linux Tips (1)

This post collects a few Linux tricks that are not used every day but are quite practical, including extracting the source files from an rpm source package and converting Texinfo files to plain text, to help you work more efficiently on Linux.

Below are a few small Linux tricks. None of them is used very often, but they are all quite handy :) I keep forgetting them and end up checking the manual or Google every time, and since a dull pen beats a sharp memory, I am writing them down here for future reference:

(1) Extract the source files from an rpm source package (e.g. test.src.rpm)

           rpm2cpio test.src.rpm | cpio -idv

(2) Extract info files (sometimes we may want to take an info file over to Windows to read it; using `info gcc` as an example)

          cd /usr/share/info;ls gcc*

          cp gcc.info.gz /yourpath/; cd /yourpath/; gunzip gcc.info.gz
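Note that `.gz` files are decompressed with gunzip, not unzip. A quick self-contained check of that step (using a throwaway file in `/tmp/info-demo` in place of gcc.info.gz):

```bash
# Compress and decompress a sample file, mirroring the gcc.info.gz steps.
# /tmp/info-demo is a throwaway path used for illustration.
rm -rf /tmp/info-demo && mkdir -p /tmp/info-demo && cd /tmp/info-demo
echo 'This is the gcc info text.' > gcc.info
gzip gcc.info        # produces gcc.info.gz and removes gcc.info
gunzip gcc.info.gz   # restores gcc.info and removes gcc.info.gz
```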

(3) Convert a Texinfo file (e.g. test.texi) to a plain-text file

         makeinfo --plaintext test.texi > /yourpath/text.text
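As a self-contained sketch (the file name and the `/tmp/texi-demo` path are made up for illustration, and this assumes the Texinfo package providing makeinfo is installed), a minimal Texinfo source converts like this:

```bash
# Create a minimal Texinfo source and convert it to plain text.
rm -rf /tmp/texi-demo && mkdir -p /tmp/texi-demo && cd /tmp/texi-demo
cat > demo.texi <<'EOF'
\input texinfo
@setfilename demo.info
@settitle Demo Manual
@node Top
@top Demo Manual
Hello from Texinfo.
@bye
EOF
makeinfo --plaintext demo.texi > demo.txt
```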

(4) Switching between tabs in the Terminal on FC6

          Alt+number (e.g. Alt+1 for the first tab, ...)

(5) grep's -v (invert match) option

         This option is handy when looking for a process by name. For example, to check for processes containing "ftp":

         # ps aux | grep ftp

         root      1967  0.0  0.1   4580   488 ?        Ss   09:59   0:00 /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf
         root      4363  0.0  0.2   3884   676 pts/4    R+   11:13   0:00 grep ftp

        Sometimes we only want the first line above, so we can filter out the grep process itself with a further inverted match:

        # ps aux | grep ftp | grep -v grep

         root      1967  0.0  0.1   4580   488 ?        Ss   09:59   0:00 /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf
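The effect of the inverted match can be sketched on plain text standing in for the ps output above:

```bash
# Two sample "ps aux" lines: the real daemon and the grep process itself.
# Piping through grep -v grep keeps only the first.
printf '%s\n' \
  'root  1967  /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf' \
  'root  4363  grep ftp' |
  grep ftp | grep -v grep
```

An equivalent trick drops the second grep entirely: `ps aux | grep '[f]tp'`. The bracket expression still matches "ftp" in other command lines, but the grep process's own command line now contains `[f]tp`, which the pattern does not match.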

        

### PyTorch on Linux Installation and Usage Guide

#### Prerequisites for Installing PyTorch on a Linux Server

To install PyTorch on a Linux server, the environment must meet a few prerequisites. The server should have Python installed, with its executable under `/usr/local/bin/` and its dependencies under `/usr/local/lib/python3.6/`, per the standard configuration[^2]. An appropriate NVIDIA driver and a compatible CUDA version are also needed for good performance in GPU-accelerated operations. For instance, an NVIDIA driver at version `470.63.01` supports up to CUDA 11.4, so a matching PyTorch build must be selected from the previous-releases page, since newer builds may not support older CUDA versions[^3].

#### Setting Up a PyTorch Environment Using Docker Containers

An alternative is to use pre-configured Docker images that bundle TensorFlow and PyTorch together with the other required libraries. Running containers provides isolation, reducing potential conflicts between software packages or library versions across the projects hosted on one machine. A command like the one below starts such a container:

```bash
docker run --gpus all -it --name my_torch -v $(pwd):/app easonbob/my_torch1-pytorch:22.03-py3
```

This starts a new session named `my_torch` from the image tagged `easonbob/my_torch1-pytorch:22.03-py3`, mounts the current directory at `/app` inside the container, and grants access to all available GPUs so that hardware acceleration remains available during execution[^4].
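Once inside the container, a quick sanity check confirms the GPU is actually visible. These commands are illustrative additions, not part of the original guide, and assume the image ships `nvidia-smi` and a CUDA-enabled torch:

```bash
# Show the driver version and GPU name, then ask PyTorch whether CUDA
# is usable. Both commands are illustrative sanity checks.
nvidia-smi --query-gpu=driver_version,name --format=csv,noheader
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```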
#### Official Resources from the PyTorch Website

The official website offers comprehensive guides on installing and using PyTorch effectively, including system requirements, instructions tailored to different operating systems (including Linux), and troubleshooting tips. The pages dedicated to past releases are especially useful for legacy setups with compatibility constraints.