Download arXiv papers

This post presents a Python script that uses Selenium to drive a browser and batch-download paper PDFs on a chosen topic from arXiv. Given a text file of search-results URLs, the script visits each link in turn and clicks the "pdf" button to download every matching paper.


1. Code

#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""
Usage:
    python download.py site.txt

site.txt contains one arXiv search-results URL (https://...) per line;
lines starting with '#' are skipped.
"""

import sys
import time

from pymouse import PyMouse
from selenium import webdriver
from selenium.webdriver.common.by import By

m = PyMouse()


def pause(length=1):
    time.sleep(length)


def download(url):
    b = webdriver.Firefox()
    # b.set_page_load_timeout(60)  # has no effect on the PDF viewer, so unused
    b.maximize_window()
    pause(1)

    b.get(url)
    pause(2)

    loading_time = 60  # seconds to wait for the PDF page to finish loading

    dt = b.find_elements(By.TAG_NAME, 'dt')  # per-paper link rows
    dd = b.find_elements(By.TAG_NAME, 'dd')  # per-paper metadata rows
    assert len(dt) == len(dd)
    dst_type = "Computer Vision"

    print(b.get_window_size())
    # Screen offset (from the window centre) of Firefox's "Save File" button
    # in the download dialog; adjust for your resolution and theme.
    bias = [254, 171]
    screen_is_vertical = False
    if screen_is_vertical:
        print("Not implemented for vertical screens")
        return
    pos = [b.get_window_size()['width'] // 2 + bias[0],
           b.get_window_size()['height'] // 2 + bias[1]]

    # The first few <dt>/<dd> pairs are page chrome, so start at index 4.
    for i in range(4, len(dt)):

        # Skip papers whose primary subject is not Computer Vision.
        if dst_type not in dd[i].find_element(By.CLASS_NAME, 'primary-subject').text:
            continue

        # Skip entries that have no 'pdf' button.
        try:
            dt[i].find_element(By.LINK_TEXT, 'pdf').click()
        except Exception:
            continue

        pause(loading_time)

        # Click the download button of Firefox's built-in PDF viewer.
        b.find_element(By.ID, 'download').click()
        pause(2)

        # Confirm the download dialog with a native mouse click (left button, once).
        m.click(pos[0], pos[1], 1, 1)
        time.sleep(1)

        b.back()
        time.sleep(1)
        # Re-fetch the lists: the old elements go stale after navigating back.
        dt = b.find_elements(By.TAG_NAME, 'dt')
        dd = b.find_elements(By.TAG_NAME, 'dd')

    b.close()


def main():
    if len(sys.argv) != 2:
        print(__doc__)
        return

    with open(sys.argv[1], 'r') as fid:
        urls = [line.strip() for line in fid if line.strip()]

    for url in urls:
        # Lines starting with '#' are commented out and skipped.
        if url.startswith('#'):
            continue
        download(url)


if __name__ == "__main__":
    main()
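
The native mouse click that confirms Firefox's download dialog is the most fragile part of this script: the bias offsets depend on your screen resolution and browser theme. An alternative is to configure Firefox to save PDFs automatically so that no dialog ever appears and PyMouse is not needed. A minimal sketch, using standard Firefox preference names; the download directory is an example you should change:

from selenium import webdriver

def make_auto_download_driver(download_dir='/tmp/arxiv_pdfs'):
    # Build a Firefox driver that saves PDFs into download_dir without asking.
    options = webdriver.FirefoxOptions()
    options.set_preference('browser.download.folderList', 2)  # 2 = use a custom dir
    options.set_preference('browser.download.dir', download_dir)
    options.set_preference('browser.helperApps.neverAsk.saveToDisk', 'application/pdf')
    options.set_preference('pdfjs.disabled', True)  # download instead of opening the viewer
    return webdriver.Firefox(options=options)

With such a driver, the m.click(...) call and the long fixed pause can be dropped.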

2. Usage

python download.py site.txt

site.txt (example)

https://arxiv.org/find/all/1/ti:+AND+object+detection/0/1/0/all/0/1
https://arxiv.org/find/all/1/ti:+AND+object+detection/0/1/0/all/0/1?skip=25&query_id=a6b6ed358647ff57
#https://arxiv.org/find/all/1/ti:+AND+object+detection/0/1/0/all/0/1?skip=50&query_id=a6b6ed358647ff57
https://arxiv.org/find/all/1/ti:+AND+object+detection/0/1/0/all/0/1?skip=75&query_id=a6b6ed358647ff57

You can prefix a URL with # to skip it.
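
If you prefer to generate site.txt programmatically instead of copying result-page URLs by hand, a small helper can emit one URL per results page. A minimal sketch based on the legacy /find/ URLs above, where each page holds 25 results and skip selects the offset; make_site_txt is a hypothetical helper, and query_id must come from an actual search session:

def make_site_txt(base_url, query_id, pages, out_path='site.txt'):
    # Write one search-results URL per page of 25 results.
    with open(out_path, 'w') as fid:
        fid.write(base_url + '\n')
        for page in range(1, pages):
            fid.write('%s?skip=%d&query_id=%s\n' % (base_url, 25 * page, query_id))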

Refer to this post for installing the requirements (Selenium plus geckodriver for Firefox, and PyUserInput, which provides PyMouse).
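
Before running the downloader on a long URL list, it can help to smoke-test the environment. A quick check, assuming Selenium, geckodriver, and PyUserInput are installed:

from selenium import webdriver
from pymouse import PyMouse

b = webdriver.Firefox()      # fails here if geckodriver is missing from PATH
b.get('https://arxiv.org')
print(b.title)
b.quit()
PyMouse()                    # fails here if PyUserInput's backend is unavailable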
