- Blog (104)
Original PyInstaller packaging: 'upx' is not recognized as an internal or external command + AttributeError
While packaging a .py into an .exe with PyInstaller I ran into "'upx' is not recognized as an internal or external command, operable program or batch file" together with AttributeError: 'str' object has no attribute 'decode'. I combed through a lot of references and found that my case was unusual: the cause was that I had edited subprocess.py. There is more than one subprocess.py on my machine, and the one that matters is C:\Users\ZSC\AppData\Local\Programs\Python\Python38\Lib\subprocess.p
2021-10-03 15:38:11
1151
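The 'str' object has no attribute 'decode' error is a Python 2 vs 3 issue: code ported from Python 2 sometimes calls .decode() on subprocess output that has already been decoded to str. A minimal sketch of the failure mode (illustrative only, not PyInstaller's actual code):

```python
# In Python 3, subprocess output arrives as bytes; decoding once yields str.
raw = b"upx 3.96\n"
text = raw.decode("utf-8")   # bytes -> str: fine

# Decoding a second time (a Python-2 habit) raises AttributeError,
# because str has no .decode() method in Python 3.
try:
    text.decode("utf-8")
except AttributeError as exc:
    print(exc)
```

So if a hand-edited subprocess.py reintroduces a stray .decode(), restoring the stock file (or reinstalling Python) is the fix.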
Original 球探.极电竞: solving for the sign value
https://www.jdj007.com/var get_sign;var window = global;!function (e) { function t(t) { for (var n, o, u = t[0], i = t[1], l = t[2], d = 0, s = []; d < u.length; d++) o = u[d], Object.prototype.hasOwnPrope
2021-09-20 22:43:37
1386
Original Qzone password encryption
Lifting the code out by hand didn't work. var password;// var window=global;!function (n) { var i = {}; function o(t) { if (i[t]) return i[t].exports; var e = i[t] = { "i": t, "l": !1, "exports": {} }; ...
2021-09-20 11:07:48
5996
Original Dynamic programming algorithm
def xxx(nums): for i in range(len(nums)): if nums[i]>0: y=nums[i] return ynums=[1,-5,2,4,-3]print(xxx(nums))//1def xxx(nums): for i in range(len(nums)): if nums[i]>0: y=nums[i] retur
2021-09-06 15:37:36
116
Original 长房集团: simulating the password encryption
http://eip.chanfine.com/login.jsp?login_error=1 As the screenshot above shows, AES and DES produce the same encrypted value in this case. var CryptoJS = CryptoJS || function(u, p) { var d = {} , l = d.lib = {} , s = function() {} , t = l.Base = { extend: function(a) { s.protot
2021-09-04 23:33:30
236
Original 网上管家婆
// BarrettMu, a class for performing Barrett modular reduction computations in// JavaScript.//// Requires BigInt.js.//// Copyright 2004-2005 David Shapiro.//// You may use, re-use, abuse, copy, and modify this code to your liking, but// ple...
2021-08-18 09:57:12
223
Original 5Luy5oG65Yac5Lia5bel56iL5a2m6Zmi5pWZ5Yqh572R57uc566h55CG57O757uf
//5Luy5oG65Yac5Lia5bel56iL5a2m6Zmi5pWZ5Yqh572R57uc566h55CG57O757ufconst CryptoJs=require('K:/nodejs/node_global/node_modules/crypto-js')const md5=CryptoJs.MD5var encpwd = md5("201720814306" + md5("123456").toString().substring(0, 30).toUpperCase() + '11
2021-08-09 22:36:01
2029
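The preview above chains MD5 in crypto-js. The same construction can be sketched with Python's hashlib; note the preview truncates after '11, so the trailing "11" below is a stand-in suffix, not the site's real constant:

```python
import hashlib

def md5_hex(s: str) -> str:
    """Hex MD5 digest of a UTF-8 string."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

# Inner hash of the password, first 30 hex chars, upper-cased --
# mirroring md5("123456").toString().substring(0, 30).toUpperCase().
inner = md5_hex("123456")[:30].upper()

# Outer hash: student ID + transformed inner hash + suffix.
# "11" is a hypothetical placeholder for the truncated constant.
encpwd = md5_hex("201720814306" + inner + "11")
print(encpwd)
```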
Original Scrape Center crawler platform, spa10 case: JJEncode obfuscation
Remove the final parentheses, copy, paste, and it becomes the screenshot below. Rewritten code: function anonymous() { const players = [{ name: '凯文-杜兰特', image: 'durant.png', birthday: '1988-09-29', height: '208cm', weight: '108.9KG' }, { name: '勒布朗-詹姆斯', imag.
2021-08-08 21:45:40
489
Original Scrape Center crawler platform, spa13 case: Obfuscator obfuscation
https://spa13.scrape.center/ Decode main.js with the online tool https://tool.lu/js/index.html: const _0x4afa = ['1993-03-11', '79.4KG', '1984-05-29', 'stringify', '128.8KG', '1991-06-29', '198cm', 'davis.png', '208cm', '卡尔-安东尼-唐斯', '188cm', '196cm', 'antetokounmpo.png', '83.9K
2021-08-01 21:27:48
447
Original Scrape Center crawler platform, spa12 case: JSFuck obfuscation
JSFuck: note the underline beneath the very last right parenthesis, then scan down from the first line to find its matching left parenthesis. With luck your eyes won't glaze over before you spot the underlined left parenthesis. Copy everything inside the parentheses and run it in the console to get the result of the JSFuck obfuscation...
2021-08-01 11:45:41
406
Original AES encryption + MD5 encryption
Note: install crypto-js, not crypto.js, otherwise you will get TypeError: Cannot read property 'encrypt' of undefined. //npm install -g crypto-js const CryptoJs=require('D:/nodejs/node_global/node_modules/crypto-js') //AES encryption function encryptByAES(data,aesKey){ var encryptStr=Cryp
2021-07-30 09:35:38
397
Original Ningbo University: extracting the salted value from the page source
import requestsimport redef getHTMLText(url): try: headers={ 'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'Accept-E
2021-07-29 14:43:15
209
Original How to fix npm install jsdom failing
Going by the error message npm WARN cleanup Failed to remove some directories, the error is directory-related, so remove node_modules and reinstall. The suggestion comes from Marvin: https://segmentfault.com/q/1010000040404848
2021-07-28 20:14:25
6657
1
Original Simulating jsencrypt encryption
Reference: https://www.bilibili.com/video/BV1xq4y1H7ud window=global const JSEncrypt=require('K:/nodejs/node_global/node_modules/jsencrypt') let jse=new JSEncrypt() var public_key="MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDXQG8rnxhslm+2f7Epu3bB0inrnCaTHhUQCYE+2X
2021-07-25 11:54:40
231
Original Scrape Center crawler platform, spa3 + spa4 cases
import requestsdef getHTMLText(url): try: r=requests.get(url,timeout=60) r.raise_for_status() r.encoding='utf-8' return r.json() except: print('url:',url)for j in range(10): url=f"https://spa3.scrape.ce
2021-07-21 20:16:10
519
Original Scrape Center crawler platform, spa1 case
import requestsimport osdef getHTMLText(url): try: r=requests.get(url,timeout=30) r.raise_for_status() r.encoding='utf-8' return r.json() except: passdef parseHTML(html,i): id=html['results'][i][
2021-07-21 20:12:30
615
Original Scrape Center crawler platform, ssr3 case
With IE, there is no need to enter the username + password. For crawling, the URL must follow the scheme protocol://username:password@host-or-IP:port/path?query-params. The correct approach: import requests import time from lxml import etree url="https://admin:admin@ssr3.scrape.center/" r=requests.get(url) r.encoding='utf-8' r=r.text print(r) #Internal Server Error
2021-07-21 20:07:47
615
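The username:password@host URL form is just HTTP Basic auth; requests builds the equivalent Authorization header when given auth=(user, password). A sketch of the header construction only (no network call), to show what actually goes on the wire:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # Equivalent of what requests does for auth=(user, password):
    # base64-encode "user:password" and prefix with "Basic ".
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(basic_auth_header("admin", "admin"))  # Basic YWRtaW46YWRtaW4=
```

So requests.get("https://ssr3.scrape.center/", auth=("admin", "admin")) is the cleaner spelling of the credentials-in-URL form above.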
Original Scrape Center crawler platform, spa9 case
import requestsimport redef getHTMLText(url): try: r=requests.get(url,timeout=30) r.raise_for_status() r.encoding='utf-8' return r.text except: passurl="https://spa9.scrape.center/"html=getHTMLTex
2021-07-18 16:59:51
700
1
Original Scrape Center crawler platform, spa10 + spa11 + spa12 + spa13 cases
import requestsimport redef getHTMLText(url): try: r=requests.get(url,timeout=30) r.raise_for_status() r.encoding='utf-8' return r.text except: passurl="https://spa9.scrape.center/"html=getHTMLTex
2021-07-18 16:58:29
621
2
Original SONY: using Python to execute a JS statement that clicks Login
import timet1=time.time()from selenium import webdriverfrom selenium.webdriver.support.ui import WebDriverWaitfrom selenium.webdriver.support import expected_conditions as EC, select, wait from selenium.webdriver.common.by import Byimport refrom sel
2021-07-17 17:02:50
202
Original Scrape Center crawler platform, spa8 case
#Far too slow: scraping that tiny bit of data takes a whole minute; better to switch to JS reverse-engineering. import time t1=time.time() from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC, select, wait from selenium.webdriver.common.
2021-07-17 16:09:18
1388
Original Scrape Center crawler platform, spa7 case
import requestsdef getHTMLText(url): try: r=requests.get(url,timeout=30) r.raise_for_status() r.encoding='utf-8' return r.text[17:2000] except: passurl="https://spa7.scrape.center/js/main.js"html=getHTMLT.
2021-07-15 20:53:55
1027
3
Original Scrape Center crawler platform, spa6 case
First read and understand the Scrape Center spa2 case. import requests import time import hashlib import base64 def getHTMLText(url): try: r=requests.get(url,timeout=60) r.raise_for_status() r.encoding='utf-8' return r.json() except:
2021-07-09 22:39:37
918
Original Scrape Center crawler platform, spa5 case
import requestsimport timet1=time.time()import asyncioimport aiohttpasync def get(session, queue): while True: try: page = queue.get_nowait() except asyncio.QueueEmpty: return url = f"https://spa5.sc
2021-07-08 22:40:04
639
Original Scrape Center crawler platform, spa2 case
References: Zhihu users LLI and ibra146; 会修电脑的程序猿, "scrapy学习之爬虫练习平台2"; Bilibili https://www.bilibili.com/video/BV1Mf4y1s7ds?p=42. The main job is cracking this token value. Analysis of the approach: 1: take the current timestamp time.time() and truncate it to an integer t; suppose t is 1625572736. 2: ["/api/movie", 0, "1625572736"] → /api/movie,0,1625572736; run /api/movie,0,1625572736 through SHA1 encryption
2021-07-06 22:34:10
1841
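The two steps above (comma-join the pieces, then SHA1) can be sketched as follows. The preview cuts off at the SHA1 step, so the final Base64-of-"<sha1>,<t>" step below is an assumption about what comes next, not taken from the post:

```python
import base64
import hashlib

def make_token(path: str = "/api/movie", offset: int = 0, t: int = 1625572736) -> str:
    # Step 2: join the pieces with commas, as described in the analysis.
    payload = ",".join([path, str(offset), str(t)])
    # SHA1 over the joined string.
    sign = hashlib.sha1(payload.encode("utf-8")).hexdigest()
    # Assumed remaining step: Base64-encode "<sha1hex>,<timestamp>".
    return base64.b64encode(f"{sign},{t}".encode("utf-8")).decode("ascii")

print(make_token())
```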
Original Scrape Center crawler platform, ssr4 case
#Asynchronously crawl the detail pages import time from requests.exceptions import Timeout t1=time.time() import requests from lxml import etree #Asynchronously crawl the detail pages import asyncio import aiohttp template = 'https://ssr4.scrape.center/detail/{page}' async def get(session, queue): while True:.
2021-07-03 15:43:22
783
Original Scrape Center crawler platform, ssr1 + ssr2 cases
import requestsimport timefrom lxml import etreefor i in range(1,11): url=f"https://ssr1.scrape.center/page/{i}" r=requests.get(url) r.encoding='utf-8' r=r.text selector=etree.HTML(r) for j in range(1,11): x1=f'//*[@id="i
2021-07-03 12:41:56
1082
Original Scrape
import randomimport timefrom selenium import webdriverfrom selenium.webdriver.support.ui import WebDriverWaitfrom selenium.webdriver.support import expected_conditions as EC, select, wait from selenium.webdriver.common.by import Byfrom selenium.webdr
2021-07-02 15:47:30
424
Original Huawei Cloud - Dongwu Cup click
Clicking the link by hand goes straight to the other page, but strangely the selenium simulation has to click repeatedly with a pause in between: the first click on the link is rejected and a login box pops up; only after closing it can the link be clicked to jump to the target page: import random import time from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC,..
2021-06-21 22:18:03
151
Original try...except...else
import randomimport timefrom selenium import webdriverfrom selenium.webdriver.support.ui import WebDriverWaitfrom selenium.webdriver.support import expected_conditions as EC, select, wait from selenium.webdriver.common.by import Bydriver=webdriver.Ch
2021-06-20 11:44:29
81
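For reference, the point of try...except...else is that the else branch runs only when the try block raises nothing. A minimal self-contained sketch:

```python
def parse_int(text: str):
    try:
        value = int(text)
    except ValueError:
        # Runs only when int() raised.
        return None
    else:
        # Runs only when int() did NOT raise.
        return value * 2

print(parse_int("21"))   # 42
print(parse_int("abc"))  # None
```

Putting the success path in else (rather than at the bottom of try) keeps the except clause from accidentally swallowing errors raised by the follow-up code.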
Original ThreadPoolExecutor usage syntax
import timestart_time=time.time()from concurrent.futures import ThreadPoolExecutor,as_completedimport requestsfrom lxml import etreeurl='https://www.soshuw.com/GuiMiZhiZhu/'r=requests.get(url)r.encoding='utf-8'r=r.textselector=etree.HTML(r)s_xpa.
2021-06-13 17:12:14
159
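The core syntax the post covers, submit plus as_completed, in a self-contained sketch (squaring numbers instead of fetching chapters, so it runs offline):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(n: int) -> int:
    return n * n

results = []
with ThreadPoolExecutor(max_workers=4) as pool:
    # submit() returns a Future immediately; work runs on pool threads.
    futures = [pool.submit(square, n) for n in range(5)]
    # as_completed() yields futures in completion order, not submit order.
    for fut in as_completed(futures):
        results.append(fut.result())

print(sorted(results))  # [0, 1, 4, 9, 16]
```

Because as_completed yields in completion order, results must be sorted (or keyed back to their inputs) when order matters, which is exactly the pitfall when writing scraped chapters to a file.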
Original Caveats for XPaths copied from Chrome
Reference: "Why you shouldn't casually use an XPath copied from Chrome?", original by kingname, 未闻Code, https://mp.weixin.qq.com/s/noRWHeBH2ErnmvP3J8minA import requests from lxml import etree url="https://cn.bing.com/?scope=web&FORM=ANNTH1" r = requests.get(url) r.encoding='utf-8' r=r.text selector=etre
2021-06-12 10:17:17
1400
Original The black magic of multithreaded writes
import queueimport requestsimport timeimport randomimport threadingfrom bs4 import BeautifulSoupurls = [ f"https://www.soshuw.com/GuiMiZhiZhu/25708{page}.html" for page in range(59, 99)]def craw(url): r = requests.get(url) return
2021-06-10 15:11:48
118
2
Original Multithreaded queue: why does commenting out line 49 of the script stop the data from being written to 1.txt? And doesn't passing the argument from line 66 to line 68 work either?
from concurrent import futuresimport timestart_time=time.time()print(start_time)from concurrent.futures import ThreadPoolExecutor,as_completedimport urllib3urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)import threadingimport re
2021-06-09 15:41:34
133
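The question above concerns feeding a queue.Queue from worker threads and draining it in a single writer. A sketch of that pattern (appending to a list instead of 1.txt; the sentinel put is the step that, if commented out, leaves the writer blocked and nothing flushed):

```python
import queue
import threading

q: "queue.Queue[str]" = queue.Queue()
lines = []  # stand-in for the 1.txt file handle in the post

def producer(n: int) -> None:
    q.put(f"line {n}")

def consumer() -> None:
    while True:
        item = q.get()
        if item is None:        # sentinel: stop draining
            break
        lines.append(item)

writer = threading.Thread(target=consumer)
writer.start()
workers = [threading.Thread(target=producer, args=(i,)) for i in range(5)]
for w in workers:
    w.start()
for w in workers:
    w.join()
q.put(None)                     # without this, consumer blocks forever
writer.join()
print(len(lines))  # 5
```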
Original Python multithreading: caveats for threading.Thread's args parameter
__init__(self, group=None, target=None, name=None, args=(), kwargs=None, *, daemon=None) This constructor should always be called with keyword arguments. Arguments are: group should be None; reserved for future extension when a ThreadGroup
2021-06-07 09:07:17
6861
2
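The docstring above is the crux: args must be a tuple, and a one-element tuple needs a trailing comma. ("hello") without the comma is just a string, and Thread unpacks it character by character. A sketch:

```python
import threading

received = []

def record(msg: str) -> None:
    received.append(msg)

# Right: a one-element tuple (note the trailing comma).
t = threading.Thread(target=record, args=("hello",))
t.start()
t.join()

# Wrong: ("hello") is not a tuple at all, just the string "hello" --
# Thread would unpack it and call record('h', 'e', 'l', 'l', 'o').
not_a_tuple = ("hello")
print(isinstance(not_a_tuple, tuple), received)  # False ['hello']
```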