[Project] Simulate HTTP POST Requests to Obtain Data from a Web Page Using the Python Scrapy Framework

This article describes a method for generating and filtering Chinese girls' names in batches using web scraping and an automated script: characters suitable for girls' names are scraped from the web, and the quality of each name is evaluated according to the philosophy of "The Book of Changes (易经)".

1. Background

Though it is always difficult to give a child a perfect name, parents never give up trying. One of my friends ran into this problem: his baby girl had just come into the world, and he wanted to find a perfect name for her. He found a web page where he could input a baby name, birthday, and birth time, and the page would return two scores indicating whether the name is good or bad for the baby according to the old Chinese philosophy of "The Book of Changes (易经)". Both scores range from 0 to 100. My friend asked me whether it would be possible to write a script that inputs thousands of popular names in batches, so that he could then pick a name from among the top-scoring ones, for example names with both scores over 95.

The website

https://www.threetong.com/ceming/

2. Analysis and Plan

A Chinese given name usually consists of one or two Chinese characters, and two-character given names are currently the more popular choice. My friend wanted a two-character given name as well. Since the baby girl's family name is known (the same as her father's), I just need to generate thousands of given names suitable for a girl, automatically submit them to the website, and finally obtain the displayed scores.

3. Step

A, Obtain Chinese characters suitable for naming a girl

Traditionally, there is a set of characters commonly used for naming girls. I found these characters on the website http://xh.5156edu.com; there are 800 characters in total, and the number of possible two-character combinations is 800 x 800 = 640,000, which means 640,000 candidate given names to submit. The Scrapy code is below, and a sketch for converting its output into the CSV files used later follows the code.

#spider code
# -*- coding: utf-8 -*-
import scrapy
from getName.items import GetnameItem

class DownnameSpider(scrapy.Spider):
    name = 'downName'
    start_urls = ['http://xh.5156edu.com/xm/nu.html']  # default http request

    # default handler function for the http response (a callback function)
    def parse(self, response):
        item = GetnameItem()
        item['ming'] = response.xpath('//a[@class="fontbox"]/text()').extract()
        yield item

#define an item to store the characters
import scrapy

class GetnameItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    ming = scrapy.Field()
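The spider in section 4C below reads its candidate characters from two CSV files, ming1.csv and ming2.csv. The original article does not show how those files were produced; the following is only a minimal sketch of one way to turn the scraped character list into them, assuming the spider above was run with "scrapy crawl downName -o ming.json".

# Sketch (not the author's original helper): build ming1.csv / ming2.csv from the exported items.
import csv
import json

with open('ming.json', encoding='utf-8') as f:
    items = json.load(f)  # Scrapy's -o JSON export is a list of item dicts

# Each item holds the list of characters extracted from the page.
characters = [ch for item in items for ch in item['ming']]

# Write the same character set to both files. 'utf-8-sig' adds a BOM, which is why the
# later spider reads the column as '\ufeffming1' / '\ufeffming2'.
for filename, header in (('ming1.csv', 'ming1'), ('ming2.csv', 'ming2')):
    with open(filename, 'w', encoding='utf-8-sig', newline='') as out:
        writer = csv.writer(out)
        writer.writerow([header])
        writer.writerows([ch] for ch in characters)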

B, Write a script that automatically submits every name together with the baby's birth time to the website, obtains the corresponding scores, filters out the names with top scores, and shows them to the kid's parents.

4. Key Point

A, How to quickly write the XPath of the objects you want to capture

If you are not quite familiar with XPath grammar, Chrome and Firefox provide developer tools to help you out.

Take Chrome as an example.

Open the web page in Chrome, right-click the object you want to capture, and select "Inspect".

Chrome will then open Developer Tools and highlight the object in the HTML source code. Right-click the object again in the source code, select Copy, then select Copy XPath.

In this way, you can get the XPath of the object "静" in this example. If you want to get all objects similar to "静", you need to know basic XPath grammar, like the line below:

 item['ming'] = response.xpath('//a[@class="fontbox"]/text()').extract()

This line finds all "a" tags with the class name "fontbox", extracts their text, and finally assigns the result to the item.

These objects are shown below. In this way I can obtain all 800 characters.
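If you want to verify an XPath expression before putting it into a spider, the Scrapy shell is a quick way to test it (a general Scrapy tip, not part of the original write-up):

scrapy shell 'http://xh.5156edu.com/xm/nu.html'
>>> response.xpath('//a[@class="fontbox"]/text()').extract()[:5]

The second line, run inside the shell, prints the first five extracted characters.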

 

B, How to input data into a web page automatically

Auto-input here does not mean simulating human operation, i.e. finding the input box and clicking submit. Instead, it means directly sending an HTTP request carrying the data to the target web page. There are two common ways to send data through an HTTP request: HTTP GET and HTTP POST. Let's first check the web page https://www.threetong.com/ceming/ to see which method it uses to send data. Thanks to the Developer Tools in Chrome, we can do this easily.
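For reference, here is a minimal sketch of how the two methods differ in Scrapy (the example.com URLs and the q field are placeholders, not the real site):

import scrapy

# HTTP GET: the data is encoded into the URL's query string.
get_request = scrapy.Request(url='https://example.com/search?q=name')

# HTTP POST: the data travels in the request body as form fields.
post_request = scrapy.FormRequest(url='https://example.com/submit',
                                  formdata={'q': 'name'})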

First, try to input some data on the page https://www.threetong.com/ceming/.

Then open Chrome Developer Tools and check the source code (Elements tab).

We can see that the form is sent via HTTP POST, and the target web page follows the action keyword.

To continue, we click "姓名测试", which is the submit button; the page then redirects to the target web page, where the scores are displayed.

Go to Developer Tools again, open the Network tab, select the xingmingceshi.php page, and select Headers; the detailed information about the page is shown.

The exact form data is shown below.

So we just need to send an HTTP POST request to the target web page with the above form data.

C, Code

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
# Define an item to save the data
import scrapy

class DaxiangnameItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    score1 = scrapy.Field()
    score2 = scrapy.Field()
    name = scrapy.Field()
# -*- coding: utf-8 -*-
import scrapy
import csv
# import the item defined above
from daxiangName.items import DaxiangnameItem

class CemingSpider(scrapy.Spider):
    name = 'ceming'

    # This is Scrapy's default entry function; the program starts from this function.
    def start_requests(self):
        # Build two-character combinations with two nested for-loops, reading one character
        # from a file at a time. UTF-8 is used to avoid Chinese character display issues.
        with open(getattr(self, 'file', './ming1.csv'), encoding='UTF-8') as f:
            reader = csv.DictReader(f)
            for line in reader:
                #print(line['\ufeffmingzi'])
                with open(getattr(self, 'file2', './ming2.csv'), encoding='UTF-8') as f2:
                    reader2 = csv.DictReader(f2)
                    for line2 in reader2:
                        #print(line)
                        #print(line2)  # I used print() to find that there is a \ufeff character ahead of the real data
                        mingzi = line['\ufeffming1'] + line2['\ufeffming2']
                        #print(mingzi)
                        # Below is the core part: Scrapy provides FormRequest, which sends an HTTP POST request with parameters.
                        form_request = scrapy.http.FormRequest(
                            url='https://www.threetong.com/ceming/baziceming/xingmingceshi.php',
                            formdata={'isbz': '1',
                                      'txtName': u'刘',
                                      'name': mingzi,
                                      'rdoSex': '0',
                                      'data_type': '0',
                                      'cboYear': '2017',
                                      'cboMonth': '7',
                                      'cboDay': '30',
                                      'cboHour': u'20-戌时',
                                      'cboMinute': u'39分',
                                      },
                            # Specify the callback function, i.e. which function handles the HTTP response.
                            callback=self.after_login
                        )
                        # This part is important: Scrapy keeps a pool of HTTP requests to be sent. The yield
                        # keyword turns this function into a generator, putting the current request into that pool.
                        yield form_request

    def after_login(self, response):
        '''# save the response body into a file
        filename = 'source.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
        '''
        # Extract the scores from the HTTP response using XPath, keeping only the integer and
        # decimal parts with a regular expression.
        score1 = response.xpath('/html/body/div[6]/div/div[2]/div[3]/div[1]/span[1]/text()').re(r'[\d.]+')
        score2 = response.xpath('/html/body/div[6]/div/div[2]/div[3]/div[1]/span[2]/text()').re(r'[\d.]+')
        name = response.xpath('/html/body/div[6]/div/div[2]/div[3]/ul[1]/li[1]/text()').extract()
        #print(score1)
        #print(score2)
        print(name)

        # Keep only names with good scores.
        if float(score1[0]) >= 95 and float(score2[0]) >= 95:
            item = DaxiangnameItem()
            item['score1'] = score1
            item['score2'] = score2
            item['name'] = name
            # There is also an output pool; we use yield to make a generator here as well, putting the
            # output item into the pool. These items can be exported with the -o parameter when running Scrapy.
            yield item
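To run the spider and export the filtered items, the standard Scrapy command line can be used, for example (the output file name is arbitrary):

scrapy crawl ceming -o good_names.csv

Scrapy will write every item yielded by after_login into good_names.csv.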

5. Epilogue

Unexpectedly, I got about 4,000 names with good scores, which made manual selection a real headache for my friend. So I decided to remove some of the input characters and keep only the characters my friend likes. Finally, I got dozens of candidate names to choose from.
