Streams in Programs (IO)

Java IO Streams Explained: Concepts, Types, and Applications
This article examines the concept of Java IO streams, covering stream sources and destinations, data forms, and directions. It explains the difference between byte streams and character streams and the classification of input and output streams (InputStream, OutputStream, Reader, and Writer). It also introduces node streams and filter streams, such as FileInputStream and ObjectInputStream, and analyzes the stream class hierarchy and its role in data transfer.

I. Concept: a stream is an abstraction of data transfer. Any object capable of producing data (a source) or of receiving data (a sink) can collectively be called a stream.

Stream sources: a stream can read from or write to a local file, or be obtained over the network.

Transfer form: byte stream / character stream

Enhanced streams: intermediate layers added to augment a stream's capabilities (for example, the different conversion strategies applied between the origin and the destination).

Stream destination: in most cases the data is ultimately delivered over the network.

Direction of flow: input and output

II. The Three Elements of a Stream

Source and destination: file / byte array / pipe / char array / String object / network / another stream
Data form: characters / bytes
Direction: input / output
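The difference between the two data forms can be seen with in-memory sources. In this minimal sketch (class and method names are my own), the same text is measured once through a byte-oriented stream and once through a character-oriented one:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.StringReader;

public class StreamSources {
    // Byte-oriented source: count the bytes readable from an in-memory array.
    static int countBytes(byte[] data) throws IOException {
        ByteArrayInputStream in = new ByteArrayInputStream(data);
        int n = 0;
        while (in.read() != -1) n++;
        return n;
    }

    // Character-oriented source: count the chars readable from a String.
    static int countChars(String s) throws IOException {
        StringReader in = new StringReader(s);
        int n = 0;
        while (in.read() != -1) n++;
        return n;
    }

    public static void main(String[] args) throws IOException {
        // "流" is a single character but occupies 3 bytes in UTF-8.
        System.out.println(countBytes("流".getBytes("UTF-8"))); // 3
        System.out.println(countChars("流"));                   // 1
    }
}
```

A character stream always delivers whole characters, while a byte stream may split one character across several reads, which is why the two counts differ for non-ASCII text.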

III. The Java IO Stream Hierarchy

1. By direction and data form, there are four types:

byte input
byte output
character input
character output

2. The four corresponding abstract base classes:

Byte input: InputStream

Byte output: OutputStream

Character input: Reader

Character output: Writer
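The four base classes above share the same read/write loop shape. A sketch with in-memory endpoints (class names are illustrative) copies data byte by byte through InputStream/OutputStream and character by character through Reader/Writer:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.Reader;
import java.io.StringReader;
import java.io.StringWriter;

public class FourBaseTypes {
    // Byte copy: InputStream -> OutputStream, one byte at a time.
    static byte[] copyBytes(byte[] src) throws IOException {
        InputStream in = new ByteArrayInputStream(src);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) out.write(b);
        return out.toByteArray();
    }

    // Character copy: Reader -> Writer, one char at a time.
    static String copyChars(String src) throws IOException {
        Reader in = new StringReader(src);
        StringWriter out = new StringWriter();
        int c;
        while ((c = in.read()) != -1) out.write(c);
        return out.toString();
    }
}
```

The loop is identical in both cases; only the unit of transfer (byte vs. char) changes, which is the whole point of the two parallel hierarchies.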

3. Diagram of the classification

4. Node streams: Java's direct operations on basic data sources.

   Filter streams: wrappers that enhance a stream's processing capabilities.
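One way to see the node/filter split is to layer DataOutputStream and DataInputStream (filter streams that add typed reads and writes) over in-memory byte-array streams (node streams). A sketch with an illustrative class name:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FilterStreamDemo {
    // Write an int through a filter stream, then read it back.
    static int roundTrip(int value) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream(); // node stream
        DataOutputStream out = new DataOutputStream(buf);        // filter stream
        out.writeInt(value);
        out.flush();
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        return in.readInt();
    }
}
```

The node stream only moves raw bytes; the wrapping filter stream contributes the extra capability (here, encoding and decoding a 4-byte int), which is exactly the "enhancement" role described above.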

5. The stream hierarchy in detail:

(1) Data source + InputStream [node streams]: each class below combines InputStream with a concrete data source.

ByteArrayInputStream (java.io): byte-array input stream ---> reads data from a byte array, that is, from memory. It contains an internal buffer over the byte array and an internal counter that tracks the next byte the read method will supply. The methods of this class can still be called after the stream is closed without producing an IOException.
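A small sketch (names are my own) of ByteArrayInputStream, including the point above that reads still work after close():

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class ByteArrayDemo {
    // Read one byte, close the stream, then read again.
    static int[] readAcrossClose(byte[] data) throws IOException {
        ByteArrayInputStream in = new ByteArrayInputStream(data);
        int first = in.read();
        in.close();             // close() has no effect on this class
        int second = in.read(); // still succeeds: no IOException is thrown
        return new int[]{first, second};
    }
}
```

Because the "source" is just an array already in memory, there is no underlying resource to release, which is why close() is a no-op here.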

FileInputStream (java.io): file input stream ---> used to read data from a file.
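A minimal FileInputStream sketch, assuming a temporary file created only for the demonstration (class and method names are my own):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileStreamDemo {
    // Read an ASCII file byte by byte into a String.
    static String readAll(Path path) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (FileInputStream in = new FileInputStream(path.toFile())) {
            int b;
            while ((b = in.read()) != -1) sb.append((char) b);
        } // try-with-resources closes the file even if read() throws
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "hello".getBytes("US-ASCII"));
        System.out.println(readAll(tmp)); // hello
    }
}
```

Unlike ByteArrayInputStream, a FileInputStream holds an operating-system file handle, so closing it matters; try-with-resources is the idiomatic way to guarantee that.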

PipedInputStream (java.io): pipe input stream ---> must be connected to a PipedOutputStream, and supplies all the data written to that output end. [Data is read from the PipedInputStream object while another thread writes to the corresponding PipedOutputStream.]
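The piped pair is meant for two threads, as noted above. In this sketch (illustrative names), a writer thread feeds a PipedOutputStream while the calling thread drains the connected PipedInputStream:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeDemo {
    // Send bytes through a pipe: one thread writes, the caller reads.
    static byte[] sendThroughPipe(byte[] data) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out); // connect the two ends
        Thread writer = new Thread(() -> {
            try (OutputStream o = out) {
                o.write(data); // closing out signals end-of-stream to the reader
            } catch (IOException ignored) { }
        });
        writer.start();
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) result.write(b);
        writer.join();
        return result.toByteArray();
    }
}
```

Using a single thread for both ends risks deadlock once the pipe's internal buffer fills, which is why the write side runs in its own thread here.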

String: StringBufferInputStream (java.io) ---> reads bytes from a String (deprecated; StringReader is the modern replacement).

ObjectInputStream (java.io): object input stream ---> deserializes primitive data and objects previously written with an ObjectOutputStream.
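A round-trip sketch for ObjectInputStream using in-memory streams (class and method names are my own):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ObjectStreamDemo {
    // Serialize an object to bytes, then deserialize it back.
    static Object roundTrip(Serializable obj)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(obj);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            return in.readObject();
        }
    }
}
```

Only objects whose classes implement Serializable can pass through this pair; readObject() returns a new, structurally equal copy of what was written.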
