Overview:
I noticed that some kernel .h files contain definitions such as u32 and __u32.
What I don't understand is:
1. Why does some code use u32 while other code uses __u32?
2. Why declare variables as u32 or u8 instead of simply unsigned int xxx or unsigned char xxx?
QUOTE:
Quoted from LDD, Chapter 10
Assigning an Explicit Size to Data Items
Sometimes kernel code requires data items of a specific size, either to match predefined binary structures[39] or to align data within structures by inserting "filler" fields (but please refer to "Data Alignment" later in this chapter for information about alignment issues).
[39]This happens when reading partition tables, when executing a binary file, or when decoding a network packet.
The kernel offers the following data types to use whenever you need to know the size of your data. All the types are declared in <asm/types.h>, which in turn is included by <linux/types.h>:
u8; /* unsigned byte (8 bits) */
u16; /* unsigned word (16 bits) */
u32; /* unsigned 32-bit value */
u64; /* unsigned 64-bit value */
These data types are accessible only from kernel code (i.e., __KERNEL__ must be defined before including <linux/types.h>). The corresponding signed types exist, but are rarely needed; just replace u with s in the name if you need them.
If a user-space program needs to use these types, it can prefix the names with a double underscore: __u8 and the other types are defined independent of __KERNEL__. If, for example, a driver needs to exchange binary structures with a program running in user space by means of ioctl, the header files should declare 32-bit fields in the structures as __u32.
It's important to remember that these types are Linux specific, and using them hinders porting software to other Unix flavors. Systems with recent compilers will support the C99-standard types, such as uint8_t and uint32_t; when possible, those types should be used in favor of the Linux-specific variety. If your code must work with 2.0 kernels, however, use of these types will not be possible (since only older compilers work with 2.0).
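To make the kernel/user-space split concrete, here is a minimal sketch of a header that a driver and a user-space program could share when exchanging a structure through ioctl, as the quote describes. The device name, command number, and field names are hypothetical; only the use of __u32/__u8 follows the text above.
CODE:
/* foo_ioctl.h -- hypothetical header shared by a driver and user space. */
#ifndef FOO_IOCTL_H
#define FOO_IOCTL_H

#include <linux/types.h>   /* __u8/__u32 are usable with or without __KERNEL__ */
#include <linux/ioctl.h>   /* _IOW() macro for building the command number */

struct foo_config {
        __u32 sample_rate; /* always 32 bits, same layout in kernel and user space */
        __u8  channel;     /* always 8 bits */
        __u8  pad[3];      /* explicit filler: keeps the struct at a fixed 8 bytes */
};

#define FOO_IOC_MAGIC  'f'
#define FOO_SET_CONFIG _IOW(FOO_IOC_MAGIC, 1, struct foo_config)

#endif /* FOO_IOCTL_H */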
Basically, the point is to make data types that need an exact bit width independent of the platform (hardware, operating system, and compiler).
CODE:
/* Illustrative only: PLATFORM_INT_SIZE and friends are made-up macros
 * standing in for however a real header detects the width of each type. */
#if PLATFORM_INT_SIZE != 32
#  if PLATFORM_LONG_SIZE == 32
typedef unsigned long u32;
#  elif PLATFORM_LONG_LONG_SIZE == 32
typedef unsigned long long u32;
#  endif
#else
typedef unsigned int u32;
#endif

int a = sizeof(u32);   /* 4 on every platform that defines u32 this way */
This guarantees that the value of a is always 4.
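For reference, the real kernel headers take essentially this approach. The snippet below is a simplified sketch of how __u32 and u32 end up being defined (roughly what asm/types.h and asm-generic/int-ll64.h provide; the exact header and typedefs vary by architecture and kernel version):
CODE:
/* Simplified sketch of the kernel's own fixed-size typedefs; details
 * vary by architecture and kernel version. */
typedef unsigned char      __u8;
typedef unsigned short     __u16;
typedef unsigned int       __u32;
typedef unsigned long long __u64;

#ifdef __KERNEL__
/* The short names are visible to kernel code only. */
typedef __u8  u8;
typedef __u16 u16;
typedef __u32 u32;
typedef __u64 u64;
#endif
In portable user-space code, the C99 <stdint.h> types mentioned in the quote (uint8_t, uint32_t, and so on) give the same fixed-width guarantee.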
Reposted from:
http://linux.chinaunix.net/bbs/viewthread.php?tid=672196