What is ramfs?
--------------

Ramfs is a very simple filesystem that exports Linux's disk caching mechanisms (the page cache and dentry cache) as a dynamically resizable RAM-based filesystem.

Normally all files are cached in memory by Linux.  Pages of data read from
backing store (usually the block device the filesystem is mounted on) are kept
around in case they're needed again, but marked as clean (freeable) in case the
Virtual Memory system needs the memory for something else.  Similarly, data
written to files is marked clean as soon as it has been written to backing
store, but kept around for caching purposes until the VM reallocates the
memory.  A similar mechanism (the dentry cache) greatly speeds up access to
directories.
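
This is easy to observe from userspace; for example, reading any sizable file
grows the "Cached" figure in /proc/meminfo (the file read here is arbitrary,
and the numbers will vary):

  grep ^Cached: /proc/meminfo     # page cache size before
  cat /boot/vmlinuz > /dev/null   # read a file, populating the page cache
  grep ^Cached: /proc/meminfo     # page cache size after (larger)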

With ramfs, there is no backing store.  Files written into ramfs allocate dentries and page cache as usual, but there's nowhere to write them to.
This means the pages are never marked clean, so they can't be freed by the VM when it's looking to recycle memory.

The amount of code required to implement ramfs is tiny, because all the work is done by the existing Linux caching infrastructure.  Basically,
you're mounting the disk cache as a filesystem.  Because of this, ramfs is not an optional component removable via menuconfig, since there would be negligible space savings.
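
Trying it out takes nothing but the mount command itself (the mount point here
is an arbitrary choice):

  mkdir /mnt/ram
  mount -t ramfs ramfs /mnt/ram
  echo hello > /mnt/ram/hello   # this file exists only in the page and dentry caches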

ramfs and ramdisk:
------------------

The older "ram disk" mechanism created a synthetic block device out of an area of RAM and used it as backing store for a filesystem.  This block
device was of fixed size, so the filesystem mounted on it was of fixed size.  Using a ram disk also required unnecessarily copying memory from the
fake block device into the page cache (and copying changes back out), as well as creating and destroying dentries.  Plus it needed a filesystem driver
(such as ext2) to format and interpret this data.

Compared to ramfs, this wastes memory (and memory bus bandwidth), creates
unnecessary work for the CPU, and pollutes the CPU caches.  (There are tricks
to avoid this copying by playing with the page tables, but they're unpleasantly
complicated and turn out to be about as expensive as the copying anyway.)
More to the point, all the work ramfs is doing has to happen _anyway_,
since all file access goes through the page and dentry caches.  The RAM
disk is simply unnecessary; ramfs is internally much simpler.

Another reason ramdisks are semi-obsolete is that the introduction of
loopback devices offered a more flexible and convenient way to create
synthetic block devices, now from files instead of from chunks of memory.
See losetup (8) for details.
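
For example, a synthetic block device can be built from an ordinary file like
so (the sizes, paths, and loop device number are all arbitrary):

  dd if=/dev/zero of=disk.img bs=1M count=16   # a 16 MB file as backing store
  losetup /dev/loop0 disk.img                  # bind it to a loop block device
  mke2fs /dev/loop0                            # put a real filesystem on it
  mount /dev/loop0 /mnt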

ramfs and tmpfs:
----------------

One downside of ramfs is that you can keep writing data into it until you fill
up all memory, and the VM can't free it because the VM thinks that files
should get written to backing store (rather than swap space), but ramfs hasn't
got any backing store.  Because of this, only root (or a trusted user) should
be allowed write access to a ramfs mount.

A ramfs derivative called tmpfs was created to add size limits, and the ability to write the data to swap space.  Normal users can be allowed write access to
tmpfs mounts.  See Documentation/filesystems/tmpfs.txt for more information.
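
For example (the mount points and the 64m figure are arbitrary):

  mount -t tmpfs -o size=64m tmpfs /mnt/tmp   # writes fail once 64m is used
  mount -t ramfs -o size=64m ramfs /mnt/ram   # ramfs silently ignores size=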

What is rootfs?
---------------

Rootfs is a special instance of ramfs (or tmpfs, if that's enabled), which is always present in 2.6 systems.  You can't unmount rootfs for approximately the
same reason you can't kill the init process; rather than having special code to check for and handle an empty list, it's smaller and simpler for the kernel
to just make sure certain lists can't become empty.

Most systems just mount another filesystem over rootfs and ignore it.  The amount of space an empty instance of ramfs takes up is tiny.
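
On a typical 2.6 system, rootfs still shows up at the top of /proc/mounts even
after the real root has been mounted over it:

  head -1 /proc/mounts
  rootfs / rootfs rw 0 0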

What is initramfs?
------------------

All 2.6 Linux kernels contain a gzipped "cpio" format archive, which is
extracted into rootfs when the kernel boots up.  After extracting, the kernel
checks to see if rootfs contains a file "init", and if so it executes it as
PID 1.  If found, this init process is responsible for bringing the system the
rest of the way up, including locating and mounting the real root device (if
any).  If rootfs does not contain an init program after the embedded cpio
archive is extracted into it, the kernel will fall through to the older code
to locate and mount a root partition, then exec some variant of /sbin/init
out of that.

All this differs from the old initrd in several ways:

  - The old initrd was always a separate file, while the initramfs archive is
    linked into the linux kernel image.  (The directory linux-*/usr is devoted
    to generating this archive during the build.)

  - The old initrd file was a gzipped filesystem image (in some file format, such as ext2, that needed a driver built into the kernel), while the new
    initramfs archive is a gzipped cpio archive (like tar only simpler, see cpio(1) and Documentation/early-userspace/buffer-format.txt).  The
    kernel's cpio extraction code is not only extremely small, it's also __init text and data that can be discarded during the boot process.

  - The program run by the old initrd (which was called /initrd, not /init) did some setup and then returned to the kernel, while the init program from
    initramfs is not expected to return to the kernel.  (If /init needs to hand off control it can overmount / with a new root device and exec another init
    program.  See the switch_root utility, below.)

  - When switching to another root device, initrd would pivot_root and then
    umount the ramdisk.  But initramfs is rootfs: you can neither pivot_root
    rootfs, nor unmount it.  Instead delete everything out of rootfs to
    free up the space (find -xdev / -exec rm '{}' ';'), overmount rootfs
    with the new root (cd /newmount; mount --move . /; chroot .), attach
    stdin/stdout/stderr to the new /dev/console, and exec the new init.

    Since this is a remarkably persnickety process (and involves deleting
    commands before you can run them), the klibc package introduced a helper
    program (utils/run_init.c) to do all this for you.  Most other packages
    (such as busybox) have named this command "switch_root"; a short sketch
    of an /init script using it follows this list.
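
A minimal /init sketch, assuming a busybox-based initramfs whose archive
contains an empty /newroot directory and a /dev/sda1 device node (the root
device is an illustrative assumption; a real script would locate it
dynamically):

  #!/bin/sh
  # Hypothetical /init: mount the real root, then hand control to it.
  mount -t proc proc /proc           # some mount implementations need /proc
  mount -o ro /dev/sda1 /newroot     # the real root device (an assumption)
  umount /proc
  # switch_root empties rootfs, moves /newroot to /, and execs the new init,
  # reattaching stdin/stdout/stderr to the new /dev/console along the way.
  exec switch_root -c /dev/console /newroot /sbin/init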

Populating initramfs:
---------------------

The 2.6 kernel build process always creates a gzipped cpio format initramfs archive and links it into the resulting kernel binary.  By default, this
archive is empty (consuming 134 bytes on x86).

The config option CONFIG_INITRAMFS_SOURCE (in General Setup in menuconfig, and living in usr/Kconfig) can be used to specify a source for the
initramfs archive, which will automatically be incorporated into the resulting binary.  This option can point to an existing gzipped cpio
archive, a directory containing files to be archived, or a text file specification such as the following example:

  dir /dev 755 0 0
  nod /dev/console 644 0 0 c 5 1
  nod /dev/loop0 644 0 0 b 7 0
  dir /bin 755 1000 1000
  slink /bin/sh busybox 777 0 0
  file /bin/busybox initramfs/busybox 755 0 0
  dir /proc 755 0 0
  dir /sys 755 0 0
  dir /mnt 755 0 0
  file /init initramfs/init.sh 755 0 0

Run "usr/gen_init_cpio" (after the kernel build) to get a usage message documenting the above file format.

One advantage of the configuration file is that root access is not required to set permissions or create device nodes in the new archive.  (Note that those
two example "file" entries expect to find files named "init.sh" and "busybox" in a directory called "initramfs", under the linux-2.6.* directory.  See
Documentation/early-userspace/README for more details.)

The kernel does not depend on external cpio tools.  If you specify a directory instead of a configuration file, the kernel's build infrastructure
creates a configuration file from that directory (usr/Makefile calls scripts/gen_initramfs_list.sh), and proceeds to package up that directory
using the config file (by feeding it to usr/gen_init_cpio, which is created from usr/gen_init_cpio.c).  The kernel's build-time cpio creation code is
entirely self-contained, and the kernel's boot-time extractor is also (obviously) self-contained.

The one thing you might need external cpio utilities installed for is creating or extracting your own preprepared cpio files to feed to the kernel build
(instead of a config file or directory).

The following command line can extract a cpio image (either by the above script or by the kernel build) back into its component files:

  cpio -i -d -H newc -F initramfs_data.cpio --no-absolute-filenames
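
To merely list an archive's contents without extracting anything, cpio's
table-of-contents mode also works:

  cpio -it < initramfs_data.cpio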

The following shell script can create a prebuilt cpio archive you can use in place of the above config file:

  #!/bin/sh

  # Copyright 2006 Rob Landley <rob@landley.net> and TimeSys Corporation.
  # Licensed under GPL version 2

  if [ $# -ne 2 ]
  then
    echo "usage: mkinitramfs directory imagename.cpio.gz"
    exit 1
  fi

  if [ -d "$1" ]
  then
    echo "creating $2 from $1"
    (cd "$1"; find . | cpio -o -H newc | gzip) > "$2"
  else
    echo "First argument must be a directory"
    exit 1
  fi

Note: The cpio man page contains some bad advice that will break your initramfs
archive if you follow it.  It says "A typical way to generate the list
of filenames is with the find command; you should give find the -depth option
to minimize problems with permissions on directories that are unwritable or not
searchable."  Don't do this when creating initramfs.cpio.gz images, it won't
work.  The Linux kernel cpio extractor won't create files in a directory that
doesn't exist, so the directory entries must go before the files that go in
those directories.  The above script gets them in the right order.

External initramfs images:
--------------------------

If the kernel has initrd support enabled, an external cpio.gz archive can also
be passed into a 2.6 kernel in place of an initrd.  In this case, the kernel
will autodetect the type (initramfs, not initrd) and extract the external cpio
archive into rootfs before trying to run /init.
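
With GRUB (legacy), for instance, this just means pointing the initrd line at
the cpio.gz archive; the paths and kernel version below are illustrative:

  title  Linux with external initramfs
  kernel /boot/vmlinuz-2.6.16
  initrd /boot/my-initramfs.cpio.gz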

This has the memory efficiency advantages of initramfs (no ramdisk block
device) but the separate packaging of initrd (which is nice if you have
non-GPL code you'd like to run from initramfs, without conflating it with
the GPL licensed Linux kernel binary).

It can also be used to supplement the kernel's built-in initramfs image.  The
files in the external archive will overwrite any conflicting files in
the built-in initramfs archive.  Some distributors also prefer to customize
a single kernel image with task-specific initramfs images, without recompiling.

Contents of initramfs:
----------------------

An initramfs archive is a complete self-contained root filesystem for Linux.
If you don't already understand what shared libraries, devices, and paths
you need to get a minimal root filesystem up and running, here are some
references:
http://www.tldp.org/HOWTO/Bootdisk-HOWTO/
http://www.tldp.org/HOWTO/From-PowerUp-To-Bash-Prompt-HOWTO.html
http://www.linuxfromscratch.org/lfs/view/stable/

The "klibc" package (http://www.kernel.org/pub/linux/libs/klibc) is
designed to be a tiny C library to statically link early userspace
code against, along with some related utilities.  It is BSD licensed.

I use uClibc (http://www.uclibc.org) and busybox (http://www.busybox.net)
myself.  These are LGPL and GPL, respectively.  (A self-contained initramfs
package is planned for the busybox 1.3 release.)

In theory you could use glibc, but that's not well suited for small embedded
uses like this.  (A "hello world" program statically linked against glibc is
over 400k.  With uClibc it's 7k.  Also note that glibc dlopens libnss to do
name lookups, even when otherwise statically linked.)

A good first step is to get initramfs to run a statically linked "hello world"
program as init, and test it under an emulator like qemu (www.qemu.org) or
User Mode Linux, like so:

  cat > hello.c << EOF
  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char *argv[])
  {
    printf("Hello world!\n");
    sleep(999999999);  /* PID 1 must never exit: the kernel panics if init dies */
    return 0;
  }
  EOF
  gcc -static hello.c -o init
  echo init | cpio -o -H newc | gzip > test.cpio.gz
  # Testing external initramfs using the initrd loading mechanism.
  qemu -kernel /boot/vmlinuz -initrd test.cpio.gz /dev/zero

When debugging a normal root filesystem, it's nice to be able to boot with
"init=/bin/sh".  The initramfs equivalent is "rdinit=/bin/sh", and it's
just as useful.
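
With the qemu invocation from the example above, the kernel command line is
passed via -append:

  qemu -kernel /boot/vmlinuz -initrd test.cpio.gz -append "rdinit=/bin/sh" /dev/zero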

Why cpio rather than tar?
-------------------------

This decision was made back in December, 2001.  The discussion started here:

  http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1538.html

And spawned a second thread (specifically on tar vs cpio), starting here:

  http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1587.html

The quick and dirty summary version (which is no substitute for reading
the above threads) is:

1) cpio is a standard.  It's decades old (from the AT&T days), and already
   widely used on Linux (inside RPM, Red Hat's device driver disks).  Here's
   a Linux Journal article about it from 1996:

      http://www.linuxjournal.com/article/1213

   It's not as popular as tar because the traditional cpio command line tools
   require _truly_hideous_ command line arguments.  But that says nothing
   either way about the archive format, and there are alternative tools,
   such as:

     http://freshmeat.net/projects/afio/

2) The cpio archive format chosen by the kernel is simpler and cleaner (and
   thus easier to create and parse) than any of the (literally dozens of)
   various tar archive formats.  The complete initramfs archive format is
   explained in buffer-format.txt, created in usr/gen_init_cpio.c, and
   extracted in init/initramfs.c.  All three together come to less than 26k
   total of human-readable text.

3) The GNU project standardizing on tar is approximately as relevant as
   Windows standardizing on zip.  Linux is not part of either, and is free
   to make its own technical decisions.

4) Since this is a kernel internal format, it could easily have been
   something brand new.  The kernel provides its own tools to create and
   extract this format anyway.  Using an existing standard was preferable,
   but not essential.

5) Al Viro made the decision (quote: "tar is ugly as hell and not going to be
   supported on the kernel side"):

      http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1540.html

   explained his reasoning:

      http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1550.html
      http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1638.html

   and, most importantly, designed and implemented the initramfs code.

Future directions:
------------------

Today (2.6.16), initramfs is always compiled in, but not always used.  The
kernel falls back to legacy boot code that is reached only if initramfs does
not contain an /init program.  The fallback is legacy code, there to ensure a
smooth transition and to allow early boot functionality to gradually move to
"early userspace" (i.e. initramfs).

The move to early userspace is necessary because finding and mounting the real
root device is complex.  Root partitions can span multiple devices (raid or
separate journal).  They can be out on the network (requiring dhcp, setting a
specific MAC address, logging into a server, etc).  They can live on removable
media, with dynamically allocated major/minor numbers and persistent naming
issues requiring a full udev implementation to sort out.  They can be
compressed, encrypted, copy-on-write, loopback mounted, strangely partitioned,
and so on.

This kind of complexity (which inevitably includes policy) is rightly handled in userspace.  Both klibc and busybox/uClibc are working on simple initramfs
packages to drop into a kernel build.

The klibc package has now been accepted into Andrew Morton's 2.6.17-mm tree.
The kernel's current early boot code (partition detection, etc) will probably
be migrated into a default initramfs, automatically created and used by the
kernel build.

Reposted from: https://www.cnblogs.com/linengier/archive/2013/03/30/2990177.html
