setbuf() has to do with the delivery of bytes between the
C library FILE* management layer and the OS I/O layer.

http://www.cplusplus.com/reference/cstdio/setbuf/

keyboard ---> keyboard physical buffer ---> input stream buffer (stdin)
    --- one character at a time ---> getchar()

"Unbuffered" doesn't mean "throw away characters instead of buffering
them". It means "make characters available to getchar() etc as soon
the operating system returns them, rather than waiting until you've
got a buffer-full".

It's best to use an explicit fflush(stdout) whenever output should definitely be visible (and especially if the text does not end with \n). Several mechanisms attempt to perform the fflush for you, at the "right time," but they tend to apply only when stdout is an interactive terminal.
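
For instance (a minimal sketch; the prompt text and buffer size are arbitrary):

#include <stdio.h>

int main(void)
{
    char name[64];

    printf("Enter your name: ");          /* no '\n', so the prompt may   */
    fflush(stdout);                       /* sit in the buffer -- flush   */

    if (fgets(name, sizeof name, stdin))  /* the user can see the prompt  */
        printf("Hello, %s", name);        /* before being asked to type   */
    return 0;
}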

Another possibility might be to use setbuf or setvbuf to turn off buffering of the output stream, but buffering is a Good Thing, and completely disabling it can lead to crippling inefficiencies.
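
If you do go that route, setvbuf() gives finer control than setbuf(); a sketch of requesting a mode explicitly (it must be called before any other operation on the stream):

#include <stdio.h>

int main(void)
{
    /* _IONBF = unbuffered, _IOLBF = line buffered, _IOFBF = fully buffered */
    setvbuf(stdout, NULL, _IONBF, 0);

    printf("with _IONBF this reaches the terminal immediately");
    return 0;
}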


A stream buffer is a block of data that acts as an intermediary between the I/O operations and the physical file associated with the stream. For output buffers, data is written to the buffer until its maximum capacity is reached, then it is flushed (i.e. all of the data is sent to the physical file at once and the buffer is cleared). Likewise, input buffers are filled from the physical file, and data is handed to the read operations until the buffer is exhausted, at which point new data is acquired from the file to fill the buffer again.

Stream buffers can be explicitly flushed by calling fflush. They are also automatically flushed by fclose and freopen, or when the program terminates normally.

All files are opened with a default allocated buffer (fully buffered) if they are known not to refer to an interactive device. This function can be used either to set a specific memory block to be used as the buffer or to disable buffering for the stream.

The standard streams stdin and stdout are fully buffered by default if they are known not to refer to an interactive device. Otherwise, they may be either line buffered or unbuffered by default, depending on the system and library implementation. stderr, by contrast, is never fully buffered by default: it is either line buffered or unbuffered.

Example:

#include <stdio.h>

int main(void)
{
    char buffer[BUFSIZ];
    FILE *fp1 = fopen("myfile1.txt", "w");
    FILE *fp2 = fopen("myfile2.txt", "a");

    setbuf(fp1, buffer);       /* fully buffered, caller-supplied buffer  */
    fputs("fsdsad", fp1);
    fflush(fp1);               /* force the buffered data out to the file */

    setbuf(fp2, NULL);         /* unbuffered */
    fputs("ada", fp2);         /* written to the file as soon as possible */

    fclose(fp1);
    fclose(fp2);
    return 0;
}

In this example, two files are opened for writing (error checking omitted).
The stream associated with myfile1.txt is given a user-allocated buffer, and
a write operation is performed on it; the data is logically part of the
stream, but it has not been written to the device until the fflush function
is called. The stream associated with myfile2.txt is set to unbuffered, so
the subsequent output operation is written to the device as soon as possible.
The final state, however, is the same for both buffered and unbuffered
streams once the files have been closed (closing a file flushes its buffer).
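
The default modes described above can be observed without calling setbuf() at all. A small demonstration (assumes a POSIX system for sleep(); the exact behaviour is implementation-dependent):

#include <stdio.h>
#include <unistd.h>              /* sleep(); POSIX-specific */

int main(void)
{
    printf("no newline here");   /* sits in the stdio buffer...            */
    sleep(3);                    /* ...so on a terminal nothing shows yet  */
    printf(" - done\n");         /* the '\n' (line buffered) or program    */
    return 0;                    /* exit (fully buffered) flushes it       */
}

Run on a terminal, the first message is usually held back until the '\n' arrives; redirect the output to a file and it is usually held back until the program exits.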


Calls to fread(), fgets(), fgetc(), and getchar() work within
whatever FILE* buffered data is available, and when that data
is exhausted, the calls request that the FILE* buffer be refilled
by the system I/O layer.

When full buffering is turned on, that refill operation results in the
FILE* layer requesting that the operating system hand it a full
buffer's worth of data; when buffering is turned off, that
refill operation results in the FILE* layer requesting that the
operating system return a single character.

Your error is in assuming that the operating system layer in
question is dealing with raw bytes directly from the terminal.
That is not the case. Instead, the relevant operating system layer
is dealing with bytes returned by the terminal device driver --
and the device driver does not pass those bytes up to the
operating system layer until the device driver is ready to do so.
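
A quick way to convince yourself of this (a sketch; assumes a Unix-like terminal left in its default line-oriented mode):

#include <stdio.h>

int main(void)
{
    int c;

    setbuf(stdin, NULL);    /* "unbuffered" at the stdio layer only */

    /* This loop still receives nothing until you press Enter: the
       terminal driver is holding the line, not the FILE* layer.    */
    while ((c = getchar()) != EOF && c != '\n')
        printf("got '%c'\n", c);

    return 0;
}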

As I indicated before, setting an input stream to be unbuffered
does NOT tell the operating system to tell the device driver
to go into any kind of "raw" single-character mode. There are
system-specific calls such as ioctl() and tcsetattr() that
control what the device driver will do.

In Unix-type systems, the terminal device driver by default works
on a line at a time, not passing the line onward until it detects
a sequence that indicates end-of-line. When the Unix-type
'line disciplines' are in effect, you can edit the line in various
ways before allowing it to be passed to the operating system.
For example, you might type cad and then realize you mistyped, so you
press the deletion key and type an r; if you were to do so, and then
press Return, it would be the word car that was passed to the
next layer, *not* the series of keys cad<delete>r.
The device driver buffers the input to allow you to edit it,
and setting your input stream to unbuffered in your program does NOT
affect that device driver buffering.
If you want to do single-character I/O and are prepared to handle
things like in-line editing yourself in your program, then you
will need to use system-specific calls to enable that I/O mode.
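
On POSIX systems those calls live in <termios.h>; a minimal sketch that switches the terminal out of canonical mode and turns off echo (Unix-specific, error handling omitted, and be sure to restore the old settings):

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios saved, raw;

    tcgetattr(STDIN_FILENO, &saved);           /* remember current settings */
    raw = saved;
    raw.c_lflag &= ~(ICANON | ECHO);           /* no line editing, no echo  */
    raw.c_cc[VMIN]  = 1;                       /* return after 1 byte       */
    raw.c_cc[VTIME] = 0;                       /* no read timeout           */
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    int c = getchar();                         /* now returns per keypress  */
    printf("\nread %d\n", c);

    tcsetattr(STDIN_FILENO, TCSANOW, &saved);  /* always restore the tty    */
    return 0;
}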

Before you head down that path, you should keep in mind that
you cannot handle mouse-highlight and copy and paste operations
just by looking at the key presses themselves: you have to work
with the graphical layer to do that, and that can get very messy.
Because of that, character-by-character I/O is probably best
reserved for interaction with non-graphical devices such as
modems and serial ports. If you -really- want character-by-
character I/O, perhaps because you are programming a graphical
game, then it is probably best to find a pre-written library that
handles the dirty work for you.
Effectively, at this point in your programming career, you should
probably suppress the memory that setbuf() can be applied to
input streams, and just work with line-by-line I/O. You -probably-
don't have much reason to apply setbuf() to output streams,
either (but you might want to get into the practice of
putting in fflush(stdout) calls after writing out information
that the user needs in order to decide on future inputs.)