The Abrupt Phone Interview from Tencent (Cont.)


Yesterday afternoon I got yet another surprise phone interview from Tencent...

Tencent seems to love these ambushes: they always call when I'm unprepared. This was already the third time, and not once have they given advance notice.

Well, who can blame them? They're a top company now!

This interviewer was nowhere near as friendly as the one from the day before yesterday. Throughout the call I could tell he was the cold, unsmiling type, so I was nervous from the very start.

During the interview I also learned that he was actually a senior alumnus from my school, but I forgot to play up the connection. A real failure on my part.

Anyway, on to the interview itself:

He opened by asking for a self-introduction. Ugh~ I had prepared one but hadn't had time to memorize it, so of course I stumbled through it, and the interviewer cut me off before I even finished.

A bad start, which made me even more cautious for the rest of the interview.

Then he moved on to technical questions. Several were repeats from the first round two days earlier, such as thread safety, the difference between HashMap and Hashtable, and whether a program can explicitly trigger garbage collection. After that:

- The difference between an Interface and an Abstract Class.

- The difference between Servlet and CGI (I couldn't answer; I really should have said something, at least that CGI isn't cross-platform).

- The Servlet life cycle (I forgot that a Servlet has only one instance and is initialized only once. Sigh~ answer questions thoroughly!!!).

- The difference between forward and sendRedirect (after I rattled off my answer, he said he wasn't asking about sendRedirect but about Redirect. I think they should be the same thing; I still don't know whether he was wrong or I was).

- Character encoding (I didn't answer this well either. I said I had never studied the differences between encodings carefully; when garbled text shows up I just look it up, or make all the encodings consistent. No idea whether that answer satisfied him).

- The benefits of the three major frameworks and how Hibernate works (I said I had only used the basic features).

- On the Linux side, how to inspect a very large log file. awk and sed came up here; being cautious, I only said I knew of them and hadn't actually used them.

Then came the project questions. He asked about the biggest project I had been involved in at my current company so far. I said the Migration Tool, and he asked me to talk through what I had done on it, so I said I built the Application Base layer. He went on to ask whether I had done any TCP/IP work. I said that in our system, all the mechanisms for node-to-node transport already existed and we just used them. (I answered this badly; they were looking for someone with TCP/IP experience, so I should at least have talked through CCM.) Along the way he also asked about the TCP/IP three-way handshake (heh, I had just happened to review it).
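Since HashMap vs. Hashtable and "can a program trigger GC" keep coming up, here is a minimal, self-contained Java sketch of the classic talking points (class and variable names are my own, just for illustration):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class MapDifferences {
    public static void main(String[] args) {
        // HashMap allows one null key and any number of null values.
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "null key is allowed");
        hashMap.put("k", null);
        System.out.println(hashMap.get(null)); // prints: null key is allowed

        // Hashtable throws NullPointerException for null keys or values.
        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "boom");
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys");
        }

        // Hashtable's public methods are synchronized (thread-safe but
        // slower); HashMap is unsynchronized, so concurrent access needs
        // external locking or ConcurrentHashMap.

        // System.gc() is only a *request*: the JVM is free to ignore it,
        // so a program cannot force garbage collection.
        System.gc();
    }
}
```

The other point worth making in an interview is that Hashtable is legacy; modern code reaches for ConcurrentHashMap when a thread-safe map is needed.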
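On the character-encoding question, the honest answer ("fix the garbled text when it appears") can at least be backed by knowing where mojibake comes from: bytes produced with one charset being decoded with another. A small Java sketch (the sample string is my own choice):

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String original = "面试";  // "interview"

        // Encode with UTF-8 but decode with ISO-8859-1: classic mojibake.
        byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);
        String garbled = new String(utf8, StandardCharsets.ISO_8859_1);
        System.out.println(garbled.equals(original)); // prints: false

        // ISO-8859-1 maps every byte to a character losslessly, so
        // re-encoding with it and decoding with UTF-8 recovers the text.
        String recovered = new String(
                garbled.getBytes(StandardCharsets.ISO_8859_1),
                StandardCharsets.UTF_8);
        System.out.println(recovered.equals(original)); // prints: true
    }
}
```

This is also why "make all the encodings consistent" genuinely works as a fix: as long as the same charset is used on both the encoding and decoding ends, the text round-trips intact.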

Then he asked whether I had any questions, and I asked the same ones I had asked the day before. Only afterwards did I realize how risky that was: what if the two interviewers compare notes... oh no~

Summing up the two days of interviews:

1. My self-introduction went badly. I never got the chance to finish it, and it sounded unnatural. I was far too nervous.

2. I'm somewhat afraid of this kind of stone-faced interviewer. It always feels like he can see right through you.

3. I didn't build any rapport with the interviewer to ease the atmosphere, and I didn't put myself on an equal footing with him. Forgot again.

4. I probably shouldn't have asked both interviewers the same questions.

5. I still need to study Java much more, especially networking and multithreading.

6. I haven't summarized my projects well enough. They need more packaging (!= lying~).

 

I still need to carry out Teacher Ma's two interview principles: be neither servile nor overbearing, and show who you are.
