耗子叔 ARTS: Week 6

 

Algorithm:

 

/**
 * 141. Linked List Cycle (Easy)
 *
 * Given a linked list, determine if it has a cycle in it.
 *
 * To represent a cycle in the given linked list, we use an integer pos which
 * represents the position (0-indexed) in the linked list where tail connects to.
 * If pos is -1, then there is no cycle in the linked list.
 *
 * Example 1:
 * Input: head = [3,2,0,-4], pos = 1
 * Output: true
 * Explanation: There is a cycle in the linked list, where tail connects to the second node.
 *
 * Example 2:
 * Input: head = [1,2], pos = 0
 * Output: true
 * Explanation: There is a cycle in the linked list, where tail connects to the first node.
 *
 * Example 3:
 * Input: head = [1], pos = -1
 * Output: false
 * Explanation: There is no cycle in the linked list.
 *
 * Follow up: Can you solve it using O(1) (i.e. constant) memory?
 */



/**
 * Definition for singly-linked list.
 * class ListNode {
 *     int val;
 *     ListNode next;
 *     ListNode(int x) {
 *         val = x;
 *         next = null;
 *     }
 * }
 */

 

Java:

public class LinkedListCycle {

    static class ListNode {
        int val;
        ListNode next;

        ListNode(int x) {
            val = x;
            next = null;
        }
    }

    // Floyd's cycle detection ("tortoise and hare"): node1 advances one step
    // per iteration while node2 advances two. If the list contains a cycle,
    // node2 eventually meets node1 inside it; otherwise node2 reaches null.
    // Runs in O(n) time with O(1) extra memory, answering the follow-up.
    public static boolean hasCycle(ListNode head) {
        ListNode node1 = head;
        ListNode node2 = head;
        while (node2 != null && node2.next != null) {
            node1 = node1.next;
            node2 = node2.next.next;
            if (node1 == node2) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Build 1 -> 2 -> 3 -> 4 -> 5, then close a cycle from node 5 back to node 3.
        ListNode listNode1 = new ListNode(1);
        ListNode listNode2 = new ListNode(2);
        ListNode listNode3 = new ListNode(3);
        ListNode listNode4 = new ListNode(4);
        ListNode listNode5 = new ListNode(5);
        listNode1.next = listNode2;
        listNode2.next = listNode3;
        listNode3.next = listNode4;
        listNode4.next = listNode5;
        listNode5.next = listNode3;

        System.out.println(hasCycle(listNode1)); // true
    }
}
 

Go:

type ListNode struct {
   Val  int
   Next *ListNode
}

// hasCycle applies the same two-pointer technique as the Java version:
// node2 moves twice as fast as node1 and can only catch it if a cycle exists.
func hasCycle(head *ListNode) bool {
   node1 := head
   node2 := head
   for node2 != nil && node2.Next != nil {
      node1, node2 = node1.Next, node2.Next.Next
      if node1 == node2 {
         return true
      }
   }
   return false
}

 

Review:

https://onezero.medium.com/googles-most-interesting-i-o-announcements-ranked-cabd3ff4fc8c

Google’s Most Interesting I/O Announcements, Ranked

Hint: The new Pixel phone isn’t one of them

May 8

Photo: Justin Sullivan/Getty

 

As major tech events go, Google I/O lacks the glamour of an iPhone launch, the tension and drama of a Facebook keynote, or the cringe-inducing, over-the-top spectacle of a Samsung unveiling. The company’s announcements tend to be wonky, incremental, and heavily focused on artificial intelligence, especially its confusing inner workings.

Yet I find Google’s annual developer conference the most consistently intriguing of the four, because the company isn’t just releasing nifty gadgets: It’s pushing the boundaries of what can be automated, down to the most quotidian tasks in our everyday lives. In the process, Google is giving us glimpses of a future that often looks more like sci-fi than we’re really prepared to grapple with — even as it tries to reassure us with privacy and security measures that often feel like attempts to paper over the can of worms it just opened.

Here are the announcements that stood out during Google’s opening keynote, held at Mountain View’s Shoreline Amphitheatre on Tuesday, May 7. I’ve ranked them, not necessarily by their traditional news value, but according to my own opinion as to how interesting they are — that is, their potential to shake up the existing relationships between humans and machines.

1. A souped-up Google Assistant

Image courtesy of Google

It may lack the name recognition of Siri or Alexa, partly because it lacks a catchy name. But Google’s Assistant is one of the world’s most widely used consumer A.I. products, powering more than 1 billion devices around the world via Android phones, tablets, smart speakers, and smart displays. In many ways, it was already the most advanced — and now Google says it has found a way to make it 10 times faster, by pulling a lot of the complex computing out of the cloud and onto each user’s device.

Practically speaking, that means you can operate your Android phone faster by voice than you could by touch. In an onstage demo, a Google rep fired off a string of voice commands that required Google Assistant to access multiple apps, execute specific actions, and understand not only what the rep was saying, but what she actually meant. “Hey Google, what’s the weather today? What about tomorrow? Show me John Legend on Twitter. Get a Lyft ride to my hotel. Turn the flashlight on. Turn it off. Take a selfie.” Assistant executed the whole sequence flawlessly in a span of about 15 seconds. Further demos showed off its ability to compose texts and emails that drew on information about the user’s travel plans, traffic conditions, and photos.

All of that, of course, relies on users continuing to grant Google’s software deep access into their lives, which is why the company will have a hard time ever “pivoting to privacy,” as Facebook plans to. But Google’s push to perform this machine learning locally on your device will help control the flow of personal data to the cloud, and new privacy features, such as the ability to regularly delete old data, should help. Even so, Google’s vision of the future is still one in which it learns more and more about you all the time. How that squares with the preferences of an increasingly privacy-conscious public remains to be seen.

The next-generation Google Assistant will come first to Pixel phones later this year.

2. A big, powerful, scary, do-everything smart display

Image courtesy of Google

 

Sticking with the theme of managing your personal life, Google’s new Nest Hub Max exemplifies the type of potent, versatile hardware the company can build to take advantage of all that data and A.I. It basically throws every Google smart home device together into one, combining a Nest security camera, a Google Home Hub smart display, and Google Home Max smart speakers in a single gadget that’s meant to sit in your living room and act as a command center for your household.

The combination of all those features, especially the camera, opens some new possibilities that could make the Nest Hub Max more capable than the sum of its parts. For instance, it is beginning to incorporate some gesture controls, like the ability to raise one hand to pause a song or video, something that will come in handy for anyone who has ever tried to repeatedly yell “Hey Google, stop!” above the din of a noisy room. Face recognition allows it to distinguish between members of your family and personalize greetings and information to each, as well as to alert you if it sees a stranger in your home who isn’t supposed to be there.

Creepy? It sure has that vibe, which is why Google included a green indicator light to tell you when the camera is on and a switch that cuts off power to both the camera and its mic. But for the millions who have already decided to allow smart devices from Google, Amazon, and other tech companies into their home, the Nest Hub Max could have a lot of appeal. This is the closest Google has come yet to its longtime dream of building a real-life Star Trek computer.

The Nest Hub Max will launch this summer at $229.

3. Automatic, real-time captioning for video and audio

Image courtesy of Google

 

This is one of those features that might seem minor to some people but is crucial to others, and it could have far-reaching effects. Google’s latest mobile operating system, Android Q, can transcribe the words from any video or audio you play on your device—in real time—and overlay them on your screen. That means you can effectively turn on A.I.-generated closed captioning for everything from YouTube videos to autoplay clips in your social feeds to a video you took of your friends on vacation.

 

 

Image courtesy of Google

 

On the level of the average user, it’s a relatively small convenience. But assuming it works and is widely used, it could be a big deal for mobile video more broadly: The format has arguably been held back by people not wanting to turn on the sound when they don’t have headphones in. And, of course, it’s even more of a breakthrough for the hundreds of millions of people around the world who are deaf or hard of hearing.

The Live Caption feature is built into Android Q, and you can activate it with a tap.

4. New transparency tools for A.I.

Screenshot via Google I/O

 

A fundamental problem with cutting-edge machine learning software is that A.I. can draw conclusions based on signals and features that are opaque to the people using it and, often, even the people building it. For example, you probably couldn’t say exactly why your Instagram feed is ordered as it is. Shut into a black box, algorithms can be dangerously biased or discriminatory, even if their creators didn’t intend to make them that way.

At Google I/O, CEO Sundar Pichai touted the company’s use of an approach called TCAV, or testing with concept activation vectors, to shed light on the conceptual “reasoning” that underlies the software’s outputs. For instance, in an example that Google offered, it could tell you that the software identified an image’s subject as a doctor partly because of the white coat and stethoscope, but also partly because the person appeared to be male — presumably because it was trained on a dataset in which men were more likely than women to be doctors. Just identifying that bias doesn’t fix the problem, of course, but it’s a necessary first step toward confronting and correcting for those sorts of biases.

5. Incognito mode for Maps, Search, and YouTube

If Google is going to continue to build its business on the combination of A.I. and personal data — and I/O 2019 strongly suggests that it is — then it’s going to have to find ways to reconcile that with tougher privacy regulations, more intense media scrutiny, and greater public awareness of the trade-offs involved. Google already announced last week that it will let you auto-delete some of your sensitive data, including location and activity history.

At I/O, the company announced a new “incognito” mode for Google Maps, which will stop keeping records of your whereabouts while it’s enabled. That’s important, because your location data is some of your most sensitive, revealing behaviors that could be of interest not only to advertisers but also to stalkers and other malicious actors. It’s akin to the incognito mode that has long been part of Google’s Chrome browser. The company said it will also bring incognito features to Google Search and YouTube in the months to come.

Honorable mentions

  • A cheaper Pixel phone: While everyone else’s smartphones are getting more expensive, Google is heading the other way with its new Pixel 3a. It will be less powerful than the existing Pixel 3, but at a base price of $399, it will be half as expensive. Reviewers are already recommending it as an option for buyers who want the best smartphone camera at the lowest price.
  • Focus mode: A new feature coming to Android P and Q devices this summer will let you turn off your most distracting apps to focus on a task, while still allowing text messages, calls, and other important notifications through.
  • Augmented reality in Google Maps: AR is one of those technologies that always seems to impress the tech companies that make it more than it impresses their actual users. But Google may finally be finding some practical uses for it, like overlaying walking directions when you hold up your phone’s camera to the street in front of you.
  • Automatic rental car bookings and movie tickets: Google’s most controversial demo last year featured an A.I. system that could place a call to book restaurant reservations for you automatically. Ethicists wondered whether the receptionists on the other end would be informed that they’re talking to a machine and not a human. This year, Google found a less thorny application for its A.I. reservations bot: Duplex on the Web can rent you a car or buy your movie tickets online by filling in all the required fields with your information — no uncanny valley required.


 

Tip:

Ordering multiple if-else validations in code

When writing a chain of validations, check the branches that require the least work first, and prefer conditions that can end the whole check early. Returning the result as soon as it is known skips the remaining checks and makes the code run more efficiently.
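A minimal Java sketch of this early-return style, assuming a made-up OrderValidator whose checks get progressively more expensive:

public class OrderValidator {

    public static boolean validateOrder(String userId, int quantity) {
        // Cheapest check first: a null or empty id fails immediately,
        // so nothing below it ever runs for bad input.
        if (userId == null || userId.isEmpty()) {
            return false;
        }
        // Simple numeric guard, still cheap.
        if (quantity <= 0) {
            return false;
        }
        // The most expensive check runs last, and only when needed.
        return checkInventory(quantity);
    }

    // Stand-in for a costly lookup (e.g. a database or remote call).
    private static boolean checkInventory(int quantity) {
        return quantity <= 100;
    }

    public static void main(String[] args) {
        System.out.println(validateOrder(null, 5));  // false, returns at the first check
        System.out.println(validateOrder("u42", 5)); // true
    }
}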

Share:

Must-ask JVM interview topic: the class loading process

https://mp.weixin.qq.com/s/764Tddh1j0wZ8nL3hsiyjQ

 

 

Original author: SnailClimb (JavaGuide)

The class loading process

A Class file must be loaded into the virtual machine before it can run and be used, so how does the VM load these Class files?

Loading a Class file takes three main steps: loading -> linking -> initialization. Linking itself breaks down into three steps: verification -> preparation -> resolution.

 

Diagram: the class loading process

 

Loading

The first step of the class loading process accomplishes three things:

  1. Obtain the binary byte stream that defines the class, using its fully qualified name.
  2. Convert the static storage structure represented by that byte stream into the runtime data structures of the method area.
  3. Generate a Class object in memory that represents the class and serves as the access point to its data in the method area.

The JVM specification is deliberately unspecific about these three points, which makes them very flexible. For example, "obtain the binary byte stream using the fully qualified name" says nothing about where or how to obtain it. Common sources include reading from a ZIP archive (the basis of the later JAR, EAR, and WAR formats) or generating the bytes from other files (JSP pages being the typical case).

For a non-array class, the loading phase (the action of obtaining the class's binary byte stream) is the most controllable step: besides the built-in loaders, we can write a custom class loader to control how the byte stream is obtained (by overriding a class loader's loadClass() method), as sketched below. Array types are not created by class loaders; the Java virtual machine creates them directly.
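A minimal sketch of such a custom loader, assuming the .class files live under a directory of our choosing (the DiskClassLoader name and root path are made up; the sketch overrides findClass() rather than loadClass() so the standard parent-delegation behavior stays intact):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DiskClassLoader extends ClassLoader {

    private final String root; // directory holding the .class files (assumption)

    public DiskClassLoader(String root) {
        this.root = root;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        try {
            // Map com.example.Foo -> <root>/com/example/Foo.class and read the bytes:
            // this is exactly the "obtain the binary byte stream" step being customized.
            byte[] bytes = Files.readAllBytes(
                    Paths.get(root, name.replace('.', '/') + ".class"));
            // defineClass hands the raw bytes to the VM, which then verifies,
            // prepares, and resolves the class as described below.
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}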

Class loaders and the parent-delegation model are also very important topics; they will be covered separately in a later article.

Parts of the loading and linking phases run interleaved: linking may already have started before loading has finished.

Verification

Diagram: overview of the verification phase

 

Preparation

The preparation phase formally allocates memory for class variables and sets their initial values; all of this memory is allocated in the method area. Two points deserve attention:

  1. Only class variables (static fields) receive memory at this point, not instance variables. Instance variables are allocated in the Java heap together with the object when it is instantiated.
  2. The initial value set here is normally the zero value of the data type (0, 0L, null, false, and so on). For example, if we define public static int value=111, then after the preparation phase value holds 0, not 111 (111 is assigned only during initialization). Special case: if the final keyword is added, as in public static final int value=111, then value is already assigned 111 during the preparation phase (see the small demo after this list).
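A small Java illustration of the difference (the class name is made up; the comments state what the JVM specification guarantees after each phase, which is not directly observable from running code):

public class PreparationDemo {

    // After the preparation phase this field holds the int zero value 0;
    // the assignment to 111 happens later, inside <clinit>() during initialization.
    public static int value = 111;

    // final with a compile-time constant expression: the field carries a
    // ConstantValue attribute, so it already holds 111 after preparation.
    public static final int CONST = 111;

    public static void main(String[] args) {
        System.out.println(value + " " + CONST); // prints "111 111" once initialized
    }
}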

Zero values of the primitive data types:

  Type      Zero value
  int       0
  long      0L
  short     (short) 0
  char      '\u0000'
  byte      (byte) 0
  boolean   false
  float     0.0f
  double    0.0d

 

Resolution

The resolution phase is where the virtual machine replaces symbolic references in the constant pool with direct references. Resolution mainly targets seven kinds of symbolic references: classes or interfaces, fields, class methods, interface methods, method types, method handles, and call site specifiers.

A symbolic reference describes its target with a group of symbols, which can be any form of literal. A direct reference is a pointer straight to the target, a relative offset, or a handle that locates the target indirectly. When the program actually runs, symbolic references alone are not enough. For example, when a method is invoked, the system must know exactly where that method lives. The Java virtual machine prepares a method table for each class holding all of the class's methods; to call one, it suffices to know the method's offset in that table. Resolution converts the symbolic reference directly into the method's position in the class's method table, which is what makes the method callable.

In short, the resolution phase replaces symbolic references in the constant pool with direct references, that is, with pointers to (or offsets of) classes, fields, and methods in memory.

Initialization

Initialization is the last step of class loading and the first point at which the Java program code (bytecode) defined in the class actually executes; the initialization phase is the execution of the class constructor, the <clinit>() method.

The virtual machine itself guarantees that <clinit>() is invoked safely in a multithreaded environment. Because <clinit>() runs under a lock, initializing classes from multiple threads can cause deadlocks, and such deadlocks are very hard to discover.

For the initialization phase, the specification strictly defines exactly five situations in which a class must be initialized:

  1. When one of the four bytecode instructions new, getstatic, putstatic, or invokestatic is encountered, e.g. when new-ing an instance of the class, reading a static field of it (one not modified by final), or calling one of its static methods (a small sketch follows this list).
  2. When the class is accessed reflectively through the java.lang.reflect package and has not yet been initialized.
  3. When initializing a class whose superclass has not yet been initialized: the superclass's initialization is triggered first.
  4. When the virtual machine starts: the user designates a main class (the one containing the main method), and the VM initializes that class first.
  5. When using the dynamic language support introduced in JDK 1.7: if the final resolution result of a MethodHandle instance is a REF_getStatic, REF_putStatic, or REF_invokeStatic method handle and the class behind that handle has not been initialized, its initialization is triggered first.
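A small Java sketch of trigger 1 (the class names are made up; the getstatic read of a non-constant field is what forces <clinit>() to run):

public class InitDemo {

    static class Config {
        // A blank final assigned in the static block is NOT a compile-time
        // constant, so reading it is a real getstatic and triggers <clinit>().
        static final String NAME;

        static {
            System.out.println("Config.<clinit>() runs now");
            NAME = "loaded";
        }
    }

    public static void main(String[] args) {
        // The first touch of Config.NAME initializes Config (situation 1),
        // so the static block's message prints before "loaded".
        System.out.println(Config.NAME);
    }
}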

 
