NOTE.02-Use-a-Render-Prop!

This article traces how code reuse in React has evolved, from the original mixins to HOCs and on to today's render props, comparing the strengths and weaknesses of each approach.

Use a Render Prop!

In Vue, code can be reused by mixing in behavior with mixins. In React, code reuse has evolved from mixins to HOCs and then to render props. This post walks through that winding path and compares the approaches.

Mixins

Early versions of React let you create components with React.createClass and reuse code through mixins:

import React from 'react'

// Boilerplate code can be pulled into a mixin so that other components can share it
const MouseMixin = {
  getInitialState() {
    return { x: 0, y: 0 }
  },

  handleMouseMove(event) {
    this.setState({
      x: event.clientX,
      y: event.clientY
    })
  }
}

const App = React.createClass({
  // Use the mixin!
  mixins: [MouseMixin],

  render() {
    const { x, y } = this.state

    return (
      <div onMouseMove={this.handleMouseMove}>
        <h1>
          The mouse position is ({x}, {y})
        </h1>
      </div>
    )
  }
})

In the code above, the mouse-tracking logic is extracted into MouseMixin; when the component is created with createClass, the mixin is merged in through the mixins option, so the new component gains the ability to track the mouse position.

This is very similar to how mixins work in Vue. Later versions of React deprecated the createClass API in favor of native ES6 classes, which introduces a problem: ES6 classes do not support mixins. And that is not the only issue with mixins. In summary:

  1. ES6 classes do not support mixins
  2. Indirection: mixins mutate state, which makes it hard to trace where a piece of state comes from, especially when several mixins are involved
  3. Name clashes: if multiple mixins update the same piece of state, they can silently overwrite one another

So, to replace mixins, the React community settled on HOCs (higher-order components) for code reuse.

HOC

You can think of a HOC as a pattern in which code is shared through something resembling a decorator. The analogy is the higher-order function: a higher-order function takes a function and returns a wrapped function, and a higher-order component likewise takes a component and returns a wrapped component. The key difference from mixins is that a HOC decorates a component rather than mixing behavior into it.
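
To make the higher-order-function analogy concrete, here is a minimal JavaScript sketch (withLogging and add are made-up names used only for illustration):

// A higher-order function: takes a function, returns a wrapped function
const withLogging = fn => (...args) => {
  const result = fn(...args)
  console.log(`${fn.name}(${args.join(', ')}) = ${result}`)
  return result
}

const add = (a, b) => a + b
const loggedAdd = withLogging(add)

loggedAdd(1, 2) // logs "add(1, 2) = 3" and returns 3

A HOC applies the same idea at the component level: take a component, add behavior, return the wrapper.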

Rewriting the example above with a higher-order component:

import React from 'react'

// Takes a component and returns a wrapped component
const withMouse = Component => {
  return class extends React.Component {
    state = { x: 0, y: 0 }

    handleMouseMove = event => {
      this.setState({
        x: event.clientX,
        y: event.clientY
      })
    }

    render() {
      return (
        <div onMouseMove={this.handleMouseMove}>
          {/* Pass the mouse position down as a prop */}
          <Component {...this.props} mouse={this.state} />
        </div>
      )
    }
  }
}

class App extends React.Component {
  render() {
    // The mouse position arrives as a prop; the component no longer maintains its own state
    const { x, y } = this.props.mouse

    return (
      <div>
        <h1>
          The mouse position is ({x}, {y})
        </h1>
      </div>
    )
  }
}

// Simply wrap a component with withMouse and it receives the mouse prop
const AppWithMouse = withMouse(App)

export default AppWithMouse

In the ES6 class era, HOCs are indeed an elegant solution to code reuse, and the React community adopted them widely. But although they solve the problem of not being able to use mixins with ES6 classes, two issues remain:

  1. Indirection: just as with mixins, the problem persists. With a mixin you cannot tell where state comes from; with a HOC you cannot tell where props come from
  2. Name clashes: as with mixins, two HOCs that inject a prop with the same name will overwrite each other without any warning (see the sketch below)
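
As a minimal sketch of the collision, suppose two made-up HOCs, withMousePosition and withTouchPosition, both inject a prop named position into whatever component they wrap (all names here are invented for illustration):

import React from 'react'

// Both made-up HOCs inject a prop called `position`
const withMousePosition = Component => props =>
  <Component {...props} position={{ source: 'mouse', x: 0, y: 0 }} />

const withTouchPosition = Component => props =>
  <Component {...props} position={{ source: 'touch', x: 0, y: 0 }} />

const App = ({ position }) => <p>Position came from: {position.source}</p>

// The HOC applied closest to App writes `position` last, so withTouchPosition's
// value silently overwrites withMousePosition's; App only ever sees "touch",
// and no warning is produced.
const AppWithBoth = withMousePosition(withTouchPosition(App))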

Another problem HOCs share with mixins is that both use static composition rather than dynamic composition. In the HOC pattern, composition happens once, statically, at the moment the component class (AppWithMouse in the example above) is created.

You cannot use a mixin or a HOC inside the render method, and yet render is exactly where React's dynamic composition model lives. When composition happens inside render, you get to take full advantage of the React lifecycle.
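
To see why HOC composition is static, consider this sketch of the well-known anti-pattern of applying a HOC inside render (it reuses the withMouse HOC from above; UserDetail is a made-up component name):

class App extends React.Component {
  render() {
    // Anti-pattern: withMouse(UserDetail) returns a brand-new component class on
    // every render, so React unmounts and remounts that subtree each time App
    // re-renders, throwing away its state. HOCs must be applied once, outside render.
    const EnhancedUserDetail = withMouse(UserDetail)
    return <EnhancedUserDetail />
  }
}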

On top of these drawbacks, because a HOC essentially wraps a component, creating a mixin-like substitute around the existing one, the component returned from a HOC needs to behave as much like the wrapped component as possible, which generates a great deal of boilerplate code.

All in all, HOCs built on ES6 classes still run into the same problems we had with createClass; they amount to a refactoring of the idea rather than a real fix.

Render Props

A render prop is a simple technique for sharing code between React components using a prop whose value is a function.

A component with a render prop takes a function that returns a React element and calls that function instead of implementing its own rendering logic. As the name suggests, it is simply a prop of function type that tells the component what to render.

Put more plainly: instead of sharing component behavior by mixing it in or decorating, an ordinary component can share some state through nothing more than a function prop.

A minimal example looks like this:

<DataProvider render={data => <h1>Hello {data.target}</h1>} />

Let's rewrite the mouse-tracking example above with a render prop:

import React from 'react'
import PropTypes from 'prop-types'

// Unlike with a HOC, we can share code using an ordinary component that takes a render prop
class Mouse extends React.Component {
  // Declare that render must be a function
  static propTypes = {
    render: PropTypes.func.isRequired
  }

  state = { x: 0, y: 0 }

  handleMouseMove = event => {
    this.setState({
      x: event.clientX,
      y: event.clientY
    })
  }

  render() {
    // Delegate the rendering logic by calling props.render
    return <div onMouseMove={this.handleMouseMove}>{this.props.render(this.state)}</div>
  }
}

class App extends React.Component {
  render() {
    return (
      <div>
        <Mouse
          render={({ x, y }) => (
            // The render prop hands us the state we need to render whatever we want
            <h1>
              The mouse position is ({x}, {y})
            </h1>
          )}
        />
      </div>
    )
  }
}

export default App

In the example above, the <Mouse> component calls its render prop to expose its state to the <App> component, so <App> is free to use that state however it likes.

Of course, the render prop does not literally have to sit in the JSX element's attribute list; it can also be nested inside the element, using the children prop in place of a prop named render. This is exactly the same concept as "children as a function": the children prop is treated as data (a function) rather than as a ready-made view to render.
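
As a sketch, the same <Mouse> usage with a children prop would look like this (assuming <Mouse> is adjusted to call this.props.children(this.state) in its render method):

<Mouse>
  {({ x, y }) => (
    <h1>
      The mouse position is ({x}, {y})
    </h1>
  )}
</Mouse>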

The term "render prop" does not refer to a prop literally named render; it refers to the idea of using a function prop to do the rendering. When designing such a render-props API, it is best to declare in propTypes that children/render must be a function:

class Mouse extends React.Component {
  static propTypes = {
    render: PropTypes.func.isRequired
  }
}

Using a render prop sidesteps the problems of mixins and HOCs:

  1. Indirection: you no longer have to wonder where state or props come from; the render prop's argument list shows exactly what is available
  2. Name clashes: there is no automatic merging of property names, so nothing can be silently overwritten

What's more, a render prop introduces no boilerplate, because it does not wrap or decorate another component; it is just a function!

Also, the composition model here is dynamic! Every composition happens inside render, so we benefit from the React lifecycle and the natural flow of props and state. With this pattern, any HOC can be replaced by an ordinary component with a render prop.

A render prop is strictly more powerful than a HOC: any HOC can be implemented with a render prop, but not the other way around. Here is the withMouse HOC implemented on top of the render-prop <Mouse> component:

const withMouse = Component => {
  return class extends React.Component {
    render() {
      return <Mouse render={mouse => <Component {...this.props} mouse={mouse} />} />
    }
  }
}

One thing to watch out for when using render props: if you create the function inside render, the render prop can negate the advantage of React.PureComponent. The shallow prop comparison will always return false for new props, because in this situation each render produces a new function value for the render prop.

As shown below, even though <Mouse> extends React.PureComponent, <App> creates a new function as <Mouse>'s render prop on every render, which cancels out the benefit of extending React.PureComponent.

class Mouse extends React.PureComponent {
  /* details omitted */
}

class App extends React.Component {
  render() {
    return (
      <div>
        <Mouse
          render={({ x, y }) => (
            <h1>
              The mouse position is ({x}, {y})
            </h1>
          )}
        />
      </div>
    )
  }
}
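
A common way around this (a sketch following the pattern described in the React docs) is to define the render prop as an instance method, so the same function reference is passed on every render and React.PureComponent's shallow comparison can succeed:

class App extends React.Component {
  // Defined once per instance, so <Mouse> receives the same function on every render
  renderMousePosition({ x, y }) {
    return (
      <h1>
        The mouse position is ({x}, {y})
      </h1>
    )
  }

  render() {
    return (
      <div>
        <Mouse render={this.renderMousePosition} />
      </div>
    )
  }
}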

Reference:
Use a Render Prop!

Original article from 金点网络/数媒派, by jamin.
