Handling deep links in React Native

This article walks through the steps for handling deep links in React Native, covering both the case where the app is not yet running and the case where it is already open, as well as the native iOS and Android implementations. On the iOS side it explains the Info.plist configuration and how to handle files shared from other apps. On the Android side it discusses the intent-filter setup and the problem of converting ACTION_SEND into ACTION_VIEW. The whole flow involves route communication between RN, iOS, and Android, plus the handling of shared files.

1. What is a deep link?

Put simply, a deep link lets another app open any page inside your app; your app then has to handle that routing request, which is usually expressed as a URL. In React Native, deep link handling touches three parts: the RN side, the native iOS side, and the native Android side. This post walks through the implementation for each of them, using a feature I built recently — sharing a file from an external app into the app — as the running example, and calls out the pitfalls I ran into so you can avoid the same detours.

2. Implementation steps

Handling on the React Native side

The RN side needs to handle two cases:

  • The app is not running and is launched by another app

import { Linking } from "react-native";

Linking.getInitialURL().then((url) => {
    // alert(url);
    if (!!url) {
        this.handleUrl(url);
    }
}).catch(err => console.error('An error occurred', err));

Note: getInitialURL only returns a value when the remote debugger is disconnected; with debugging enabled it returns null.

  • The app is already running and is brought up by another app
Linking.addEventListener('url', (e) => this.handleUrl(e.url));

handleUrl is where you handle the native routing request. RN apps generally use react-navigation for route navigation; take my file-forwarding feature as the example.

The code looks like this (example):

handleUrl = async (url) => {
    console.log('Link url: ' + url);
    /// the url has to be decoded to get the actual file path
    url = decodeURIComponent(url);
    if (url.startsWith('file://')) {
        console.log('match file share');
        try {
            const stat = await RNFetchBlob.fs.stat(url.substr(7));
            console.log('share file size: ' + stat.size);

            if (stat.size === 0) {
                Toast("The file is empty and cannot be sent, please choose another one");
            } else if (stat.size > FILE_SIZE_LIMIT.OFFLINE_FILE_MAX_SIZE) {
                Toast("The file exceeds 100M, please choose a smaller one");
            } else {
                this.handleRoute('ForwardExternalAppFile', { filePath: url, fileSize: stat.size });
            }
        } catch (e) {
            console.log(`handleUrl error: ${e}`);
        }
        return;
    }
}
handleRoute = async (routeName, routeParams) => {
    switch (routeName) {
        case 'ForwardExternalAppFile': {
            if (!await this.hasLogin()) {
                Toast.info('Sorry, you need to log in before you can use the sharing feature');
                return;
            }
            break;
        }
    }
    _navigator.dispatch(
        NavigationActions.navigate({
            routeName,
            params: routeParams,
        })
    );
}

So what is _navigator? It's the top-level navigator, grabbed via a ref in the RN entry's render and stashed away:

<AppNavigator ref={navigatorRef => {
    NavigationService.setTopLevelNavigator(navigatorRef);
}} />
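NavigationService itself isn't shown above; here is a minimal sketch of what such a module typically looks like under the common react-navigation pattern (the implementation details are an assumption, but the _navigator dispatched to in handleRoute would be exactly this stored ref):

// NavigationService.js
import { NavigationActions } from 'react-navigation';

let _navigator;

// Stash the top-level navigator so routes can be dispatched from outside screens
function setTopLevelNavigator(navigatorRef) {
    _navigator = navigatorRef;
}

// Dispatch a navigation action through the stored navigator
function navigate(routeName, params) {
    _navigator.dispatch(
        NavigationActions.navigate({
            routeName,
            params,
        })
    );
}

export default {
    setTopLevelNavigator,
    navigate,
};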

That's basically it for the RN side: the app just performs the corresponding navigation according to its business logic. Now let's handle the native side.

Handling on the iOS side

To handle a deep link on iOS, e.g. tapping a link on a web page that jumps into the app, you need to add a URL scheme in Info.plist.
Add an item under URL Types.
For the Identifier, a reverse-domain name is recommended to keep it unique, e.g. com.comname.appname.

The URL Scheme can in theory be anything, e.g. abiz.
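Written directly in Info.plist source, the configuration looks roughly like this (a sketch assuming the identifier and scheme above):

<key>CFBundleURLTypes</key>
<array>
	<dict>
		<key>CFBundleURLName</key>
		<string>com.comname.appname</string>
		<key>CFBundleURLSchemes</key>
		<array>
			<string>abiz</string>
		</array>
	</dict>
</array>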

Verification
Type abiz:// into the browser and confirm; the app should open. With the scheme configured, opening the app this way invokes the following UIApplicationDelegate method:

- (BOOL)application:(UIApplication *)app openURL:(NSURL *)url options:(NSDictionary<UIApplicationOpenURLOptionsKey,id> *)options {
   
   	/// ...... 
    return [RCTLinkingManager application:app openURL:url options:options];
}
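If the app should also respond to universal links (the NSUserActivity path that the getInitialURL source below checks for), the corresponding delegate callback has to be forwarded to RCTLinkingManager as well. A sketch following the React Native docs; the exact restorationHandler signature can vary slightly between RN versions:

- (BOOL)application:(UIApplication *)application
continueUserActivity:(nonnull NSUserActivity *)userActivity
  restorationHandler:(nonnull void (^)(NSArray<id<UIUserActivityRestoring>> *_Nullable))restorationHandler
{
    /// Forward the universal-link activity to RN's Linking module
    return [RCTLinkingManager application:application
                     continueUserActivity:userActivity
                       restorationHandler:restorationHandler];
}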

Let's look at the source of [RCTLinkingManager application:app openURL:url options:options]:

+ (BOOL)application:(UIApplication *)app
            openURL:(NSURL *)URL
            options:(NSDictionary<UIApplicationOpenURLOptionsKey,id> *)options
{
  postNotificationWithURL(URL, self);
  return YES;
}

static void postNotificationWithURL(NSURL *URL, id sender)
{
  NSDictionary<NSString *, id> *payload = @{@"url": URL.absoluteString};
  [[NSNotificationCenter defaultCenter] postNotificationName:kOpenURLNotification
                                                      object:sender
                                                    userInfo:payload];
}

- (void)startObserving
{
  [[NSNotificationCenter defaultCenter] addObserver:self
                                           selector:@selector(handleOpenURLNotification:)
                                               name:kOpenURLNotification
                                             object:nil];
}

- (void)stopObserving
{
  [[NSNotificationCenter defaultCenter] removeObserver:self];
}

- (void)handleOpenURLNotification:(NSNotification *)notification
{
  [self sendEventWithName:@"url" body:notification.userInfo];
}

startObserving is called when the RN side adds a listener for Linking's 'url' event, and stopObserving is called when that listener is removed — after which no more url events are delivered. To make sure the events are always received, register the listener once at the app's top-level entry; a minimal sketch of that registration follows. The principle is simple: the native side posts a 'url' event to RN, and RN listens for it and dispatches the route. When the app is not running and is launched by another app, the native launch entry point also passes some information along, as shown in the delegate method right after the sketch.
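A minimal sketch of registering the Linking listeners once in the root component, assuming a class-based root and the handleUrl shown earlier (on newer RN versions addEventListener returns a subscription whose remove() replaces removeEventListener):

import React from "react";
import { Linking } from "react-native";

export default class App extends React.Component {
    componentDidMount() {
        // Cold start: the launching URL is only available through getInitialURL
        Linking.getInitialURL().then((url) => {
            if (url) {
                this.handleUrl(url);
            }
        }).catch((err) => console.warn('getInitialURL error', err));

        // Warm start: subsequent URLs arrive as 'url' events
        this.onUrl = (e) => this.handleUrl(e.url);
        Linking.addEventListener('url', this.onUrl);
    }

    componentWillUnmount() {
        // Removed only when the root component itself goes away,
        // so the app keeps receiving url events for its whole lifetime
        Linking.removeEventListener('url', this.onUrl);
    }

    handleUrl = (url) => {
        // decode the url and dispatch to react-navigation, as in handleUrl above
    };

    render() {
        // render the AppNavigator here
        return null;
    }
}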

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    RCTBridge *bridge = [[RCTBridge alloc] initWithDelegate:self launchOptions:launchOptions];
    NSLog(@"launchOptions: %@", launchOptions);
    RCTRootView *rootView = [[RCTRootView alloc] initWithBridge:bridge
                                                     moduleName:@"focustalk"
                                              initialProperties:nil];
    /// ... install rootView as the window's root view controller, etc.
    return YES;
}

launchOptions carries the relevant information: when the app is not running and is launched by another app, launchOptions contains the launching url under the key UIApplicationLaunchOptionsURLKey. It ends up stored in the RCTBridge, so when the RN side calls Linking.getInitialURL() the launch url is returned. The source:

RCT_EXPORT_METHOD(getInitialURL:(RCTPromiseResolveBlock)resolve
                  reject:(__unused RCTPromiseRejectBlock)reject)
{
  NSURL *initialURL = nil;
  if (self.bridge.launchOptions[UIApplicationLaunchOptionsURLKey]) {
    initialURL = self.bridge.launchOptions[UIApplicationLaunchOptionsURLKey];
  } else {
    NSDictionary *userActivityDictionary =
      self.bridge.launchOptions[UIApplicationLaunchOptionsUserActivityDictionaryKey];
    if ([userActivityDictionary[UIApplicationLaunchOptionsUserActivityTypeKey] isEqual:NSUserActivityTypeBrowsingWeb]) {
      initialURL = ((NSUserActivity *)userActivityDictionary[@"UIApplicationLaunchOptionsUserActivityKey"]).webpageURL;
    }
  }
  resolve(RCTNullIfNil(initialURL.absoluteString));
}

That covers how the routing channel between RN and native iOS is established. But as mentioned at the start, the running example is sharing a file from an external app into ours, and for iOS to receive files shared by other apps some additional setup is needed — again in Info.plist. This file plays a role similar to Android's AndroidManifest.xml: it declares the app's capabilities and permissions. To handle files shared from outside the app, you need to configure Document Types.
Info.plist can be edited directly: right-click it and open it as Source Code.
Then add the following content:

<key>CFBundleDocumentTypes</key>
	<array>
		<dict>
			<key>CFBundleTypeIconFiles</key>
			<array/>
			<key>CFBundleTypeName</key>
			<string>item</string>
			<key>LSHandlerRank</key>
			<string>Default</string>
			<key>LSItemContentTypes</key>
			<array>
				<string>public.item</string>
			</array>
		</dict>
		<dict>
			<key>CFBundleTypeIconFiles</key>
			<array/>
			<key>CFBundleTypeName</key>
			<string>content</string>
			<key>LSHandlerRank</key>
			<string>Default</string>
			<key>LSItemContentTypes</key>
			<array>
				<string>public.content</string>
			</array>
		</dict>
		<dict>
			<key>CFBundleTypeIconFiles</key>
			<array/>
			<key>CFBundleTypeName</key>
			<string>composite-content</string>
			<key>LSHandlerRank</key>
			<string>Default</string>
			<key>LSItemContentTypes</key>
			<array>
				<string>public.composite-content</string>
			</array>
		</dict>
		<dict>
			<key>CFBundleTypeIconFiles</key>
			<array/>
			<key>CFBundleTypeName</key>
			<string>data</string>
			<key>LSHandlerRank</key>
			<string>Default</string>
			<key>LSItemContentTypes</key>
			<array>
				<string>public.data</string>
			</array>
		</dict>
		<dict>
			<key>CFBundleTypeIconFiles</key>
			<array/>
			<key>CFBundleTypeName</key>
			<string>database</string>
			<key>LSHandlerRank</key>
			<string>Default</string>
			<key>LSItemContentTypes</key>
			<array>
				<string>public.database</string>
			</array>
		</dict>
		<dict>
			<key>CFBundleTypeIconFiles</key>
			<array/>
			<key>CFBundleTypeName</key>
			<string>calendar-event</string>
			<key>LSHandlerRank</key>
			<string>Default</string>
			<key>LSItemContentTypes</key>
			<array>
				<string>public.calendar-event</string>
			</array>
		</dict>
		<dict>
			<key>CFBundleTypeIconFiles</key>
			<array/>
			<key>CFBundleTypeName</key>
			<string>message</string>
			<key>LSHandlerRank</key>
			<string>Default</string>
			<key>LSItemContentTypes</key>
			<array>
				<string>public.message</string>
			</array>
		</dict>
		<dict>
			<key>CFBundleTypeIconFiles</key>
			<array/>
			<key>CFBundleTypeName</key>
			<string>contact</string>
			<key>LSHandlerRank</key>
			<string>Default</string>
			<key>LSItemContentTypes</key>
			<array>
				<string>public.contact</string>
			</array>
		</dict>
		<dict>
			<key>CFBundleTypeIconFiles</key>
			<array/>
			<key>CFBundleTypeName</key>
			<string>archive</string>
			<key>LSHandlerRank</key>
			<string>Default</string>
			<key>LSItemContentTypes</key>
			<array>
				<string>public.archive</string>
			</array>
		</dict>
		<dict>
			<key>CFBundleTypeIconFiles</key>
			<array/>
			<key>CFBundleTypeName</key>
			<string>disk-image</string>
			<key>LSHandlerRank</key>
			<string>Default</string>
			<key>LSItemContentTypes</key>
			<array>
				<string>public.disk-image</string>
			</array>
		</dict>
		<dict>
			<key>CFBundleTypeIconFiles</key>
			<array/>
			<key>CFBundleTypeName</key>
			<string>text</string>
			<key>LSHandlerRank</key>
			<string>Default</string>
			<key>LSItemContentTypes</key>
			<array>
				<string>public.text</string>
			</array>
		</dict>
	</array>

Handling on the Android side

When a web URI intent is invoked, whether from a clicked link or a programmatic request, the Android system tries each of the following actions in order until the request succeeds:

1. Open the user's preferred app that can handle the URI, if the user has designated one.
This step requires the app to be configured for Android App Links and also needs server-side cooperation; for the details see the official documentation: https://developer.android.google.cn/training/app-links?hl=en
2. Open the only available app that can handle the URI.
3. Allow the user to select an app from a dialog.

So how does the app declare that it can respond to such requests? First, add an intent filter to the Activity. Taking file sharing from an external app into the app as the example:

 <activity
            android:name="com.focus.focustalk.MainActivity"
            android:configChanges="keyboard|keyboardHidden|orientation|screenSize|uiMode"
            android:label="@string/app_name"
            android:launchMode="singleTask"
            android:windowSoftInputMode="adjustResize">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
            <intent-filter>
                <action android:name="android.intent.action.SEND" />
                <category android:name="android.intent.category.DEFAULT" />

                <data android:mimeType="video/*" />
                <data android:mimeType="image/*" />
                <data android:mimeType="text/*" />
                <data android:mimeType="application/*" />
            </intent-filter>
        </activity>
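The filter above covers the file-sharing case (ACTION_SEND). For a plain custom-scheme deep link — the Android counterpart of the abiz:// scheme configured on iOS earlier — the Activity would instead declare an ACTION_VIEW filter with the BROWSABLE category; a sketch, assuming the same abiz scheme:

            <intent-filter>
                <action android:name="android.intent.action.VIEW" />
                <category android:name="android.intent.category.DEFAULT" />
                <category android:name="android.intent.category.BROWSABLE" />
                <!-- matches abiz://... -->
                <data android:scheme="abiz" />
            </intent-filter>

Such a link can be exercised from the command line with adb, e.g. adb shell am start -W -a android.intent.action.VIEW -d "abiz://test".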

If you want an already-running Activity to receive the Intent, set android:launchMode="singleTask"; then, when the Activity is already started, its onNewIntent callback fires. With this in place, tapping Share in the system file manager shows our app as a target. But don't celebrate too early: the app launches, yet the RN side never receives the route information. After a long, fruitless search, reading the source finally revealed the cause. The source is in IntentModule:


  /**
   * Return the URL the activity was started with
   *
   * @param promise a promise which is resolved with the initial URL
   */
  @Override
  public void getInitialURL(Promise promise) {
    try {
      Activity currentActivity = getCurrentActivity();
      String initialURL = null;

      if (currentActivity != null) {
        Intent intent = currentActivity.getIntent();
        String action = intent.getAction();
        Uri uri = intent.getData();

        if (uri != null
            && (Intent.ACTION_VIEW.equals(action)
                || NfcAdapter.ACTION_NDEF_DISCOVERED.equals(action))) {
          initialURL = uri.toString();
        }
      }

      promise.resolve(initialURL);
    } catch (Exception e) {
      promise.reject(
          new JSApplicationIllegalArgumentException(
              "Could not get the initial URL : " + e.getMessage()));
    }
  }

The source is straightforward: the route url is only reported when the action is Intent.ACTION_VIEW or NfcAdapter.ACTION_NDEF_DISCOVERED. Once the cause is known the fix is easy: our share arrives as Intent.ACTION_SEND, so it has to be converted to ACTION_VIEW and set back as the Activity's intent:

Intent newIntent = new Intent(Intent.ACTION_VIEW, Uri.parse("file://" + filePath));
setIntent(newIntent);

The complete handling in MainActivity:

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        SplashScreen.show(this);  // here

        preProcessIntent(getIntent());
        super.onCreate(savedInstanceState);
        TMUtils.setStatusbg(this, R.color.color_1E79E8);
    }

    @Override
    public void onNewIntent(Intent intent) {
        // Keep the latest intent so getIntent() is never stale, then let
        // preProcessIntent swap in the converted ACTION_VIEW intent if needed
        setIntent(intent);
        preProcessIntent(intent);
        super.onNewIntent(getIntent());
    }

    /**
     * LinkingManager only handles ACTION_VIEW; a share arrives as ACTION_SEND,
     * so it has to be converted into a matching Intent for RN to receive the event.
     **/
    protected void preProcessIntent(Intent intent) {
        Bundle extras = intent.getExtras();
        String action = intent.getAction();

        String filePath = null;
        // Is this Intent a "Share via" request?
        if (Intent.ACTION_SEND.equals(action) && extras != null) {
            if (extras.containsKey(Intent.EXTRA_STREAM)) {
                try {
                    // Resolve the shared resource Uri to a file path
                    Uri uri = extras.getParcelable(Intent.EXTRA_STREAM);
                    filePath = FileUtil.getFileByUri(uri, getContentResolver());
                    Log.i("ForwardFileActivity", "uri:" + filePath);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            } else {
                // Fall back to an explicitly passed file path
                filePath = extras.getString("filePath");
            }

            if (null != filePath) {
                Log.d("ACTION_SEND", filePath);
                // Rewrite the intent as ACTION_VIEW with a file:// url so IntentModule reports it
                Intent newIntent = new Intent(Intent.ACTION_VIEW, Uri.parse("file://" + filePath));
                setIntent(newIntent);
            }
        }
    }
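FileUtil.getFileByUri is a project helper that isn't shown in the post; its job is to turn the shared Uri into a local file path. A minimal sketch of what such a helper could look like (the class and method names follow the call above, but the copy-to-temp-file implementation is an assumption, not the original project's code):

import android.content.ContentResolver;
import android.net.Uri;

import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class FileUtil {

    // Resolve a shared Uri to a local file path: file:// Uris are returned directly,
    // content:// Uris are copied into a temp file (the app cache dir on Android)
    // and that path is returned.
    public static String getFileByUri(Uri uri, ContentResolver resolver) throws Exception {
        if ("file".equals(uri.getScheme())) {
            return uri.getPath();
        }

        File target = File.createTempFile("shared_", null);
        try (InputStream in = resolver.openInputStream(uri);
             OutputStream out = new FileOutputStream(target)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        return target.getAbsolutePath();
    }
}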

After the conversion everything works as expected. That completes the walkthrough of the communication mechanism between native routing and RN routing, and the steps to implement it.
