Some useful Android components

There are some useful Android components in the API Demos which can be integrated into the current cross-platform architecture, including text handling, auto-complete views, and focus control. The second half of this post collects open optimization items, such as the drop-down field and screen-size changes.

1. Text -> Linkify/Marquee, which is very useful for multiline text (see the sketch after this list).
2. View -> Auto Complete: every case is important, and the "Contact" case can be used for an intelligent address book (also sketched below).
3. View -> Controls: the two themes.
4. View -> Focus: how to make a component unfocusable but still scrollable, and how to jump over a component in a specific focus direction (also sketched below).
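
To make items 1, 2, and 4 concrete, here is a minimal sketch. It is illustrative only, not code from our framework: the layout R.layout.main and the ids text_view, edit, label, list, and button are assumptions.

[code]
import android.app.Activity;
import android.os.Bundle;
import android.text.TextUtils;
import android.text.util.Linkify;
import android.widget.ArrayAdapter;
import android.widget.AutoCompleteTextView;
import android.widget.TextView;

public class ComponentDemo extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main); // hypothetical layout with the ids below

        // Item 1: Linkify turns URLs, e-mail addresses, and phone numbers into links.
        TextView text = (TextView) findViewById(R.id.text_view);
        text.setText("Visit http://developer.android.com or mail demo@example.com");
        Linkify.addLinks(text, Linkify.ALL);

        // Marquee: a long single line scrolls horizontally while the view is selected.
        text.setSingleLine(true);
        text.setEllipsize(TextUtils.TruncateAt.MARQUEE);
        text.setSelected(true);

        // Item 2: AutoCompleteTextView backed by a static suggestion list.
        AutoCompleteTextView input = (AutoCompleteTextView) findViewById(R.id.edit);
        input.setAdapter(new ArrayAdapter<String>(this,
                android.R.layout.simple_dropdown_item_1line,
                new String[] { "Beijing", "Berlin", "Boston" }));

        // Item 4: an unfocusable view is skipped by key/trackball navigation...
        findViewById(R.id.label).setFocusable(false);
        // ...and setNextFocus*Id jumps over components in a given direction:
        // pressing DOWN from the list moves focus straight to the button.
        findViewById(R.id.list).setNextFocusDownId(R.id.button);
    }
}
[/code]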

[b]To optimize:[/b]
Drop-down field.
Global back-key handling (see the sketch after this list).
Screen-size change handling.
Interface optimization.
Enter-key handling in the tree.
Simplify the uiargs mechanism.
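
For the global back key, one possible shape is to intercept BACK once in a shared base Activity. This is a sketch, assuming a hypothetical Navigator helper rather than our actual uiargs code:

[code]
import android.app.Activity;
import android.view.KeyEvent;

public abstract class BaseActivity extends Activity {
    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        if (keyCode == KeyEvent.KEYCODE_BACK) {
            // Navigator is a hypothetical central back-stack handler.
            if (Navigator.popScreen()) {
                return true; // consumed: we navigated back ourselves
            }
        }
        return super.onKeyDown(keyCode, event);
    }
}
[/code]

Every screen then inherits the same back behavior, which also gives one place to debug issue 4 in the list below.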
1) The tree list can't move focus when using the trackball. [b](Fixed)[/b]
2) Focus issue: after clicking in the UI, going back, and then clicking another item, focus ends up on two items. (I have fixed this.)
3) The route-compare popup can't be closed.
4) After switching from touch mode to trackball/key mode, the first (and sometimes second) press of the back key does not go back.
5) How do we display a button at the bottom of the screen, e.g. on the POI Icon screen? See the layout sketch after this list. (Will discuss with you next week.)
6) The popup's width/height is not updated when switching between landscape and portrait.
7) The text becomes scrollable when it is too long and the label is focused, but the scrolling is not smooth.
8) Multiline: there is a blank line when lazy-loading the remaining parts, and the link popup is too large.
9) Multiline: another test case does not seem to work.
10) The performance of the POI List screen is not good. For the various kinds of list, could we enlarge the list to more than 500 items and test the performance? We may need to add a lazy-load mechanism; see the sketch after this list.
11) Tree: could we have a test that focuses on a specific tree node when the tree is shown? [b](Working on it)[/b]
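
For item 5, a common approach (a sketch, not our framework code) is a RelativeLayout that pins the button to the bottom edge; the XML equivalent is android:layout_alignParentBottom="true":

[code]
// Inside an Activity's onCreate(): pin a button to the bottom of the screen.
RelativeLayout root = new RelativeLayout(this);
Button ok = new Button(this);
ok.setText("OK");
RelativeLayout.LayoutParams lp = new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.FILL_PARENT,
        RelativeLayout.LayoutParams.WRAP_CONTENT);
lp.addRule(RelativeLayout.ALIGN_PARENT_BOTTOM);
root.addView(ok, lp);
setContentView(root);
[/code]

For item 10, a simple lazy-load shape could look like the following; listView and loadNextPage() are hypothetical (loadNextPage() would append the next page of items to the adapter and call notifyDataSetChanged()):

[code]
listView.setOnScrollListener(new AbsListView.OnScrollListener() {
    public void onScrollStateChanged(AbsListView view, int scrollState) { }

    public void onScroll(AbsListView view, int firstVisibleItem,
                         int visibleItemCount, int totalItemCount) {
        // Within five rows of the end: fetch the next page.
        if (firstVisibleItem + visibleItemCount >= totalItemCount - 5) {
            loadNextPage();
        }
    }
});
[/code]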