Android: Checking the System Version at Runtime


Android provides the constants class Build, and its two most important members are the nested classes VERSION and VERSION_CODES.

VERSION describes the version of the currently running system, including the SDK version, which is exposed as the member SDK_INT.
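Besides SDK_INT, Build.VERSION also exposes the user-visible release string and the build codename. Here is a minimal sketch that logs all three (the class name and log tag are arbitrary, chosen just for this illustration):

```java
import android.os.Build;
import android.util.Log;

public final class VersionInfo {
    private static final String TAG = "VersionDemo"; // arbitrary tag for this example

    // Minimal sketch: log the current device's version information.
    public static void log() {
        Log.i(TAG, "SDK_INT  = " + Build.VERSION.SDK_INT);   // API level as an int, e.g. 17
        Log.i(TAG, "RELEASE  = " + Build.VERSION.RELEASE);   // user-visible version, e.g. "4.2"
        Log.i(TAG, "CODENAME = " + Build.VERSION.CODENAME);  // "REL" on official release builds
    }
}
```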

The SDK documentation describes VERSION_CODES as follows: "Enumeration of the currently known SDK version codes. These are the values that can be found in SDK_INT. Version numbers increment monotonically with each official platform release."

When developing our own applications, we typically use code of the following form to decide whether to run the new APIs or fall back to the old ones:

```java
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
    // Code path that uses the newer APIs
} else {
    // Code path that uses the older APIs
}
```
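As a concrete illustration of this pattern (my addition, not from the original article), consider the clipboard API: HONEYCOMB introduced android.content.ClipboardManager with setPrimaryClip(), while older releases only offer android.text.ClipboardManager and its since-deprecated setText(). A minimal sketch, assuming a valid Context is available:

```java
import android.content.ClipData;
import android.content.Context;
import android.os.Build;

// Illustrative sketch: copy text to the clipboard, using the ClipData API on
// HONEYCOMB (API 11) and newer, and the legacy API on older releases.
@SuppressWarnings("deprecation")
static void copyToClipboard(Context context, CharSequence text) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
        android.content.ClipboardManager clipboard = (android.content.ClipboardManager)
                context.getSystemService(Context.CLIPBOARD_SERVICE);
        clipboard.setPrimaryClip(ClipData.newPlainText("text", text));
    } else {
        android.text.ClipboardManager clipboard = (android.text.ClipboardManager)
                context.getSystemService(Context.CLIPBOARD_SERVICE);
        clipboard.setText(text);
    }
}
```

Note that the two branches cast to different ClipboardManager classes: android.content.ClipboardManager does not exist on pre-HONEYCOMB devices, so the legacy branch must use android.text.ClipboardManager.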

The version codes are as follows (the number in parentheses is each constant's integer value, i.e. the API level it denotes):

- `BASE` (1): October 2008. The original, first version of Android.
- `BASE_1_1` (2): February 2009. First Android update, officially called 1.1.
- `CUPCAKE` (3): May 2009. Android 1.5.
- `CUR_DEVELOPMENT` (10000): Magic version number for a current development build, which has not yet turned into an official release.
- `DONUT` (4): September 2009. Android 1.6.
- `ECLAIR` (5): November 2009. Android 2.0. Applications targeting this or a later release will get these new changes in behavior: the `Service.onStartCommand` function will return the new `START_STICKY` behavior instead of the old compatibility `START_STICKY_COMPATIBILITY`.
- `ECLAIR_0_1` (6): December 2009. Android 2.0.1.
- `ECLAIR_MR1` (7): January 2010. Android 2.1.
- `FROYO` (8): June 2010. Android 2.2.
- `GINGERBREAD` (9): November 2010. Android 2.3. Applications targeting this or a later release will get these new changes in behavior: the application's notification icons will be shown on the new dark status bar background, so they must be visible in this situation.
- `GINGERBREAD_MR1` (10): February 2011. Android 2.3.3.
- `HONEYCOMB` (11): February 2011. Android 3.0.
- `HONEYCOMB_MR1` (12): May 2011. Android 3.1.
- `HONEYCOMB_MR2` (13): June 2011. Android 3.2.
- `ICE_CREAM_SANDWICH` (14): October 2011. Android 4.0.
- `ICE_CREAM_SANDWICH_MR1` (15): December 2011. Android 4.0.3.
- `JELLY_BEAN` (16): June 2012. Android 4.1.
- `JELLY_BEAN_MR1` (17): Android 4.2: Moar jelly beans! Applications targeting this or a later release will get these new changes in behavior: Content Providers: the default value of `android:exported` is now `false`.
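Since all of these constants are plain, monotonically increasing ints, a small convenience wrapper can make version guards read more naturally. The following helper is hypothetical (my addition, not part of the Android SDK):

```java
import android.os.Build;

// Hypothetical helper around Build.VERSION.SDK_INT; not part of the Android SDK.
public final class ApiLevels {
    private ApiLevels() {}

    /** Returns true when the device is running the given API level or newer. */
    public static boolean isAtLeast(int versionCode) {
        return Build.VERSION.SDK_INT >= versionCode;
    }
}
```

For example, `ApiLevels.isAtLeast(Build.VERSION_CODES.JELLY_BEAN_MR1)` is equivalent to `Build.VERSION.SDK_INT >= 17`.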
