Supporting New Screen Sizes and Scales (Size Classes)

Apps linked against iOS 8 and later should be prepared to support the larger screen sizes of iPhone 6 and iPhone 6 Plus. On the iPhone 6 Plus, apps should also be prepared to support a new screen scale. In particular, apps that use OpenGL ES or Metal can also choose how to size their rendering layer (a CAEAGLLayer or CAMetalLayer) to get the best possible performance on the iPhone 6 Plus.

To let the system know that your app supports the iPhone 6 screen sizes, include a storyboard launch screen file in your app’s bundle. At runtime, the system looks for this file. If such a file is present, the system assumes that your app explicitly supports the iPhone 6 and iPhone 6 Plus and runs it at the screen’s full size. If no such file is present, the system reports a smaller screen size (either 320 by 480 points or 320 by 568 points) so that your app’s screen-based calculations remain correct, and the contents are then scaled to fit the larger screen.
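
For example, with a storyboard launch file named LaunchScreen.storyboard, the app’s Info.plist declares it under the UILaunchStoryboardName key. This is a minimal sketch; the file name is illustrative, and Xcode 6 project templates typically add this entry for you:

<key>UILaunchStoryboardName</key>
<string>LaunchScreen</string>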

For more information about specifying your app’s launch screen file, see Adding App Icons and a Launch Screen File in App Distribution Guide.

iOS 8 adds new features that make dealing with screen size and orientation much more versatile. It is easier than ever to create a single interface for your app that works well on both iPad and iPhone, adjusting to orientation changes and different screen sizes as needed. Using size classes, you can retrieve general information about the size of a device in its current orientation. You can use this information to make initial assumptions about which content should be displayed and how those interface elements are related to each other. Then, use Auto Layout to resize and reposition these elements to fit the actual size of the area provided. Xcode 6 uses size classes and Auto Layout to create storyboards that adapt automatically to size class changes and different screen sizes.

Traits Describe the Size Class and Scale of an Interface

Size classes are traits assigned to a user interface element, such as a screen or a view. There are two types of size classes in iOS 8: regular and compact. A regular size class denotes either a large amount of screen space, such as on an iPad, or a commonly adopted paradigm that provides the illusion of a large amount of screen space, such as scrolling on an iPhone. Every device is defined by a size class, both vertically and horizontally.

Figure 1 and Figure 2 show the native size classes for the iPad. With the amount of screen space available, the iPad has a regular size class in the vertical and horizontal directions in both portrait and landscape orientations.

Figure 1  iPad size classes in portrait
Figure 2  iPad size classes in landscape

The size classes for iPhones differ based on the kind of device and its orientation. In portrait, the screen has a compact size class horizontally and a regular size class vertically. This corresponds to the common usage paradigm of scrolling vertically for more information. When iPhones are in landscape, their size classes vary. Most iPhones have a compact size class both horizontally and vertically, as shown in Figure 3 and Figure 4. The iPhone 6 Plus has a screen large enough to support regular width in landscape mode, as shown in Figure 5.

Figure 3  iPhone size classes in portrait
Figure 4  iPhone size classes in landscape
Figure 5  iPhone 6 Plus size classes in landscape
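
As a brief sketch of how these traits surface at runtime (using the UITraitCollection API described below), a view controller can inspect its horizontal size class and choose a layout; the branching logic here is illustrative:

UIUserInterfaceSizeClass widthClass = self.traitCollection.horizontalSizeClass;
if (widthClass == UIUserInterfaceSizeClassRegular) {
    // Regular width (iPad, or iPhone 6 Plus in landscape):
    // there is room for a side-by-side arrangement.
} else {
    // Compact width (iPhone in portrait, most iPhones in landscape):
    // fall back to a single-column arrangement.
}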

You can change the size classes associated with a view. This flexibility is especially useful when a smaller view is contained within a larger view. Use the default size classes to arrange the user interface of the larger view, and arrange information in the subview based on whatever size classes are most appropriate for that subview, as shown in the sketch below.
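
For example, a container view controller can force a compact horizontal size class on an embedded child so that the child lays itself out for a narrow area. This is a minimal sketch; childController is a hypothetical child view controller already embedded in the container:

UITraitCollection *compactWidth = [UITraitCollection
    traitCollectionWithHorizontalSizeClass:UIUserInterfaceSizeClassCompact];
// Hypothetical child embedded in this container view controller.
[self setOverrideTraitCollection:compactWidth
          forChildViewController:childController];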

To support size classes, the following classes are new or modified:

  • The UITraitCollection class is used to describe a collection of traits assigned to an object. Traits specify the size class, display scale, and idiom for a particular object. Classes that support the UITraitEnvironment protocol (such as UIScreen, UIViewController, and UIView) own a trait collection. You can retrieve an object’s trait collection and perform actions when those traits change.

  • The UIImageAsset class is used to group like images together based on their traits. Combine similar images with slightly different traits into a single asset and then automatically retrieve the correct image for a particular trait collection from the image asset. The UIImage class has been modified to work with image assets.

  • Classes that support the UIAppearance protocol can customize an object’s appearance based on its trait collection.

  • The UIViewController class adds the ability to retrieve the trait collection for a child view controller. You can also lay out views for a new size by overriding the viewWillTransitionToSize:withTransitionCoordinator: method, as shown in the sketch after this list.
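
A minimal sketch of responding to both kinds of change follows. The overridden methods, traitCollectionDidChange: (from the UITraitEnvironment protocol) and viewWillTransitionToSize:withTransitionCoordinator:, are standard UIKit; the two helper methods they call are hypothetical placeholders for your own layout code:

- (void)traitCollectionDidChange:(UITraitCollection *)previousTraitCollection
{
    [super traitCollectionDidChange:previousTraitCollection];
    if (self.traitCollection.horizontalSizeClass != previousTraitCollection.horizontalSizeClass) {
        // Hypothetical helper: rebuild constraints for the new size class.
        [self updateConstraintsForHorizontalSizeClass:self.traitCollection.horizontalSizeClass];
    }
}

- (void)viewWillTransitionToSize:(CGSize)size
       withTransitionCoordinator:(id<UIViewControllerTransitionCoordinator>)coordinator
{
    [super viewWillTransitionToSize:size withTransitionCoordinator:coordinator];
    [coordinator animateAlongsideTransition:^(id<UIViewControllerTransitionCoordinatorContext> context) {
        // Hypothetical helper: animate the layout to the new size.
        [self layoutContentForSize:size];
    } completion:nil];
}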

Xcode 6 supports unified storyboards. A storyboard can add or remove views and layout constraints based on the size class that the view controller is displayed in. Rather than maintaining two separate (but similar) storyboards, you can make a single storyboard for multiple size classes. First, design your storyboard with a common interface and then customize it for different size classes, adapting the interface to the strengths of each form factor. Use Xcode 6 to test your app in a variety of size classes and screen sizes to make sure that your interface adapts to the new sizes properly.

Supporting New Screen Scales

The iPhone 6 Plus uses a new Retina HD display with a very high pixel density. On this device, the system creates a UIScreen object with a screen size of 414 x 736 points and a screen scale of 3.0 (1242 x 2208 pixels). After the contents of the screen are rendered, UIKit samples this content down to fit the actual screen dimensions of 1080 x 1920 pixels. To support this rendering behavior, include new artwork designed for the new 3x screen scale. In Xcode 6, asset catalogs can include images at 1x, 2x, and 3x sizes; simply add the new image assets, and iOS chooses the correct assets when running on an iPhone 6 Plus. The image-loading behavior in iOS also recognizes an @3x suffix.
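
For example, loading an image by its base name lets UIKit select the correct scale variant automatically; the asset name here is illustrative:

// With Header.png, Header@2x.png, and Header@3x.png in the bundle (or a
// single asset catalog entry with 1x/2x/3x slots), UIKit loads the @3x
// variant on an iPhone 6 Plus.
UIImage *header = [UIImage imageNamed:@"Header"];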

In a graphics app that uses Metal or OpenGL ES, content can be easily rendered at the precise dimensions of the display without requiring an additional sampling stage. This capability is critical in high-performance 3D apps that perform many calculations for each rendered pixel. In these apps, create buffers to render into that match the exact pixel resolution of the display.

The UIScreen class provides a new nativeScale property that holds the native scale factor for the screen. When the nativeScale property has the same value as the screen’s scale property, the rendered pixel dimensions are the same as the screen’s native pixel dimensions. When the two values differ, you can expect the contents to be sampled before they are displayed.
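
As a sketch, an app can compare the two properties to decide whether its rendered output will be resampled:

UIScreen *screen = [UIScreen mainScreen];
if (screen.nativeScale == screen.scale) {
    // Rendered pixels map 1:1 onto the display's native pixels.
} else {
    // Content rendered at scale is resampled to fit the display;
    // render at nativeScale to avoid the extra sampling stage.
}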

If you are writing an OpenGL ES app, a GLKView object automatically creates its renderbuffer objects based on the view’s size and the value of its contentScaleFactor property. After the view has been added to a window, set the view’s contentScaleFactor to the value stored in the screen’s nativeScale property, as shown in Listing 1.

Listing 1  Supporting native scale in a GLKView object

- (void)didMoveToWindow
{
    // self.window is nil when the view leaves a window; skip the update.
    if (!self.window) { return; }
    // Adopt the screen's native scale so renderbuffers match the
    // display's true pixel dimensions.
    self.contentScaleFactor = self.window.screen.nativeScale;
}

In a Metal app, your own view class should have code similar to the code found in Listing 1. In addition, whenever your view’s size changes, and before asking the Metal layer for a new drawable, calculate and set the Metal layer’s drawableSize property, as shown in Listing 2. (An OpenGL ES app that creates its own renderbuffers would use a similar calculation; see the sketch after Listing 2.)

Listing 2  Adjusting the size of a Metal layer to match the native screen scale

// Convert the view's size in points to pixels using the scale factor
// (set earlier from the screen's nativeScale), so Metal renders at the
// display's exact pixel dimensions.
CGSize drawableSize = self.bounds.size;
drawableSize.width  *= self.contentScaleFactor;
drawableSize.height *= self.contentScaleFactor;
metalLayer.drawableSize = drawableSize;
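
For an OpenGL ES app that creates its own renderbuffers, the analogous calculation might look like the following sketch; colorRenderbuffer is assumed to be an existing renderbuffer name, and the GL_RGBA8_OES color format is illustrative:

// Convert the view's size in points to pixels using the scale factor
// (set earlier from the screen's nativeScale).
GLsizei widthInPixels  = (GLsizei)(self.bounds.size.width  * self.contentScaleFactor);
GLsizei heightInPixels = (GLsizei)(self.bounds.size.height * self.contentScaleFactor);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, widthInPixels, heightInPixels);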

See MetalBasic3D for a working example. The Xcode templates for OpenGL ES and Metal also demonstrate these same techniques.

