Unity C++ Low-Level Rendering Plugin
1. Android Unity Render
Unity projects are primarily written in C#. For better versatility, Unity supports plugins written in other languages, i.e. pre-compiled code libraries whose functions can be called from C# scripts. In my case, the plugin languages I use most are C++ and Android Java. Recently, while developing a Unity AR app for mobile, I ran into a rendering question: which side should open the phone camera, and which side (Android or Unity) should render the camera image once it is open?
In the end, we came up with several solutions.
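As a quick illustration of how such a plugin is consumed, here is a minimal sketch of calling a C++ function from a Unity C# script; the library name nativeplugin and the function add are hypothetical examples, not part of this project.

using System.Runtime.InteropServices;
using UnityEngine;

public class NativePluginExample : MonoBehaviour
{
    // Hypothetical native library exposing: extern "C" int add(int a, int b);
    // On Android it would be packaged as libnativeplugin.so inside the APK.
    [DllImport("nativeplugin")]
    private static extern int add(int a, int b);

    void Start()
    {
        // The pre-compiled C++ function is called like a normal C# method.
        Debug.Log("add(2, 3) = " + add(2, 3));
    }
}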
1.1 Android handles the whole process
The camera is opened and its image is rendered entirely by an Android AAR, while the virtual objects are rendered by Unity on top with a transparent Unity background. The general steps are as follows:
- In Unity's Player Settings, enable "Preserve framebuffer alpha".
- In Unity, set the camera's "Clear Flags" to "Solid Color" and set its background "Color" to all zeros; note that the alpha channel must also be 0, i.e. fully transparent (a script version of this setting is sketched after the layout example below).
- In the Android project's layout ".xml" file, add android:windowIsTranslucent="true" to the view configuration that hosts the Unity window, and make sure the Unity view comes after the Android camera view so that it is drawn on top.
- The following is my example layout.
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal">

    <org.opencv.android.JavaCameraView
        android:id="@+id/opencvView"
        android:layout_width="700dp"
        android:layout_height="350dp"
        android:fillViewport="true"
        app:camera_id="any"
        app:show_fps="true"/>

    <LinearLayout
        android:id="@+id/unity_view"
        android:windowIsTranslucent="true"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="horizontal"
        android:visibility="visible">
    </LinearLayout>

</FrameLayout>
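The Unity camera settings from the list above can also be applied from a script instead of the Inspector. A minimal sketch, assuming the script is attached to the scene camera:

using UnityEngine;

// Makes the attached camera clear to a fully transparent solid color,
// so the Android camera preview behind the Unity view shows through.
[RequireComponent(typeof(Camera))]
public class TransparentUnityBackground : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.clearFlags = CameraClearFlags.SolidColor;
        // RGBA all zero: black with alpha 0 (fully transparent).
        cam.backgroundColor = new Color(0f, 0f, 0f, 0f);
    }
}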
1.2 Unity handles the whole process
This approach is very inefficient at runtime, but it is the easiest to develop, so I also tried it for rapid prototyping. In short:
- Unity opens the camera with the WebCamTexture class.
- The camera texture is copied into a Texture2D and then converted into a byte array.
- The byte array is passed to the C++ plugin function, which returns the result (a sketch of that call follows the code below).

The implementation looks roughly like this (and is very inefficient):
private Texture2D texture2D;
private WebCamTexture camTexture;

private byte[] GetCamImage()
{
    if (!camTexture.isPlaying)
    {
        return null;
    }
    // Copy the current camera frame into a Texture2D.
    Color32[] color32s = camTexture.GetPixels32();
    texture2D = new Texture2D(camTexture.width, camTexture.height, TextureFormat.RGB24, false);
    texture2D.SetPixels32(color32s);
    texture2D.Apply();
    // Return the raw RGB24 pixels as a byte array for the native plugin.
    return texture2D.GetRawTextureData();
}
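The byte array returned by GetCamImage can then be handed to the native side. Below is a minimal sketch of the C# half of that call; the library name nativeimage and the export ProcessImage are hypothetical placeholders, not the actual functions used in this project.

using System.Runtime.InteropServices;

public static class NativeImageBridge
{
    // Hypothetical C++ export, e.g.:
    //   extern "C" int ProcessImage(unsigned char* data, int width, int height);
    [DllImport("nativeimage")]
    private static extern int ProcessImage(byte[] data, int width, int height);

    public static int Process(byte[] rgb24, int width, int height)
    {
        // The default marshaller pins the managed array for the duration of the call.
        return ProcessImage(rgb24, width, height);
    }
}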