UVa 10334 Ray Through Glasses (Fibonacci & Big Integers)

This post analyzes the number of ways a light ray can pass through, or be reflected inside, two back-to-back panes of glass while changing direction n times, and derives an algorithm based on the Fibonacci recurrence, computed iteratively. It also covers the input format, the output requirements, and how to implement big-integer arithmetic to handle the large values involved.

10334 - Ray Through Glasses

Time limit: 3.000 seconds

http://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=1275

Suppose we put two panes of glass back-to-back. How many ways are there for light rays to pass through or be reflected after changing direction n times? The following figure shows the situations when the value of n is 0, 1 and 2.

Input 

The input is a set of lines, each containing an integer n, where 0 <= n <= 1000.

Output 

For every one of these integers, output a line containing the number of ways described above.

Sample Input 

0
1
2

Sample Output 

1
2
3

Consider a(n). If the ray reflects off the middle surface, it must then reflect once off the top surface, and the number of ways to finish is a(n-2); if it reflects off the bottom surface instead, the number of ways to finish is a(n-1).

So a(n) = a(n-1) + a(n-2).

But n can be as large as 1000, and since this Fibonacci-like sequence grows exponentially, big-integer arithmetic is required.


Complete code:

/*0.022s*/

#include <cstdio>

int F[1005][60];

int main(void)
{
	F[0][0] = 1, F[1][0] = 2;
	for (int i = 2; i <= 1000; ++ i)
	{
		for (int j = 0 ; j < 55 ; ++ j)
			F[i][j] = F[i - 1][j] + F[i - 2][j];
		for (int j = 0; j < 55; ++ j)
		{
			F[i][j + 1] += F[i][j] / 10000;///carry into the next cell
			F[i][j] %= 10000;///keep the low 4 digits
		}
	}
	int n;
	while (~scanf("%d", &n))
	{
		int end = 55;
		while (end > 0 && !F[n][end]) --end;
		printf("%d", F[n][end--]);
		while (end >= 0) printf("%04d", F[n][end--]);///print remaining cells zero-padded, most significant first
		printf("\n");
	}
	return 0;
}

/*0.582s*/

import java.io.*;
import java.util.*;
import java.math.*;

public class Main {
	static final int maxn = 1001;
	static Scanner cin = new Scanner(new BufferedInputStream(System.in));

	public static void main(String[] args) {
		BigInteger[] f = new BigInteger[maxn];
		f[0] = BigInteger.ONE;
		f[1] = BigInteger.valueOf(2);
		for (int i = 2; i < maxn; ++i)
			f[i] = f[i - 1].add(f[i - 2]);
		while (cin.hasNextInt())
			System.out.println(f[cin.nextInt()]);
	}
}

