BitmapFactory.decodeStream Returns null on the Second Call

This post looks at how to read an image from the gallery and turn it into a Bitmap efficiently under limited memory: the two-pass decoding flow, why mark and reset fail to make the stream reusable, and how converting the input stream into a byte array keeps the image data from getting lost.

I want to load an image from the gallery into a Bitmap. One of the steps is decoding an InputStream into a Bitmap, which is what BitmapFactory.decodeStream is for.

Calling decodeStream once works fine.

But I don't want to decode it blindly, since the image might be too large. I want to read just the dimensions first, and then decode the actual Bitmap with an appropriate sample size. That is the standard approach, and it means calling decodeStream twice.

And here is the problem:

The second call returns null.

Why? Because the first call read the InputStream all the way to the end, so of course there is no data left the second time around.

What can we do?

One idea is to rewind the stream and read it again, using mark and reset.

mark records the position you will want to come back to, and reset jumps back to that marked position. In my case I would mark at the very beginning and call reset after the first decode.
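A minimal sketch of that idea, assuming the stream is wrapped in a BufferedInputStream (plain InputStreams generally do not support mark/reset) and that calculateInSampleSize is the usual power-of-two helper shown later in this post:

public static Bitmap decodeWithMarkReset(InputStream inStream,
        int reqWidth, int reqHeight) throws IOException {
    // BufferedInputStream supports mark/reset; the raw stream usually does not.
    BufferedInputStream bis = new BufferedInputStream(inStream);
    bis.mark(10 * 1024 * 1024); // placeholder readlimit -- more on this parameter below

    // First pass: read only the image bounds, no pixel memory is allocated.
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeStream(bis, null, options);

    // Rewind to the mark so the stream can be decoded again.
    bis.reset();

    // Second pass: decode the down-sampled bitmap.
    options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);
    options.inJustDecodeBounds = false;
    return BitmapFactory.decodeStream(bis, null, options);
}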

But...

/**
 * Sets a mark position in this InputStream. The parameter {@code readlimit}
 * indicates how many bytes can be read before the mark is invalidated.
 * Sending {@code reset()} will reposition the stream back to the marked
 * position provided {@code readLimit} has not been surpassed.
 * <p>
 * This default implementation does nothing and concrete subclasses must
 * provide their own implementation.
 *
 * @param readlimit
 *            the number of bytes that can be read from this stream before
 *            the mark is invalidated.
 * @see #markSupported()
 * @see #reset()
 */
public void mark(int readlimit) {
    /* empty */
}

mark takes a readlimit parameter: if you read more than readlimit bytes past the marked position, the mark is invalidated. Are you kidding me?!

My guess is that once mark is called, the system keeps an extra buffer internally to record what gets read, so it can replay it on reset.

Never mind the overhead for now; the real problem is, how am I supposed to know how big this InputStream is ahead of time? It's an image file!
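One sketch worth noting for the gallery case specifically, assuming the image is identified by a content Uri: rather than rewinding a single stream, just open a fresh InputStream for each pass, which sidesteps the readlimit question entirely. decodeTwice is an illustrative name, and calculateInSampleSize is again the helper shown later:

private Bitmap decodeTwice(ContentResolver resolver, Uri imageUri,
        int reqWidth, int reqHeight) throws IOException {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;            // first pass: bounds only
    try (InputStream in = resolver.openInputStream(imageUri)) {
        BitmapFactory.decodeStream(in, null, options);
    }

    options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);
    options.inJustDecodeBounds = false;           // second pass: real decode, down-sampled
    try (InputStream in = resolver.openInputStream(imageUri)) {  // fresh stream
        return BitmapFactory.decodeStream(in, null, options);
    }
}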


I haven't solved the mark/reset problem itself for now, so instead I decode the image another way: read the stream into a byte array first, and then decode from that. This way the data can't possibly get lost.

public static byte[] readStream(InputStream inStream) throws Exception {
    // Copy the whole stream into memory, 1 KB at a time.
    byte[] buffer = new byte[1024];
    int len = -1;
    ByteArrayOutputStream outStream = new ByteArrayOutputStream();
    while ((len = inStream.read(buffer)) != -1) {
        outStream.write(buffer, 0, len);
    }
    byte[] data = outStream.toByteArray();
    outStream.close();
    inStream.close();
    return data;
}

public static Bitmap decodeBitmapFromBytes(byte[] ib, int reqWidth,
        int reqHeight) {
    // First pass: set inJustDecodeBounds to true to read only the image dimensions.
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeByteArray(ib, 0, ib.length, options);
    // Use the calculateInSampleSize helper to compute the sampling ratio.
    options.inSampleSize = calculateInSampleSize(options, reqWidth,
            reqHeight);
    // Second pass: decode the bitmap with the computed inSampleSize.
    options.inJustDecodeBounds = false;
    Bitmap bm = BitmapFactory.decodeByteArray(ib, 0, ib.length, options);
    return bm;
}
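
calculateInSampleSize is not shown above; for completeness, here is a sketch of the usual power-of-two helper, essentially the pattern from the Android documentation:

public static int calculateInSampleSize(BitmapFactory.Options options,
        int reqWidth, int reqHeight) {
    // Raw dimensions of the image, filled in by the bounds-only decode.
    final int height = options.outHeight;
    final int width = options.outWidth;
    int inSampleSize = 1;

    if (height > reqHeight || width > reqWidth) {
        final int halfHeight = height / 2;
        final int halfWidth = width / 2;
        // Largest power of 2 that keeps both dimensions at or above the requested size.
        while ((halfHeight / inSampleSize) >= reqHeight
                && (halfWidth / inSampleSize) >= reqWidth) {
            inSampleSize *= 2;
        }
    }
    return inSampleSize;
}

With these pieces, loading from the gallery becomes a call to readStream(getContentResolver().openInputStream(uri)) followed by decodeBitmapFromBytes(data, reqWidth, reqHeight).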