Codeforces 407B. Long Path (DP)

This post analyzes a maze problem with a peculiar walking rule and gives an efficient solution. By analyzing the walker's behavior, we derive a dynamic-programming recurrence that computes the answer correctly even though the raw step counts are astronomically large.

Problem: A maze has n + 1 rooms (1 ≤ n ≤ 1000). Each of rooms 1..n has two doors: the first leads to room i + 1, the second leads to room p_i (1 ≤ p_i ≤ i). A person starts in room 1 and wants to reach room n + 1. Every time he enters a room, he paints one mark on its ceiling and then counts the marks in that room: if the count is odd, he takes the second door (to p_i); if it is even, he takes the first door (to i + 1). How many moves does he make before reaching room n + 1? (The answer is reported modulo 10^9 + 7.)

Problem link: http://codeforces.com/contest/407/problem/B

——>> When he first enters a room, he marks it once; the total is 1, odd, so he goes back through the second door to p_i. The next time he reaches this room, he marks it again; the total is 2, even, so he moves forward. If he later falls back to this room, one more mark makes the count odd and he drops back to p_i again; on the next return the count is even and he moves forward once more. The key observation: every time he passes through a room and leaves it going forward, its mark count has increased by an even number — so, for parity purposes, the room looks exactly like it was never visited.

——>> Could we just simulate? In the worst case, when every p_i = 1, the number of steps grows roughly like 2^n, so step-by-step simulation is hopeless.
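For small n, though, the walk can be simulated directly, which makes a handy cross-check for the DP. A minimal sketch (`simulate` is a hypothetical helper, not from the original post; p is passed 1-indexed with a dummy p[0]):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Naive simulation for small n only: walk the rooms exactly as described,
// counting visits to decide which door to take. The step count grows
// roughly like 2^n when all p_i = 1, so this is only a sanity check.
long long simulate(const vector<int>& p) {  // p is 1-indexed: p[1..n]
    int n = (int)p.size() - 1;
    vector<long long> visits(n + 2, 0);
    long long steps = 0;
    int room = 1;
    while (room != n + 1) {
        visits[room]++;                                   // paint one mark
        room = (visits[room] % 2 == 1) ? p[room] : room + 1;
        steps++;
    }
    return steps;
}
```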

——>> Following the CF editorial: let d[i] be the number of steps needed to get from room 1 to room i. To reach room i + 1, he must leave room i through its first door. But when he first arrives at room i (after d[i] steps), the analysis above says he goes back to p_i (1 step), then has to walk from p_i all the way back to i (d[i] − d[p_i] steps, since every room below i is back in its "even", as-if-unvisited state), and finally steps from i to i + 1 (1 step). This gives the transition:

d[i+1] = d[i] + 1 + (d[i] − d[p_i]) + 1 = 2·d[i] + 2 − d[p_i]
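As a sanity check, the recurrence reproduces the problem's samples: for n = 2, p = {1, 2} we get d[2] = 2·0 + 2 − 0 = 2 and d[3] = 2·2 + 2 − 2 = 4. A sketch of the recurrence as a standalone function (`steps` is a hypothetical name; p is 1-indexed with a dummy p[0]):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// d[i+1] = 2*d[i] + 2 - d[p[i]]  (mod 1e9+7); adding MOD before the final
// % keeps the intermediate value non-negative despite the subtraction.
long long steps(const vector<int>& p) {  // p is 1-indexed: p[1..n]
    const long long MOD = 1000000007LL;
    int n = (int)p.size() - 1;
    vector<long long> d(n + 2, 0);       // d[1] = 0: room 1 is the start
    for (int i = 1; i <= n; i++)
        d[i + 1] = (2 * d[i] + 2 - d[p[i]] + MOD) % MOD;
    return d[n + 1];
}
```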

I still carelessly got one WA: the formula contains a "−", so after reducing modulo 10^9 + 7 the intermediate value can be negative. The fix is to add the modulus back before taking % again.
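The underlying pitfall: in C and C++ the % operator keeps the sign of the dividend, so a modular subtraction can come out negative. A minimal sketch of the standard fix (`sub_mod` is a hypothetical helper):

```cpp
#include <cassert>

// (a - b) % mod can be negative in C++ because % follows the dividend's
// sign. Adding mod once before the final % normalizes into [0, mod),
// assuming a and b are already reduced into [0, mod).
long long sub_mod(long long a, long long b, long long mod) {
    return (a - b + mod) % mod;
}
```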

#include <cstdio>

const int maxn = 1000 + 10;
const int mod = 1000000007;

int main()
{
    int n, p[maxn];
    long long d[maxn];
    while (scanf("%d", &n) == 1) {
        d[1] = 0;                    // room 1 is the start: 0 steps
        for (int i = 1; i <= n; i++) {
            scanf("%d", p + i);
            // d[i+1] = 2*d[i] + 2 - d[p[i]]; add mod before % to avoid negatives
            d[i+1] = (2 * d[i] + 2 - d[p[i]] + mod) % mod;
        }
        // %lld is the portable format; %I64d was only for the old Windows CF judge
        printf("%lld\n", d[n+1]);
    }
    return 0;
}

workers=workers, image_weights=opt.image_weights, quad=opt.quad, prefix=colorstr('train: ')) mlc = int(np.concatenate(dataset.labels, 0)[:, 0].max()) # max label class nb = len(train_loader) # number of batches assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. Possible class labels are 0-{nc - 1}' # Process 0 if RANK in [-1, 0]: val_loader = create_dataloader(val_path, imgsz, batch_size // WORLD_SIZE * 2, gs, single_cls, hyp=hyp, cache=None if noval else opt.cache, rect=True, rank=-1, workers=workers, pad=0.5, prefix=colorstr('val: '))[0] if not resume: labels = np.concatenate(dataset.labels, 0) # c = torch.tensor(labels[:, 0]) # classes # cf = torch.bincount(c.long(), minlength=nc) + 1. # frequency # model._initialize_biases(cf.to(device)) if plots: plot_labels(labels, names, save_dir) # Anchors if not opt.noautoanchor: check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz) model.half().float() # pre-reduce anchor precision callbacks.run('on_pretrain_routine_end') # DDP mode if cuda and RANK != -1: model = DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK) # Model parameters hyp['box'] *= 3. / nl # scale to layers hyp['cls'] *= nc / 80. * 3. / nl # scale to classes and layers hyp['obj'] *= (imgsz / 640) ** 2 * 3. 
/ nl # scale to image size and layers hyp['label_smoothing'] = opt.label_smoothing model.nc = nc # attach number of classes to model model.hyp = hyp # attach hyperparameters to model model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights model.names = names # Start training t0 = time.time() nw = max(round(hyp['warmup_epochs'] * nb), 1000) # number of warmup iterations, max(3 epochs, 1k iterations) # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training last_opt_step = -1 maps = np.zeros(nc) # mAP per class results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls) scheduler.last_epoch = start_epoch - 1 # do not move scaler = amp.GradScaler(enabled=cuda) stopper = EarlyStopping(patience=opt.patience) compute_loss = ComputeLoss(model) # init loss class LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n' f'Using {train_loader.num_workers} dataloader workers\n' f"Logging results to {colorstr('bold', save_dir)}\n" f'Starting training for {epochs} epochs...') for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------ model.train() # Update image weights (optional, single-GPU only) if opt.image_weights: cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx # Update mosaic border (optional) # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs) # dataset.mosaic_border = [b - imgsz, -b] # height, width borders mloss = torch.zeros(3, device=device) # mean losses if RANK != -1: train_loader.sampler.set_epoch(epoch) pbar = enumerate(train_loader) LOGGER.info(('\n' + '%10s' * 7) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'labels', 'img_size')) if RANK in [-1, 0]: pbar = tqdm(pbar, total=nb) # 
progress bar optimizer.zero_grad() for i, (imgs, targets, paths, _) in pbar: # batch ------------------------------------------------------------- ni = i + nb * epoch # number integrated batches (since train start) imgs = imgs.to(device, non_blocking=True).float() / 255.0 # uint8 to float32, 0-255 to 0.0-1.0 # Warmup if ni <= nw: xi = [0, nw] # x interp # compute_loss.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou) accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round()) for j, x in enumerate(optimizer.param_groups): # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0 x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)]) if 'momentum' in x: x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']]) # Multi-scale if opt.multi_scale: sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs # size sf = sz / max(imgs.shape[2:]) # scale factor if sf != 1: ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple) imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False) # Forward with amp.autocast(enabled=cuda): pred = model(imgs) # forward loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size if RANK != -1: loss *= WORLD_SIZE # gradient averaged between devices in DDP mode if opt.quad: loss *= 4. 
# Backward scaler.scale(loss).backward() # Optimize if ni - last_opt_step >= accumulate: scaler.step(optimizer) # optimizer.step scaler.update() optimizer.zero_grad() if ema: ema.update(model) last_opt_step = ni # Log if RANK in [-1, 0]: mloss = (mloss * i + loss_items) / (i + 1) # update mean losses mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G' # (GB) pbar.set_description(('%10s' * 2 + '%10.4g' * 5) % ( f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1])) callbacks.run('on_train_batch_end', ni, model, imgs, targets, paths, plots, opt.sync_bn) # end batch ------------------------------------------------------------------------------------------------ # Scheduler lr = [x['lr'] for x in optimizer.param_groups] # for loggers scheduler.step() if RANK in [-1, 0]: # mAP callbacks.run('on_train_epoch_end', epoch=epoch) ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights']) final_epoch = (epoch + 1 == epochs) or stopper.possible_stop if not noval or final_epoch: # Calculate mAP results, maps, _ = val.run(data_dict, batch_size=batch_size // WORLD_SIZE * 2, imgsz=imgsz, model=ema.ema, single_cls=single_cls, dataloader=val_loader, save_dir=save_dir, plots=False, callbacks=callbacks, compute_loss=compute_loss) # Update best mAP fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95] if fi > best_fitness: best_fitness = fi log_vals = list(mloss) + list(results) + lr callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi) # Save model if (not nosave) or (final_epoch and not evolve): # if save ckpt = {'epoch': epoch, 'best_fitness': best_fitness, 'model': deepcopy(de_parallel(model)).half(), 'ema': deepcopy(ema.ema).half(), 'updates': ema.updates, 'optimizer': optimizer.state_dict(), 'wandb_id': loggers.wandb.wandb_run.id if loggers.wandb else None} # Save last, best and delete torch.save(ckpt, last) if best_fitness == fi: 
torch.save(ckpt, best) if (epoch > 0) and (opt.save_period > 0) and (epoch % opt.save_period == 0): torch.save(ckpt, w / f'epoch{epoch}.pt') del ckpt callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi) # Stop Single-GPU if RANK == -1 and stopper(epoch=epoch, fitness=fi): break # Stop DDP TODO: known issues shttps://github.com/ultralytics/yolov5/pull/4576 # stop = stopper(epoch=epoch, fitness=fi) # if RANK == 0: # dist.broadcast_object_list([stop], 0) # broadcast 'stop' to all ranks # Stop DPP # with torch_distributed_zero_first(RANK): # if stop: # break # must break all DDP ranks # end epoch ---------------------------------------------------------------------------------------------------- # end training ----------------------------------------------------------------------------------------------------- if RANK in [-1, 0]: LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.') for f in last, best: if f.exists(): strip_optimizer(f) # strip optimizers if f is best: LOGGER.info(f'\nValidating {f}...') results, _, _ = val.run(data_dict, batch_size=batch_size // WORLD_SIZE * 2, imgsz=imgsz, model=attempt_load(f, device).half(), iou_thres=0.65 if is_coco else 0.60, # best pycocotools results at 0.65 single_cls=single_cls, dataloader=val_loader, save_dir=save_dir, save_json=is_coco, verbose=True, plots=True, callbacks=callbacks, compute_loss=compute_loss) # val best model with plots callbacks.run('on_train_end', last, best, plots, epoch) LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}") torch.cuda.empty_cache() return results def parse_opt(known=False): parser = argparse.ArgumentParser() parser.add_argument('--weights', type=str, default=ROOT / 'yolov5n.pt', help='initial weights path') parser.add_argument('--cfg', type=str, default=ROOT /'my_data/my_yolov5n.yaml', help='') parser.add_argument('--data', type=str, default=ROOT / 'my_data/my_coco128.yaml', help='dataset.yaml path') 
parser.add_argument('--hyp', type=str, default=ROOT / 'my_data/my_hyp.scratch.yaml', help='hyperparameters path') parser.add_argument('--epochs', type=int, default=170) parser.add_argument('--batch-size', type=int, default=8, help='total batch size for all GPUs') parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)') parser.add_argument('--rect', action='store_true', help='rectangular training') parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') parser.add_argument('--noval', action='store_true', help='only validate final epoch') parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check') parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations') parser.add_argument('--bucket', type=str, default='', help='gsutil bucket') parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"') parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training') parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%') parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class') parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer') parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode') parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers') parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name') parser.add_argument('--name', default='exp', help='save to project/name') parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') parser.add_argument('--quad', action='store_true', help='quad dataloader') parser.add_argument('--linear-lr', action='store_true', help='linear LR') parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon') parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)') parser.add_argument('--freeze', type=int, default=0, help='Number of layers to freeze. 
backbone=10, all=24') parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)') parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify') # Weights & Biases arguments parser.add_argument('--entity', default=None, help='W&B: Entity') parser.add_argument('--upload_dataset', action='store_true', help='W&B: Upload dataset as artifact table') parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval') parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use') opt = parser.parse_known_args()[0] if known else parser.parse_args() return opt def main(opt, callbacks=Callbacks()): # Checks set_logging(RANK) if RANK in [-1, 0]: print_args(FILE.stem, opt) check_git_status() check_requirements(exclude=['thop']) # Resume if opt.resume and not check_wandb_resume(opt) and not opt.evolve: # resume an interrupted run ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run() # specified or most recent path assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist' with open(Path(ckpt).parent.parent / 'opt.yaml', errors='ignore') as f: opt = argparse.Namespace(**yaml.safe_load(f)) # replace opt.cfg, opt.weights, opt.resume = '', ckpt, True # reinstate LOGGER.info(f'Resuming training from {ckpt}') else: opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \ check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified' if opt.evolve: opt.project = str(ROOT / 'runs/evolve') opt.exist_ok, opt.resume = opt.resume, False # pass resume to exist_ok and disable resume opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # DDP mode device = select_device(opt.device, batch_size=opt.batch_size) if 
LOCAL_RANK != -1: assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command' assert opt.batch_size % WORLD_SIZE == 0, '--batch-size must be multiple of CUDA device count' assert not opt.image_weights, '--image-weights argument is not compatible with DDP training' assert not opt.evolve, '--evolve argument is not compatible with DDP training' torch.cuda.set_device(LOCAL_RANK) device = torch.device('cuda', LOCAL_RANK) dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo") # Train if not opt.evolve: train(opt.hyp, opt, device, callbacks) if WORLD_SIZE > 1 and RANK == 0: LOGGER.info('Destroying process group... ') dist.destroy_process_group() # Evolve hyperparameters (optional) else: # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit) meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3) 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf) 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok) 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr 'box': (1, 0.02, 0.2), # box loss gain 'cls': (1, 0.2, 4.0), # cls loss gain 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels) 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight 'iou_t': (0, 0.1, 0.7), # IoU training threshold 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore) 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5) 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction) 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction) 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction) 'degrees': (1, 0.0, 45.0), # 
image rotation (+/- deg) 'translate': (1, 0.0, 0.9), # image translation (+/- fraction) 'scale': (1, 0.0, 0.9), # image scale (+/- gain) 'shear': (1, 0.0, 10.0), # image shear (+/- deg) 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001 'flipud': (1, 0.0, 1.0), # image flip up-down (probability) 'fliplr': (0, 0.0, 0.0), # image flip left-right (probability) 'mosaic': (1, 0.0, 1.0), # image mixup (probability) 'mixup': (1, 0.0, 1.0), # image mixup (probability) 'copy_paste': (1, 0.0, 1.0)} # segment copy-paste (probability) with open(opt.hyp, errors='ignore') as f: hyp = yaml.safe_load(f) # load hyps dict if 'anchors' not in hyp: # anchors commented in hyp.yaml hyp['anchors'] = 3 opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir) # only val/save final epoch # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv' if opt.bucket: os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {save_dir}') # download evolve.csv if exists for _ in range(opt.evolve): # generations to evolve if evolve_csv.exists(): # if evolve.csv exists: select best hyps and mutate # Select parent(s) parent = 'single' # parent selection method: 'single' or 'weighted' x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1) n = min(5, len(x)) # number of previous results to consider x = x[np.argsort(-fitness(x))][:n] # top n mutations w = fitness(x) - fitness(x).min() + 1E-6 # weights (sum > 0) if parent == 'single' or len(x) == 1: # x = x[random.randint(0, n - 1)] # random selection x = x[random.choices(range(n), weights=w)[0]] # weighted selection elif parent == 'weighted': x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination # Mutate mp, s = 0.8, 0.2 # mutation probability, sigma npr = np.random npr.seed(int(time.time())) g = np.array([meta[k][0] for k in hyp.keys()]) # gains 0-1 ng = len(meta) v = np.ones(ng) while all(v == 1): # 
mutate until a change occurs (prevent duplicates) v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0) for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300) hyp[k] = float(x[i + 7] * v[i]) # mutate # Constrain to limits for k, v in meta.items(): hyp[k] = max(hyp[k], v[1]) # lower limit hyp[k] = min(hyp[k], v[2]) # upper limit hyp[k] = round(hyp[k], 5) # significant digits # Train mutation results = train(hyp.copy(), opt, device, callbacks) # Write mutation results print_mutation(results, hyp.copy(), save_dir, opt.bucket) # Plot results plot_evolve(evolve_csv) print(f'Hyperparameter evolution finished\n' f"Results saved to {colorstr('bold', save_dir)}\n" f'Use best hyperparameters example: $ python train.py --hyp {evolve_yaml}') def run(**kwargs): # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt') opt = parse_opt(True) for k, v in kwargs.items(): setattr(opt, k, v) main(opt) if __name__ == "__main__": opt = parse_opt() main(opt) 代码是否已启用GPU,若未启用需要修改哪里
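As to the GPU question: no change is needed. `main()` calls `select_device(opt.device, batch_size=opt.batch_size)`, which resolves the `--device` flag: an empty default means "use CUDA if available, otherwise CPU", `--device cpu` forces CPU, and `--device 0` (or `0,1,...` under `torch.distributed.run`) picks specific GPUs. `train()` then moves the model with `.to(device)` and each batch with `imgs.to(device, ...)`. A minimal standalone sketch of that selection logic (a simplified stand-in, not YOLOv5's actual `select_device` helper):

import torch

def pick_device(device=''):
    # Simplified sketch: '' means auto (first GPU if visible, else CPU),
    # 'cpu' forces CPU, '0'/'1'/... selects a specific CUDA device.
    if str(device).lower() == 'cpu' or not torch.cuda.is_available():
        return torch.device('cpu')
    return torch.device(f'cuda:{device or 0}')

device = pick_device('')            # same effect as train.py's default --device ''
x = torch.zeros(2, 3).to(device)    # model weights and batches follow via .to(device)
print(device.type, x.device.type)

So if `torch.cuda.is_available()` is True in your environment, the script trains on GPU automatically; the progress bar's `gpu_mem` column being non-zero is a quick confirmation.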
'\n') # append metrics, val_loss if len(opt.name) and opt.bucket: os.system('gsutil cp %s gs://%s/results/results%s.txt' % (results_file, opt.bucket, opt.name)) # Log tags = ['train/box_loss', 'train/obj_loss', 'train/cls_loss', # train loss 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', 'val/box_loss', 'val/obj_loss', 'val/cls_loss', # val loss 'x/lr0', 'x/lr1', 'x/lr2'] # params for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags): if tb_writer: tb_writer.add_scalar(tag, x, epoch) # tensorboard if wandb_logger.wandb: wandb_logger.log({tag: x}) # W&B # Update best mAP fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95] if fi > best_fitness: best_fitness = fi wandb_logger.end_epoch(best_result=best_fitness == fi) # Save model if (not opt.nosave) or (final_epoch and not opt.evolve): # if save ckpt = {'epoch': epoch, 'best_fitness': best_fitness, 'training_results': results_file.read_text(), 'model': deepcopy(model.module if is_parallel(model) else model).half(), 'ema': deepcopy(ema.ema).half(), 'updates': ema.updates, 'optimizer': optimizer.state_dict(), 'wandb_id': wandb_logger.wandb_run.id if wandb_logger.wandb else None} # Save last, best and delete torch.save(ckpt, last) if best_fitness == fi: torch.save(ckpt, best) if wandb_logger.wandb: if ((epoch + 1) % opt.save_period == 0 and not final_epoch) and opt.save_period != -1: wandb_logger.log_model( last.parent, opt, epoch, fi, best_model=best_fitness == fi) del ckpt # end epoch ---------------------------------------------------------------------------------------------------- # end training if rank in [-1, 0]: # Plots if plots: plot_results(save_dir=save_dir) # save as results.png if wandb_logger.wandb: files = ['results.png', 'confusion_matrix.png', *[f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R')]] wandb_logger.log({"Results": [wandb_logger.wandb.Image(str(save_dir / f), caption=f) for f in files if (save_dir / 
f).exists()]}) # Test best.pt logger.info('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600)) if opt.data.endswith('coco.yaml') and nc == 80: # if COCO for m in (last, best) if best.exists() else (last): # speed, mAP tests results, _, _ = test.test(opt.data, batch_size=batch_size * 2, imgsz=imgsz_test, conf_thres=0.001, iou_thres=0.7, model=attempt_load(m, device).half(), single_cls=opt.single_cls, dataloader=testloader, save_dir=save_dir, save_json=True, plots=False, is_coco=is_coco) # Strip optimizers final = best if best.exists() else last # final model for f in last, best: if f.exists(): strip_optimizer(f) # strip optimizers if opt.bucket: os.system(f'gsutil cp {final} gs://{opt.bucket}/weights') # upload if wandb_logger.wandb and not opt.evolve: # Log the stripped model wandb_logger.wandb.log_artifact(str(final), type='model', name='run_' + wandb_logger.wandb_run.id + '_model', aliases=['last', 'best', 'stripped']) wandb_logger.finish_run() else: dist.destroy_process_group() torch.cuda.empty_cache() return results if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--weights', type=str, default='v5Lite-s.pt', help='initial weights path') parser.add_argument('--cfg', type=str, default='models/v5Lite-s.yaml', help='model.yaml path') parser.add_argument('--data', type=str, default='data/mydata.yaml', help='data.yaml path') parser.add_argument('--hyp', type=str, default='data/hyp.scratch.yaml', help='hyperparameters path') parser.add_argument('--epochs', type=int, default=300) parser.add_argument('--batch-size', type=int, default=3, help='total batch size for all GPUs') parser.add_argument('--img-size', nargs='+', type=int, default=[320, 320], help='[train, test] image sizes') parser.add_argument('--rect', action='store_true', help='rectangular training') parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') parser.add_argument('--nosave', 
action='store_true', help='only save final checkpoint') parser.add_argument('--notest', action='store_true', help='only test final epoch') parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check') parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters') parser.add_argument('--bucket', type=str, default='', help='gsutil bucket') parser.add_argument('--cache-images', action='store_true', help='cache images for faster training') parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training') parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%') parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class') parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer') parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode') parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify') parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers') parser.add_argument('--project', default='runs/train', help='save to project/name') parser.add_argument('--entity', default=None, help='W&B entity') parser.add_argument('--name', default='exp', help='save to project/name') parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') parser.add_argument('--quad', action='store_true', help='quad dataloader') parser.add_argument('--linear-lr', action='store_true', help='linear LR') parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon') parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table') 
parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B') parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch') parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used') opt = parser.parse_args() # Set DDP variables opt.world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1 opt.global_rank = int(os.environ['RANK']) if 'RANK' in os.environ else -1 set_logging(opt.global_rank) if opt.global_rank in [-1, 0]: check_git_status() check_requirements() # Resume wandb_run = check_wandb_resume(opt) if opt.resume and not wandb_run: # resume an interrupted run # 修改后的加载代码 ckpt = torch.load(ckpt, map_location='cpu', weights_only=False) # 添加 weights_only=False assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist' apriori = opt.global_rank, opt.local_rank with open(Path(ckpt).parent.parent / 'opt.yaml') as f: opt = argparse.Namespace(**yaml.load(f, Loader=yaml.SafeLoader)) # replace opt.cfg, opt.weights, opt.resume, opt.batch_size, opt.global_rank, opt.local_rank = '', ckpt, True, opt.total_batch_size, *apriori # reinstate logger.info('Resuming training from %s' % ckpt) else: # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml') opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp) # check files assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified' opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size))) # extend to 2 sizes (train, test) opt.name = 'evolve' if opt.evolve else opt.name opt.save_dir = increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok | opt.evolve) # increment run # DDP mode opt.total_batch_size = opt.batch_size device = select_device(opt.device, batch_size=opt.batch_size) if opt.local_rank != -1: assert torch.cuda.device_count() > 
opt.local_rank torch.cuda.set_device(opt.local_rank) device = torch.device('cuda', opt.local_rank) dist.init_process_group(backend='nccl', init_method='env://') # distributed backend assert opt.batch_size % opt.world_size == 0, '--batch-size must be multiple of CUDA device count' opt.batch_size = opt.total_batch_size // opt.world_size # Hyperparameters with open(opt.hyp) as f: hyp = yaml.load(f, Loader=yaml.SafeLoader) # load hyps # Train logger.info(opt) if not opt.evolve: tb_writer = None # init loggers if opt.global_rank in [-1, 0]: prefix = colorstr('tensorboard: ') logger.info(f"{prefix}Start with 'tensorboard --logdir {opt.project}', view at http://localhost:6006/") tb_writer = SummaryWriter(opt.save_dir) # Tensorboard train(hyp, opt, device, tb_writer) # Evolve hyperparameters (optional) else: # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit) meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3) 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf) 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok) 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr 'box': (1, 0.02, 0.2), # box loss gain 'cls': (1, 0.2, 4.0), # cls loss gain 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels) 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight 'iou_t': (0, 0.1, 0.7), # IoU training threshold 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore) 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5) 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction) 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction) 'hsv_v': (1, 0.0, 0.9), # image 
HSV-Value augmentation (fraction) 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg) 'translate': (1, 0.0, 0.9), # image translation (+/- fraction) 'scale': (1, 0.0, 0.9), # image scale (+/- gain) 'shear': (1, 0.0, 10.0), # image shear (+/- deg) 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001 'flipud': (1, 0.0, 1.0), # image flip up-down (probability) 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability) 'mosaic': (1, 0.0, 1.0), # image mixup (probability) 'mixup': (1, 0.0, 1.0)} # image mixup (probability) assert opt.local_rank == -1, 'DDP mode not implemented for --evolve' opt.notest, opt.nosave = True, True # only test/save final epoch # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices yaml_file = Path(opt.save_dir) / 'hyp_evolved.yaml' # save best result here if opt.bucket: os.system('gsutil cp gs://%s/evolve.txt .' % opt.bucket) # download evolve.txt if exists for _ in range(300): # generations to evolve if Path('evolve.txt').exists(): # if evolve.txt exists: select best hyps and mutate # Select parent(s) parent = 'single' # parent selection method: 'single' or 'weighted' x = np.loadtxt('evolve.txt', ndmin=2) n = min(5, len(x)) # number of previous results to consider x = x[np.argsort(-fitness(x))][:n] # top n mutations w = fitness(x) - fitness(x).min() # weights if parent == 'single' or len(x) == 1: # x = x[random.randint(0, n - 1)] # random selection x = x[random.choices(range(n), weights=w)[0]] # weighted selection elif parent == 'weighted': x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination # Mutate mp, s = 0.8, 0.2 # mutation probability, sigma npr = np.random npr.seed(int(time.time())) g = np.array([x[0] for x in meta.values()]) # gains 0-1 ng = len(meta) v = np.ones(ng) while all(v == 1): # mutate until a change occurs (prevent duplicates) v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0) for i, k in enumerate(hyp.keys()): # 
plt.hist(v.ravel(), 300) hyp[k] = float(x[i + 7] * v[i]) # mutate # Constrain to limits for k, v in meta.items(): hyp[k] = max(hyp[k], v[1]) # lower limit hyp[k] = min(hyp[k], v[2]) # upper limit hyp[k] = round(hyp[k], 5) # significant digits # Train mutation results = train(hyp.copy(), opt, device) # Write mutation results print_mutation(hyp.copy(), results, yaml_file, opt.bucket) # Plot results plot_evolution(yaml_file) print(f'Hyperparameter evolution complete. Best results saved as: {yaml_file}\n' f'Command to train a new model with these hyperparameters: $ python train.py --hyp {yaml_file}') 上述文件运行时显示”Traceback (most recent call last): File "D:\YOLOv5-Lite-1.4\YOLOv5-Lite-master\train.py", line 11, in <module> import ckpt File "D:\YOLOv5-Lite-1.4\YOLOv5-Lite-master\.venv\Lib\site-packages\ckpt\__init__.py", line 5, in <module> from .config import get_ckpt_dir, set_ckpt_dir File "D:\YOLOv5-Lite-1.4\YOLOv5-Lite-master\.venv\Lib\site-packages\ckpt\config.py", line 81, in <module> set_ckpt_dir() ~~~~~~~~~~~~^^ File "D:\YOLOv5-Lite-1.4\YOLOv5-Lite-master\.venv\Lib\site-packages\ckpt\config.py", line 53, in set_ckpt_dir ckpt_dir = resolve_ckpt_dir(ckpt_dir) File "D:\YOLOv5-Lite-1.4\YOLOv5-Lite-master\.venv\Lib\site-packages\ckpt\config.py", line 40, in resolve_ckpt_dir raise Exception("Could not find ckpt-directory") Exception: Could not find ckpt-directory“给出完整详细的解决方案,给出修改后的完整的可运行的代码,给出可运行的代码构成,给出具体修改的代码位置行数
05-12