With the back-end database work for this project finished, I turned to optimizing the front-end interface.
I. First, a look at the results of the optimization:
Before optimization:
After optimization:
II. Background
As one of the core features of our English-learning app, the AI voice-conversation module is responsible for helping students improve their spoken English. During recent development we found that, although the existing functionality meets the basic requirements, there was still considerable room for improvement in user experience and visual polish. I therefore led a comprehensive optimization of this module.
III. Optimization Goals
- Improve visual appeal: use a modern design language to make the interface more attractive and give users a fresh first impression.
- Improve the interaction experience: make voice interaction feel more natural and fluid.
- Strengthen feedback: let users see the current system state more clearly.
- Preserve functional completeness: make sure none of the core features are affected.
- Improve device accessibility: adapt the layout to different screen sizes; we added media queries so the module works on phones, tablets, and desktops.
IV. Specific Improvements
1. Visual redesign of the interface
Problem analysis: the original interface was an early draft by a teammate that only covered the most basic layout. It lacked a professional design feel, used a flat color scheme, and had little visual hierarchy; in short, it did not look good.
Solution:
- Adopt a card-based design language with rounded corners and soft drop shadows.
- Introduce a blue-purple gradient theme color for a modern, tech-oriented feel.
- Add sender labels to the message bubbles: user messages on the right, AI messages on the left (see the label styles after the snippet below).
- Add decorative background elements for a sense of depth.
<div class="message-bubble user-message">
{{ msg.content }}
</div>
.user-message {
background: linear-gradient(135deg, #6366f1 0%, #8b5cf6 100%);
color: white;
align-self: flex-end;
border-bottom-right-radius: 5px;
margin-left: 15%;
}
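The sender labels mentioned above are implemented with ::before pseudo-elements attached to each bubble; the relevant rules from the full stylesheet in section V are:
.user-message::before {
  content: "你"; /* "You" label shown above the user's bubble */
  position: absolute;
  top: -25px;
  right: 0;
  font-size: 0.9rem;
  color: #6b7280;
}
.ai-message::before {
  content: "AI助手"; /* "AI assistant" label shown above the AI's bubble */
  position: absolute;
  top: -25px;
  left: 0;
  font-size: 0.9rem;
  color: #6b7280;
}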
2. Interaction improvements
Problem analysis: the original record button looked generic, its state changes were hard to notice, and it gave little visual feedback.
Solution:
- Redesign the record button as a large rounded button.
- Add a pulse animation to indicate the recording state.
- Lift the button slightly on hover (see the hover rule after the snippet below).
<button @click="toggleRecording" :class="['record-button', isRecording ? 'recording' : '']">
<i :class="isRecording ? 'fas fa-microphone-alt' : 'fas fa-microphone'"></i>
{{ isRecording ? '正在录音...' : '按住说话' }}
</button>
@keyframes pulse {
0% { box-shadow: 0 0 0 0 rgba(239, 68, 68, 0.6); }
70% { box-shadow: 0 0 0 15px rgba(239, 68, 68, 0); }
100% { box-shadow: 0 0 0 0 rgba(239, 68, 68, 0); }
}
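The lift-on-hover effect is a small transform plus a deeper shadow; the corresponding rule from the full stylesheet:
.record-button:hover {
  transform: translateY(-3px); /* float the button up slightly */
  box-shadow: 0 8px 20px rgba(99, 102, 241, 0.4); /* deepen the shadow to match */
}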
3. Message system enhancements
Problem analysis: messages were displayed plainly, new messages appeared abruptly, and the chat history was not managed.
Solution:
- Add a fade-in animation for incoming messages (keyframes shown after the snippet below).
- Automatically scroll the chat area to the bottom when messages change.
- Cap the chat history so only the most recent rounds of conversation are kept (MAX_HISTORY_LENGTH rounds in the code).
- Visually distinguish user messages from AI messages.
watch(messages, () => {
nextTick(() => {
const chatArea = document.querySelector('.chat-area');
if (chatArea) chatArea.scrollTop = chatArea.scrollHeight;
});
if (messages.value.length > MAX_HISTORY_LENGTH * 2) {
messages.value = messages.value.slice(-MAX_HISTORY_LENGTH * 2);
}
}, { deep: true });
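The fade-in effect is a short CSS animation applied to every message bubble:
.message-bubble {
  animation: fadeIn 0.3s ease; /* each new bubble fades in and slides up */
}
@keyframes fadeIn {
  from { opacity: 0; transform: translateY(10px); }
  to { opacity: 1; transform: translateY(0); }
}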
4. Status feedback
Problem analysis: the system gave little feedback about its state, so users had trouble knowing what was happening.
Solution:
- Add a flashing-dot loading indicator (styles sketched after the snippet below).
- Improve the display of the recognized speech text.
- Improve the visual treatment of error messages.
- Show an explicit "AI is thinking" state while waiting for a reply.
<div v-if="isLoadingAIResponse" class="loading-indicator">
<div class="dot-flashing"></div>
<span>AI正在思考中...</span>
</div>
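The flashing-dot indicator is pure CSS: a single element plus its ::before/::after pseudo-elements, each running the same keyframes with a staggered delay. An abridged version of the rules from section V:
.dot-flashing {
  position: relative;
  width: 10px;
  height: 10px;
  border-radius: 50%;
  background-color: #6366f1;
  animation: dotFlashing 1s infinite linear alternate;
  animation-delay: 0.5s; /* middle dot lights up second */
}
.dot-flashing::before,
.dot-flashing::after {
  content: '';
  position: absolute;
  top: 0;
  width: 10px;
  height: 10px;
  border-radius: 50%;
  background-color: #6366f1;
  animation: dotFlashing 1s infinite alternate;
}
.dot-flashing::before { left: -15px; animation-delay: 0s; } /* left dot first */
.dot-flashing::after { left: 15px; animation-delay: 1s; } /* right dot last */
@keyframes dotFlashing {
  0% { background-color: #6366f1; }
  50%, 100% { background-color: rgba(99, 102, 241, 0.2); }
}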
5. Responsive design
Problem analysis: the original interface did not display well on small screens.
Solution:
- Reduce the maximum width of message bubbles on small screens.
- Adjust font sizes and spacing.
- Re-lay out the input area.
- Restyle the scrollbar.
- Most importantly, rely on media queries; the 600 px breakpoint is shown below, and a 900 px breakpoint follows it.
@media (max-width: 600px) {
.speech-container {
padding: 20px 15px;
height: 550px;
}
.message-bubble {
max-width: 90%;
padding: 12px 15px;
font-size: 1rem;
}
}
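A second breakpoint at 900 px (also in the full source below) handles tablet-sized screens by dropping the fixed desktop width:
@media (max-width: 900px) {
  .speech-container {
    width: 95%; /* fluid width instead of the fixed 900px desktop layout */
    height: 600px;
  }
  .message-bubble {
    max-width: 85%;
  }
}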
V. Full Source
<template>
<div class="speech-container">
<div class="decoration dec-1"></div>
<div class="decoration dec-2"></div>
<div class="header">
<h2>AI 语音对话助手</h2>
<p>与AI进行自然对话,提升您的英语口语能力</p>
</div>
<div class="chat-area">
<div v-for="(msg, index) in messages" :key="index"
:class="['message-bubble', msg.isUser ? 'user-message' : 'ai-message']">
{{ msg.content }}
</div>
<div v-if="isLoadingAIResponse" class="loading-indicator">
<div class="dot-flashing"></div>
<span>AI正在思考中...</span>
</div>
</div>
<div class="input-section">
<button @click="toggleRecording" :class="['record-button', isRecording ? 'recording' : '']">
<i :class="isRecording ? 'fas fa-microphone-alt' : 'fas fa-microphone'"></i>
{{ isRecording ? '正在录音...' : '按住说话' }}
</button>
<div v-if="recognizedText" class="recognized-text">
<i class="fas fa-comment-alt"></i>
<span>识别内容: {{ recognizedText }}</span>
</div>
</div>
</div>
</template>
<script setup lang="ts">
import { ref, watch, nextTick } from 'vue';
import * as SpeechSDK from 'microsoft-cognitiveservices-speech-sdk';
interface ChatMessage {
content: string;
isUser: boolean;
}
const messages = ref<ChatMessage[]>([
{ content: '你好!我是您的英语学习助手,让我们开始对话吧!', isUser: false }
]);
const isRecording = ref(false);
const recognizedText = ref('');
const isLoadingAIResponse = ref(false);
const MAX_HISTORY_LENGTH = 8;
// Auto-scroll the chat area to the bottom
const scrollToBottom = () => {
nextTick(() => {
const chatArea = document.querySelector('.chat-area');
if (chatArea) chatArea.scrollTop = chatArea.scrollHeight;
});
};
// After messages change: scroll to the bottom and trim the history
watch(messages, () => {
scrollToBottom();
if (messages.value.length > MAX_HISTORY_LENGTH * 2) {
messages.value = messages.value.slice(-MAX_HISTORY_LENGTH * 2);
}
}, { deep: true });
const speechKey = "YOUR_AZURE_SPEECH_KEY"; // placeholder; never hard-code a real subscription key in front-end code
const speechRegion = "eastasia";
let recognizer: SpeechSDK.SpeechRecognizer | null = null;
const speechConfig = SpeechSDK.SpeechConfig.fromSubscription(speechKey, speechRegion);
speechConfig.speechRecognitionLanguage = "en-US";
speechConfig.speechSynthesisVoiceName = "en-US-GuyNeural";
const audioConfig = SpeechSDK.AudioConfig.fromDefaultMicrophoneInput();
const initializeRecognizer = () => {
if (!recognizer) {
recognizer = new SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);
recognizer.recognized = async (s, e) => {
if (e.result.reason === SpeechSDK.ResultReason.RecognizedSpeech) {
const userText = e.result.text;
console.log(`RECOGNIZED: Text=${userText}`);
recognizedText.value = userText;
if (userText.trim()) {
messages.value.push({ content: userText, isUser: true });
await sendToAIAndSpeak(userText);
}
} else if (e.result.reason === SpeechSDK.ResultReason.NoMatch) {
console.log("NOMATCH: Speech could not be recognized.");
recognizedText.value = '无法识别语音';
}
stopRecordingInternal();
};
recognizer.canceled = (s, e) => {
console.log(`CANCELED: Reason=${e.reason}`);
if (e.reason === SpeechSDK.CancellationReason.Error) {
console.error(`CANCELED: ErrorCode=${e.errorCode}`);
console.error(`CANCELED: ErrorDetails=${e.errorDetails}`);
recognizedText.value = '识别出错,请检查麦克风权限和网络';
}
stopRecordingInternal();
};
recognizer.sessionStopped = (s, e) => {
console.log("\n Session stopped event.");
stopRecordingInternal();
};
}
};
const startRecording = () => {
if (isRecording.value) return;
initializeRecognizer();
if (recognizer) {
recognizedText.value = '';
recognizer.startContinuousRecognitionAsync(
() => {
console.log("Recognition started.");
isRecording.value = true;
},
(err) => {
console.error(`ERROR starting recognition: ${err}`);
isRecording.value = false;
}
);
}
};
const stopRecordingInternal = () => {
if (recognizer && isRecording.value) {
recognizer.stopContinuousRecognitionAsync(
() => {
console.log("Recognition stopped.");
},
(err) => {
console.error(`ERROR stopping recognition: ${err}`);
}
);
}
isRecording.value = false;
};
const toggleRecording = () => {
if (isRecording.value) {
stopRecordingInternal();
} else {
startRecording();
}
}
const sendToAIAndSpeak = async (text: string) => {
isLoadingAIResponse.value = true;
try {
const chatHistory = messages.value.map(msg => ({
role: msg.isUser ? "user" : "assistant",
content: msg.content
}));
const response = await fetch('/api/ai/chat/speech', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ chatHistory })
});
if (!response.ok) throw new Error('Network response was not ok.');
const data = await response.json();
const aiReply = data.reply;
messages.value.push({ content: aiReply, isUser: false });
await speakText(aiReply);
} catch (error) {
console.error('Error communicating with AI or speaking:', error);
messages.value.push({ content: '抱歉,AI回复出错。', isUser: false });
} finally {
isLoadingAIResponse.value = false;
}
};
const speakText = async (textToSpeak: string) => {
if (!textToSpeak.trim()) return;
const synthSpeechConfig = SpeechSDK.SpeechConfig.fromSubscription(speechKey, speechRegion);
synthSpeechConfig.speechSynthesisVoiceName = "en-US-JennyNeural";
const audioConfigSynth = SpeechSDK.AudioConfig.fromDefaultSpeakerOutput();
const synthesizer = new SpeechSDK.SpeechSynthesizer(synthSpeechConfig, audioConfigSynth);
return new Promise<void>((resolve, reject) => {
synthesizer.speakTextAsync(
textToSpeak,
result => {
if (result.reason === SpeechSDK.ResultReason.SynthesizingAudioCompleted) {
console.log("AI Speech synthesized to speaker.");
} else {
console.error("AI Speech synthesis canceled, " + result.errorDetails);
}
synthesizer.close();
resolve();
},
error => {
console.error("Error synthesizing AI speech: " + error);
synthesizer.close();
reject(error);
}
);
});
};
</script>
<style scoped>
.speech-container {
display: flex;
flex-direction: column;
height: 700px;
width: 900px;
margin: 0 auto;
background-color: white;
border-radius: 20px;
box-shadow: 0 15px 40px rgba(99, 102, 241, 0.15);
padding: 30px;
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
position: relative;
overflow: hidden;
}
.header {
text-align: center;
padding-bottom: 20px;
border-bottom: 1px solid #eef2ff;
margin-bottom: 20px;
position: relative;
}
.header h2 {
color: #6366f1;
font-size: 2.2rem;
margin-bottom: 10px;
font-weight: 700;
background: linear-gradient(135deg, #6366f1 0%, #8b5cf6 100%);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
.header p {
color: #6b7280;
font-size: 1.1rem;
max-width: 600px;
margin: 0 auto;
}
.chat-area {
flex-grow: 1;
overflow-y: auto;
padding: 20px 15px;
background-color: #f9fafc;
border-radius: 15px;
margin-bottom: 25px;
display: flex;
flex-direction: column;
gap: 20px;
position: relative;
}
.message-bubble {
padding: 15px 20px;
border-radius: 20px;
max-width: 80%;
word-wrap: break-word;
line-height: 1.6;
font-size: 1.05rem;
position: relative;
animation: fadeIn 0.3s ease;
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.05);
}
@keyframes fadeIn {
from { opacity: 0; transform: translateY(10px); }
to { opacity: 1; transform: translateY(0); }
}
.user-message {
background: linear-gradient(135deg, #6366f1 0%, #8b5cf6 100%);
color: white;
align-self: flex-end;
border-bottom-right-radius: 5px;
margin-left: 15%;
}
.user-message::before {
content: "你";
position: absolute;
top: -25px;
right: 0;
font-size: 0.9rem;
color: #6b7280;
}
.ai-message {
background: white;
color: #333;
align-self: flex-start;
border-bottom-left-radius: 5px;
border: 1px solid #eef2ff;
margin-right: 15%;
}
.ai-message::before {
content: "AI助手";
position: absolute;
top: -25px;
left: 0;
font-size: 0.9rem;
color: #6b7280;
}
.input-section {
display: flex;
flex-direction: column;
align-items: center;
padding-top: 15px;
border-top: 1px solid #eef2ff;
}
.record-button {
background: linear-gradient(135deg, #6366f1 0%, #8b5cf6 100%);
color: white;
border: none;
padding: 15px 40px;
border-radius: 50px;
cursor: pointer;
font-size: 1.1rem;
font-weight: 600;
transition: all 0.3s ease;
margin-bottom: 15px;
box-shadow: 0 6px 15px rgba(99, 102, 241, 0.3);
display: flex;
align-items: center;
gap: 10px;
}
.record-button:hover {
transform: translateY(-3px);
box-shadow: 0 8px 20px rgba(99, 102, 241, 0.4);
}
.record-button.recording {
background: linear-gradient(135deg, #ef4444 0%, #f87171 100%);
animation: pulse 1.5s infinite;
}
@keyframes pulse {
0% {
box-shadow: 0 0 0 0 rgba(239, 68, 68, 0.6);
}
70% {
box-shadow: 0 0 0 15px rgba(239, 68, 68, 0);
}
100% {
box-shadow: 0 0 0 0 rgba(239, 68, 68, 0);
}
}
.recognized-text {
margin-top: 15px;
color: #555;
font-size: 1rem;
min-height: 20px;
text-align: center;
background: #f0f4ff;
padding: 12px 25px;
border-radius: 30px;
width: 100%;
max-width: 600px;
display: flex;
align-items: center;
gap: 10px;
}
.recognized-text i {
color: #6366f1;
}
.loading-indicator {
display: flex;
justify-content: center;
padding: 10px 0;
gap: 10px;
color: #6366f1;
}
.dot-flashing {
position: relative;
width: 10px;
height: 10px;
border-radius: 50%;
background-color: #6366f1;
color: #6366f1;
animation: dotFlashing 1s infinite linear alternate;
animation-delay: 0.5s;
}
.dot-flashing::before, .dot-flashing::after {
content: '';
display: inline-block;
position: absolute;
top: 0;
width: 10px;
height: 10px;
border-radius: 50%;
background-color: #6366f1;
color: #6366f1;
}
.dot-flashing::before {
left: -15px;
animation: dotFlashing 1s infinite alternate;
animation-delay: 0s;
}
.dot-flashing::after {
left: 15px;
animation: dotFlashing 1s infinite alternate;
animation-delay: 1s;
}
@keyframes dotFlashing {
0% { background-color: #6366f1; }
50%, 100% { background-color: rgba(99, 102, 241, 0.2); }
}
/* Scrollbar styling */
.chat-area::-webkit-scrollbar {
width: 8px;
}
.chat-area::-webkit-scrollbar-track {
background: #f0f4f8;
border-radius: 8px;
}
.chat-area::-webkit-scrollbar-thumb {
background: linear-gradient(135deg, #6366f1 0%, #8b5cf6 100%);
border-radius: 8px;
}
.chat-area::-webkit-scrollbar-thumb:hover {
background: #6366f1;
}
.decoration {
position: absolute;
border-radius: 50%;
opacity: 0.1;
z-index: 0;
}
.dec-1 {
width: 300px;
height: 300px;
background: #6366f1;
top: -150px;
right: -150px;
}
.dec-2 {
width: 200px;
height: 200px;
background: #8b5cf6;
bottom: -100px;
left: -100px;
}
@media (max-width: 900px) {
.speech-container {
width: 95%;
height: 600px;
}
.message-bubble {
max-width: 85%;
}
}
@media (max-width: 600px) {
.speech-container {
padding: 20px 15px;
height: 550px;
}
.header h2 {
font-size: 1.8rem;
}
.record-button {
padding: 12px 30px;
font-size: 1rem;
}
.message-bubble {
max-width: 90%;
padding: 12px 15px;
font-size: 1rem;
}
}
</style>
VI. Takeaways
This round of interface work drove home that a good voice-interaction UI needs more than solid technology behind it; it has to be designed from the user's point of view to deliver a natural, fluid conversational experience. We will keep iterating on this module to give users a better English-learning tool.