HTTP Message Handlers / Delegating Handlers

This article introduces message handlers in ASP.NET Web API and how to use them: how to create custom message handlers that process requests and responses, and how to implement features such as X-HTTP-Method-Override.

http://www.strathweb.com/2014/03/per-request-tracing-asp-net-web-api/


Typically, a series of message handlers are chained together. The first handler receives an HTTP request, does some processing, and gives the request to the next handler. At some point, the response is created and goes back up the chain. Handlers chained together in this way are called delegating handlers.

Server-Side Message Handlers

On the server side, the Web API pipeline uses some built-in message handlers:

  • HttpServer gets the request from the host.
  • HttpRoutingDispatcher dispatches the request based on the route.
  • HttpControllerDispatcher sends the request to a Web API controller.

You can add custom handlers to the pipeline. Message handlers are good for cross-cutting concerns that operate at the level of HTTP messages (rather than controller actions). For example, a message handler might:

  • Read or modify request headers.
  • Add a response header to responses.
  • Validate requests before they reach the controller.

(Diagram from the original article: two custom handlers inserted into the server-side message handler pipeline.)

On the client side, HttpClient also uses message handlers. For more information, see HttpClient Message Handlers.
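
For example, on the client side a delegating handler is attached by setting its InnerHandler and passing it to the HttpClient constructor. A minimal sketch, using the MessageHandler1 class defined in the next section (the wiring shown here is an illustration, not part of the original article):

var clientHandler = new MessageHandler1
{
    // On the client you must set the inner handler yourself; on the server,
    // Web API connects the chain for you. HttpClientHandler actually sends the request.
    InnerHandler = new HttpClientHandler()
};
var client = new HttpClient(clientHandler);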

Custom Message Handlers

To write a custom message handler, derive from System.Net.Http.DelegatingHandler and override the SendAsync method. This method has the following signature:

Task<HttpResponseMessage> SendAsync(
    HttpRequestMessage request, CancellationToken cancellationToken);

The method takes an HttpRequestMessage as input and asynchronously returns an HttpResponseMessage. A typical implementation does the following:

  1. Process the request message.
  2. Call base.SendAsync to send the request to the inner handler.
  3. The inner handler returns a response message. (This step is asynchronous.)
  4. Process the response and return it to the caller.

Here is a trivial example:

public class MessageHandler1 : DelegatingHandler
{
    protected async override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Debug.WriteLine("Process request");
        // Call the inner handler.
        var response = await base.SendAsync(request, cancellationToken);
        Debug.WriteLine("Process response");
        return response;
    }
}

The call to base.SendAsync is asynchronous. If the handler does any work after this call, use the await keyword, as shown.

A delegating handler can also skip the inner handler and directly create the response:

public class MessageHandler2 : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Create the response.
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("Hello!")
        };

        // Note: TaskCompletionSource creates a task that does not contain a delegate.
        var tsc = new TaskCompletionSource<HttpResponseMessage>();
        tsc.SetResult(response);   // Also sets the task state to "RanToCompletion"
        return tsc.Task;
    }
}

If a delegating handler creates the response without calling base.SendAsync, the request skips the rest of the pipeline. This can be useful for a handler that validates the request and returns an error response when validation fails.

Adding a Handler to the Pipeline

To add a message handler on the server side, add the handler to the HttpConfiguration.MessageHandlers collection. If you used the "ASP.NET MVC 4 Web Application" template to create the project, you can do this inside the WebApiConfig class:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MessageHandlers.Add(new MessageHandler1());
        config.MessageHandlers.Add(new MessageHandler2());

        // Other code not shown...
    }
}

Message handlers are called in the same order that they appear in the MessageHandlers collection. Because they are nested, the response message travels in the opposite direction: the last handler is the first to see the response message.

Notice that you don't need to set the inner handlers; the Web API framework automatically connects the message handlers.
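
With the registration shown above, a request flows through the handlers as follows (a sketch, assuming the MessageHandler1 and MessageHandler2 classes from the previous section):

// HttpServer
//   -> MessageHandler1   ("Process request")
//     -> MessageHandler2  (creates the "Hello!" response and short-circuits)
//   <- MessageHandler1   ("Process response")
//
// Because MessageHandler2 never calls base.SendAsync, the request never
// reaches HttpRoutingDispatcher or a controller in this configuration.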

If you are self-hosting, create an instance of the HttpSelfHostConfiguration class and add the handlers to the MessageHandlers collection:

var config = new HttpSelfHostConfiguration("http://localhost");
config.MessageHandlers.Add(new MessageHandler1());
config.MessageHandlers.Add(new MessageHandler2());
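
For completeness, here is a hedged sketch of hosting that configuration in a console application (assuming the System.Web.Http.SelfHost package; the base address is illustrative):

using (var server = new HttpSelfHostServer(config))
{
    server.OpenAsync().Wait();   // the message handlers now run for every incoming request
    Console.WriteLine("Web API self-host running. Press Enter to stop.");
    Console.ReadLine();
    server.CloseAsync().Wait();
}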

Now let's look at some examples of custom message handlers.

Example: X-HTTP-Method-Override

X-HTTP-Method-Override is a non-standard HTTP header. It is designed for clients that cannot send certain HTTP request types, such as PUT or DELETE. Instead, the client sends a POST request and sets the X-HTTP-Method-Override header to the desired method. For example:

X-HTTP-Method-Override: PUT

Here is a message handler that adds support for X-HTTP-Method-Override:

public class MethodOverrideHandler : DelegatingHandler      
{
    readonly string[] _methods = { "DELETE", "HEAD", "PUT" };
    const string _header = "X-HTTP-Method-Override";

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Check for HTTP POST with the X-HTTP-Method-Override header.
        if (request.Method == HttpMethod.Post && request.Headers.Contains(_header))
        {
            // Check if the header value is in our methods list.
            var method = request.Headers.GetValues(_header).FirstOrDefault();
            if (_methods.Contains(method, StringComparer.InvariantCultureIgnoreCase))
            {
                // Change the request method.
                request.Method = new HttpMethod(method);
            }
        }
        return base.SendAsync(request, cancellationToken);
    }
}

In the SendAsync method, the handler checks whether the request message is a POST request, and whether it contains the X-HTTP-Method-Override header. If so, it validates the header value, and then modifies the request method. Finally, the handler calls base.SendAsync to pass the message to the next handler.

When the request reaches HttpControllerDispatcher, it is routed based on the updated request method.
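
To exercise the handler, a client sends a POST request that carries the override header. A minimal sketch (the URI, payload, and HttpClient usage are illustrative assumptions, not part of the original article):

var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Post, "http://localhost/api/products/1")
{
    Content = new StringContent("{ \"Name\": \"Gizmo\", \"Price\": 1.99 }",
        Encoding.UTF8, "application/json")
};
request.Headers.Add("X-HTTP-Method-Override", "PUT");

// MethodOverrideHandler rewrites this POST into a PUT before routing,
// so the request is dispatched to the controller's Put action.
HttpResponseMessage response = await client.SendAsync(request);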

Example: Adding a Custom Response Header

Here is a message handler that adds a custom header to every response message:

// .NET 4.5
public class CustomHeaderHandler : DelegatingHandler
{
    async protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
    {
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
        response.Headers.Add("X-Custom-Header", "This is my custom header.");
        return response;
    }
}

First, the handler calls base.SendAsync to pass the request to the inner message handler. The inner handler returns a response message, but it does so asynchronously using a Task<T> object. The response message is not available until base.SendAsync completes asynchronously.

This example uses the await keyword to perform work asynchronously after SendAsync completes. If you are targeting .NET Framework 4.0, use the Task.ContinueWith method:

public class CustomHeaderHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return base.SendAsync(request, cancellationToken).ContinueWith(
            (task) =>
            {
                HttpResponseMessage response = task.Result;
                response.Headers.Add("X-Custom-Header", "This is my custom header.");
                return response;
            }
        );
    }
}

Example: Checking for an API Key

Some web services require clients to include an API key in their request. The following example shows how a message handler can check requests for a valid API key:

public class ApiKeyHandler : DelegatingHandler
{
    public string Key { get; set; }

    public ApiKeyHandler(string key)
    {
        this.Key = key;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (!ValidateKey(request))
        {
            var response = new HttpResponseMessage(HttpStatusCode.Forbidden);
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);    
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }

    private bool ValidateKey(HttpRequestMessage message)
    {
        var query = message.RequestUri.ParseQueryString();
        string key = query["key"];
        return (key == Key);
    }
}

This handler looks for the API key in the URI query string. (For this example, we assume that the key is a static string. A real implementation would probably use more complex validation.) If the query string contains the key, the handler passes the request to the inner handler.

If the request does not have a valid key, the handler creates a response message with status 403, Forbidden. In this case, the handler does not call base.SendAsync, so the inner handler never receives the request, nor does the controller. Therefore, the controller can assume that all incoming requests have a valid API key.
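
To use the handler, register it with the key it should accept; clients then pass that key in the query string. A minimal sketch (the key value and request URI are illustrative):

config.MessageHandlers.Add(new ApiKeyHandler("83b14e73"));

// A request that passes validation and reaches the controller:
//   GET http://localhost/api/products?key=83b14e73
//
// A request with a missing or wrong key receives 403 Forbidden from the handler
// and never reaches HttpControllerDispatcher or a controller.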

If the API key applies only to certain controller actions, consider using an action filter instead of a message handler. Action filters run after URI routing is performed.

Per-Route Message Handlers

Handlers in the HttpConfiguration.MessageHandlers collection apply globally.

Alternatively, you can add a message handler to a specific route when you define the route:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "Route1",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        config.Routes.MapHttpRoute(
            name: "Route2",
            routeTemplate: "api2/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional },
            constraints: null,
            handler: new MessageHandler2()  // per-route message handler
        );

        config.MessageHandlers.Add(new MessageHandler1());  // global message handler
    }
}

In this example, if the request URI matches "Route2", the request is dispatched to MessageHandler2. (Diagram from the original article: the pipeline for these two routes.)

Notice that MessageHandler2 replaces the default HttpControllerDispatcher. In this example, MessageHandler2 creates the response, and requests that match "Route2" never go to a controller. This lets you replace the entire Web API controller mechanism with your own custom endpoint.

Alternatively, a per-route message handler can delegate to HttpControllerDispatcher, which then dispatches to a controller.

The following code shows how to configure this route:

// List of delegating handlers.
DelegatingHandler[] handlers = new DelegatingHandler[] {
    new MessageHandler3()
};

// Create a message handler chain with an end-point.
var routeHandlers = HttpClientFactory.CreatePipeline(
    new HttpControllerDispatcher(config), handlers);

config.Routes.MapHttpRoute(
    name: "Route2",
    routeTemplate: "api2/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional },
    constraints: null,
    handler: routeHandlers
);
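
MessageHandler3 is not defined in the article; any DelegatingHandler that calls base.SendAsync works here, because CreatePipeline sets HttpControllerDispatcher as its inner handler. A minimal pass-through sketch for illustration:

public class MessageHandler3 : DelegatingHandler
{
    protected async override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Debug.WriteLine("Route2-specific processing: " + request.RequestUri);
        // Hand the request on to the pipeline end-point (HttpControllerDispatcher),
        // which dispatches it to a controller as usual.
        return await base.SendAsync(request, cancellationToken);
    }
}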