
Android Camera HAL3: Inter-Module Interaction, Frame Results, and Frame-Data Callbacks in Capture (Take Picture) Mode

This article is a summary of my own reading of the source code. Please credit the original author when reposting. Thanks.

Discussion is welcome. QQ: 1037701636  Email: gzzaigcn2009@163.com  Software: Android 5.1 system source code

Preface:

The previous two posts documented in some detail the preview processing flow of the Camera3/HAL3 architecture, which is completely different from HAL1, covering the main control flow and image streams involved. They described what the StreamingProcessor, CallbackProcessor and CaptureSequencer modules under Camera2Client do in the Camera3 architecture. The analysis showed that each of these modules exists in Camera3Device as a stream, and that each stream is in turn made up of multiple buffers; when exchanging data with HAL3, requests and results serve as the data-transfer carriers. Building on that, this post describes the data and control flow in still-capture (take picture) mode, mainly involving the JpegProcessor and CaptureSequencer modules. Since the data flow in capture mode is more complex, the emphasis here is on how each module responds to and processes the result data coming back from the HAL, filling a gap left by the previous post.

1. The take picture entry point in Camera2Client under HAL3

As the standard entry point of the capture picture path, it mainly does two things: updateProcessorStream(mJpegProcessor, l.mParameters) and mCaptureSequencer->startCapture(msgType).
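For orientation, here is a trimmed sketch of this entry point, based on the Android 5.1 Camera2Client::takePicture(); permission/state checks and the ZSL branch are omitted, so treat it as an outline rather than the exact code:

```cpp
// Trimmed sketch of Camera2Client::takePicture() (Android 5.1);
// state checks and the ZSL path are omitted.
status_t Camera2Client::takePicture(int msgType) {
    ATRACE_CALL();
    Mutex::Autolock icl(mBinderSerializationLock);
    status_t res;

    {
        SharedParameters::Lock l(mParameters);
        // 1) Make sure the JPEG stream matches the currently requested picture size
        res = updateProcessorStream(mJpegProcessor, l.mParameters);
        if (res != OK) return res;
    }

    // 2) Kick off the CaptureSequencer state machine
    return mCaptureSequencer->startCapture(msgType);
}
```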

For the JpegProcessor module, its stream was first created and initialized during the preview stage. JpegProcessor::updateStream is called again here to check whether the width and height of the existing JpegProcessor stream still match the requested picture resolution; if the resolution has changed, the old stream is deleted and a new one is created. Inside JpegProcessor, the key point is the producer/consumer pattern between a Surface (producer) and a CpuConsumer (consumer), which the AOSP code refers to as "Create CPU buffer queue endpoint".
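That "CPU buffer queue endpoint" part looks roughly like the following, reconstructed from memory of the Android 5.1 JpegProcessor::updateStream(), so details such as buffer counts may differ:

```cpp
// Rough sketch of how JpegProcessor creates its CPU buffer queue endpoint
// (Surface as producer, CpuConsumer as consumer). Based on Android 5.1; approximate.
if (mCaptureConsumer == 0) {
    // Create CPU buffer queue endpoint
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    BufferQueue::createBufferQueue(&producer, &consumer);

    mCaptureConsumer = new CpuConsumer(consumer, 1);      // one locked buffer at a time
    mCaptureConsumer->setFrameAvailableListener(this);    // -> JpegProcessor::onFrameAvailable()
    mCaptureConsumer->setName(String8("Camera2-JpegConsumer"));

    mCaptureWindow = new Surface(producer);               // handed to Camera3Device as the stream's ANativeWindow
}
```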

2. The CaptureSequencer module

The CaptureSequencer module is the heart of the take picture operation and is created in Camera2Client. Let's first look at the threadLoop function of the CaptureSequencer thread:

```cpp
bool CaptureSequencer::threadLoop() {

    sp<Camera2Client> client = mClient.promote();
    if (client == 0) return false;

    CaptureState currentState;
    {
        Mutex::Autolock l(mStateMutex);
        currentState = mCaptureState;
    }

    // Run the handler for the current state; it returns the next state
    currentState = (this->*kStateManagers[currentState])(client);

    Mutex::Autolock l(mStateMutex);
    if (currentState != mCaptureState) {
        if (mCaptureState != IDLE) {
            ATRACE_ASYNC_END(kStateNames[mCaptureState], mStateTransitionCount);
        }
        mCaptureState = currentState;   // store the new state
        mStateTransitionCount++;
        if (mCaptureState != IDLE) {
            ATRACE_ASYNC_BEGIN(kStateNames[mCaptureState], mStateTransitionCount);
        }
        ALOGV("Camera %d: New capture state %s",
                client->getCameraId(), kStateNames[mCaptureState]);
        mStateChanged.signal();
    }

    if (mCaptureState == ERROR) {
        ALOGE("Camera %d: Stopping capture sequencer due to error",
                client->getCameraId());
        return false;
    }

    return true;
}
```

CaptureSequencer is a module that loops through a state machine. The call currentState = (this->*kStateManagers[currentState])(client) executes the handler for the current state; the state handlers are:

```cpp
const CaptureSequencer::StateManager
        CaptureSequencer::kStateManagers[CaptureSequencer::NUM_CAPTURE_STATES-1] = {
    &CaptureSequencer::manageIdle,
    &CaptureSequencer::manageStart,
    &CaptureSequencer::manageZslStart,
    &CaptureSequencer::manageZslWaiting,
    &CaptureSequencer::manageZslReprocessing,
    &CaptureSequencer::manageStandardStart,
    &CaptureSequencer::manageStandardPrecaptureWait,
    &CaptureSequencer::manageStandardCapture,
    &CaptureSequencer::manageStandardCaptureWait,
    &CaptureSequencer::manageBurstCaptureStart,
    &CaptureSequencer::manageBurstCaptureWait,
    &CaptureSequencer::manageDone,
};
```

We will use the standard capture mode to walk through one complete take picture sequence. mCaptureState is initialized to IDLE, so the first handler entered is manageIdle:

```cpp
CaptureSequencer::CaptureState CaptureSequencer::manageIdle(
        sp<Camera2Client> &/*client*/) {
    status_t res;
    Mutex::Autolock l(mInputMutex);
    while (!mStartCapture) {
        res = mStartCaptureSignal.waitRelative(mInputMutex,
                kWaitDuration);
        if (res == TIMED_OUT) break;
    }
    if (mStartCapture) {
        mStartCapture = false;
        mBusy = true;
        return START;
    }
    return IDLE;
}
```

The function essentially polls mStartCapture, which is set from the CameraService side by the thread that triggers the capture:

```cpp
status_t CaptureSequencer::startCapture(int msgType) {
    ALOGV("%s", __FUNCTION__);
    ATRACE_CALL();
    Mutex::Autolock l(mInputMutex);
    if (mBusy) {
        ALOGE("%s: Already busy capturing!", __FUNCTION__);
        return INVALID_OPERATION;
    }
    if (!mStartCapture) {
        mMsgType = msgType;
        mStartCapture = true;
        mStartCaptureSignal.signal();   // start the CaptureSequencer
    }
    return OK;
}
```

Compare this with the CaptureSequencer threadLoop, which blocks waiting for mStartCapture to become true: startCapture() sets mStartCapture and then signals the threadLoop. Once woken up, the handler returns a new state, mCaptureState = START.

2.1 The START state

This state mainly calls updateCaptureRequest(l.mParameters, client):

```cpp
status_t CaptureSequencer::updateCaptureRequest(const Parameters &params,
        sp<Camera2Client> &client) {
    ATRACE_CALL();
    status_t res;
    if (mCaptureRequest.entryCount() == 0) {
        res = client->getCameraDevice()->createDefaultRequest(
                CAMERA2_TEMPLATE_STILL_CAPTURE,
                &mCaptureRequest);
        if (res != OK) {
            ALOGE("%s: Camera %d: Unable to create default still image request:"
                    " %s (%d)", __FUNCTION__, client->getCameraId(),
                    strerror(-res), res);
            return res;
        }
    }

    res = params.updateRequest(&mCaptureRequest);
    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to update common entries of capture "
                "request: %s (%d)", __FUNCTION__, client->getCameraId(),
                strerror(-res), res);
        return res;
    }

    res = params.updateRequestJpeg(&mCaptureRequest);   // update the JPEG-related parameters
    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to update JPEG entries of capture "
                "request: %s (%d)", __FUNCTION__, client->getCameraId(),
                strerror(-res), res);
        return res;
    }

    return OK;
}
```

This function is very similar to updatePreviewRequest in preview mode. It first checks whether mCaptureRequest is an empty CameraMetadata; if so, createDefaultRequest asks HAL3 to create a request of type CAMERA2_TEMPLATE_STILL_CAPTURE. It then uses the configuration parameters of the current mode to update the values of the various tags in the CameraMetadata mCaptureRequest so they can be passed down to HAL3, a process analogous to the old Camera1 setParameters string manipulation.
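As an illustration of what "updating tags" means here, a hedged sketch of CameraMetadata tag updates follows; the helper name and the concrete values are placeholders, not the exact fields that Parameters::updateRequestJpeg() writes:

```cpp
// Hedged illustration of CameraMetadata::update() usage; values are placeholders.
status_t updateJpegTagsExample(CameraMetadata *request) {
    status_t res;

    uint8_t jpegQuality = 90;                               // placeholder value
    res = request->update(ANDROID_JPEG_QUALITY, &jpegQuality, 1);
    if (res != OK) return res;

    int32_t thumbSize[2] = { 320, 240 };                    // placeholder value
    res = request->update(ANDROID_JPEG_THUMBNAIL_SIZE, thumbSize, 2);
    if (res != OK) return res;

    int32_t orientation = 0;                                // placeholder value
    return request->update(ANDROID_JPEG_ORIENTATION, &orientation, 1);
}
```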

2.2 The STANDARD_START state and manageStandardCapture

This handler is where the whole take picture operation is actually kicked off:

```cpp
CaptureSequencer::CaptureState CaptureSequencer::manageStandardCapture(
        sp<Camera2Client> &client) {
    status_t res;
    ATRACE_CALL();
    SharedParameters::Lock l(client->getParameters());
    Vector<int32_t> outputStreams;
    uint8_t captureIntent =
            static_cast<uint8_t>(ANDROID_CONTROL_CAPTURE_INTENT_STILL_CAPTURE);

    /**
     * Set up output streams in the request
     *  - preview
     *  - capture/jpeg
     *  - callback (if preview callbacks enabled)
     *  - recording (if recording enabled)
     */
    outputStreams.push(client->getPreviewStreamId());      // preview stream
    outputStreams.push(client->getCaptureStreamId());      // capture (jpeg) stream

    if (l.mParameters.previewCallbackFlags &
            CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK) {
        outputStreams.push(client->getCallbackStreamId()); // callback stream
    }

    if (l.mParameters.state == Parameters::VIDEO_SNAPSHOT) {
        outputStreams.push(client->getRecordingStreamId());
        captureIntent =
                static_cast<uint8_t>(ANDROID_CONTROL_CAPTURE_INTENT_VIDEO_SNAPSHOT);
    }

    res = mCaptureRequest.update(ANDROID_REQUEST_OUTPUT_STREAMS,
            outputStreams);
    if (res == OK) {
        res = mCaptureRequest.update(ANDROID_REQUEST_ID,
                &mCaptureId, 1);        // request ID of this capture request
    }
    if (res == OK) {
        res = mCaptureRequest.update(ANDROID_CONTROL_CAPTURE_INTENT,
                &captureIntent, 1);
    }
    if (res == OK) {
        res = mCaptureRequest.sort();
    }

    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to set up still capture request: %s (%d)",
                __FUNCTION__, client->getCameraId(), strerror(-res), res);
        return DONE;
    }

    // Create a capture copy since CameraDeviceBase#capture takes ownership
    CameraMetadata captureCopy = mCaptureRequest;
    if (captureCopy.entryCount() == 0) {
        ALOGE("%s: Camera %d: Unable to copy capture request for HAL device",
                __FUNCTION__, client->getCameraId());
        return DONE;
    }

    /**
     * Clear the streaming request for still-capture pictures
     * (as opposed to i.e. video snapshots)
     */
    if (l.mParameters.state == Parameters::STILL_CAPTURE) {
        // API definition of takePicture() - stop preview before taking pic
        res = client->stopStream();
        if (res != OK) {
            ALOGE("%s: Camera %d: Unable to stop preview for still capture: "
                    "%s (%d)",
                    __FUNCTION__, client->getCameraId(), strerror(-res), res);
            return DONE;
        }
    }
    // TODO: Capture should be atomic with setStreamingRequest here
    res = client->getCameraDevice()->capture(captureCopy); // submit the capture request to Camera3Device
    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to submit still image capture request: "
                "%s (%d)",
                __FUNCTION__, client->getCameraId(), strerror(-res), res);
        return DONE;
    }

    mTimeoutCount = kMaxTimeoutsForCaptureEnd;
    return STANDARD_CAPTURE_WAIT;
}

CaptureSequencer::CaptureState CaptureSequencer::manageStandardCaptureWait(
        sp<Camera2Client> &client) {
    status_t res;
    ATRACE_CALL();
    Mutex::Autolock l(mInputMutex);

    // Wait for new metadata result (mNewFrame)
    while (!mNewFrameReceived) {
        res = mNewFrameSignal.waitRelative(mInputMutex, kWaitDuration); // wait for a new metadata frame
        if (res == TIMED_OUT) {
            mTimeoutCount--;
            break;
        }
    }

    // Approximation of the shutter being closed
    // - TODO: use the hal3 exposure callback in Camera3Device instead
    if (mNewFrameReceived && !mShutterNotified) {
        SharedParameters::Lock l(client->getParameters());
        /* warning: this also locks a SharedCameraCallbacks */
        shutterNotifyLocked(l.mParameters, client, mMsgType);
        mShutterNotified = true;
    }

    // Wait until jpeg was captured by JpegProcessor
    while (mNewFrameReceived && !mNewCaptureReceived) {
        res = mNewCaptureSignal.waitRelative(mInputMutex, kWaitDuration); // wait for the JPEG data
        if (res == TIMED_OUT) {
            mTimeoutCount--;
            break;
        }
    }
    if (mTimeoutCount <= 0) {
        ALOGW("Timed out waiting for capture to complete");
        return DONE;
    }
    if (mNewFrameReceived && mNewCaptureReceived) { // both the metadata and the JPEG have arrived
        if (mNewFrameId != mCaptureId) {
            ALOGW("Mismatched capture frame IDs: Expected %d, got %d",
                    mCaptureId, mNewFrameId);
        }
        camera_metadata_entry_t entry;
        entry = mNewFrame.find(ANDROID_SENSOR_TIMESTAMP);
        if (entry.count == 0) {
            ALOGE("No timestamp metadata in capture frame!");
        } else if (entry.count == 1) {
            if (entry.data.i64[0] != mCaptureTimestamp) {
                ALOGW("Mismatched capture timestamps: Metadata frame %" PRId64
                        " vs. capture buffer %" PRId64,
                        entry.data.i64[0],
                        mCaptureTimestamp);
            }
        } else {
            ALOGE("Timestamp metadata is malformed!");
        }
        client->removeFrameListener(mCaptureId, mCaptureId + 1, this);

        mNewFrameReceived = false;
        mNewCaptureReceived = false;
        return DONE;
    }
    return STANDARD_CAPTURE_WAIT;
}
```

The processing in manageStandardCapture can be broken down into the following points:

a: Collect the output streams needed for the capture:

```cpp
Vector<int32_t> outputStreams;
outputStreams.push(client->getPreviewStreamId());    // preview stream
outputStreams.push(client->getCaptureStreamId());    // capture (jpeg) stream
outputStreams.push(client->getCallbackStreamId());   // callback stream
```

These calls show clearly that this is where the streams needed for take picture are gathered; the corresponding modules are StreamingProcessor, JpegProcessor and CallbackProcessor. The process is similar to preview mode: all streams of the current Camera2Client are collected and distinguished by their stream IDs.

b: Add all the stream information for this operation to the CameraMetadata mCaptureRequest:

```cpp
res = mCaptureRequest.update(ANDROID_REQUEST_OUTPUT_STREAMS,
        outputStreams);
if (res == OK) {
    res = mCaptureRequest.update(ANDROID_REQUEST_ID,
            &mCaptureId, 1);   // request ID of the current capture request
}
```

The ANDROID_REQUEST_ID entry shows that only three kinds of requests exist at this point:

1. Preview request mPreviewRequest: mPreviewRequestId(Camera2Client::kPreviewRequestIdStart)
2. Capture request mCaptureRequest: mCaptureId(Camera2Client::kCaptureRequestIdStart)
3. Recording request mRecordingRequest: mRecordingRequestId(Camera2Client::kRecordingRequestIdStart)
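These IDs come from disjoint ranges defined in Camera2Client.h, which is what later allows FrameProcessor listeners to filter results by request ID. The values below are quoted from memory of the Android 5.x headers, so the exact numbers may differ:

```cpp
// Request ID ranges from Camera2Client.h (Android 5.x) -- quoted from memory.
static const int32_t kPreviewRequestIdStart   = 10000000;
static const int32_t kPreviewRequestIdEnd     = 20000000;
static const int32_t kRecordingRequestIdStart = 20000000;
static const int32_t kRecordingRequestIdEnd   = 30000000;
static const int32_t kCaptureRequestIdStart   = 30000000;
static const int32_t kCaptureRequestIdEnd     = 40000000;
```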


After submitting the request, CaptureSequencer sits in STANDARD_CAPTURE_WAIT waiting for two callbacks. One of them is onCaptureAvailable:

```cpp
void CaptureSequencer::onCaptureAvailable(nsecs_t timestamp,
        sp<MemoryBase> captureBuffer) {
    ATRACE_CALL();
    ALOGV("%s", __FUNCTION__);
    Mutex::Autolock l(mInputMutex);
    mCaptureTimestamp = timestamp;
    mCaptureBuffer = captureBuffer;
    if (!mNewCaptureReceived) {
        mNewCaptureReceived = true;
        mNewCaptureSignal.signal();   // a real JPEG frame has arrived
    }
}
```

So how are these two on* callbacks (onResultAvailable and onCaptureAvailable) actually triggered? Let's analyze that in detail below.

3.1 The streams needed for one capture in picture mode

To be clear, one take picture involves three streams, owned by JpegProcessor, CallbackProcessor and StreamingProcessor. The first receives the JPEG-encoded frame, the second receives the preview-style frame that is called back to the app, and the last receives the frame that is used directly for display.

3.2 Where the frame-data callbacks come from: processCaptureResult

Whichever module is involved, the original entry point for data callbacks is HAL3's process_capture_result callback, i.e. Camera3Device::processCaptureResult(). Its handling is complex because HAL 3.0 allows the data returned in a single result to be incomplete, mostly with respect to the 3A-related camera metadata. Note that the camera3_capture_result returned for each frame carries a camera_metadata_t containing the various tag fields describing that frame, dominated by 3A information. Within processCaptureResult there are three core functions:

processPartial3AResult(): handles the partial cameraMetadata result data returned so far;
returnOutputBuffers(): returns the buffer data of each stream carried by this result;
sendCaptureResult(): handles a complete cameraMetadata result.
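To make the dispatch concrete, here is a heavily trimmed sketch of Camera3Device::processCaptureResult(), reconstructed from the description above and from memory of Android 5.1; local variable names and the omitted bookkeeping are approximate:

```cpp
// Trimmed, approximate sketch of Camera3Device::processCaptureResult() (Android 5.1).
// Error paths, shutter handling and in-flight map cleanup are omitted.
void Camera3Device::processCaptureResult(const camera3_capture_result *result) {
    uint32_t frameNumber = result->frame_number;
    CameraMetadata collectedPartialResult;
    CaptureResultExtras resultExtras;   // filled in from the in-flight request
    nsecs_t timestamp = 0;              // sensor timestamp of this frame (elided)
    bool isPartialResult = false;

    {
        Mutex::Autolock l(mInFlightLock);
        InFlightRequest &request = mInFlightMap.editValueFor(frameNumber);
        resultExtras = request.resultExtras;

        if (mUsePartialResult && result->result != NULL) {
            // The HAL decides how many partial results make up one complete frame
            isPartialResult = (result->partial_result < mNumPartialResults);
            if (isPartialResult) {
                request.partialResult.collectedResult.append(result->result);
                // Fire off a 3A-only notification as soon as AE/AF/AWB are all present
                if (!request.partialResult.haveSent3A) {
                    request.partialResult.haveSent3A =
                            processPartial3AResult(frameNumber,
                                    request.partialResult.collectedResult,
                                    request.resultExtras);
                }
            } else {
                // Final piece: take over everything collected so far
                collectedPartialResult.acquire(request.partialResult.collectedResult);
            }
        }
        // ... pending-buffer bookkeeping elided ...
    }

    // Complete metadata: merge the collected partials and queue it for FrameProcessor
    if (result->result != NULL && !isPartialResult) {
        CameraMetadata metadata;
        metadata = result->result;   // deep-copies the HAL's camera_metadata_t
        sendCaptureResult(metadata, resultExtras, collectedPartialResult, frameNumber);
    }

    // Hand each returned buffer back to its Camera3Stream / consumer
    returnOutputBuffers(result->output_buffers, result->num_output_buffers, timestamp);
}
```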

3.3 How the FrameProcessor module responds to frame results, mainly for 3A callbacks

Both processPartial3AResult() and sendCaptureResult() hand the 3A results over to FrameProcessor for processing. Since every request and every result inherently carries a CameraMetadata (which plays a role similar to a stream), this module, unlike the others, does not need a dedicated stream to exchange data.


```cpp
if (isPartialResult) {
    // Fire off a 3A-only result if possible
    if (!request.partialResult.haveSent3A) {          // only 3A data has come back so far
        request.partialResult.haveSent3A =
                processPartial3AResult(frameNumber,
                        request.partialResult.collectedResult,
                        request.resultExtras);        // notify once the frame contains full 3A info
    }
}
```

processPartial3AResult processes the partialResult collected so far for the current frame; to be precise, partialResult is the latest accumulation of all results returned for the given frame number. Internally, the collected result must contain at least the values of the following tags before the 3A data counts as complete (i.e. before true is returned):


```cpp
gotAllStates &= get3AResult(partial, ANDROID_CONTROL_AF_MODE,
        &afMode, frameNumber);

gotAllStates &= get3AResult(partial, ANDROID_CONTROL_AWB_MODE,
        &awbMode, frameNumber);

gotAllStates &= get3AResult(partial, ANDROID_CONTROL_AE_STATE,
        &aeState, frameNumber);

gotAllStates &= get3AResult(partial, ANDROID_CONTROL_AF_STATE,
        &afState, frameNumber);

gotAllStates &= get3AResult(partial, ANDROID_CONTROL_AWB_STATE,
        &awbState, frameNumber);

if (!gotAllStates) return false;
```

Only then are the requirements for building a CaptureResult (minResult) met; in other words, a CaptureResult is only constructed from the collected result once AE, AF and AWB are all available. Next, for comparison, look at sendCaptureResult:

```cpp
void Camera3Device::sendCaptureResult(CameraMetadata &pendingMetadata,
        CaptureResultExtras &resultExtras,
        CameraMetadata &collectedPartialResult,
        uint32_t frameNumber) {
    if (pendingMetadata.isEmpty())
        return;

    Mutex::Autolock l(mOutputLock);

    // TODO: need to track errors for tighter bounds on expected frame number
    if (frameNumber < mNextResultFrameNumber) {
        SET_ERR("Out-of-order capture result metadata submitted! "
                "(got frame number %d, expecting %d)",
                frameNumber, mNextResultFrameNumber);
        return;
    }
    mNextResultFrameNumber = frameNumber + 1;   // next expected frame

    CaptureResult captureResult;
    captureResult.mResultExtras = resultExtras;
    captureResult.mMetadata = pendingMetadata;

    if (captureResult.mMetadata.update(ANDROID_REQUEST_FRAME_COUNT,
            (int32_t*)&frameNumber, 1) != OK) {
        SET_ERR("Failed to set frame# in metadata (%d)",
                frameNumber);
        return;
    } else {
        ALOGVV("%s: Camera %d: Set frame# in metadata (%d)",
                __FUNCTION__, mId, frameNumber);
    }

    // Append any previous partials to form a complete result
    if (mUsePartialResult && !collectedPartialResult.isEmpty()) {
        captureResult.mMetadata.append(collectedPartialResult);
    }

    captureResult.mMetadata.sort();

    // Check that there's a timestamp in the result metadata
    camera_metadata_entry entry =
            captureResult.mMetadata.find(ANDROID_SENSOR_TIMESTAMP);
    if (entry.count == 0) {
        SET_ERR("No timestamp provided by HAL for frame %d!",
                frameNumber);
        return;
    }

    // Valid result, insert into queue
    List<CaptureResult>::iterator queuedResult =
            mResultQueue.insert(mResultQueue.end(), CaptureResult(captureResult));
    ALOGVV("%s: result requestId = %" PRId32 ", frameNumber = %" PRId64
            ", burstId = %" PRId32, __FUNCTION__,
            queuedResult->mResultExtras.requestId,
            queuedResult->mResultExtras.frameNumber,
            queuedResult->mResultExtras.burstId);

    mResultSignal.signal();   // wake up the waiting consumer (FrameProcessor)
}
```

The main job of this function is to create a CaptureResult; as you can see, the partial results returned earlier for this frame are merged here into one complete result. collectedPartialResult reflects the fact that, after one request is issued, its result may come back in several pieces: the first result may contain only part of the information, and once a later result is marked as final, the earlier pieces have to be merged with it. Those earlier pieces are stored in the in-flight record of the current request, and the whole request is indexed by a unique frame number, which guarantees that the merged result corresponds to one and the same request.


My understanding of this partial-result mechanism is that each returned result does not necessarily contain all the tag information required for the given frame number, and the number of partial results per frame (mNumPartialResults) is decided by the HAL 3.0 layer. For each incoming result, isPartialResult = (result->partial_result < mNumPartialResults) determines whether the result is still in partial mode; if so, it is appended to the collected result every time, the 3A tags are gathered, and processPartial3AResult is called as a separate path to handle the 3A values. Once a result arrives that is no longer partial, the previously collected pieces are extracted and combined with the current result into a new CaptureResult, which is then added to the mResultQueue.

This completes the analysis of how the capture result returned by HAL3 is processed. In the end, mResultSignal.signal() wakes up the corresponding waiting thread, and it is the FrameProcessor module that responds to it.

FrameProcessorBase is the base class of FrameProcessor and runs its own threadLoop:


```cpp
bool FrameProcessorBase::threadLoop() {
    status_t res;

    sp<CameraDeviceBase> device;
    {
        device = mDevice.promote();
        if (device == 0) return false;
    }

    res = device->waitForNextFrame(kWaitDuration);
    if (res == OK) {
        processNewFrames(device);   // process the new frame (3A handling etc.)
    } else if (res != TIMED_OUT) {
        ALOGE("FrameProcessorBase: Error waiting for new "
                "frames: %s (%d)", strerror(-res), res);
    }

    return true;
}
```

It calls Camera3Device::waitForNextFrame with a wait period (kWaitDuration) of 10 ms:


```cpp
status_t Camera3Device::waitForNextFrame(nsecs_t timeout) {
    status_t res;
    Mutex::Autolock l(mOutputLock);

    while (mResultQueue.empty()) {   // keep waiting until a capture result is queued
        res = mResultSignal.waitRelative(mOutputLock, timeout);
        if (res == TIMED_OUT) {
            return res;
        } else if (res != OK) {
            ALOGW("%s: Camera %d: No frame in %" PRId64 " ns: %s (%d)",
                    __FUNCTION__, mId, timeout, strerror(-res), res);
            return res;
        }
    }
    return OK;
}
```

Here we see both mResultQueue and mResultSignal, matching the mOutputLock and the signal() call in Camera3Device::sendCaptureResult(). Once the thread is woken up, it calls processNewFrames to handle the current frame:


```cpp
void FrameProcessorBase::processNewFrames(const sp<CameraDeviceBase> &device) {
    status_t res;
    ATRACE_CALL();
    CaptureResult result;

    ALOGV("%s: Camera %d: Process new frames", __FUNCTION__, device->getId());

    while ( (res = device->getNextResult(&result)) == OK) {

        // TODO: instead of getting frame number from metadata, we should read
        // this from result.mResultExtras when CameraDeviceBase interface is fixed.
        camera_metadata_entry_t entry;

        entry = result.mMetadata.find(ANDROID_REQUEST_FRAME_COUNT);
        if (entry.count == 0) {
            ALOGE("%s: Camera %d: Error reading frame number",
                    __FUNCTION__, device->getId());
            break;
        }
        ATRACE_INT("cam2_frame", entry.data.i32[0]);

        if (!processSingleFrame(result, device)) {   // handle one frame at a time
            break;
        }

        if (!result.mMetadata.isEmpty()) {
            Mutex::Autolock al(mLastFrameMutex);
            mLastFrame.acquire(result.mMetadata);
        }
    }
    if (res != NOT_ENOUGH_DATA) {
        ALOGE("%s: Camera %d: Error getting next frame: %s (%d)",
                __FUNCTION__, device->getId(), strerror(-res), res);
        return;
    }

    return;
}
```

device->getNextResult(&result) takes one available CaptureResult out of mResultQueue (erasing it from the queue once taken). After verifying that the result carries a frame number, processSingleFrame does the real work:

```cpp
bool FrameProcessor::processSingleFrame(CaptureResult &frame,
        const sp<CameraDeviceBase> &device) {   // process one frame

    sp<Camera2Client> client = mClient.promote();
    if (!client.get()) {
        return false;
    }

    bool isPartialResult = false;
    if (mUsePartialResult) {
        if (client->getCameraDeviceVersion() >= CAMERA_DEVICE_API_VERSION_3_2) {
            isPartialResult = frame.mResultExtras.partialResultCount < mNumPartialResults;
        } else {
            camera_metadata_entry_t entry;
            entry = frame.mMetadata.find(ANDROID_QUIRKS_PARTIAL_RESULT);
            if (entry.count > 0 &&
                    entry.data.u8[0] == ANDROID_QUIRKS_PARTIAL_RESULT_PARTIAL) {
                isPartialResult = true;
            }
        }
    }

    if (!isPartialResult && processFaceDetect(frame.mMetadata, client) != OK) {
        return false;
    }

    if (mSynthesize3ANotify) {
        process3aState(frame, client);
    }

    return FrameProcessorBase::processSingleFrame(frame, device);
}
```

At the end it falls through to the base-class implementation:

```cpp
bool FrameProcessorBase::processSingleFrame(CaptureResult &result,
        const sp<CameraDeviceBase> &device) {
    ALOGV("%s: Camera %d: Process single frame (is empty? %d)",
            __FUNCTION__, device->getId(), result.mMetadata.isEmpty());
    return processListeners(result, device) == OK;   // dispatch to all registered listeners
}
```

processListeners then performs the actual dispatch:

```cpp
status_t FrameProcessorBase::processListeners(const CaptureResult &result,
        const sp<CameraDeviceBase> &device) {
    ATRACE_CALL();

    camera_metadata_ro_entry_t entry;

    // Check if this result is partial.
    bool isPartialResult = false;
    if (device->getDeviceVersion() >= CAMERA_DEVICE_API_VERSION_3_2) {
        isPartialResult = result.mResultExtras.partialResultCount < mNumPartialResults;
    } else {
        entry = result.mMetadata.find(ANDROID_QUIRKS_PARTIAL_RESULT);
        if (entry.count != 0 &&
                entry.data.u8[0] == ANDROID_QUIRKS_PARTIAL_RESULT_PARTIAL) {
            ALOGV("%s: Camera %d: This is a partial result",
                    __FUNCTION__, device->getId());
            isPartialResult = true;
        }
    }

    // TODO: instead of getting requestID from CameraMetadata, we should get it
    // from CaptureResultExtras. This will require changing Camera2Device.
    // Currently Camera2Device uses MetadataQueue to store results, which does not
    // include CaptureResultExtras.
    entry = result.mMetadata.find(ANDROID_REQUEST_ID);
    if (entry.count == 0) {
        ALOGE("%s: Camera %d: Error reading frame id", __FUNCTION__, device->getId());
        return BAD_VALUE;
    }
    int32_t requestId = entry.data.i32[0];

    List<sp<FilteredListener> > listeners;
    {
        Mutex::Autolock l(mInputMutex);

        List<RangeListener>::iterator item = mRangeListeners.begin();
        // Don't deliver partial results to listeners that don't want them
        while (item != mRangeListeners.end()) {
            if (requestId >= item->minId && requestId < item->maxId &&
                    (!isPartialResult || item->sendPartials)) {
                sp<FilteredListener> listener = item->listener.promote();
                if (listener == 0) {
                    item = mRangeListeners.erase(item);
                    continue;
                } else {
                    listeners.push_back(listener);
                }
            }
            item++;
        }
    }
    ALOGV("%s: Camera %d: Got %zu range listeners out of %zu", __FUNCTION__,
            device->getId(), listeners.size(), mRangeListeners.size());

    List<sp<FilteredListener> >::iterator item = listeners.begin();
    for (; item != listeners.end(); item++) {
        (*item)->onResultAvailable(result);   // notify every matching listener that a result is available
    }
    return OK;
}
```

Simply put, whenever a valid CaptureResult has been obtained, it must be dispatched to the modules that are interested in it, and this dispatch is performed through the FilteredListener interface.

Other modules that want to listen to the FrameProcessor module can register by calling registerListener; the registrations are stored in mRangeListeners. The interface is:

```cpp
status_t Camera2Client::registerFrameListener(int32_t minId, int32_t maxId,
        wp<camera2::FrameProcessor::FilteredListener> listener, bool sendPartials) {
    return mFrameProcessor->registerListener(minId, maxId, listener, sendPartials);
}
```

When a complete result is processed, the parts of FrameProcessor worth focusing on are the 3A callback and the face-detection callback: the AF state is delivered to the app as CAMERA_MSG_FOCUS through notifyCallback, while face detection returns the detected face coordinates as a camera_frame_metadata_t through dataCallback, with the message type CAMERA_MSG_PREVIEW_METADATA.
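For illustration only, the delivery of these two pieces of data to the app-side callbacks looks roughly like this; a hedged sketch of the callback plumbing, where the surrounding variables (client, afInFocus, metadata) are assumed to have been derived from the result, and locking details are simplified:

```cpp
// Hedged sketch (not verbatim AOSP): inside FrameProcessor, once the AF state or the
// face-detect metadata has been derived from the result, it is pushed to the app via
// the shared remote camera callbacks. 'client', 'afInFocus' and 'metadata' are assumed
// to exist in the surrounding code.

// AF state change -> CAMERA_MSG_FOCUS via notifyCallback
{
    SharedCameraCallbacks::Lock l(client->mSharedCameraCallbacks);
    if (l.mRemoteCallback != 0) {
        l.mRemoteCallback->notifyCallback(CAMERA_MSG_FOCUS,
                afInFocus ? 1 : 0, 0);
    }
}

// Face detection -> camera_frame_metadata_t via dataCallback
{
    SharedCameraCallbacks::Lock l(client->mSharedCameraCallbacks);
    if (l.mRemoteCallback != 0) {
        l.mRemoteCallback->dataCallback(CAMERA_MSG_PREVIEW_METADATA,
                NULL, &metadata);
    }
}
```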

In our case it is CaptureSequencer::manageStandardStart() that calls registerFrameListener to register CaptureSequencer as a listener.

With these listeners in place, processListeners walks mRangeListeners to check that the request ID carried by the current CaptureResult falls inside the range each listener registered for. For every listener that matches the current result, its onResultAvailable() callback is invoked.

At this point the overridden CaptureSequencer::onResultAvailable() gets called, which is exactly where mNewFrameReceived is set to true, closing the loop on that callback.
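That callback itself is not shown in the excerpts above; here is a sketch of what it roughly does, reconstructed from the description here and from memory of Android 5.1, so treat the details as approximate:

```cpp
// Approximate reconstruction of CaptureSequencer::onResultAvailable() (Android 5.1);
// this is the FilteredListener callback that STANDARD_CAPTURE_WAIT is waiting on.
void CaptureSequencer::onResultAvailable(const CaptureResult &result) {
    ATRACE_CALL();
    ALOGV("%s: New result available", __FUNCTION__);
    Mutex::Autolock l(mInputMutex);
    mNewFrameId = result.mResultExtras.requestId;   // compared against mCaptureId later
    mNewFrame = result.mMetadata;                   // carries ANDROID_SENSOR_TIMESTAMP etc.
    if (!mNewFrameReceived) {
        mNewFrameReceived = true;
        mNewFrameSignal.signal();                   // wakes manageStandardCaptureWait
    }
}
```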

3.4 Frame data callbacks

The focus so far has been on the CameraMetadata result; no actual image frame data has appeared yet. As mentioned above, handling image buffers definitely requires streams, unlike FrameProcessor, which does not need a stream to transfer its data.

For the data callbacks, the interface is the returnOutputBuffers function, which was already analyzed in preview mode. Its job is to extract the buffer information carried by the current result and hand it to the Camera3Stream maintained by each module; essentially, the buffer_handle is extracted from each camera3_stream_buffer in the result and, after a queue_buffer operation, the buffer is passed to the corresponding consumer for processing.
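A trimmed sketch of that path (approximate; error handling and HAL 3.2 specifics are omitted):

```cpp
// Trimmed sketch of Camera3Device::returnOutputBuffers() (Android 5.1, approximate).
// Each Camera3Stream::returnBuffer() ends up queueing the filled buffer to the
// stream's consumer (display Surface, CpuConsumer, ...).
void Camera3Device::returnOutputBuffers(
        const camera3_stream_buffer_t *outputBuffers, size_t numBuffers,
        nsecs_t timestamp) {
    for (size_t i = 0; i < numBuffers; i++) {
        Camera3Stream *stream = Camera3Stream::cast(outputBuffers[i].stream);
        status_t res = stream->returnBuffer(outputBuffers[i], timestamp);
        // A buffer error here is not fatal for the rest of the result
        if (res != OK) {
            ALOGE("Can't return buffer to its stream: %s (%d)",
                    strerror(-res), res);
        }
    }
}
```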

For the direct-preview module StreamingProcessor, the consumer is effectively SurfaceFlinger, used for real-time display; CallbackProcessor uses a CpuConsumer to hand the frame data back to the app. These paths are the same as in preview mode and still have to be handled in take picture mode. JpegProcessor, however, is exclusive to picture mode, so let's look at how it handles a JPEG buffer received from HAL3:

```cpp
void JpegProcessor::onFrameAvailable(const BufferItem& /*item*/) {
    Mutex::Autolock l(mInputMutex);
    if (!mCaptureAvailable) {
        mCaptureAvailable = true;
        mCaptureAvailableSignal.signal();   // one JPEG frame has been captured
    }
}
```

For a deeper understanding of how this call is reached, see the post "Android5.1中surface和CpuConsumer下生产者和消费者间的处理框架简述" (an overview of the producer/consumer framework between Surface and CpuConsumer in Android 5.1). As the module unique to take picture mode, its threadLoop thread then gets to respond:

```cpp
bool JpegProcessor::threadLoop() {
    status_t res;

    {
        Mutex::Autolock l(mInputMutex);
        while (!mCaptureAvailable) {
            res = mCaptureAvailableSignal.waitRelative(mInputMutex,
                    kWaitDuration);
            if (res == TIMED_OUT) return true;
        }
        mCaptureAvailable = false;
    }

    do {
        res = processNewCapture();   // process the newly captured JPEG frame
    } while (res == OK);

    return true;
}
```

It calls processNewCapture(), which mainly does the following:

mCaptureConsumer->lockNextBuffer(&imgBuffer): this obtains one already-queued buffer from the CpuConsumer; the most important part of the lock is that the buffer is mmap-ed into the current process.

Through that virtual address, the buffer contents are then copied into a heap belonging to this process, after which the buffer is unmapped (unlocked).
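Condensed into a sketch (approximate; the real JpegProcessor::processNewCapture() also validates the JPEG blob and re-allocates the capture heap when needed):

```cpp
// Condensed, approximate sketch of JpegProcessor::processNewCapture() (Android 5.1).
status_t JpegProcessor::processNewCapture() {
    status_t res;
    CpuConsumer::LockedBuffer imgBuffer;

    // 1) Lock the next queued buffer: this maps the JPEG gralloc buffer into our process
    res = mCaptureConsumer->lockNextBuffer(&imgBuffer);
    if (res != OK) return res;

    // 2) Determine the real JPEG size inside the blob and copy it into a local heap
    size_t jpegSize = findJpegSize(imgBuffer.data, imgBuffer.width);
    if (jpegSize == 0) jpegSize = imgBuffer.width;

    sp<MemoryBase> captureBuffer = new MemoryBase(mCaptureHeap, 0, jpegSize);
    memcpy(mCaptureHeap->getBase(), imgBuffer.data, jpegSize);

    nsecs_t timestamp = imgBuffer.timestamp;

    // 3) Unlock (unmap) the gralloc buffer so the producer can reuse it
    mCaptureConsumer->unlockBuffer(imgBuffer);

    // 4) Hand the JPEG over to CaptureSequencer (see the snippet below)
    sp<CaptureSequencer> sequencer = mSequencer.promote();
    if (sequencer != 0) {
        sequencer->onCaptureAvailable(timestamp, captureBuffer);
    }
    return OK;
}
```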

Finally, the following code hands the local JPEG buffer over to CaptureSequencer. So although CaptureSequencer collects the JPEG data and drives and controls the whole take picture sequence, the actual extraction of the JPEG (and ZSL) data is done by modules such as JpegProcessor and ZslProcessor:


```cpp
sp<CaptureSequencer> sequencer = mSequencer.promote();
if (sequencer != 0) {
    sequencer->onCaptureAvailable(imgBuffer.timestamp, captureBuffer); // tell CaptureSequencer a JPEG buffer has arrived
}
```

This delivers the signal for the other wait in CaptureSequencer's wait state machine (STANDARD_CAPTURE_WAIT).

At this point both onResultAvailable() and onCaptureAvailable() have been called back: the former is triggered by FrameProcessor and delivers the metadata accompanying the JPEG frame (timestamp, 3A state, and so on), while the latter is triggered by JpegProcessor and delivers the actual JPEG image.

Below is my summary diagram of the data interaction between the modules in take picture mode; in essence it is the response-and-processing flow between several threadLoop threads.

As the diagram shows, in JPEG (capture) mode the data that can be returned to the app each time includes the raw callback data stream, the JPEG image stream from JpegProcessor, and additional information such as the AF state and the raw face coordinates from face detection, delivered to the app as a camera_frame_metadata_t.
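For reference, the face-detection payload mentioned above is the following struct from system/core's camera.h, quoted from memory of the Android 5.x tree, so verify the details against your source:

```cpp
// camera_frame_metadata_t as defined in system/core/include/system/camera.h
// (quoted from memory; verify against your source tree).
typedef struct camera_face {
    int32_t rect[4];       // face bounds: left, top, right, bottom, in [-1000, 1000]
    int32_t score;         // detection confidence, 1..100
    int32_t id;            // unique face id while the face is tracked
    int32_t left_eye[2];   // left eye center (x, y); -2000 if not supported
    int32_t right_eye[2];  // right eye center (x, y)
    int32_t mouth[2];      // mouth center (x, y)
} camera_face_t;

typedef struct camera_frame_metadata {
    int32_t number_of_faces;   // number of entries in the faces array
    camera_face_t *faces;      // detected faces for this frame
} camera_frame_metadata_t;
```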
