Getting YUV420p raw camera data with the Android NDK

Source: https://www.cnblogs.com/zshsboke/p/19165860 (2025/10/25 19:05)


First of all, frameworks/av/camera/Camera.cpp is deprecated; stop using it. Of course, migrating off the old Camera API isn't cheap either, and most companies won't bother.
This post first introduces some common raw data formats, then shows how to use the NDK camera API; the next post will dig into the source code.
The call chain looks roughly like this:
CameraManager → CameraService → Camera HAL v3 → Sensor/Driver.

Common raw video data formats

Video is essentially a sequence of still images; thanks to persistence of vision, at about 24 frames per second the human eye can no longer pick out individual frames.
Encoding uses algorithms to work out the relationships between consecutive frames and compress them.
Decoding is the reverse: the compressed data is restored, frame by frame, into images that are then displayed.

yuv420p

This is the most common format. An example:
a 4x2-pixel image is stored as follows.
The Y plane has one sample per pixel:
YYYY
YYYY
Next comes the U plane; each 2x2 block of four Y samples shares one U sample:
UU
Then the V plane, likewise:
VV
The final layout in memory:

YYYY
YYYY
UU
VV

A 5x3-pixel image is stored as follows. The Y plane again has one sample per pixel, and the chroma dimensions round up, so each chroma plane is ceil(5/2) x ceil(3/2) = 3x2 samples:

YYYYY
YYYYY
YYYYY
UUU
UUU
VVV
VVV

Altogether they occupy 15 + 6 + 6 = 27 bytes in memory.
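These byte counts generalize to any resolution: the chroma planes round up for odd dimensions. A minimal sketch (the `i420Sizes` helper is my own name, not an NDK API):

```cpp
#include <cstdint>

// Plane sizes for yuv420p (I420). Chroma dimensions round up,
// so odd widths/heights are handled: ceil(n / 2) == (n + 1) / 2.
struct I420Sizes {
    int32_t ySize, uSize, vSize, total;
};

I420Sizes i420Sizes(int32_t width, int32_t height) {
    const int32_t chromaW = (width + 1) / 2;
    const int32_t chromaH = (height + 1) / 2;
    const int32_t y = width * height;   // one Y sample per pixel
    const int32_t c = chromaW * chromaH; // per chroma plane
    return {y, c, c, y + 2 * c};
}
```

For the 4x2 example this gives 8 + 2 + 2 = 12 bytes, and for 5x3 it gives 15 + 6 + 6 = 27 bytes.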
YU12

YU12 (also called I420) is exactly the yuv420p layout above: the whole Y plane, then the whole U plane, then the whole V plane. For a 4x8 image the chroma planes are 2x4 each:

YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
UU
UU
UU
UU
VV
VV
VV
VV

YU21 (standard name: YV12)

This variant swaps the chroma planes: the whole Y plane, then the whole V plane, then the whole U plane:

YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
VV
VV
VV
VV
UU
UU
UU
UU

yuv420sp

The difference from yuv420p: yuv420p stores the U and V samples in two separate, sequential planes, while yuv420sp interleaves them in a single chroma plane.
yuv420sp comes in two variants, NV12 and NV21.
NV12
4x8 (each chroma row holds 2 U and 2 V samples, interleaved U-first):

YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
UVUV
UVUV
UVUV
UVUV

NV21
4x8 (same as NV12, but V-first):

YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
YYYY
VUVU
VUVU
VUVU
VUVU
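Since NV21 merely interleaves the V and U samples that yuv420p keeps in separate planes, converting NV21 to yuv420p is just a matter of de-interleaving the chroma plane. A sketch, assuming even dimensions and no row padding (the function name `nv21ToI420` is mine):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Convert NV21 (Y plane + interleaved VUVU...) to I420 (Y + U + V planes).
// Assumes even width/height and tightly packed rows.
std::vector<uint8_t> nv21ToI420(const uint8_t* nv21, int width, int height) {
    const int ySize = width * height;
    const int cSize = (width / 2) * (height / 2);
    std::vector<uint8_t> i420(ySize + 2 * cSize);
    // The Y plane is identical in both layouts.
    std::copy(nv21, nv21 + ySize, i420.begin());
    uint8_t* u = i420.data() + ySize;
    uint8_t* v = u + cSize;
    const uint8_t* vu = nv21 + ySize;
    for (int i = 0; i < cSize; ++i) {
        v[i] = vu[2 * i];      // V comes first in NV21
        u[i] = vu[2 * i + 1];
    }
    return i420;
}
```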

Source code

CMake

# For more information about using CMake with Android Studio, read the
# documentation: https://d.android.com/studio/projects/add-native-code.html.
# For more examples on how to use CMake, see https://github.com/android/ndk-samples.

# Sets the minimum CMake version required for this project.
cmake_minimum_required(VERSION 3.22.1)

# Declares the project name. It can be accessed via ${PROJECT_NAME}; since this
# is the top-level CMakeLists.txt it is also available as ${CMAKE_PROJECT_NAME}
# (both variables are in sync within the top-level build script scope).
project(openslLearn VERSION 0.1.0 LANGUAGES C CXX)

# C++ standard settings
set(CMAKE_CXX_STANDARD 23)           # use the C++23 standard
set(CMAKE_CXX_STANDARD_REQUIRED ON)  # make the requested standard mandatory
set(CMAKE_CXX_EXTENSIONS OFF)        # disable compiler extensions (pure standard)

# First library: collect its source files.
file(GLOB_RECURSE LEARN01_SOURCES CONFIGURE_DEPENDS
    "src/learn01/*.cpp"
    "src/learn01/*.c"
)
add_library(${CMAKE_PROJECT_NAME} SHARED ${LEARN01_SOURCES})

# Header include paths.
target_include_directories(${CMAKE_PROJECT_NAME}
    PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/learn01
    PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/logging
)

# Libraries CMake should link to the target: libraries defined in this build
# script, prebuilt third-party libraries, or Android system libraries.
# To load the library from Java/Kotlin, call System.loadLibrary() with the
# name defined here.
target_link_libraries(${CMAKE_PROJECT_NAME}
    android
    log
    OpenSLES
)

# Second library (openslLearn2).
file(GLOB_RECURSE LEARN02_SOURCES CONFIGURE_DEPENDS
    "src/learn02/*.cpp"
    "src/learn02/*.c"
    "src/sqlite/*.cpp"
    "src/sqlite/*.c"
)
set(LIBRARY_NAME2 ${CMAKE_PROJECT_NAME}2)
message("LIBRARY_NAME2: ${LIBRARY_NAME2}")
add_library(${LIBRARY_NAME2} SHARED ${LEARN02_SOURCES})  # different source set
target_include_directories(${LIBRARY_NAME2}
    PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/sqlite
    PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/learn02
    PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include/logging
)
find_package(oboe REQUIRED CONFIG)
target_link_libraries(${LIBRARY_NAME2}
    android
    log
    aaudio
    oboe::oboe
    camera2ndk
    mediandk
)

Header

//
// Created by 29051 on 2025/10/25.
//

#ifndef OPENSL_LEARN_CAMERA_HPP
#define OPENSL_LEARN_CAMERA_HPP

extern "C" {
#include <camera/NdkCameraManager.h>
#include <media/NdkImageReader.h>
}

#include <string>
#include <fstream>

#include "logging.hpp"

class NDKCamera {
private:
    int mWidth;
    int wHeight;
    ACameraManager *aCameraManager = nullptr;
    ACameraDevice *device = nullptr;
    ACameraCaptureSession *session = nullptr;
    AImageReader *aImageReader = nullptr;
    ACaptureSessionOutputContainer *aCaptureSessionOutputContainer = nullptr;
    ACaptureSessionOutput *sessionOutput = nullptr;
    std::string yuvPath;
    std::ofstream *yuvStream = nullptr;
public:
    NDKCamera(int width, int height, std::string yuvPath);
    ~NDKCamera();

    /**
     * Print the camera's capabilities.
     */
    void printCameraCapabilities(const char *cameraId);
};

#endif //OPENSL_LEARN_CAMERA_HPP

Source file

//
// Created by 29051 on 2025/10/25.
//
#include "NDKCamera.hpp"

#include <cinttypes>
#include <utility>

const char * const TAG = "NDKCamera";

/**
 * CameraManager → CameraService → Camera HAL v3 → Sensor/Driver
 * @param width
 * @param height
 */
NDKCamera::NDKCamera(int width, int height, std::string yuvPath)
        : mWidth(width), wHeight(height), yuvPath(std::move(yuvPath)) {
    logger::info(TAG, "width: %d, height: %d, yuvPath: %s",
                 this->mWidth, this->wHeight, this->yuvPath.c_str());
    this->yuvStream = new std::ofstream(this->yuvPath, std::ios::binary);
    if (!this->yuvStream->is_open()) {
        logger::error(TAG, "failed to open the output file...");
        return;
    }
    aCameraManager = ACameraManager_create();
    if (aCameraManager == nullptr) {
        logger::error(TAG, "aCameraManager is null");
        return;
    }
    ACameraIdList *cameraIdList = nullptr;
    camera_status_t status = ACameraManager_getCameraIdList(aCameraManager, &cameraIdList);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "getCameraIdList failed");
        return;
    }
    if (cameraIdList->numCameras <= 0) {
        logger::error(TAG, "this device has no camera");
        return;
    }
    for (int i = 0; i < cameraIdList->numCameras; i++) {
        logger::info(TAG, "index: %d, cameraId: %s", i, cameraIdList->cameraIds[i]);
    }
    const char *cameraId = cameraIdList->cameraIds[1];
    this->printCameraCapabilities(cameraId);
    ACameraDevice_StateCallbacks deviceStateCallbacks = {
            .context = nullptr,
            .onDisconnected = [](void *, ACameraDevice *aCameraDevice) -> void {},
            .onError = [](void *, ACameraDevice *aCameraDevice, int errorCode) -> void {},
    };
    status = ACameraManager_openCamera(aCameraManager, cameraId, &deviceStateCallbacks, &device);
    // The id list (and the cameraId string it owns) is no longer needed.
    ACameraManager_deleteCameraIdList(cameraIdList);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "openCamera failed");
        return;
    }
    media_status_t mediaStatus = AImageReader_new(width, height, AIMAGE_FORMAT_YUV_420_888, 4, &aImageReader);
    if (mediaStatus != AMEDIA_OK) {
        logger::error(TAG, "AImageReader_new failed");
        return;
    }
    AImageReader_ImageListener imageListener = {
            .context = this,
            .onImageAvailable = [](void *context, AImageReader *reader) -> void {
                AImage *image = nullptr;
                media_status_t mediaStatus = AImageReader_acquireNextImage(reader, &image);
                if (mediaStatus != AMEDIA_OK || image == nullptr) {
                    logger::error(TAG, "failed to acquire the current yuv frame");
                    AImage_delete(image);
                    return;
                }
                int32_t width = 0, height = 0;
                mediaStatus = AImage_getWidth(image, &width);
                if (mediaStatus != AMEDIA_OK) {
                    logger::error(TAG, "failed to get the frame width");
                    AImage_delete(image);
                    return;
                }
                mediaStatus = AImage_getHeight(image, &height);
                if (mediaStatus != AMEDIA_OK) {
                    logger::error(TAG, "failed to get the frame height");
                    AImage_delete(image);
                    return;
                }
                // ==========
                const auto *ndkCamera = reinterpret_cast<NDKCamera *>(context);
                for (int plane = 0; plane < 3; ++plane) {
                    uint8_t *planeData = nullptr;
                    int planeDataLen = 0;
                    if (AImage_getPlaneData(image, plane, &planeData, &planeDataLen) != AMEDIA_OK) {
                        logger::error(TAG, "AImage_getPlaneData failed plane=%d", plane);
                        AImage_delete(image);
                        return;
                    }
                    int rowStride = 0, pixelStride = 0;
                    AImage_getPlaneRowStride(image, plane, &rowStride);
                    AImage_getPlanePixelStride(image, plane, &pixelStride);
                    int planeWidth = (plane == 0) ? width : (width + 1) / 2;
                    int planeHeight = (plane == 0) ? height : (height + 1) / 2;
                    // Write row by row, honoring pixelStride, so the output is
                    // contiguous Y, then U, then V.
                    for (int y = 0; y < planeHeight; ++y) {
                        const uint8_t *rowPtr = planeData + y * rowStride;
                        if (pixelStride == 1) {
                            // Write planeWidth bytes directly.
                            ndkCamera->yuvStream->write(reinterpret_cast<const char *>(rowPtr), planeWidth);
                        } else {
                            // Extract one sample every pixelStride bytes.
                            for (int x = 0; x < planeWidth; ++x) {
                                ndkCamera->yuvStream->put(rowPtr[x * pixelStride]);
                            }
                        }
                    }
                }
                AImage_delete(image);
                logger::info(TAG, "yuv width: %d, height: %d", width, height);
            },
    };
    AImageReader_setImageListener(aImageReader, &imageListener);
    ANativeWindow *window = nullptr;
    mediaStatus = AImageReader_getWindow(aImageReader, &window);
    if (mediaStatus != AMEDIA_OK) {
        logger::error(TAG, "AImageReader_getWindow failed");
        return;
    }
    ACaptureRequest *request = nullptr;
    status = ACameraDevice_createCaptureRequest(device, TEMPLATE_PREVIEW, &request);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "ACameraDevice_createCaptureRequest failed");
        return;
    }
    // Set the target frame-rate range.
    int32_t range[2] = {30, 30}; // fixed 30 fps
    ACaptureRequest_setEntry_i32(request, ACAMERA_CONTROL_AE_TARGET_FPS_RANGE, 2, range);
    ACameraOutputTarget *aCameraOutputTarget = nullptr;
    status = ACameraOutputTarget_create(window, &aCameraOutputTarget);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "ACameraOutputTarget_create failed");
        return;
    }
    status = ACaptureRequest_addTarget(request, aCameraOutputTarget);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "ACaptureRequest_addTarget failed");
        return;
    }
    ACameraCaptureSession_stateCallbacks sessionStateCallbacks = {
            .context = nullptr,
            .onClosed = [](void *context, ACameraCaptureSession *session) -> void {
                logger::info(TAG, "onClosed...");
            },
            .onReady = [](void *context, ACameraCaptureSession *session) -> void {
                logger::info(TAG, "onReady...");
            },
            .onActive = [](void *context, ACameraCaptureSession *session) -> void {
                logger::info(TAG, "onActive...");
            },
    };
    ACameraCaptureSession_captureCallbacks captureCallbacks = {
            .context = nullptr,
            .onCaptureStarted = [](void *context, ACameraCaptureSession *session,
                                   const ACaptureRequest *request, int64_t timestamp) -> void {
                logger::info(TAG, "onCaptureStarted timestamp: %" PRId64, timestamp);
            },
            .onCaptureProgressed = [](void *context, ACameraCaptureSession *session,
                                      ACaptureRequest *request, const ACameraMetadata *result) -> void {
                logger::info(TAG, "onCaptureProgressed...");
            },
            .onCaptureCompleted = [](void *context, ACameraCaptureSession *session,
                                     ACaptureRequest *request, const ACameraMetadata *result) -> void {
                ACameraMetadata_const_entry fpsEntry = {};
                if (ACameraMetadata_getConstEntry(result,
                        ACAMERA_CONTROL_AE_TARGET_FPS_RANGE, &fpsEntry) == ACAMERA_OK) {
                    if (fpsEntry.count >= 2) {
                        int32_t minFps = fpsEntry.data.i32[0];
                        int32_t maxFps = fpsEntry.data.i32[1];
                        logger::info(TAG, "onCaptureCompleted current fps range: [%d, %d]", minFps, maxFps);
                    }
                }
            },
            .onCaptureFailed = [](void *context, ACameraCaptureSession *session,
                                  ACaptureRequest *request, ACameraCaptureFailure *failure) -> void {
                logger::info(TAG, "onCaptureFailed frameNumber: %" PRId64 ", reason: %d, sequenceId: %d, wasImageCaptured: %d",
                             failure->frameNumber, failure->reason, failure->sequenceId, failure->wasImageCaptured);
            },
            .onCaptureSequenceCompleted = [](void *context, ACameraCaptureSession *session,
                                             int sequenceId, int64_t frameNumber) -> void {
                logger::info(TAG, "onCaptureSequenceCompleted sequenceId: %d, frameNumber: %" PRId64,
                             sequenceId, frameNumber);
            },
            .onCaptureSequenceAborted = [](void *context, ACameraCaptureSession *session,
                                           int sequenceId) -> void {
                logger::info(TAG, "onCaptureSequenceAborted sequenceId: %d", sequenceId);
            },
            .onCaptureBufferLost = [](void *context, ACameraCaptureSession *session,
                                      ACaptureRequest *request, ACameraWindowType *window,
                                      int64_t frameNumber) -> void {
                logger::info(TAG, "onCaptureBufferLost frameNumber: %" PRId64, frameNumber);
            },
    };
    status = ACaptureSessionOutputContainer_create(&aCaptureSessionOutputContainer);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "ACaptureSessionOutputContainer_create failed");
        return;
    }
    status = ACaptureSessionOutput_create(window, &sessionOutput);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "ACaptureSessionOutput_create failed");
        return;
    }
    status = ACaptureSessionOutputContainer_add(aCaptureSessionOutputContainer, sessionOutput);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "ACaptureSessionOutputContainer_add failed");
        return;
    }
    status = ACameraDevice_createCaptureSession(device, aCaptureSessionOutputContainer,
                                                &sessionStateCallbacks, &session);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "ACameraDevice_createCaptureSession failed");
        return;
    }
#if __ANDROID_API__ >= 33
    ACameraCaptureSession_captureCallbacksV2 captureCallbacksV2 = {
            .context = nullptr,
            .onCaptureStarted = [](void *context, ACameraCaptureSession *session,
                                   const ACaptureRequest *request, int64_t timestamp,
                                   int64_t frameNumber) -> void {},
            .onCaptureProgressed = [](void *context, ACameraCaptureSession *session,
                                      ACaptureRequest *request, const ACameraMetadata *result) -> void {},
            .onCaptureCompleted = [](void *context, ACameraCaptureSession *session,
                                     ACaptureRequest *request, const ACameraMetadata *result) -> void {},
            .onCaptureFailed = [](void *context, ACameraCaptureSession *session,
                                  ACaptureRequest *request, ACameraCaptureFailure *failure) -> void {},
            .onCaptureSequenceCompleted = [](void *context, ACameraCaptureSession *session,
                                             int sequenceId, int64_t frameNumber) -> void {},
            .onCaptureSequenceAborted = [](void *context, ACameraCaptureSession *session,
                                           int sequenceId) -> void {},
            .onCaptureBufferLost = [](void *context, ACameraCaptureSession *session,
                                      ACaptureRequest *request, ACameraWindowType *window,
                                      int64_t frameNumber) -> void {},
    };
    status = ACameraCaptureSession_setRepeatingRequestV2(session, &captureCallbacksV2, 1, &request, nullptr);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "ACameraCaptureSession_setRepeatingRequestV2 failed");
        return;
    }
#else
    status = ACameraCaptureSession_setRepeatingRequest(session, &captureCallbacks, 1, &request, nullptr);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "ACameraCaptureSession_setRepeatingRequest failed");
        return;
    }
#endif
}

NDKCamera::~NDKCamera() {
    logger::info(TAG, "~NDKCamera...");
    if (this->aImageReader != nullptr) {
        AImageReader_delete(this->aImageReader);
    }
    if (session != nullptr) {
        ACameraCaptureSession_close(session);
    }
    if (device != nullptr) {
        ACameraDevice_close(device);
    }
    if (aCameraManager != nullptr) {
        ACameraManager_delete(aCameraManager);
    }
    if (this->yuvStream != nullptr) {
        this->yuvStream->close();
        delete this->yuvStream;  // the stream was allocated with new
    }
    if (this->aCaptureSessionOutputContainer != nullptr) {
        ACaptureSessionOutputContainer_free(this->aCaptureSessionOutputContainer);
    }
    if (this->sessionOutput != nullptr) {
        ACaptureSessionOutput_free(this->sessionOutput);
    }
}

void NDKCamera::printCameraCapabilities(const char * const cameraId) {
    ACameraMetadata *metadata = nullptr;
    camera_status_t status = ACameraManager_getCameraCharacteristics(this->aCameraManager, cameraId, &metadata);
    if (status != ACAMERA_OK) {
        logger::error(TAG, "failed to get the camera characteristics");
        return;
    }
    ACameraMetadata_const_entry entry = {};
    if (ACameraMetadata_getConstEntry(metadata, ACAMERA_SCALER_AVAILABLE_STREAM_CONFIGURATIONS, &entry) == ACAMERA_OK) {
        logger::info(TAG, "supported resolutions:");
        for (uint32_t i = 0; i + 3 < entry.count; i += 4) {
            int32_t format = entry.data.i32[i + 0];
            int32_t width = entry.data.i32[i + 1];
            int32_t height = entry.data.i32[i + 2];
            int32_t isInput = entry.data.i32[i + 3];
            if (isInput == 0 && format == AIMAGE_FORMAT_YUV_420_888) {
                logger::info(TAG, "format: %d, width: %d, height: %d, isInput: %d",
                             format, width, height, isInput);
            }
        }
    }
    if (ACameraMetadata_getConstEntry(metadata, ACAMERA_CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES, &entry) == ACAMERA_OK) {
        logger::info(TAG, "supported fps ranges:");
        for (uint32_t i = 0; i + 1 < entry.count; i += 2) {
            logger::info(TAG, "fps range: [%d, %d]", entry.data.i32[i], entry.data.i32[i + 1]);
        }
    }
    ACameraMetadata_free(metadata);
}

Exposing to Kotlin

extern "C"
JNIEXPORT jlong JNICALL
Java_io_github_opensllearn_utils_Utils_initCamera(JNIEnv *env, jobject, jint width, jint height, jstring pcmPath) {
    NDKCamera *ndkCamera = nullptr;
    try {
        jboolean isCopy = false;
        const char * const pcmPathStr = env->GetStringUTFChars(pcmPath, &isCopy);
        ndkCamera = new NDKCamera(width, height, pcmPathStr);
        // Release unconditionally: GetStringUTFChars must always be paired
        // with ReleaseStringUTFChars, whether or not the JVM made a copy.
        env->ReleaseStringUTFChars(pcmPath, pcmPathStr);
    } catch (const std::exception &e) {
        delete ndkCamera;
        ndkCamera = nullptr;
        env->ThrowNew(env->FindClass("java/lang/RuntimeException"), e.what());
    }
    return reinterpret_cast<jlong>(ndkCamera);
}

extern "C"
JNIEXPORT void JNICALL
Java_io_github_opensllearn_utils_Utils_releaseCamera(JNIEnv *, jobject, jlong ptr) {
    const auto * const ndkCamera = reinterpret_cast<NDKCamera *>(ptr);
    delete ndkCamera;
}

That's it for capture. If you later want to render, you can pass a Surface into native code and use OpenGL: first convert the yuv420p data to RGB, then hand it to OpenGL. It's not very complicated.
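For that rendering step, the per-pixel yuv420p → RGB conversion can use the BT.601 full-range formula. This is one common choice; cameras may deliver limited-range data, so treat it as a sketch (the function name is mine):

```cpp
#include <algorithm>
#include <cstdint>

// BT.601 full-range YUV -> RGB for a single pixel.
// U and V are unsigned bytes centered at 128.
struct Rgb { uint8_t r, g, b; };

Rgb yuvToRgb(uint8_t y, uint8_t u, uint8_t v) {
    const float yf = static_cast<float>(y);
    const float uf = static_cast<float>(u) - 128.0f;
    const float vf = static_cast<float>(v) - 128.0f;
    auto clamp = [](float x) {
        return static_cast<uint8_t>(std::min(255.0f, std::max(0.0f, x)));
    };
    return {
        clamp(yf + 1.402f * vf),                  // R
        clamp(yf - 0.344f * uf - 0.714f * vf),    // G
        clamp(yf + 1.772f * uf),                  // B
    };
}
```

In a real renderer this conversion is usually done in a fragment shader rather than on the CPU.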

Core logic

for (int plane = 0; plane < 3; ++plane) {
    uint8_t *planeData = nullptr;
    int planeDataLen = 0;
    if (AImage_getPlaneData(image, plane, &planeData, &planeDataLen) != AMEDIA_OK) {
        logger::error(TAG, "AImage_getPlaneData failed plane=%d", plane);
        AImage_delete(image);
        return;
    }
    int rowStride = 0, pixelStride = 0;
    AImage_getPlaneRowStride(image, plane, &rowStride);
    AImage_getPlanePixelStride(image, plane, &pixelStride);
    int planeWidth = (plane == 0) ? width : (width + 1) / 2;
    int planeHeight = (plane == 0) ? height : (height + 1) / 2;
    // Write row by row, honoring pixelStride, so the output is
    // contiguous Y, then U, then V.
    for (int y = 0; y < planeHeight; ++y) {
        const uint8_t *rowPtr = planeData + y * rowStride;
        if (pixelStride == 1) {
            // Write planeWidth bytes directly.
            ndkCamera->yuvStream->write(reinterpret_cast<const char *>(rowPtr), planeWidth);
        } else {
            // Extract one sample every pixelStride bytes.
            for (int x = 0; x < planeWidth; ++x) {
                ndkCamera->yuvStream->put(rowPtr[x * pixelStride]);
            }
        }
    }
}

AIMAGE_FORMAT_YUV_420_888: the trailing 888 means that each of Y, U and V occupies 8 bits.
This deliberately flexible format covers both the yuv420p and yuv420sp layouts described above.
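Which concrete layout you actually received can be determined from the chroma pixel stride and the chroma plane pointers. A sketch over plain values rather than the AImage API, so it runs anywhere; the enum and function names are my own, not part of the NDK:

```cpp
#include <cstdint>

enum class Yuv420Layout { Planar, NV12, NV21, Other };

// uPlane/vPlane: pointers as returned by AImage_getPlaneData for planes 1 and 2;
// chromaPixelStride: AImage_getPlanePixelStride for either chroma plane.
Yuv420Layout classify(const uint8_t* uPlane, const uint8_t* vPlane,
                      int32_t chromaPixelStride) {
    if (chromaPixelStride == 1) return Yuv420Layout::Planar;  // I420-like
    if (chromaPixelStride == 2) {
        // Interleaved chroma: the two "planes" alias the same buffer,
        // offset from each other by one byte.
        if (vPlane == uPlane + 1) return Yuv420Layout::NV12;  // UVUV...
        if (uPlane == vPlane + 1) return Yuv420Layout::NV21;  // VUVU...
    }
    return Yuv420Layout::Other;
}
```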

int32_t planes = 0;
AImage_getNumberOfPlanes(image, &planes);

AImage_getNumberOfPlanes returns the number of planes, typically 3 (RGB or YUV) or 4 (ARGB); for AIMAGE_FORMAT_YUV_420_888 it is always 3.
AImage_getPlaneData(image, plane, &planeData, &planeDataLen) fetches the buffer of the corresponding plane.
planeData is a byte pointer to that plane's buffer, which you can think of as a 2D array of rows; planeDataLen is the length of that 2D array flattened into one dimension.
For example:


planeData
|
YYYY
YYYY

Another example:

planeData
|
UPUP

Now comes the climax, so stay sharp! But first, an AI-generated joke.

"Grinding on this code until the small hours, a sudden soul-searching question: what's the point of learning YUV formats and wrangling AImage?" "If a rich benefactor burst in right now, slapped me and said 'stop grinding on this junk', then tossed me a black card with 'I'm taking you around the world', I'd delete my compiler on the spot!"
"Stuck another two hours debugging YUV420P conversion, staring blankly at the screen: how many cents are these skills even worth?" "Then the daydream: she storms through the door, slaps me, and declares 'stop fighting with pixels, we're off to the Maldives to sunbathe' — sigh, time to wake up and get back to fixing bugs."
"Halfway through the AImage extraction code, the urge to give up: besides hair loss, what does learning this niche tech get me?" "If someone would just tap me and say 'stop studying, it's useless, let's go travel the world', I'd drag the project folder into the recycle bin without a moment's hesitation!"

The dream is over!
AImage_getPlaneRowStride returns the number of bytes per row, and that count can include invalid padding bytes.
Like this:

planeData
|
UPUP

P is the invalid padding, which is why the next function has to step in.
AImage_getPlanePixelStride gives the distance in bytes between consecutive valid samples within a row.
When it is greater than one, you have to copy the plane one byte at a time.
The end.
