IJKPLAYER Source Code Analysis: OpenGL ES Rendering

1 Introduction

  • When rendering video, IJKPLAYER does not simply use the SDL rendering API; it renders with OpenGL ES, then displays the result in platform-specific ways on Android and iOS;
  • In short, OpenGL ES cannot render directly onto a window; it needs an intermediary that takes the OpenGL ES render result and finally outputs it to the window for display;
  • On Android this intermediary is EGL, with a surface as the output target; on iOS it is a layer, specifically CAEAGLLayer in IJKPLAYER's case;
  • This article only covers IJKPLAYER's cross-platform (Android/iOS) video rendering with OpenGL ES; the display part will be covered in a separate article;

    Note: reading this article requires some background in OpenGL shader programming.

2 Interfaces

2.1 The SDL_Vout interface

  • Similar to the SDL_Aout interface, SDL_Vout is IJKPLAYER's abstraction of video output on Android and iOS;
  • Its input is decoded pixel data; its output goes to OpenGL ES, which does the actual rendering work;

    The creation of the SDL_Vout instance will be covered in the article on Android/iOS display, so it is skipped here.

    The key structs surrounding video output and rendering are defined as follows:

typedef struct SDL_Vout_Opaque {
    ANativeWindow   *native_window;
    SDL_AMediaCodec *acodec;
    int              null_native_window_warned; // reduce log for null window
    int              next_buffer_id;

    ISDL_Array       overlay_manager;
    ISDL_Array       overlay_pool;

    IJK_EGL         *egl;
} SDL_Vout_Opaque;

typedef struct SDL_VoutOverlay_Opaque SDL_VoutOverlay_Opaque;
typedef struct SDL_VoutOverlay SDL_VoutOverlay;
struct SDL_VoutOverlay {
    int w; /**< Read-only */
    int h; /**< Read-only */
    Uint32 format; /**< Read-only */
    int planes; /**< Read-only */
    Uint16 *pitches; /**< in bytes, Read-only */
    Uint8 **pixels; /**< Read-write */

    int is_private;

    int sar_num;
    int sar_den;

    SDL_Class               *opaque_class;
    SDL_VoutOverlay_Opaque  *opaque;

    void    (*free_l)(SDL_VoutOverlay *overlay);
    int     (*lock)(SDL_VoutOverlay *overlay);
    int     (*unlock)(SDL_VoutOverlay *overlay);
    void    (*unref)(SDL_VoutOverlay *overlay);

    int     (*func_fill_frame)(SDL_VoutOverlay *overlay, const AVFrame *frame);
};

typedef struct SDL_Vout_Opaque SDL_Vout_Opaque;
typedef struct SDL_Vout SDL_Vout;
struct SDL_Vout {
    SDL_mutex *mutex;

    SDL_Class       *opaque_class;
    SDL_Vout_Opaque *opaque;
    SDL_VoutOverlay *(*create_overlay)(int width, int height, int frame_format, SDL_Vout *vout);
    void (*free_l)(SDL_Vout *vout);
    int (*display_overlay)(SDL_Vout *vout, SDL_VoutOverlay *overlay);

    Uint32 overlay_format;
};

2.2 The renderer interface

  • The pixel formats IJKPLAYER supports are essentially the yuv and rgb families, plus the pixel formats produced by iOS VideoToolbox hardware decoding;
  • Rendering for every pixel format follows a common interface: IJK_GLES2_Renderer;
  • Notably, if the source pixel format is not among those IJKPLAYER supports, an option allows converting it to a supported target format before rendering;

    The renderer interface for each pixel format is defined as:

typedef struct IJK_GLES2_Renderer
{
    IJK_GLES2_Renderer_Opaque *opaque;

    GLuint program;

    GLuint vertex_shader;
    GLuint fragment_shader;
    GLuint plane_textures[IJK_GLES2_MAX_PLANE];

    GLuint av4_position;
    GLuint av2_texcoord;
    GLuint um4_mvp;

    GLuint us2_sampler[IJK_GLES2_MAX_PLANE];
    GLuint um3_color_conversion;

    GLboolean (*func_use)(IJK_GLES2_Renderer *renderer);
    GLsizei   (*func_getBufferWidth)(IJK_GLES2_Renderer *renderer, SDL_VoutOverlay *overlay);
    GLboolean (*func_uploadTexture)(IJK_GLES2_Renderer *renderer, SDL_VoutOverlay *overlay);
    GLvoid    (*func_destroy)(IJK_GLES2_Renderer *renderer);

    GLsizei buffer_width;
    GLsizei visible_width;

    GLfloat texcoords[8];

    GLfloat vertices[8];
    int     vertices_changed;

    int     format;
    int     gravity;
    GLsizei layer_width;
    GLsizei layer_height;
    int     frame_width;
    int     frame_height;
    int     frame_sar_num;
    int     frame_sar_den;

    GLsizei last_buffer_width;
} IJK_GLES2_Renderer;

3 render

    The common yuv420p format is used as the example here; the others are similar, differing only in pixel layout.

3.1 shader

    A shader is a small program that runs on the GPU. Why do shaders matter for video rendering? Because the GPU's strong floating-point and parallel computing power provides hardware acceleration, taking over graphics work the CPU is not good at.

    Two shaders matter for video rendering: the vertex shader and the fragment shader.

3.1.1 vertex shader
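    Before looking at its inputs, here is what the default vertex shader, shared by all pixel formats, essentially looks like: a pass-through that transforms the vertex position by the MVP matrix and forwards the texture coordinate to the fragment shader. It is reconstructed below from the attribute/uniform names used elsewhere in this article (av4_Position, av2_Texcoord, um4_ModelViewProjection, vv2_Texcoord); the IJK_GLES_STRING definition shown is an assumption that simply stringizes its argument:

```c
// Sketch of the default vertex shader; IJK_GLES_STRING is assumed
// here to turn the GLSL body into a C string literal.
#define IJK_GLES_STRING(x) #x

static const char g_vertex_shader[] = IJK_GLES_STRING(
    precision highp float;
    varying   highp vec2 vv2_Texcoord;
    attribute highp vec4 av4_Position;
    attribute highp vec2 av2_Texcoord;
    uniform         mat4 um4_ModelViewProjection;

    void main()
    {
        // transform the vertex into clip space, forward the texcoord
        gl_Position  = um4_ModelViewProjection * av4_Position;
        vv2_Texcoord = av2_Texcoord.xy;
    }
);
```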

3.1.2 Vertex coordinates

typedef struct IJK_GLES2_Renderer
{
    // ......
    // vertex coordinates
    GLfloat vertices[8];
    // ......
} IJK_GLES2_Renderer;
  • The vertex coordinates are initialized to determine the image display area;
  • On iOS, aspect-preserving scaling or stretching of the image is done via the vertex coordinates;
  • On Android, scaling or stretching is implemented in the Java layer;

    Initializing the vertex coordinates:

static void IJK_GLES2_Renderer_Vertices_apply(IJK_GLES2_Renderer *renderer)
{
    switch (renderer->gravity) {
        case IJK_GLES2_GRAVITY_RESIZE_ASPECT:
            break;
        case IJK_GLES2_GRAVITY_RESIZE_ASPECT_FILL:
            break;
        case IJK_GLES2_GRAVITY_RESIZE:
            IJK_GLES2_Renderer_Vertices_reset(renderer);
            return;
        default:
            ALOGE("[GLES2] unknown gravity %d\n", renderer->gravity);
            IJK_GLES2_Renderer_Vertices_reset(renderer);
            return;
    }

    if (renderer->layer_width <= 0 ||
        renderer->layer_height <= 0 ||
        renderer->frame_width <= 0 ||
        renderer->frame_height <= 0)
    {
        ALOGE("[GLES2] invalid width/height for gravity aspect\n");
        IJK_GLES2_Renderer_Vertices_reset(renderer);
        return;
    }

    float width     = renderer->frame_width;
    float height    = renderer->frame_height;

    if (renderer->frame_sar_num > 0 && renderer->frame_sar_den > 0) {
        width = width * renderer->frame_sar_num / renderer->frame_sar_den;
    }

    const float dW  = (float)renderer->layer_width  / width;
    const float dH  = (float)renderer->layer_height / height;
    float dd        = 1.0f;
    float nW        = 1.0f;
    float nH        = 1.0f;

    // Two aspect-preserving modes that fill the given screen:
    // supported on iOS; Android handles scaling in the Java layer
    switch (renderer->gravity) {
        case IJK_GLES2_GRAVITY_RESIZE_ASPECT_FILL:  dd = FFMAX(dW, dH); break;
        case IJK_GLES2_GRAVITY_RESIZE_ASPECT:       dd = FFMIN(dW, dH); break;
    }

    nW = (width  * dd / (float)renderer->layer_width);
    nH = (height * dd / (float)renderer->layer_height);

    renderer->vertices[0] = - nW;
    renderer->vertices[1] = - nH;
    renderer->vertices[2] =   nW;
    renderer->vertices[3] = - nH;
    renderer->vertices[4] = - nW;
    renderer->vertices[5] =   nH;
    renderer->vertices[6] =   nW;
    renderer->vertices[7] =   nH;
}

    On closer inspection, both Android and iOS call this function when initializing the shader's vertex coordinates, but Android differs slightly: since Android does its image scaling above the SDK layer (in Java), the vertex coordinates on Android are effectively initialized by the following function:

static void IJK_GLES2_Renderer_Vertices_reset(IJK_GLES2_Renderer *renderer)
{
    renderer->vertices[0] = -1.0f;
    renderer->vertices[1] = -1.0f;
    renderer->vertices[2] =  1.0f;
    renderer->vertices[3] = -1.0f;
    renderer->vertices[4] = -1.0f;
    renderer->vertices[5] =  1.0f;
    renderer->vertices[6] =  1.0f;
    renderer->vertices[7] =  1.0f;
}

3.1.3 Texture coordinates

    Why are texture coordinates needed when we already have vertex coordinates? My personal take:

  • Texture coordinates determine which region of the image is sampled, and its aspect ratio;
  • But texture coordinates must be mapped onto vertex coordinates before the final display area is determined and the image can be rendered;
typedef struct IJK_GLES2_Renderer
{
    // ......
    // texture coordinates
    GLfloat texcoords[8];
} IJK_GLES2_Renderer;

3.1.4 model view projection

  • model is the model matrix, view the view matrix, and projection the projection matrix. In the shader, the model matrix is first multiplied by the view matrix, giving the model-view matrix mv;

  • The vertex position is then multiplied by the mv matrix and finally by the projection matrix, producing the screen-space coordinate gl_Position;

  • Typically, the model-view and projection matrices are computed on the CPU before rendering the scene and passed to the GLSL shader via uniform variables;

    // Transform vertex coordinates to screen coordinates via the model-view-projection matrix
    IJK_GLES_Matrix modelViewProj;
    IJK_GLES2_loadOrtho(&modelViewProj, -1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
    glUniformMatrix4fv(renderer->um4_mvp, 1, GL_FALSE, modelViewProj.m);                    IJK_GLES2_checkError_TRACE("glUniformMatrix4fv(um4_mvp)");

    The model-view-projection matrix is computed on the CPU and then passed to the GLSL shader via a uniform variable:

void IJK_GLES2_loadOrtho(IJK_GLES_Matrix *matrix, GLfloat left, GLfloat right, GLfloat bottom, GLfloat top, GLfloat near, GLfloat far)
{
    GLfloat r_l = right - left;
    GLfloat t_b = top - bottom;
    GLfloat f_n = far - near;
    GLfloat tx = - (right + left) / (right - left);
    GLfloat ty = - (top + bottom) / (top - bottom);
    GLfloat tz = - (far + near) / (far - near);

    matrix->m[0]  = 2.0f / r_l;
    matrix->m[1]  = 0.0f;
    matrix->m[2]  = 0.0f;
    matrix->m[3]  = 0.0f;

    matrix->m[4]  = 0.0f;
    matrix->m[5]  = 2.0f / t_b;
    matrix->m[6]  = 0.0f;
    matrix->m[7]  = 0.0f;

    matrix->m[8]  = 0.0f;
    matrix->m[9]  = 0.0f;
    matrix->m[10] = -2.0f / f_n;
    matrix->m[11] = 0.0f;

    matrix->m[12] = tx;
    matrix->m[13] = ty;
    matrix->m[14] = tz;
    matrix->m[15] = 1.0f;
}

3.1.5 fragment shader

  • Sample the y, u, and v components through their respective texture samplers;
  • Multiply the yuv vector by a bt709 matrix to convert yuv to rgb, and let the fragment shader output that color;
static const char g_shader[] = IJK_GLES_STRING(
    precision highp float;
    varying   highp vec2 vv2_Texcoord;
    uniform         mat3 um3_ColorConversion;
    uniform   lowp  sampler2D us2_SamplerX;
    uniform   lowp  sampler2D us2_SamplerY;
    uniform   lowp  sampler2D us2_SamplerZ;

    void main()
    {
        mediump vec3 yuv;
        lowp    vec3 rgb;

        yuv.x = (texture2D(us2_SamplerX, vv2_Texcoord).r - (16.0 / 255.0));
        yuv.y = (texture2D(us2_SamplerY, vv2_Texcoord).r - 0.5);
        yuv.z = (texture2D(us2_SamplerZ, vv2_Texcoord).r - 0.5);
        rgb = um3_ColorConversion * yuv;
        gl_FragColor = vec4(rgb, 1);
    }
);

    The yuv-to-rgb color space conversion matrix, per the bt709 standard:

glUniformMatrix3fv(renderer->um3_color_conversion, 1, GL_FALSE, IJK_GLES2_getColorMatrix_bt709());

// BT.709, which is the standard for HDTV.
static const GLfloat g_bt709[] = {
    1.164,  1.164,  1.164,
    0.0,   -0.213,  2.112,
    1.793, -0.533,  0.0,
};

const GLfloat *IJK_GLES2_getColorMatrix_bt709()
{
    return g_bt709;
}

    The bt709 matrix is uploaded to the GPU via a uniform variable after the renderer for the corresponding pixel format has been created:

    The calling context that leads to the matrix upload:

    ......
    opaque->renderer = IJK_GLES2_Renderer_create(overlay);
    if (!opaque->renderer) {
        ALOGE("[EGL] Could not create render.");
        return EGL_FALSE;
    }

    if (!IJK_GLES2_Renderer_use(opaque->renderer)) {
        ALOGE("[EGL] Could not use render.");
        IJK_GLES2_Renderer_freeP(&opaque->renderer);
        return EGL_FALSE;
    }
    ......

/*
 * Per-Renderer routine
 */
GLboolean IJK_GLES2_Renderer_use(IJK_GLES2_Renderer *renderer)
{
    if (!renderer)
        return GL_FALSE;

    assert(renderer->func_use);
    if (!renderer->func_use(renderer))
        return GL_FALSE;

    // Transform vertex coordinates to screen coordinates via the model-view-projection matrix
    IJK_GLES_Matrix modelViewProj;
    IJK_GLES2_loadOrtho(&modelViewProj, -1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
    glUniformMatrix4fv(renderer->um4_mvp, 1, GL_FALSE, modelViewProj.m);                    IJK_GLES2_checkError_TRACE("glUniformMatrix4fv(um4_mvp)");

    IJK_GLES2_Renderer_TexCoords_reset(renderer);
    IJK_GLES2_Renderer_TexCoords_reloadVertex(renderer);

    IJK_GLES2_Renderer_Vertices_reset(renderer);
    IJK_GLES2_Renderer_Vertices_reloadVertex(renderer);

    return GL_TRUE;
}

    The bt709 matrix is then uploaded to the GPU here:

static GLboolean yuv420sp_use(IJK_GLES2_Renderer *renderer)
{
    ALOGI("use render yuv420sp\n");
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glUseProgram(renderer->program);            IJK_GLES2_checkError_TRACE("glUseProgram");

    if (0 == renderer->plane_textures[0])
        glGenTextures(2, renderer->plane_textures);

    for (int i = 0; i < 2; ++i) {
        glActiveTexture(GL_TEXTURE0 + i);
        glBindTexture(GL_TEXTURE_2D, renderer->plane_textures[i]);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        glUniform1i(renderer->us2_sampler[i], i);
    }

    // Upload the bt709 matrix to the GPU here
    glUniformMatrix3fv(renderer->um3_color_conversion, 1, GL_FALSE, IJK_GLES2_getColorMatrix_bt709());

    return GL_TRUE;
}

3.1.6 Loading the shader program

    The fixed steps for using a shader program are:

  • First create the shader, attach the shader source code, and compile it;
  • Then create the GPU program object with glCreateProgram: attach program => link program => use program;
  • After these fixed steps, the shader program is normally ready to use;

    Creating the shader, attaching its source, and compiling it:

GLuint IJK_GLES2_loadShader(GLenum shader_type, const char *shader_source)
{
    assert(shader_source);

    GLuint shader = glCreateShader(shader_type);        IJK_GLES2_checkError("glCreateShader");
    if (!shader)
        return 0;

    assert(shader_source);
    glShaderSource(shader, 1, &shader_source, NULL);    IJK_GLES2_checkError_TRACE("glShaderSource");
    glCompileShader(shader);                            IJK_GLES2_checkError_TRACE("glCompileShader");

    GLint compile_status = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compile_status);
    if (!compile_status)
        goto fail;

    return shader;

fail:
    if (shader) {
        IJK_GLES2_printShaderInfo(shader);
        glDeleteShader(shader);
    }

    return 0;
}

    For the shaders to actually run, they must be attached to a program object, the program linked, and then used; these again are fixed steps:

IJK_GLES2_Renderer *IJK_GLES2_Renderer_create_base(const char *fragment_shader_source)
{
    assert(fragment_shader_source);

    IJK_GLES2_Renderer *renderer = (IJK_GLES2_Renderer *)calloc(1, sizeof(IJK_GLES2_Renderer));
    if (!renderer)
        goto fail;

    renderer->vertex_shader = IJK_GLES2_loadShader(GL_VERTEX_SHADER, IJK_GLES2_getVertexShader_default());
    if (!renderer->vertex_shader)
        goto fail;

    renderer->fragment_shader = IJK_GLES2_loadShader(GL_FRAGMENT_SHADER, fragment_shader_source);
    if (!renderer->fragment_shader)
        goto fail;

    renderer->program = glCreateProgram();                          IJK_GLES2_checkError("glCreateProgram");
    if (!renderer->program)
        goto fail;

    glAttachShader(renderer->program, renderer->vertex_shader);     IJK_GLES2_checkError("glAttachShader(vertex)");
    glAttachShader(renderer->program, renderer->fragment_shader);   IJK_GLES2_checkError("glAttachShader(fragment)");
    glLinkProgram(renderer->program);                               IJK_GLES2_checkError("glLinkProgram");

    GLint link_status = GL_FALSE;
    glGetProgramiv(renderer->program, GL_LINK_STATUS, &link_status);
    if (!link_status)
        goto fail;

    renderer->av4_position = glGetAttribLocation(renderer->program, "av4_Position");                IJK_GLES2_checkError_TRACE("glGetAttribLocation(av4_Position)");
    renderer->av2_texcoord = glGetAttribLocation(renderer->program, "av2_Texcoord");                IJK_GLES2_checkError_TRACE("glGetAttribLocation(av2_Texcoord)");
    renderer->um4_mvp      = glGetUniformLocation(renderer->program, "um4_ModelViewProjection");    IJK_GLES2_checkError_TRACE("glGetUniformLocation(um4_ModelViewProjection)");

    return renderer;

fail:
    if (renderer && renderer->program)
        IJK_GLES2_printProgramInfo(renderer->program);

    IJK_GLES2_Renderer_free(renderer);
    return NULL;
}

3.2 render 

    With this overview of how the shaders are used, we can now move on to the actual video rendering.

3.2.1 Pixel data

  • What video rendering operates on is pixel data, such as the yuv or rgb families;
  • Regardless of platform (Android or iOS) or decoder (software or hardware), IJKPLAYER factors out the common parts and funnels all decoded pixel data through a single entry point;
  • Unsupported pixel formats are converted with sws_scale(), or with libyuv on Android, into a supported format before rendering;

    The call chain for decoded pixel data always enters through queue_picture():

queue_picture() => SDL_VoutFillFrameYUVOverlay() => func_fill_frame()

    FFmpeg software decoding is taken as the example here; Android MediaCodec and iOS VideoToolbox hardware decoding are not covered:

  • The decoded pixel data is written into SDL_VoutOverlay, specifically into pixels and pitches, ready for rendering;
  • If the source pixel format differs from the supported ones, the format is converted first and rendered afterwards;
static int func_fill_frame(SDL_VoutOverlay *overlay, const AVFrame *frame)
{
    assert(overlay);
    SDL_VoutOverlay_Opaque *opaque = overlay->opaque;
    AVFrame swscale_dst_pic = { { 0 } };

    av_frame_unref(opaque->linked_frame);

    int need_swap_uv = 0;
    int use_linked_frame = 0;
    enum AVPixelFormat dst_format = AV_PIX_FMT_NONE;
    switch (overlay->format) {
        case SDL_FCC_YV12:
            need_swap_uv = 1;
            // no break;
        case SDL_FCC_I420:
            if (frame->format == AV_PIX_FMT_YUV420P || frame->format == AV_PIX_FMT_YUVJ420P) {
                // ALOGE("direct draw frame");
                use_linked_frame = 1;
                dst_format = frame->format;
            } else {
                // ALOGE("copy draw frame");
                dst_format = AV_PIX_FMT_YUV420P;
            }
            break;
        case SDL_FCC_I444P10LE:
            if (frame->format == AV_PIX_FMT_YUV444P10LE) {
                // ALOGE("direct draw frame");
                use_linked_frame = 1;
                dst_format = frame->format;
            } else {
                // ALOGE("copy draw frame");
                dst_format = AV_PIX_FMT_YUV444P10LE;
            }
            break;
        case SDL_FCC_RV32:
            dst_format = AV_PIX_FMT_0BGR32;
            break;
        case SDL_FCC_RV24:
            dst_format = AV_PIX_FMT_RGB24;
            break;
        case SDL_FCC_RV16:
            dst_format = AV_PIX_FMT_RGB565;
            break;
        default:
            ALOGE("SDL_VoutFFmpeg_ConvertPicture: unexpected overlay format %s(%d)",
                  (char*)&overlay->format, overlay->format);
            return -1;
    }

    // setup frame
    if (use_linked_frame) {
        // linked frame
        av_frame_ref(opaque->linked_frame, frame);

        overlay_fill(overlay, opaque->linked_frame, opaque->planes);

        if (need_swap_uv)
            FFSWAP(Uint8*, overlay->pixels[1], overlay->pixels[2]);
    } else {
        // managed frame
        AVFrame* managed_frame = opaque_obtain_managed_frame_buffer(opaque);
        if (!managed_frame) {
            ALOGE("OOM in opaque_obtain_managed_frame_buffer");
            return -1;
        }

        overlay_fill(overlay, opaque->managed_frame, opaque->planes);

        // setup frame managed
        for (int i = 0; i < overlay->planes; ++i) {
            swscale_dst_pic.data[i] = overlay->pixels[i];
            swscale_dst_pic.linesize[i] = overlay->pitches[i];
        }

        if (need_swap_uv)
            FFSWAP(Uint8*, swscale_dst_pic.data[1], swscale_dst_pic.data[2]);
    }

    // swscale / direct draw
    if (use_linked_frame) {
        // do nothing
    } else if (ijk_image_convert(frame->width, frame->height,
                                 dst_format, swscale_dst_pic.data, swscale_dst_pic.linesize,
                                 frame->format, (const uint8_t**) frame->data, frame->linesize)) {
        opaque->img_convert_ctx = sws_getCachedContext(opaque->img_convert_ctx,
                                                       frame->width, frame->height, frame->format,
                                                       frame->width, frame->height, dst_format,
                                                       opaque->sws_flags, NULL, NULL, NULL);
        if (opaque->img_convert_ctx == NULL) {
            ALOGE("sws_getCachedContext failed");
            return -1;
        }

        sws_scale(opaque->img_convert_ctx, (const uint8_t**) frame->data, frame->linesize,
                  0, frame->height, swscale_dst_pic.data, swscale_dst_pic.linesize);

        if (!opaque->no_neon_warned) {
            opaque->no_neon_warned = 1;
            ALOGE("non-neon image convert %s -> %s", av_get_pix_fmt_name(frame->format), av_get_pix_fmt_name(dst_format));
        }
    }

    // TODO: 9 draw black if overlay is larger than screen
    return 0;
}

    Filling the overlay with pixel data and line sizes:

static void overlay_fill(SDL_VoutOverlay *overlay, AVFrame *frame, int planes)
{
    overlay->planes = planes;

    for (int i = 0; i < AV_NUM_DATA_POINTERS; ++i) {
        overlay->pixels[i] = frame->data[i];
        overlay->pitches[i] = frame->linesize[i];
    }
}

    At this point the FFmpeg-decoded pixel data is ready, and rendering can begin.

3.2.2 Creating the renderer

  • Since the OpenGL ES rendering is shared across Android and iOS, a unified interface creates the renderer for each pixel format;
  • But IJKPLAYER only supports rendering a handful of yuv and rgb pixel formats;
  • Source pixel formats outside that list are converted with sws_scale() into a supported format before rendering;

    A renderer matching the overlay's pixel format is created here, for the formats IJKPLAYER supports:

IJK_GLES2_Renderer *IJK_GLES2_Renderer_create(SDL_VoutOverlay *overlay)
{
    if (!overlay)
        return NULL;

    IJK_GLES2_printString("Version", GL_VERSION);
    IJK_GLES2_printString("Vendor", GL_VENDOR);
    IJK_GLES2_printString("Renderer", GL_RENDERER);
    IJK_GLES2_printString("Extensions", GL_EXTENSIONS);

    IJK_GLES2_Renderer *renderer = NULL;
    switch (overlay->format) {
        case SDL_FCC_RV16:      renderer = IJK_GLES2_Renderer_create_rgb565(); break;
        case SDL_FCC_RV24:      renderer = IJK_GLES2_Renderer_create_rgb888(); break;
        case SDL_FCC_RV32:      renderer = IJK_GLES2_Renderer_create_rgbx8888(); break;
#ifdef __APPLE__
        case SDL_FCC_NV12:      renderer = IJK_GLES2_Renderer_create_yuv420sp(); break;
        case SDL_FCC__VTB:      renderer = IJK_GLES2_Renderer_create_yuv420sp_vtb(overlay); break;
#endif
        case SDL_FCC_YV12:      renderer = IJK_GLES2_Renderer_create_yuv420p(); break;
        case SDL_FCC_I420:      renderer = IJK_GLES2_Renderer_create_yuv420p(); break;
        case SDL_FCC_I444P10LE: renderer = IJK_GLES2_Renderer_create_yuv444p10le(); break;
        default:
            ALOGE("[GLES2] unknown format %4s(%d)\n", (char *)&overlay->format, overlay->format);
            return NULL;
    }

    renderer->format = overlay->format;
    return renderer;
}

    Taking yuv420p as the example, the renderer instance is created like so:

IJK_GLES2_Renderer *IJK_GLES2_Renderer_create_yuv420p()
{
    ALOGI("create render yuv420p\n");

    IJK_GLES2_Renderer *renderer = IJK_GLES2_Renderer_create_base(IJK_GLES2_getFragmentShader_yuv420p());
    if (!renderer)
        goto fail;

    renderer->us2_sampler[0] = glGetUniformLocation(renderer->program, "us2_SamplerX"); IJK_GLES2_checkError_TRACE("glGetUniformLocation(us2_SamplerX)");
    renderer->us2_sampler[1] = glGetUniformLocation(renderer->program, "us2_SamplerY"); IJK_GLES2_checkError_TRACE("glGetUniformLocation(us2_SamplerY)");
    renderer->us2_sampler[2] = glGetUniformLocation(renderer->program, "us2_SamplerZ"); IJK_GLES2_checkError_TRACE("glGetUniformLocation(us2_SamplerZ)");

    renderer->um3_color_conversion = glGetUniformLocation(renderer->program, "um3_ColorConversion"); IJK_GLES2_checkError_TRACE("glGetUniformLocation(um3_ColorConversionMatrix)");

    renderer->func_use            = yuv420p_use;
    renderer->func_getBufferWidth = yuv420p_getBufferWidth;
    renderer->func_uploadTexture  = yuv420p_uploadTexture;

    return renderer;

fail:
    IJK_GLES2_Renderer_free(renderer);
    return NULL;
}

3.2.3 Using the shader program

    After the shaders have been written, loaded, and linked into the program object, the program must be put to use:

static GLboolean yuv420p_use(IJK_GLES2_Renderer *renderer)
{
    ALOGI("use render yuv420p\n");
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glUseProgram(renderer->program);            // IJK_GLES2_checkError_TRACE("glUseProgram");

    if (0 == renderer->plane_textures[0])
        glGenTextures(3, renderer->plane_textures);

    for (int i = 0; i < 3; ++i) {
        glActiveTexture(GL_TEXTURE0 + i);
        glBindTexture(GL_TEXTURE_2D, renderer->plane_textures[i]);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        glUniform1i(renderer->us2_sampler[i], i);
    }

    glUniformMatrix3fv(renderer->um3_color_conversion, 1, GL_FALSE, IJK_GLES2_getColorMatrix_bt709());

    return GL_TRUE;
}

    Note that using the shader program also involves creating textures, from which the GPU samples the texture data. Simply put, textures are how the pixel data is fed to the GPU for processing.

3.2.4 upload texture

    Feeding the yuv420p pixel data to the GPU:

static GLboolean yuv420p_uploadTexture(IJK_GLES2_Renderer *renderer, SDL_VoutOverlay *overlay)
{
    if (!renderer || !overlay)
        return GL_FALSE;

    int     planes[3]    = { 0, 1, 2 };
    const GLsizei widths[3]    = { overlay->pitches[0], overlay->pitches[1], overlay->pitches[2] };
    const GLsizei heights[3]   = { overlay->h,          overlay->h / 2,      overlay->h / 2 };
    const GLubyte *pixels[3]   = { overlay->pixels[0],  overlay->pixels[1],  overlay->pixels[2] };

    switch (overlay->format) {
        case SDL_FCC_I420:
            break;
        case SDL_FCC_YV12:
            planes[1] = 2;
            planes[2] = 1;
            break;
        default:
            ALOGE("[yuv420p] unexpected format %x\n", overlay->format);
            return GL_FALSE;
    }

    for (int i = 0; i < 3; ++i) {
        int plane = planes[i];

        glBindTexture(GL_TEXTURE_2D, renderer->plane_textures[i]);

        glTexImage2D(GL_TEXTURE_2D,
                     0,
                     GL_LUMINANCE,
                     widths[plane],
                     heights[plane],
                     0,
                     GL_LUMINANCE,
                     GL_UNSIGNED_BYTE,
                     pixels[plane]);
    }

    return GL_TRUE;
}

3.2.5 draw

    Finally, glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) is called to draw:

/*
 * Per-Frame routine
 */
GLboolean IJK_GLES2_Renderer_renderOverlay(IJK_GLES2_Renderer *renderer, SDL_VoutOverlay *overlay)
{
    if (!renderer || !renderer->func_uploadTexture)
        return GL_FALSE;

    glClear(GL_COLOR_BUFFER_BIT);               IJK_GLES2_checkError_TRACE("glClear");

    GLsizei visible_width  = renderer->frame_width;
    GLsizei visible_height = renderer->frame_height;
    if (overlay) {
        visible_width  = overlay->w;
        visible_height = overlay->h;
        if (renderer->frame_width   != visible_width    ||
            renderer->frame_height  != visible_height   ||
            renderer->frame_sar_num != overlay->sar_num ||
            renderer->frame_sar_den != overlay->sar_den) {

            renderer->frame_width   = visible_width;
            renderer->frame_height  = visible_height;
            renderer->frame_sar_num = overlay->sar_num;
            renderer->frame_sar_den = overlay->sar_den;

            renderer->vertices_changed = 1;
        }

        renderer->last_buffer_width = renderer->func_getBufferWidth(renderer, overlay);

        if (!renderer->func_uploadTexture(renderer, overlay))
            return GL_FALSE;
    } else {
        // NULL overlay means force reload vertice
        renderer->vertices_changed = 1;
    }

    GLsizei buffer_width = renderer->last_buffer_width;
    if (renderer->vertices_changed ||
        (buffer_width > 0 &&
         buffer_width > visible_width &&
         buffer_width != renderer->buffer_width &&
         visible_width != renderer->visible_width))
    {
        renderer->vertices_changed = 0;

        IJK_GLES2_Renderer_Vertices_apply(renderer);
        IJK_GLES2_Renderer_Vertices_reloadVertex(renderer);

        renderer->buffer_width  = buffer_width;
        renderer->visible_width = visible_width;

        GLsizei padding_pixels     = buffer_width - visible_width;
        GLfloat padding_normalized = ((GLfloat)padding_pixels) / buffer_width;

        IJK_GLES2_Renderer_TexCoords_reset(renderer);
        IJK_GLES2_Renderer_TexCoords_cropRight(renderer, padding_normalized);
        IJK_GLES2_Renderer_TexCoords_reloadVertex(renderer);
    }

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);      IJK_GLES2_checkError_TRACE("glDrawArrays");

    return GL_TRUE;
}

3.3 video_refresh_thread 

    Finally, a few words on the video refresh thread, which mainly does the following:

  • A/V sync: with video as the master clock, frames are played by their duration; with audio as the master clock, the video display time is corrected;
  • After sync, the video Frame due for display is rendered;
  • The video clock is updated, i.e. the pts of the current playback position;
  • Loop back to the first step;

    The main function of the video display thread:

static int video_refresh_thread(void *arg)
{
    FFPlayer *ffp = arg;
    VideoState *is = ffp->is;
    double remaining_time = 0.0;
    while (!is->abort_request) {
        if (remaining_time > 0.0)
            av_usleep((int)(int64_t)(remaining_time * 1000000.0));
        remaining_time = REFRESH_RATE;
        if (is->show_mode != SHOW_MODE_NONE && (!is->paused || is->force_refresh))
            video_refresh(ffp, &remaining_time);
    }

    return 0;
}

    The function the display thread actually executes:

/* called to display each frame */
static void video_refresh(FFPlayer *opaque, double *remaining_time)
{
    FFPlayer *ffp = opaque;
    VideoState *is = ffp->is;
    double time;

    Frame *sp, *sp2;

    if (!is->paused && get_master_sync_type(is) == AV_SYNC_EXTERNAL_CLOCK && is->realtime)
        check_external_clock_speed(is);

    if (!ffp->display_disable && is->show_mode != SHOW_MODE_VIDEO && is->audio_st) {
        time = av_gettime_relative() / 1000000.0;
        if (is->force_refresh || is->last_vis_time + ffp->rdftspeed < time) {
            video_display2(ffp);
            is->last_vis_time = time;
        }
        // video plays by frame duration; this tells the display thread when to
        // render the next frame after finishing the current one
        *remaining_time = FFMIN(*remaining_time, is->last_vis_time + ffp->rdftspeed - time);
    }

    if (is->video_st) {
retry:
        if (frame_queue_nb_remaining(&is->pictq) == 0) {
            // nothing to do, no picture to display in the queue
        } else {
            double last_duration, duration, delay;
            Frame *vp, *lastvp;

            /* dequeue the picture */
            lastvp = frame_queue_peek_last(&is->pictq);
            vp = frame_queue_peek(&is->pictq);

            if (vp->serial != is->videoq.serial) {
                frame_queue_next(&is->pictq);
                goto retry;
            }

            if (lastvp->serial != vp->serial)
                is->frame_timer = av_gettime_relative() / 1000000.0;

            if (is->paused)
                goto display;

            /* compute nominal last_duration */
            // A/V sync corrects the video pts clock
            last_duration = vp_duration(is, lastvp, vp);
            delay = compute_target_delay(ffp, last_duration, is);

            time= av_gettime_relative()/1000000.0;
            if (isnan(is->frame_timer) || time < is->frame_timer)
                is->frame_timer = time;
            if (time < is->frame_timer + delay) {
                *remaining_time = FFMIN(is->frame_timer + delay - time, *remaining_time);
                goto display;
            }

            is->frame_timer += delay;
            if (delay > 0 && time - is->frame_timer > AV_SYNC_THRESHOLD_MAX)
                is->frame_timer = time;

            SDL_LockMutex(is->pictq.mutex);
            if (!isnan(vp->pts))
                // update the video clock
                update_video_pts(is, vp->pts, vp->pos, vp->serial);
            SDL_UnlockMutex(is->pictq.mutex);

            if (frame_queue_nb_remaining(&is->pictq) > 1) {
                Frame *nextvp = frame_queue_peek_next(&is->pictq);
                duration = vp_duration(is, vp, nextvp);
                if(!is->step && (ffp->framedrop > 0 || (ffp->framedrop && get_master_sync_type(is) != AV_SYNC_VIDEO_MASTER)) && time > is->frame_timer + duration) {
                    frame_queue_next(&is->pictq);
                    goto retry;
                }
            }

            // subtitle handling omitted
            ......

            frame_queue_next(&is->pictq);
            is->force_refresh = 1;

            SDL_LockMutex(ffp->is->play_mutex);
            if (is->step) {
                is->step = 0;
                if (!is->paused)
                    stream_update_pause_l(ffp);
            }
            SDL_UnlockMutex(ffp->is->play_mutex);
        }
display:
        /* display picture */
        if (!ffp->display_disable && is->force_refresh && is->show_mode == SHOW_MODE_VIDEO && is->pictq.rindex_shown)
            video_display2(ffp);
    }
    is->force_refresh = 0;

    // status-printing code omitted
    ......
}

    Video rendering starts:

/* display the current picture, if any */
static void video_display2(FFPlayer *ffp)
{
    VideoState *is = ffp->is;
    if (is->video_st)
        video_image_display2(ffp);
}

    The outer rendering interface is called to start rendering:

static void video_image_display2(FFPlayer *ffp)
{
    VideoState *is = ffp->is;
    Frame *vp;
    Frame *sp = NULL;

    vp = frame_queue_peek_last(&is->pictq);

    if (vp->bmp) {
        // subtitle handling omitted
        ......

        if (ffp->render_wait_start && !ffp->start_on_prepared && is->pause_req) {
            if (!ffp->first_video_frame_rendered) {
                ffp->first_video_frame_rendered = 1;
                ffp_notify_msg1(ffp, FFP_MSG_VIDEO_RENDERING_START);
            }
            while (is->pause_req && !is->abort_request) {
                SDL_Delay(20);
            }
        }

        // invoke the render module's interface to start rendering
        SDL_VoutDisplayYUVOverlay(ffp->vout, vp->bmp);
        ffp->stat.vfps = SDL_SpeedSamplerAdd(&ffp->vfps_sampler, FFP_SHOW_VFPS_FFPLAY, "vfps[ffplay]");
        if (!ffp->first_video_frame_rendered) {
            ffp->first_video_frame_rendered = 1;
            ffp_notify_msg1(ffp, FFP_MSG_VIDEO_RENDERING_START);
        }

        if (is->latest_video_seek_load_serial == vp->serial) {
            int latest_video_seek_load_serial = __atomic_exchange_n(&(is->latest_video_seek_load_serial), -1, memory_order_seq_cst);
            if (latest_video_seek_load_serial == vp->serial) {
                ffp->stat.latest_seek_load_duration = (av_gettime() - is->latest_seek_load_start_at) / 1000;
                if (ffp->av_sync_type == AV_SYNC_VIDEO_MASTER) {
                    ffp_notify_msg2(ffp, FFP_MSG_VIDEO_SEEK_RENDERING_START, 1);
                } else {
                    ffp_notify_msg2(ffp, FFP_MSG_VIDEO_SEEK_RENDERING_START, 0);
                }
            }
        }
    }
}

    Finally, the SDL_Vout instance's display_overlay callback hands the video rendering work to the GPU:

int SDL_VoutDisplayYUVOverlay(SDL_Vout *vout, SDL_VoutOverlay *overlay)
{
    if (vout && overlay && vout->display_overlay)
        return vout->display_overlay(vout, overlay);

    return -1;
}

4 Miscellaneous

4.1 Resolution changes

  • When the resolution changes, rendering must reset the vertex coordinates and crop the right side of the texture coordinates;
  • The vertex coordinates are then reloaded;
GLboolean IJK_GLES2_Renderer_renderOverlay(IJK_GLES2_Renderer *renderer, SDL_VoutOverlay *overlay)
{
    if (!renderer || !renderer->func_uploadTexture)
        return GL_FALSE;

    glClear(GL_COLOR_BUFFER_BIT);               IJK_GLES2_checkError_TRACE("glClear");

    GLsizei visible_width  = renderer->frame_width;
    GLsizei visible_height = renderer->frame_height;
    if (overlay) {
        visible_width  = overlay->w;
        visible_height = overlay->h;
        if (renderer->frame_width   != visible_width    ||
            renderer->frame_height  != visible_height   ||
            renderer->frame_sar_num != overlay->sar_num ||
            renderer->frame_sar_den != overlay->sar_den) {

            renderer->frame_width   = visible_width;
            renderer->frame_height  = visible_height;
            renderer->frame_sar_num = overlay->sar_num;
            renderer->frame_sar_den = overlay->sar_den;

            renderer->vertices_changed = 1;
        }

        renderer->last_buffer_width = renderer->func_getBufferWidth(renderer, overlay);

        if (!renderer->func_uploadTexture(renderer, overlay))
            return GL_FALSE;
    } else {
        // NULL overlay means force reload vertice
        renderer->vertices_changed = 1;
    }

    GLsizei buffer_width = renderer->last_buffer_width;
    if (renderer->vertices_changed ||
        (buffer_width > 0 &&
         buffer_width > visible_width &&
         buffer_width != renderer->buffer_width &&
         visible_width != renderer->visible_width))
    {
        renderer->vertices_changed = 0;

        IJK_GLES2_Renderer_Vertices_apply(renderer);
        IJK_GLES2_Renderer_Vertices_reloadVertex(renderer);

        renderer->buffer_width  = buffer_width;
        renderer->visible_width = visible_width;

        GLsizei padding_pixels     = buffer_width - visible_width;
        GLfloat padding_normalized = ((GLfloat)padding_pixels) / buffer_width;

        IJK_GLES2_Renderer_TexCoords_reset(renderer);
        IJK_GLES2_Renderer_TexCoords_cropRight(renderer, padding_normalized);
        IJK_GLES2_Renderer_TexCoords_reloadVertex(renderer);
    }

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);      IJK_GLES2_checkError_TRACE("glDrawArrays");

    return GL_TRUE;
}

    Adjusting the texture coordinates to crop the right side, i.e. the width:

static void IJK_GLES2_Renderer_TexCoords_cropRight(IJK_GLES2_Renderer *renderer, GLfloat cropRight)
{
    ALOGE("IJK_GLES2_Renderer_TexCoords_cropRight\n");
    renderer->texcoords[0] = 0.0f;
    renderer->texcoords[1] = 1.0f;
    renderer->texcoords[2] = 1.0f - cropRight;
    renderer->texcoords[3] = 1.0f;
    renderer->texcoords[4] = 0.0f;
    renderer->texcoords[5] = 0.0f;
    renderer->texcoords[6] = 1.0f - cropRight;
    renderer->texcoords[7] = 0.0f;
}

4.2 Image scaling on iOS

    First, the scale/stretch option is set when the iOS player is initialized:

- (void)setContentMode:(UIViewContentMode)contentMode
{
    [super setContentMode:contentMode];

    switch (contentMode) {
        case UIViewContentModeScaleToFill:
            _rendererGravity = IJK_GLES2_GRAVITY_RESIZE;
            break;
        case UIViewContentModeScaleAspectFit:
            _rendererGravity = IJK_GLES2_GRAVITY_RESIZE_ASPECT;
            break;
        case UIViewContentModeScaleAspectFill:
            _rendererGravity = IJK_GLES2_GRAVITY_RESIZE_ASPECT_FILL;
            break;
        default:
            _rendererGravity = IJK_GLES2_GRAVITY_RESIZE_ASPECT;
            break;
    }
    [self invalidateRenderBuffer];
}

    The option is then passed into IJK_GLES2_Renderer and stored in its gravity field:

- (BOOL)setupRenderer: (SDL_VoutOverlay *) overlay
{
    if (overlay == nil)
        return _renderer != nil;

    if (!IJK_GLES2_Renderer_isValid(_renderer) ||
        !IJK_GLES2_Renderer_isFormat(_renderer, overlay->format)) {

        IJK_GLES2_Renderer_reset(_renderer);
        IJK_GLES2_Renderer_freeP(&_renderer);

        _renderer = IJK_GLES2_Renderer_create(overlay);
        if (!IJK_GLES2_Renderer_isValid(_renderer))
            return NO;

        if (!IJK_GLES2_Renderer_use(_renderer))
            return NO;

        IJK_GLES2_Renderer_setGravity(_renderer, _rendererGravity, _backingWidth, _backingHeight);
    }

    return YES;
}

    The renderer stores the configured option in its gravity field:

GLboolean IJK_GLES2_Renderer_setGravity(IJK_GLES2_Renderer *renderer, int gravity, GLsizei layer_width, GLsizei layer_height)
{
    if (renderer->gravity != gravity && gravity >= IJK_GLES2_GRAVITY_MIN && gravity <= IJK_GLES2_GRAVITY_MAX)
        renderer->vertices_changed = 1;
    else if (renderer->layer_width != layer_width)
        renderer->vertices_changed = 1;
    else if (renderer->layer_height != layer_height)
        renderer->vertices_changed = 1;
    else
        return GL_TRUE;

    renderer->gravity      = gravity;
    renderer->layer_width  = layer_width;
    renderer->layer_height = layer_height;
    return GL_TRUE;
}

    Finally, scaling or stretching is applied here:

static void IJK_GLES2_Renderer_Vertices_apply(IJK_GLES2_Renderer *renderer)
{
    switch (renderer->gravity) {
        case IJK_GLES2_GRAVITY_RESIZE_ASPECT:
            break;
        case IJK_GLES2_GRAVITY_RESIZE_ASPECT_FILL:
            break;
        case IJK_GLES2_GRAVITY_RESIZE:
            IJK_GLES2_Renderer_Vertices_reset(renderer);
            return;
        default:
            ALOGE("[GLES2] unknown gravity %d\n", renderer->gravity);
            IJK_GLES2_Renderer_Vertices_reset(renderer);
            return;
    }

    if (renderer->layer_width <= 0 ||
        renderer->layer_height <= 0 ||
        renderer->frame_width <= 0 ||
        renderer->frame_height <= 0)
    {
        ALOGE("[GLES2] invalid width/height for gravity aspect\n");
        IJK_GLES2_Renderer_Vertices_reset(renderer);
        return;
    }

    float width     = renderer->frame_width;
    float height    = renderer->frame_height;

    if (renderer->frame_sar_num > 0 && renderer->frame_sar_den > 0) {
        width = width * renderer->frame_sar_num / renderer->frame_sar_den;
    }

    const float dW  = (float)renderer->layer_width  / width;
    const float dH  = (float)renderer->layer_height / height;
    float dd        = 1.0f;
    float nW        = 1.0f;
    float nH        = 1.0f;

    // Two aspect-preserving modes that fill the given screen:
    // supported on iOS; Android handles scaling in the Java layer
    switch (renderer->gravity) {
        case IJK_GLES2_GRAVITY_RESIZE_ASPECT_FILL:  dd = FFMAX(dW, dH); break;
        case IJK_GLES2_GRAVITY_RESIZE_ASPECT:       dd = FFMIN(dW, dH); break;
    }

    nW = (width  * dd / (float)renderer->layer_width);
    nH = (height * dd / (float)renderer->layer_height);

    renderer->vertices[0] = - nW;
    renderer->vertices[1] = - nH;
    renderer->vertices[2] =   nW;
    renderer->vertices[3] = - nH;
    renderer->vertices[4] = - nW;
    renderer->vertices[5] =   nH;
    renderer->vertices[6] =   nW;
    renderer->vertices[7] =   nH;
}

4.3 Supported pixel formats

    IJKPLAYER supports the following 8 video pixel formats:

#define SDL_FCC_YV12    SDL_FOURCC('Y', 'V', '1', '2')  /**< bpp=12, Planar mode: Y + V + U  (3 planes) */
#define SDL_FCC_I420    SDL_FOURCC('I', '4', '2', '0')  /**< bpp=12, Planar mode: Y + U + V  (3 planes) */
#define SDL_FCC_I444P10LE   SDL_FOURCC('I', '4', 'A', 'L')
#define SDL_FCC_NV12    SDL_FOURCC('N', 'V', '1', '2')
#define SDL_FCC__VTB    SDL_FOURCC('_', 'V', 'T', 'B')    /**< iOS VideoToolbox */

// RGB formats
#define SDL_FCC_RV16    SDL_FOURCC('R', 'V', '1', '6')    /**< bpp=16, RGB565 */
#define SDL_FCC_RV24    SDL_FOURCC('R', 'V', '2', '4')    /**< bpp=24, RGB888 */
#define SDL_FCC_RV32    SDL_FOURCC('R', 'V', '3', '2')    /**< bpp=32, RGBX8888 */

    See the implementation of the following function for details:

IJK_GLES2_Renderer *IJK_GLES2_Renderer_create(SDL_VoutOverlay *overlay)
{
    if (!overlay)
        return NULL;

    IJK_GLES2_printString("Version", GL_VERSION);
    IJK_GLES2_printString("Vendor", GL_VENDOR);
    IJK_GLES2_printString("Renderer", GL_RENDERER);
    IJK_GLES2_printString("Extensions", GL_EXTENSIONS);

    IJK_GLES2_Renderer *renderer = NULL;
    switch (overlay->format) {
        case SDL_FCC_RV16:      renderer = IJK_GLES2_Renderer_create_rgb565(); break;
        case SDL_FCC_RV24:      renderer = IJK_GLES2_Renderer_create_rgb888(); break;
        case SDL_FCC_RV32:      renderer = IJK_GLES2_Renderer_create_rgbx8888(); break;
#ifdef __APPLE__
        case SDL_FCC_NV12:      renderer = IJK_GLES2_Renderer_create_yuv420sp(); break;
        case SDL_FCC__VTB:      renderer = IJK_GLES2_Renderer_create_yuv420sp_vtb(overlay); break;
#endif
        case SDL_FCC_YV12:      renderer = IJK_GLES2_Renderer_create_yuv420p(); break;
        case SDL_FCC_I420:      renderer = IJK_GLES2_Renderer_create_yuv420p(); break;
        case SDL_FCC_I444P10LE: renderer = IJK_GLES2_Renderer_create_yuv444p10le(); break;
        default:
            ALOGE("[GLES2] unknown format %4s(%d)\n", (char *)&overlay->format, overlay->format);
            return NULL;
    }

    renderer->format = overlay->format;
    return renderer;
}

4.4 Setting the target pixel format

    The target pixel format for rendering is set via setOption(); the default is SDL_FCC_RV32:

    { "overlay-format",                 "fourcc of overlay format",
        OPTION_OFFSET(overlay_format),  OPTION_INT(SDL_FCC_RV32, INT_MIN, INT_MAX),

    If the target pixel format is set to SDL_FCC__GLES2 and the source video uses one of the following 3 pixel formats, IJKPLAYER chooses the overlay format on its own:

    ......
    Uint32 overlay_format = display->overlay_format;
    switch (overlay_format) {
        case SDL_FCC__GLES2: {
            switch (frame_format) {
                case AV_PIX_FMT_YUV444P10LE:
                    overlay_format = SDL_FCC_I444P10LE;
                    break;
                case AV_PIX_FMT_YUV420P:
                case AV_PIX_FMT_YUVJ420P:
                default:
#if defined(__ANDROID__)
                    overlay_format = SDL_FCC_YV12;
#else
                    overlay_format = SDL_FCC_I420;
#endif
                    break;
            }
            break;
        }
    }
    ......
