
wysaid / android-gpuimage-plus

1.8K 96.0 473.0 117.17 MB

Android Image & Camera Filters Based on OpenGL.

License: MIT License

Java 15.98% Shell 0.73% Makefile 0.33% C++ 33.62% C 47.93% Objective-C 1.18% CMake 0.23%
Topics: cge, filter, opengl, android-gpuimage

android-gpuimage-plus's Issues

Hello!

Adding filters while recording video is extremely helpful to me! But your current source code doesn't include that part. Could you share it?

A small idea about the audio/video sync problem

I've been reading your source code recently. The frame rate in the project is an int, but on some phones the frame rate is dynamic, i.e. fractional, so precision is lost. I suggest you try changing the type to float.

Problems with the javacv library

The latest javacv has moved to GitHub, but the .so libraries I built from the GitHub sources don't include the ffmpegInvoke and neno .so files, so your binaries look like they come from the old Google Code version.
Libraries built from the GitHub sources crash during recording. How did you build yours?

glReadPixels performance

Hi, I want to overlay sticker images on every filtered frame (while recording), but the project doesn't currently ship the native implementation of the Recorder (looking forward to it), so I have no choice but to read each frame back in onDrawFrame and process it. The main steps are:

GLES20.glViewport(0, 0, mRecordWidth, mRecordHeight);
mFrameRecorder.drawCache();
mFrameBuffer.position(0);
GLES20.glReadPixels(0, 0, mRecordWidth, mRecordHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, mFrameBuffer);
mFrameBmp.copyPixelsFromBuffer(mFrameBuffer);

The width and height here are 480x480. The frame read back at this point already has your native filter applied. Once I have the bitmap, I draw the sticker onto mFrameBmp with a Canvas and then encode it (I didn't rewrite the ffmpeg encode; I used javacv).
This logic basically works. The problem is that these few lines slow down the camera preview: before the change the preview ran at 30 fps, and with them it drops to about 22 fps. After repeated testing, the delay comes mainly from GLES20.glReadPixels.

Any suggestions?

Referencing the library module works, but using org.wysaid.view.CameraRecordGLSurfaceView in XML crashes

IDE: Android Studio
Adding the android-gpuimage-plus library as a dependency works fine, but when the layout uses org.wysaid.view.CameraRecordGLSurfaceView from the library, it crashes at runtime with:

E/AndroidRuntime: FATAL EXCEPTION: GLThread 1340
Process: com.rocky.TestApp, PID: 26679
java.lang.UnsatisfiedLinkError: dalvik.system.PathClassLoader[DexPathList[[zip file "/data/app/com.rocky.TestApp-1/base.apk"],nativeLibraryDirectories=[/data/app/com.rocky.TestApp-1/lib/arm64, /data/app/com.rocky.TestApp-1/base.apk!/lib/arm64-v8a, /vendor/lib64, /system/lib64]]] couldn't find "libx264.142.so"
at java.lang.Runtime.loadLibrary(Runtime.java:367)
at java.lang.System.loadLibrary(System.java:1076)
at org.wysaid.nativePort.NativeLibraryLoader.load(NativeLibraryLoader.java:15)
at org.wysaid.nativePort.CGEFrameRenderer.(CGEFrameRenderer.java:11)
at org.wysaid.view.CameraGLSurfaceView.onSurfaceCreated(CameraGLSurfaceView.java:413)
at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1503)
at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1240)

The failing line, NativeLibraryLoader.java:15:
System.loadLibrary("x264.142");

Later I tried changing the application package name in this library's demo and got the same crash. Changing the package name back made it run normally again.

Could you explain why?
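One reading of the stack trace (an assumption, not a confirmed diagnosis): the loader is searching only arm64 directories, which suggests the app was installed as a 64-bit process while the prebuilt libx264.142.so ships only 32-bit binaries; a fresh install under a new package name could plausibly trigger that. A common workaround in that situation is to restrict packaging to the ABIs the native libraries actually provide, e.g. in the app module's build.gradle:

```groovy
android {
    defaultConfig {
        ndk {
            // Hypothetical fix: only package ABIs that libx264.142.so
            // actually ships, so the 64-bit loader path is never chosen.
            abiFilters 'armeabi-v7a'
        }
    }
}
```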

Hello, could you share the JNI code?

Could you share the JNI code? The filter implementation in android-gpuimage
seems very slow for the camera, so I'd like to study a JNI-based filter implementation.

What do the parameters of cgeGenerateVideoWithFilter mean, e.g. mute?

cgeGenerateVideoWithFilter(const char* outputFilename, const char* inputFilename, const char* filterConfig, float filterIntensity, GLuint texID, CGETextureBlendMode blendMode, float blendIntensity, bool mute, CGETexLoadArg* loadArg)

Specifically, what do filterConfig, filterIntensity, blendMode, blendIntensity, mute, and CGETexLoadArg each stand for?

A few questions about cgeVideoUtils.cpp

Inside the "simple-slow offscreen video rendering function" cgeGenerateVideoWithFilter, I'm trying to create my own shader and draw an image, but it never works.

Is there a demo for this kind of offscreen rendering?

// Setup
float texCoor[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 0.0f
};
float tableVerticesWithTriangles[] = {
    -0.8f,  0.3f, 0,
     0.8f,  0.3f, 0,
     0.8f, -0.3f, 0,
    -0.8f, -0.3f, 0
};
const char vertex_shader[] =
    "attribute vec4 a_Position;\n"
    "attribute vec2 u_Texture;\n"
    "varying vec2 vTextureCoord;\n"
    "void main() {\n"
    "    gl_Position = a_Position;\n"
    "    vTextureCoord = u_Texture;\n"
    "}\n";

const char fragment_shader[] =
    "precision mediump float;\n"
    "varying vec2 vTextureCoord;\n"
    "uniform sampler2D sTexture;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(sTexture, vTextureCoord);\n"
    "}\n";

GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, 1, (const GLchar**)&vertex_shader, NULL);
glCompileShader(vertexShader);

GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragmentShader);

GLuint m_programID = glCreateProgram();
glAttachShader(m_programID, vertexShader);
glAttachShader(m_programID, fragmentShader);
glLinkProgram(m_programID);

GLuint aPositionLocation = glGetAttribLocation(m_programID, "a_Position");
GLuint uTextureLocation = glGetAttribLocation(m_programID, "u_Texture");

Rendering code:

// Test: render the image
glUseProgram(m_programID);
glVertexAttribPointer(aPositionLocation, 3, GL_FLOAT, GL_FALSE, 0, tableVerticesWithTriangles);
glEnableVertexAttribArray(aPositionLocation);

glVertexAttribPointer(uTextureLocation, 2, GL_FLOAT, GL_FALSE, 0, texCoor);
glEnableVertexAttribArray(uTextureLocation);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texID);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// End of test render

How do I merge audio and video?

First of all, thanks; I've been using your library to build a video recording app.
I can't figure out how to mux a sound file into the recorded video, and I haven't found a relevant API.
Is there an API here, or another library I could refer to?
Any advice would be appreciated.
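One route, assuming the audio can be added in a post-processing step: mux it in with a stock ffmpeg command (file names below are placeholders). Android's own MediaMuxer (API 18+) can do the same in-app.

```shell
# Hypothetical file names. -c copy remuxes both streams without re-encoding;
# -shortest trims the output to the shorter of the two inputs.
ffmpeg -i recorded.mp4 -i music.aac -map 0:v -map 1:a -c copy -shortest output.mp4
```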

Hello, a few questions about cgeVideoUtils

1. I want to add some code of my own after handler.processingFilters(), for example drawing a logo image. Will it show up in the data returned by glReadPixels?
2. When I add the following code, it fails to compile with:

error: no matching function for call to 'glShaderSource'
glShaderSource(fragmentShader, 1, &fragment_shader, NULL);

Yet the same code compiles fine in a standalone project.

Black frames when capturing video frames on a Huawei Mate7

Hello!
I'm using your code for video recording on Android. On a Huawei Mate7 I can play videos and apply filters to them, but the takeshot method of VideoPlayerGLSurfaceView fails to capture a frame; other Android devices have no such problem. Debugging shows that after GLES20.glReadPixels runs, the pixel values in the IntBuffer are all zeros. The problem also reproduces with the release APK of your demo. Could you check whether this is a bug?

How do I use @special? Could you give an example?

The documentation describes the special rule as: the format is "@special N", where the parameter N is the index of the effect. This rule handles all effects that are too specific for the generic rules; each such effect is implemented by writing a dedicated processor.

Is this index the value returned by GLES20.glCreateProgram()?
I tried that, but it doesn't work:
specialParser - unresolved index: 12. The rule "@special 12" cannot produce any effect!

Looking forward to your reply, thanks!

Video recording problem

Hello, video recording works in the demo APK I downloaded, but in the source code the recording methods seem to be disabled and I can't find them. Why?

java.lang.UnsatisfiedLinkError: dlopen failed: /data/app/com.bhtc.huajuan-1/lib/arm/libx264.142.so: has text relocations

java.lang.UnsatisfiedLinkError: dlopen failed: /data/app/com.bhtc.huajuan-1/lib/arm/libx264.142.so: has text relocations
at java.lang.Runtime.loadLibrary(Runtime.java:372)
at java.lang.System.loadLibrary(System.java:1076)
at org.wysaid.nativePort.NativeLibraryLoader.load(NativeLibraryLoader.java:15)
at org.wysaid.nativePort.CGEFrameRenderer.(CGEFrameRenderer.java:11)
at org.wysaid.view.CameraGLSurfaceView.onSurfaceCreated(CameraGLSurfaceView.java:413)
at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1549)
at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1286)

Invalid Filter Config

Hello,
I'm taking a look at your library, and I have a problem with filters that use an image texture.
For example, when I use the filter that uses late_sunset.png, the log says:
Invalid Filter Config @adjust lut late_sunset.png
I did copy late_sunset.png into the project, and everything else works fine.
Could you point out what I might have missed?
Thanks so much. :)

Xiaomi device fails to load x264.142

couldn't find "libx264.142.so"
at java.lang.Runtime.loadLibrary(Runtime.java:366)
at java.lang.System.loadLibrary(System.java:988)
at org.wysaid.nativePort.NativeLibraryLoader.load(NativeLibraryLoader.java:15)
at org.wysaid.nativePort.CGEFrameRenderer.(CGEFrameRenderer.java:14)
at org.wysaid.view.VideoPlayerGLSurfaceView$9$1.run(VideoPlayerGLSurfaceView.java:487)
at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1462)
at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1239)
Any help would be much appreciated!

ffmpeg library conflict

We currently depend on a third-party library that is also named libffmpeg.so, which conflicts with yours. Could you rename your libffmpeg.so and rebuild libCGE.so? Many thanks.

Is there a more efficient approach than readPixels?

Hi, I'm building a filter feature on top of GPUImage, but my project requires reading the filter-rendered frame data back from the native side for software encoding. An earlier issue also raised the cost of readPixels, but that user's use case could work around the call. My project must get the pixel data in the Android app layer, so there is no way around readPixels. I've seen PBOs and similar techniques suggested online for better throughput. Do you know of a workable, effective approach?
