Commit 7b809209 authored by Sebastian Dröge

Release 1.8.0

parent 19b7c54d
=== release 1.8.0 ===
2016-03-24 Sebastian Dröge <>
releasing 1.8.0
2016-03-16 20:18:41 +0200 Sebastian Dröge <>
* gst/interleave/deinterleave.c:
deinterleave: Use GstIterator for iterating all pads instead of manually iterating them while holding the object lock all the time
Doing queries while holding the object lock is a bit dangerous, and in this
case causes deadlocks.
2016-03-17 20:53:27 +0200 Vivia Nikolaidou <>
* gst/deinterlace/gstdeinterlace.c:
deinterlace: Fix typo to not change the input caps but our filtered caps
Changing the input caps and not using them anymore afterwards is useless, and
it breaks negotiation in pipelines like:
gst-launch-1.0 videotestsrc ! "video/x-raw,framerate=25/1,interlace-mode=interleaved" !
deinterlace fields=all ! "video/x-raw,framerate=50/1,interlace-mode=progressive" !
=== release 1.7.91 ===
2016-03-15 Sebastian Dröge <>
2016-03-15 12:04:39 +0200 Sebastian Dröge <>
* ChangeLog:
releasing 1.7.91
* docs/plugins/gst-plugins-good-plugins.args:
* docs/plugins/inspect/plugin-1394.xml:
* docs/plugins/inspect/plugin-aasink.xml:
* docs/plugins/inspect/plugin-alaw.xml:
* docs/plugins/inspect/plugin-alpha.xml:
* docs/plugins/inspect/plugin-alphacolor.xml:
* docs/plugins/inspect/plugin-apetag.xml:
* docs/plugins/inspect/plugin-audiofx.xml:
* docs/plugins/inspect/plugin-audioparsers.xml:
* docs/plugins/inspect/plugin-auparse.xml:
* docs/plugins/inspect/plugin-autodetect.xml:
* docs/plugins/inspect/plugin-avi.xml:
* docs/plugins/inspect/plugin-cacasink.xml:
* docs/plugins/inspect/plugin-cairo.xml:
* docs/plugins/inspect/plugin-cutter.xml:
* docs/plugins/inspect/plugin-debug.xml:
* docs/plugins/inspect/plugin-deinterlace.xml:
* docs/plugins/inspect/plugin-dtmf.xml:
* docs/plugins/inspect/plugin-dv.xml:
* docs/plugins/inspect/plugin-effectv.xml:
* docs/plugins/inspect/plugin-equalizer.xml:
* docs/plugins/inspect/plugin-flac.xml:
* docs/plugins/inspect/plugin-flv.xml:
* docs/plugins/inspect/plugin-flxdec.xml:
* docs/plugins/inspect/plugin-gdkpixbuf.xml:
* docs/plugins/inspect/plugin-goom.xml:
* docs/plugins/inspect/plugin-goom2k1.xml:
* docs/plugins/inspect/plugin-icydemux.xml:
* docs/plugins/inspect/plugin-id3demux.xml:
* docs/plugins/inspect/plugin-imagefreeze.xml:
* docs/plugins/inspect/plugin-interleave.xml:
* docs/plugins/inspect/plugin-isomp4.xml:
* docs/plugins/inspect/plugin-jack.xml:
* docs/plugins/inspect/plugin-jpeg.xml:
* docs/plugins/inspect/plugin-level.xml:
* docs/plugins/inspect/plugin-matroska.xml:
* docs/plugins/inspect/plugin-mulaw.xml:
* docs/plugins/inspect/plugin-multifile.xml:
* docs/plugins/inspect/plugin-multipart.xml:
* docs/plugins/inspect/plugin-navigationtest.xml:
* docs/plugins/inspect/plugin-oss4.xml:
* docs/plugins/inspect/plugin-ossaudio.xml:
* docs/plugins/inspect/plugin-png.xml:
* docs/plugins/inspect/plugin-pulseaudio.xml:
* docs/plugins/inspect/plugin-replaygain.xml:
* docs/plugins/inspect/plugin-rtp.xml:
* docs/plugins/inspect/plugin-rtpmanager.xml:
* docs/plugins/inspect/plugin-rtsp.xml:
* docs/plugins/inspect/plugin-shapewipe.xml:
* docs/plugins/inspect/plugin-shout2send.xml:
* docs/plugins/inspect/plugin-smpte.xml:
* docs/plugins/inspect/plugin-soup.xml:
* docs/plugins/inspect/plugin-spectrum.xml:
* docs/plugins/inspect/plugin-speex.xml:
* docs/plugins/inspect/plugin-taglib.xml:
* docs/plugins/inspect/plugin-udp.xml:
* docs/plugins/inspect/plugin-video4linux2.xml:
* docs/plugins/inspect/plugin-videobox.xml:
* docs/plugins/inspect/plugin-videocrop.xml:
* docs/plugins/inspect/plugin-videofilter.xml:
* docs/plugins/inspect/plugin-videomixer.xml:
* docs/plugins/inspect/plugin-vpx.xml:
* docs/plugins/inspect/plugin-wavenc.xml:
* docs/plugins/inspect/plugin-wavpack.xml:
* docs/plugins/inspect/plugin-wavparse.xml:
* docs/plugins/inspect/plugin-ximagesrc.xml:
* docs/plugins/inspect/plugin-y4menc.xml:
* gst-plugins-good.doap:
* win32/common/config.h:
Release 1.7.91
2016-03-15 11:53:37 +0200 Sebastian Dröge <>
* po/af.po:
* po/az.po:
* po/bg.po:
* po/ca.po:
* po/cs.po:
* po/da.po:
* po/de.po:
* po/el.po:
* po/en_GB.po:
* po/eo.po:
* po/es.po:
* po/eu.po:
* po/fi.po:
* po/fr.po:
* po/gl.po:
* po/hr.po:
* po/id.po:
* po/it.po:
* po/ja.po:
* po/lt.po:
* po/lv.po:
* po/mt.po:
* po/nb.po:
* po/nl.po:
* po/or.po:
* po/pl.po:
* po/pt_BR.po:
* po/ro.po:
* po/ru.po:
* po/sk.po:
* po/sl.po:
* po/sq.po:
* po/sv.po:
* po/tr.po:
* po/uk.po:
* po/vi.po:
* po/zh_CN.po:
* po/zh_HK.po:
* po/zh_TW.po:
Update .po files
2016-03-15 11:41:22 +0200 Sebastian Dröge <>
This is GStreamer 1.7.91
# GStreamer 1.8 Release Notes
**GStreamer 1.8.0 was released on 24 March 2016.**
The GStreamer team is proud to announce a new major feature release in the
stable 1.x API series of your favourite cross-platform multimedia framework!
As always, this release is again packed with new features, bug fixes and other improvements.
See [the latest version of this document][latest] for any updates.
*Last updated: Thursday 24 March 2016, 10:00 UTC [(log)][gitlog]*
## Highlights
- **Hardware-accelerated zero-copy video decoding on Android**
- **New video capture source for Android using the android.hardware.Camera API**
- **Windows Media reverse playback** support (ASF/WMV/WMA)
- **New tracing system** provides support for more sophisticated debugging tools
- **New high-level GstPlayer playback convenience API**
- **Initial support for the new [Vulkan][vulkan] API**, see
[Matthew Waters' blog post][vulkan-in-gstreamer] for more details
- **Improved Opus audio codec support**: Support for more than two channels; MPEG-TS demuxer/muxer can now handle Opus;
[sample-accurate][opus-sample-accurate] encoding/decoding/transmuxing with
Ogg, Matroska, ISOBMFF (Quicktime/MP4), and MPEG-TS as container;
[new codec utility functions for Opus header and caps handling][opus-codec-utils]
in pbutils library. The Opus encoder/decoder elements were also moved to
gst-plugins-base (from -bad), and the Opus RTP depayloader/payloader to -good.
- **GStreamer VAAPI module now released and maintained as part of the GStreamer project**
## Major new features and changes
### Noteworthy new API, features and other changes
- New GstVideoAffineTransformationMeta meta for adding a simple 4x4 affine
transformation matrix to video buffers
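To illustrate what such a meta carries, a 4x4 affine matrix applied to homogeneous coordinates can express scaling, rotation and translation in a single operation. The sketch below is a plain-Python model of that idea; the function name is ours, not GStreamer API:

```python
# Illustrative model only: GstVideoAffineTransformationMeta stores a 4x4
# matrix; apply_affine() here is our own name for the matrix-vector product.

def apply_affine(matrix, point):
    """Multiply a 4x4 row-major matrix with a homogeneous (x, y, z, w) point."""
    return tuple(
        sum(matrix[row][col] * point[col] for col in range(4))
        for row in range(4)
    )

# A matrix that scales x by 2 and translates y by 10.
m = [
    [2.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 10.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]
```

A renderer consuming the meta would apply such a matrix to each vertex of the video quad before display.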
- [g\_autoptr()]() support for all types is exposed in GStreamer headers now, in combination
with a sufficiently-new GLib version (i.e. 2.44 or later). This is primarily
for the benefit of application developers who would like to make use of
this; the GStreamer codebase itself will not be using g_autoptr() for
the time being due to portability issues.
- GstContexts are now automatically propagated to elements added to a bin
or pipeline, and elements now maintain a list of contexts set on them.
The list of contexts set on an element can now be queried using the new
[gst\_element\_get\_contexts()]() function. GstContexts are used to share context-specific configuration objects
between elements and can also be used by applications to set context-specific
configuration objects on elements, e.g. for OpenGL or Hardware-accelerated
video decoding.
- New [GST\_BUFFER\_DTS\_OR\_PTS()]()
convenience macro that returns the decode timestamp if one is set and
otherwise returns the presentation timestamp
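The selection logic described above is simple enough to model in a few lines; this is a plain-Python sketch (the sentinel value matches GStreamer's "no timestamp" convention, but `dts_or_pts` is our illustrative name, not the actual C macro):

```python
# Model of the GST_BUFFER_DTS_OR_PTS() selection logic: prefer the decode
# timestamp when it is set, otherwise fall back to the presentation timestamp.
GST_CLOCK_TIME_NONE = 2**64 - 1  # GStreamer's "unset timestamp" sentinel

def dts_or_pts(dts, pts):
    """Return dts when valid, otherwise pts."""
    return dts if dts != GST_CLOCK_TIME_NONE else pts
```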
- New GstPadEventFullFunc that returns a GstFlowReturn instead of a gboolean.
This new API is mostly for internal use and was added to fix a race condition
where occasionally internal flow error messages were posted on the bus when
sticky events were propagated at just the wrong moment whilst the pipeline
was shutting down. This happened primarily when the pipeline was shut down
immediately after starting it up. GStreamer would not know that the reason
the events could not be propagated was because the pipeline was shutting down
and not some other problem, and now the flow error allows GStreamer to know
the reason for the failure (and that there's no reason to post an error
message). This is particularly useful for queue-like elements which may need
to asynchronously propagate a previous flow return from downstream.
- Pipeline dumps in form of "dot files" now also show pad properties that
differ from their default value, the same as it does for elements. This is
useful for elements with pad subclasses that provide additional properties,
e.g. videomixer or compositor.
- Pad probes are now guaranteed to be called in the order they were added
(before they were called in reverse order, but no particular order was
documented or guaranteed)
- Plugins can now have dependencies on device nodes (not just regular files)
and also have a prefix filter. This is useful for plugins that expose
features (elements) based on available devices, such as the video4linux
plugin does with video decoders on certain embedded systems.
- gst\_segment\_to\_position() has been deprecated and been replaced by the
better-named gst\_segment\_position\_from\_running\_time(). At the same time
gst\_segment\_position\_from\_stream\_time() was added, as well as \_full()
variants of both to deal with negative stream time.
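Ignoring negative rates, applied rates and segment offsets, the mapping these functions invert can be sketched as follows (a simplified model of the segment arithmetic, not GStreamer's actual implementation):

```python
# Simplified model: for a positive-rate segment with no offset,
#   running_time = (position - start) / abs_rate + base
# position_from_running_time() inverts that relationship.

def position_from_running_time(running_time, start, base, abs_rate=1.0):
    """Return the stream position for a running time, or None for times
    before the segment (the case the _full() variants exist to report)."""
    if running_time < base:
        return None
    return (running_time - base) * abs_rate + start
```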
- GstController: the interpolation control source gained a new monotonic cubic
interpolation mode that, unlike the existing cubic mode, will never overshoot
the min/max y values set.
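A common way to obtain this "no overshoot" property is Fritsch-Carlson style tangent limiting on a cubic Hermite spline. The sketch below is our own generic implementation of that idea, not GStreamer's code:

```python
# Monotone cubic interpolation sketch (Fritsch-Carlson style): tangents are
# limited so the curve never overshoots the control-point values, unlike an
# unconstrained cubic spline.

def monotone_cubic(xs, ys, x):
    n = len(xs)
    # Secant slopes between neighbouring control points.
    d = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(n - 1)]
    # Initial tangents: average of adjacent secants, one-sided at the ends.
    m = [d[0]] + [(d[i - 1] + d[i]) / 2.0 for i in range(1, n - 1)] + [d[-1]]
    # Fritsch-Carlson limiting keeps each segment monotone.
    for i in range(n - 1):
        if d[i] == 0.0:
            m[i] = m[i + 1] = 0.0
        else:
            a, b = m[i] / d[i], m[i + 1] / d[i]
            s = a * a + b * b
            if s > 9.0:
                t = 3.0 / s ** 0.5
                m[i], m[i + 1] = t * a * d[i], t * b * d[i]
    # Locate the segment containing x and evaluate the Hermite polynomial.
    i = max(j for j in range(n - 1) if xs[j] <= x)
    h = xs[i + 1] - xs[i]
    t = (x - xs[i]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * ys[i] + h10 * h * m[i] + h01 * ys[i + 1] + h11 * h * m[i + 1]
```

For step-like data such as (0,0), (1,0), (2,1), (3,1), an unconstrained cubic dips below 0 and rises above 1 near the step; the limited tangents keep every interpolated value inside [0, 1].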
- GstNetAddressMeta: can now be read from buffers in language bindings as well,
via the new gst\_buffer\_get\_net\_address\_meta() function
- ID3 tag PRIV frames are now extracted into a new GST\_TAG\_PRIVATE\_DATA tag
- gst-launch-1.0 and gst\_parse\_launch() now warn in the most common case if
a dynamic pad link could not be resolved, instead of just silently
waiting to see if a suitable pad appears later, which is often perceived
by users as hanging -- they are now notified when this happens and can check
their pipeline.
- GstRTSPConnection now also parses custom RTSP message headers and retains
them for the application instead of just ignoring them
- rtspsrc handling of authentication over tunneled connections (e.g. RTSP over HTTP)
was fixed
- gst\_video\_convert\_sample() now crops if there is a crop meta on the input buffer
- The debugging system printf functions are now exposed for general use, which
supports special printf format specifiers such as GST\_PTR\_FORMAT and
GST\_SEGMENT\_FORMAT to print GStreamer-related objects. This is handy for
systems that want to prepare some debug log information to be output at a
later point in time. The GStreamer-OpenGL subsystem is making use of these
new functions, which include [gst\_info\_vasprintf()][gst_info_vasprintf]
and [gst\_info\_strdup\_vprintf()][gst_info_strdup_vprintf].
- videoparse: "strides", "offsets" and "framesize" properties have been added to
allow parsing raw data whose strides and padding do not match GStreamer's defaults
- GstPreset reads presets from the directories given in GST\_PRESET\_PATH now.
Presets are read from there after presets in the system path, but before
application and user paths.
### New Elements
- [netsim](): a new (resurrected) element to simulate network jitter and
packet dropping / duplication.
- New VP9 RTP payloader/depayloader elements: rtpvp9pay/rtpvp9depay
- New [videoframe_audiolevel]() element, a video frame synchronized audio level element
- New spandsp-based tone generator source
- New NVIDIA NVENC-based H.264 encoder for GPU-accelerated video encoding on
suitable NVIDIA hardware
- [rtspclientsink](), a new RTSP RECORD sink element, was added to gst-rtsp-server
- [alsamidisrc](), a new ALSA MIDI sequencer source element
### Noteworthy element features and additions
- *identity*: new ["drop-buffer-flags"]()
property to drop buffers based on buffer flags. This can be used to drop all
non-keyframe buffers, for example.
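For instance, a pipeline along these lines could let only keyframes through (a hypothetical sketch: the `delta-unit` flag nick is an assumption here, so check `gst-inspect-1.0 identity` for the exact accepted values):

```shell
# Drop every buffer flagged as a delta frame, keeping only keyframes.
gst-launch-1.0 videotestsrc ! x264enc ! identity drop-buffer-flags=delta-unit ! fakesink
```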
- *multiqueue*: various fixes and improvements, in particular special handling
for sparse streams such as subtitle streams, to make sure we don't overread
them any more. For sparse streams it can be normal that there's no buffer for
a long period of time, so having no buffer queued is perfectly normal. Before
we would often unnecessarily try to fill the subtitle stream queue, which
could lead to much more data being queued in multiqueue than necessary.
- *multiqueue*/*queue*: When dealing with time limits, these elements now use the
new ["gst_segment_to_running_time_full()"]()
API, resulting in more accurate levels, especially when dealing with non-raw
streams (where reordering happens, and we want to use the increasing DTS as
opposed to the non-continuously increasing PTS) and out-of-segment input/output.
Previously all encoded buffers before the segment start, which can happen when
doing ACCURATE seeks, were not taken into account in the queue level calculation.
- *multiqueue*: New ["use-interleave"]()
property which allows the size of the queues to be optimized based on the input
streams interleave. This should only be used with input streams which are properly
timestamped. It will be used in the future decodebin3 element.
- *queue2*: new ["avg-in-rate"]()
property that returns the average input rate in bytes per second
- audiotestsrc now supports all audio formats and is no longer artificially
limited with regard to the number of channels or sample rate
- gst-libav (ffmpeg codec wrapper): map and enable JPEG2000 decoder
- multisocketsink can, on request, send a custom GstNetworkMessage event
upstream whenever data is received from a client on a socket. Similarly,
socketsrc will, on request, pick up GstNetworkMessage events from downstream
and send any data contained within them via the socket. This allows for
simple bidirectional communication.
- matroska muxer and demuxer now support the ProRes video format
- Improved VP8/VP9 decoding performance on multi-core systems by enabling
multi-threaded decoding in the libvpx-based decoders on such systems
- appsink has a new ["wait-on-eos"]()
property, so in cases where it is uncertain if an appsink will have a consumer for
its buffers when it receives an EOS this can be set to FALSE to ensure that the
appsink will not hang.
- rtph264pay and rtph265pay have a new "config-interval" mode -1 that will
re-send the setup data (SPS/PPS/VPS) before every keyframe to ensure
optimal coverage and the shortest possible start-up time for a new client
- mpegtsmux can now mux H.265/HEVC video as well
- The MXF muxer was ported to 1.x and now produces more standards-conformant files
that can be handled by other software; the MXF demuxer got improved
support for seek tables (IndexTableSegments).
### Plugin moves
- The rtph265pay/depay RTP payloader/depayloader elements for H.265/HEVC video
from the rtph265 plugin in -bad have been moved into the existing rtp plugin
in gst-plugins-good.
- The mpg123 plugin containing a libmpg123 based audio decoder element has
been moved from -bad to -ugly.
- The Opus encoder/decoder elements have been moved to gst-plugins-base and
the RTP payloader to gst-plugins-good, both coming from gst-plugins-bad.
### New tracing tools for developers
A new tracing subsystem API has been added to GStreamer, which provides
external tracers with the possibility to strategically hook into GStreamer
internals and collect data that can be evaluated later. These tracers are a
new type of plugin feature, and GStreamer core ships with a few example
tracers (latency, stats, rusage, log) to start with. Tracers can be loaded
and configured at start-up via an environment variable (GST\_TRACER\_PLUGINS).
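As a sketch, enabling a tracer for a gst-launch-1.0 run could look like the following (the variable name is taken from the text above; note that released versions spell it GST_TRACERS, and the debug category shown is an assumption):

```shell
# Enable the latency tracer and route its output through the debug log.
GST_TRACER_PLUGINS="latency" GST_DEBUG="GST_TRACER:7" \
  gst-launch-1.0 audiotestsrc num-buffers=100 ! fakesink
```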
Background: While GStreamer provides plenty of data on what's going on in a
pipeline via its debug log, that data is not necessarily structured enough to
be generally useful, and the overhead to enable logging output for all data
required might be too high in many cases. The new tracing system allows tracers
to just obtain the data needed at the right spot with as little overhead as
possible, which will be particularly useful on embedded systems.
Of course it has always been possible to do performance benchmarks and debug
memory leaks, memory consumption and invalid memory access using standard
operating system tools, but there are some things that are difficult to track
with the standard tools, and the new tracing system helps with that. Examples
are things such as latency handling, buffer flow, ownership transfer of
events and buffers from element to element, caps negotiation, etc.
For some background on the new tracing system, watch Stefan Sauer's
GStreamer Conference talk ["A new tracing subsystem for GStreamer"][tracer-0]
and for a more specific example how it can be useful have a look at
Thiago Santos's lightning talk ["Analyzing caps negotiation using GstTracer"][tracer-1]
and his ["GstTracer experiments"][tracer-2] blog post. There was also a Google
Summer of Code project in 2015 that used the tracing system for a graphical
GStreamer debugging tool ["gst-debugger"][tracer-3].
This is all still very much work in progress, but we hope this will provide the
foundation for a whole suite of new debugging tools for GStreamer pipelines.
### GstPlayer: a new high-level API for cross-platform multimedia playback
GStreamer has had a reasonably high-level API for multimedia playback
in the form of the playbin element for a long time. This allowed application
developers to just configure a URI to play, and playbin would take care of
everything else. This works well, but there is still way too much to do on
the application-side to implement a fully-featured playback application, and
too much general GStreamer pipeline API exposed, making it less accessible
to application developers.
Enter GstPlayer. GstPlayer's aim is to provide an even higher-level abstraction
of a fully-featured playback API but specialised for its specific use case. It
also provides easy integration with and examples for Gtk+, Qt, Android, OS X,
iOS and Windows. Watch Sebastian's [GstPlayer talk at the GStreamer Conference][gstplayer-talk]
for more information, or check out the [GstPlayer API reference][gstplayer-api]
and [GstPlayer examples][gstplayer-examples].
### Adaptive streaming: DASH, HLS and MSS improvements
- dashdemux now supports loading external XML nodes referenced from its MPD.
- mssdemux gained support for parsing content protection nodes for PlayReady WRM.
- Reverse playback was improved to respect seek start and stop positions.
- Adaptive demuxers (hlsdemux, dashdemux, mssdemux) now support the SNAP_AFTER
and SNAP_BEFORE seek flags which will jump to the nearest fragment boundary
when executing a seek, which means playback resumes more quickly after a seek.
### Audio library improvements
- audio conversion, quantization and channel up/downmixing functionality
has been moved from the audioconvert element into the audio library and
is now available as public API in form of [GstAudioConverter][audio-0],
[GstAudioQuantize][audio-1] and [GstAudioChannelMixer][audio-2].
Audio resampling will follow in future releases.
- [gst\_audio\_channel\_get\_fallback\_mask()][audio-3] can be used
to retrieve a default channel mask for a given number of channels as last
resort if the layout is unknown
- A new [GstAudioClippingMeta][audio-4] meta was added for specifying clipping
on encoded audio buffers
- A new GstAudioVisualizer base class for audio visualisation elements;
most of the existing visualisers have been ported over to the new base class.
This new base class lives in the pbutils library rather than the audio library,
since we'd have had to make libgstaudio depend on libgstvideo otherwise,
which was deemed undesirable.
### GStreamer OpenGL support improvements
#### Better OpenGL Shader support
[GstGLShader][shader] has been revamped to allow more OpenGL shader types
by utilizing a new GstGLSLStage object. Each stage holds an OpenGL pipeline
stage, such as a vertex, fragment or geometry shader, which are compiled
separately and then linked into the program that is executed.
The glshader element has also received a revamp as a result of the changes in
the library. It does not take file locations for the vertex and fragment
shaders anymore. Instead, it takes the shader strings directly, leaving file
management to the application.
A new [example][liveshader-example] was added utilizing the new shader
infrastructure showcasing live shader edits.
#### OpenGL GLMemory rework
[GstGLMemory] was extensively reworked to support the addition of multiple
texture targets required for zero-copy integration with the Android
MediaCodec elements. This work was also used to provide IOSurface based
GLMemory on OS X for zero-copy with OS X's VideoToolbox decoder (vtdec) and
AV Foundation video source (avfvideosrc). There are also patches in bugzilla
for GstGLMemoryEGL specifically aimed at improving the decoding performance on
the Raspberry Pi.
A texture-target field was added to video/x-raw(memory:GLMemory) caps to signal
the texture target contained in the GLMemory. Its values can be 2D, rectangle
or external-oes. glcolorconvert can convert between the different formats as
required and different elements will accept or produce different targets. e.g.
glimagesink can take and render external-oes textures directly as required for
efficient zero-copy on Android.
A generic GL allocation framework was also implemented to support the generic
allocation of OpenGL buffers and textures, which is used extensively by GstGLMemory.
#### OpenGL DMABuf import uploader
There is now a DMABuf uploader available for automatic selection that will
attempt to import the upstream provided DMABuf. The uploader will import into
2D textures with the necessary format. YUV to RGB conversion is still provided
by glcolorconvert, avoiding the laxer restrictions that come with external-oes textures.
#### OpenGL queries
Queries of various aspects of the OpenGL runtime such as timers, number of
samples or the current timestamp are now possible. The GstGLQuery object uses a
delayed debug system that defers the debug output, avoiding expensive calls
to the glGet\* family of functions directly after finishing a query. It is
currently used to output the time taken to perform various operations of texture
uploads and downloads in GstGLMemory.
#### New OpenGL elements
glcolorbalance has been created, mirroring the videobalance element.
glcolorbalance provides the exact same interface as videobalance and can be used
as a GPU-accelerated replacement. glcolorbalance has been added to glsinkbin so
usage with playsink/playbin will use it automatically instead of videobalance
where possible.
glvideoflip, which is the OpenGL equivalent of videoflip, implements the exact
same interface and functionality as videoflip.
#### EGL implementation now selects OpenGL 3.x
The EGL implementation can now select OpenGL 3.x contexts, bringing OpenGL 3.x to e.g. Wayland and other EGL-based systems.
#### OpenGL API removal
The GstGLDownload library object was removed as it was not used by anything.
Everything is performed by GstGLMemory or in the gldownloadelement.
The GstGLUploadMeta library object was removed as it was not being used and we
don't want to promote the use of GstVideoGLTextureUploadMeta.
#### OpenGL: Other miscellaneous changes
- glstereomix/glstereosplit are now built and are usable on OpenGL ES systems
- The UYVY/YUY2 to RGBA and RGBA to UYVY/YUY2 shaders were fixed, removing the
sawtooth pattern and luma bleeding.
- We now utilize the GL\_APPLE\_sync extension on iOS devices which improves
performance of OpenGL applications, especially with multiple OpenGL contexts in use.
- glcolorconvert now uses a bufferpool to avoid costly
glGenTextures/glDeleteTextures for every frame.
- glvideomixer now has full glBlendFunc and glBlendEquation support per input.
- gltransformation now supports navigation events so your weird transformations
also work with DVD menus.
- qmlglsink can now run on iOS, OS X and Android in addition to the already
supported Linux platform.
- glimagesink now posts unhandled keyboard and mouse events (on backends that
support user input, currently only X11) on the bus for the application.
### Initial GStreamer Vulkan support
Two new elements, vulkansink and vulkanupload, have been implemented utilizing
the new Vulkan API. The implementation is currently limited to X11 platforms
(via xcb) and does not perform any scaling of the stream's contents to the size
of the available output.
A lot of infrastructure work has been undertaken to support using Vulkan in
GStreamer in the future. A number of GstMemory subclasses have been created for
integrating Vulkan's GPU memory handling, along with VkBuffers and VkImages
that can be passed between elements. Some GStreamer refcounted wrappers for
global objects such as VkInstance, VkDevice, VkQueue, etc have also been
implemented, along with GstContext integration for sharing these objects with the rest of the pipeline.
### GStreamer VAAPI support for hardware-accelerated video decoding and encoding on Intel (and other) platforms
#### GStreamer VAAPI is now part of upstream GStreamer
The GStreamer-VAAPI module which provides support for hardware-accelerated
video decoding, encoding and post-processing on Intel graphics hardware
on Linux has moved from its previous home at the [Intel Open Source Technology Center][iostc]
to the upstream GStreamer repositories, where it will in future be maintained
as part of the upstream GStreamer project and released in lockstep with the
other GStreamer modules. The current maintainers will continue to spearhead
the development at the new location:
GStreamer-VAAPI relies heavily on certain GStreamer infrastructure API that
is still in flux such as the OpenGL integration API or the codec parser
libraries, and one of the goals of the move was to be able to leverage
new developments early and provide tighter integration with the latest
developments of those APIs and other graphics-related APIs provided by