®om's blog

Audio forwarding on Android 10

Tuesday, June 9, 2020 at 21:50

Audio forwarding is one of the most requested features in scrcpy (see issue #14).

Last year, I published a small experiment (USBaudio) to forward audio over USB, using the AOA2 protocol. Unfortunately, this Android feature was unreliable, and has been deprecated in Android 8.

Here is a new tool I developed to play the device audio output on the computer, using the Playback Capture API introduced in Android 10: sndcpy.

The name follows the same logic: just as scrcpy copies the screen, sndcpy copies the sound.

This is a quick proof-of-concept, composed of:

  • an Android application, which captures the device audio output and exposes it through a local socket;
  • a script, which installs and starts the app, then runs VLC on the computer to play the captured stream.

The long-term goal is to implement this feature properly in scrcpy.

How to use sndcpy

You can either download the release or build the app.

VLC must be installed on the computer.

Plug an Android 10 device with USB debugging enabled, and execute:

./sndcpy

If several devices are connected (listed by adb devices):

./sndcpy <serial>  # replace <serial> by the device serial

(omit ./ on Windows)

It will install the app on the device, and request permission to start audio capture:

[screenshot: the permission request to start audio capture]

Once you have clicked START NOW, press Enter in the console to start playing on the computer. Press Ctrl+c in the terminal to stop (except on Windows; there, just disconnect the device or stop the capture from the device notifications).

The sound continues to be played on the device. The volume can be adjusted independently on the device and on the computer.

App restrictions

sndcpy may only forward audio from apps which do not prevent audio capture. The rules are detailed in §capture policy:

  • By default, apps that target versions up to and including Android 9.0 do not permit playback capture. To enable it, include android:allowAudioPlaybackCapture="true" in the app’s manifest.xml file.
  • By default, apps that target Android 10 (API level 29) or higher allow their audio to be captured. To disable playback capture, include android:allowAudioPlaybackCapture="false" in the app’s manifest.xml file.

So some apps might need to be updated to support audio capture.

Integration in scrcpy

Ideally, I would like scrcpy to support audio forwarding directly. However, this will require quite a lot of work.

In particular, scrcpy does not use an Android app (required for capturing audio); it currently only runs a Java main as the shell user (required to inject events and capture the screen without asking for permissions).

It will also require implementing audio playback (done by VLC in this PoC) and audio recording (for scrcpy --record file.mkv), plus encoding and decoding to transmit a compressed stream, and handling audio-video synchronization…

Since I develop scrcpy in my free time, this feature will probably not be integrated very soon. Therefore, I prefer to release a working proof-of-concept which does the job.

Introducing USBaudio

Thursday, June 20, 2019 at 09:40

Forwarding audio from Android devices

In order to support audio forwarding in scrcpy, I first implemented an experiment on a separate branch (see issue #14). But it was too hacky and fragile to be merged (and it did not work on all platforms).

So I decided to write a separate tool: USBaudio.

It works on Linux with PulseAudio.

How to use USBaudio

First, you need to build it (follow the instructions).

Plug an Android device. If USB debugging is enabled, just execute:

usbaudio

If USB debugging is disabled (or if multiple devices are connected), you need to specify a device, either by its serial, or by its vendor id and product id (as printed by lsusb):

usbaudio -s 0123456789abcdef
usbaudio -d 18d1:4ee2

The audio should be played on the computer.

If it’s stuttering, try increasing the live caching value (at the cost of a higher latency):

# default is 50ms
usbaudio --live-caching=100

Note that it can also be directly captured by OBS:

[screenshot: the USB audio source captured directly in OBS]

How does it work?

USBaudio executes 3 steps successively:

  1. It enables audio accessory on the device (by sending AOA requests via libusb), so that the audio is forwarded over USB. If it works, PulseAudio (or ALSA) on the computer should detect a new audio input source.
  2. It retrieves the PulseAudio input source id associated to the Android device (via libpulse).
  3. It execs VLC to play audio from this input source.
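
To give an idea of step 1, here is a minimal sketch of the control request enabling the audio accessory with libusb (the request and value constants are those documented for the AOA2 protocol; device setup and error handling are omitted):

#include <libusb-1.0/libusb.h>

// AOA2 control request 58 (SET_AUDIO_MODE);
// wValue 1 requests 2-channel, 16-bit PCM at 44100 Hz
#define AOA_SET_AUDIO_MODE 58

static int enable_audio_accessory(libusb_device_handle *handle)
{
    return libusb_control_transfer(handle,
            LIBUSB_ENDPOINT_OUT | LIBUSB_REQUEST_TYPE_VENDOR,
            AOA_SET_AUDIO_MODE,
            1 /* wValue: enable */, 0 /* wIndex: unused */,
            NULL, 0 /* no data payload */,
            1000 /* timeout in ms */);
}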

Note that enabling audio accessory changes the USB device product id, so it will close any adb connection (and scrcpy). Therefore, you should enable audio forwarding before running scrcpy.

Manually

To only enable audio accessory without playing:

usbaudio -n
usbaudio --no-play

The audio input sources can be listed by:

pactl list short sources

For example:

$ pactl list short sources
...
5   alsa_input.usb-LGE_Nexus_5_05f5e60a0ae518e5-01.analog-stereo     module-alsa-card.c  s16le 2ch 44100Hz   RUNNING

Use the number (here 5) to play it with VLC:

vlc -Idummy --live-caching=50 pulse://5

Alternatively, you can use ALSA directly:

cat /proc/asound/cards

For example:

$ cat /proc/asound/cards
...
 1 [N5             ]: USB-Audio - Nexus 5
                      LGE Nexus 5 at usb-0000:00:14.0-4, high speed

Use the device number (here 1) as follows:

vlc -Idummy --live-caching=50 alsa://hw:1

If it works manually but not automatically (without -n), then please open an issue.

Limitations

It does not work on all devices; audio accessory does not seem to be well supported everywhere. But it’s better than nothing.

Android Q added a new playback capture API. Hopefully, scrcpy could use it to forward audio in the future (but only for Android Q devices).

A new core playlist for VLC 4

Tuesday, May 21, 2019 at 09:25

The core playlist in VLC was started a long time ago. Since then, it has grown to handle too many different things, to the point it became a kind of god object.

In practice, the playlist was also controlling playback (start, stop, change volume…), configuring audio and video outputs, storing media detected by discovery…

For VLC 4, we wanted a new playlist API, containing a simple list of items (instead of a tree), acting as a media provider for a player, without unrelated responsibilities.

I wrote it several months ago (at Videolabs). Now that the old one has been removed, it’s time to give some technical details.


Objectives

One major design goal is to expose what UI frameworks need. Several user interfaces, like Qt, Mac OS and Android[1], will use this API to display and interact with the main VLC playlist.

The playlist must be performant for common use cases and usable from multiple threads.

Indeed, in VLC, user interfaces are implemented as modules loaded dynamically. In general, there is exactly one user interface, but there may be none or (in theory) several. Thus, the playlist may not be bound to the event loop of some specific user interface. Moreover, the playlist may be modified from a player thread; for example, playing a zip archive will automatically replace the item with its content.

As a consequence, the playlist will use a mutex; to avoid ToCToU issues, it will also expose public functions to lock and unlock it. But as we will see later, there will be other consequences.
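
For example, removing a known item without a race requires holding the lock across the lookup and the removal. A sketch using the public API (vlc_playlist_IndexOf is assumed to exist in the same header):

vlc_playlist_Lock(playlist);
// the index is only meaningful while we hold the lock
ssize_t index = vlc_playlist_IndexOf(playlist, item);
if (index != -1)
    vlc_playlist_RemoveOne(playlist, index);
vlc_playlist_Unlock(playlist);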

Data structure

User interfaces need random access to the playlist items, so a vector is the most natural structure to store the items. A vector is provided by the standard library of many languages (vector in C++, Vec in Rust, ArrayList in Java…). But here, we’re in C, so there is nothing.

In the playlist, we only need a vector of pointers, so I first proposed improvements to an existing type, vlc_array_t, which only supports void * as items. But it was considered useless (1, 2) because it is too limited and not type-safe.

Therefore, I wrote vlc_vector. It is implemented using macros so that it’s generic over its item type. For example, we can use a vector of ints as follows:

// declare and initialize a vector of int
struct VLC_VECTOR(int) vec = VLC_VECTOR_INITIALIZER;

// append 0, 10, 20, 30 and 40
for (int i = 0; i < 5; ++i) {
    if (!vlc_vector_push(&vec, 10 * i)) {
        // allocation failure...
    }
}

// remove item at index 2
vlc_vector_remove(&vec, 2);

// the vector now contains [0, 10, 30, 40]

int first = vec.data[0]; // 0
int last = vec.data[vec.size - 1]; // 40

// free resources
vlc_vector_destroy(&vec);

Internally, the playlist uses a vector of playlist items:

typedef struct VLC_VECTOR(struct vlc_playlist_item *) playlist_item_vector_t;

struct vlc_playlist {
    playlist_item_vector_t items;
    // ...
};

Interaction with UI

UI frameworks typically use list models to bind items to a list view component. A list model must provide:

  • the number of items;
  • the item at a given index.

In addition, the model must notify its view when items are inserted, removed, moved or updated, and when the model is reset (the whole content should be invalidated).

For example, Qt list views use QAbstractItemModel/QAbstractListModel and the Android recycler view uses RecyclerView.Adapter.

The playlist API exposes the functions and callbacks providing these features.
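
For example, a client could register a listener with a set of callbacks along these lines (a sketch: the names follow my reading of vlc_playlist.h, simplified here, so treat the exact signatures as assumptions):

static void
on_items_added(vlc_playlist_t *playlist, size_t index,
               vlc_playlist_item_t *const items[], size_t count,
               void *userdata)
{
    // deep-copy the items data, then notify the UI (see below)
}

static const struct vlc_playlist_callbacks cbs = {
    .on_items_added = on_items_added,
    // on_items_reset, on_items_moved, on_items_removed, on_items_updated…
};

static vlc_playlist_listener_id *
listen(vlc_playlist_t *playlist, void *userdata)
{
    // playlist functions must be called with the lock held
    vlc_playlist_Lock(playlist);
    vlc_playlist_listener_id *listener =
            vlc_playlist_AddListener(playlist, &cbs, userdata, false);
    vlc_playlist_Unlock(playlist);
    return listener;
}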

Desynchronization

However, the core playlist may not be used as a direct data source for a list model. In other words, the functions of a list model must not delegate the calls to the core playlist.

To understand why, let’s consider a typical sequence of calls executed by a view on its model, from the UI thread:

model.count();
model.get(0);
model.get(1);
model.get(2);
model.get(3);
model.get(4);

If we implemented count() and get(index) by delegating to the playlist, we would have to lock each call individually:

// in some artificial UI framework in C++

int MyModel::count() {
    // don't do this
    vlc_playlist_Lock(playlist);
    int count = vlc_playlist_Count(playlist);
    vlc_playlist_Unlock(playlist);
    return count;
}

vlc_playlist_item_t *MyModel::get(int index) {
    // don't do this
    vlc_playlist_Lock(playlist);
    vlc_playlist_item_t *item = vlc_playlist_Get(playlist, index);
    vlc_playlist_Unlock(playlist);
    return item;
}

Note that locking and unlocking from the UI thread for every playlist item is not a good idea for responsiveness, but this is a minor issue here.

The real problem is that locking is not sufficient to guarantee correctness: the list view expects its model to return consistent values. Our implementation can break this assumption, because the playlist content could change asynchronously between calls. Here is an example:

// the playlist initially contains 5 items: [A, B, C, D, E]
model.count(); // 5
model.get(0);  // A
model.get(1);  // B
                    // the first playlist item is removed from another thread:
                    //     vlc_playlist_RemoveOne(playlist, 0);
                    // the playlist now contains [B, C, D, E]
model.get(2);  // D
model.get(3);  // E
model.get(4);  // out-of-range, undefined behavior (probably segfault)

The view cannot process any notification of the item removal before the end of the current execution in its event loop… that is, not before model.get(4). To avoid this problem, the data provided by view models must always live in the UI thread.

This implies that the UI has to manage a copy of the playlist content. The UI playlist should be considered as a remote out-of-sync view of the core playlist.

Note that the copy must not be limited to the list of pointers to playlist items: the content which is displayed and susceptible to change asynchronously (media metadata, like title or duration) must also be copied. The UI needs a deep copy; otherwise, the content could change (and be exposed) before the list view was notified… which, again, would break assumptions about the model.

Synchronization

The core playlist and the UI playlist are out-of-sync. So we need to “synchronize” them:

Core to UI

The core playlist is the source of truth.

Every change to the UI playlist must occur in the UI thread, yet the core playlist notification handlers are executed from any thread. Therefore, playlist callback handlers must retrieve appropriate data from the playlist, then post an event to the UI event loop[2], which will be handled from the UI thread. From there, the core playlist may already be out-of-sync again, so it would be incorrect to access it.

The order of events forwarded to the UI thread must be preserved. That way, the indices notified by the core playlist are necessarily valid within the context of the list model in the UI thread. The core playlist events can be understood as a sequence of “patches” that the UI playlist must apply to its own copy.
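
For example, a removal callback could copy the minimal description of the change and post it as a patch. A sketch: struct ui, ui_event_post() and the event type are hypothetical stand-ins for whatever event mechanism the UI framework provides.

// hypothetical UI-side declarations
struct ui;
enum ui_event_type { ITEMS_REMOVED /* … */ };
static void ui_event_post(struct ui *, enum ui_event_type,
                          const void *data, size_t size);

struct items_removed_event {
    size_t index;
    size_t count;
};

static void
on_items_removed(vlc_playlist_t *playlist, size_t index, size_t count,
                 void *userdata)
{
    struct ui *ui = userdata;
    // copy what describes the change (no pointer into the core playlist)
    struct items_removed_event event = { index, count };
    // append the patch to the UI event queue, preserving the order
    // of core playlist events
    ui_event_post(ui, ITEMS_REMOVED, &event, sizeof(event));
}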

This only works if only the core playlist callbacks modify the list model content.

UI to core

Since the list model can only be modified by the core playlist callbacks, it is incorrect to modify it on user actions. As a consequence, changes must be submitted as requests to the core playlist, which will, in turn, notify the changes it actually applies.

Synchronization in that direction is trickier.

To understand why, suppose the user selects items 10 to 20, then drags and drops them to move them to index 42. Once the user releases the mouse button to “drop” the items, we must lock the core playlist to apply the changes.

The problem is that, before we successfully acquired the lock, another client may have modified the playlist: it may have cleared it, or shuffled it, or removed items 5 to 15… As a consequence, we cannot apply the “move” request as is, because it was created from a previous playlist state.

To solve the issue, we need to adapt the request to make it fit the current playlist state. In other words, we resolve conflicts: find the items even if they have been moved, ignore items no longer present for removal…

For that purpose, in addition to functions modifying the content directly, the playlist exposes functions to request “desynchronized” changes, which automatically resolve conflicts and generate an appropriate sequence of events to notify the clients of the actual changes.

Let’s take an example. Initially, our playlist contains 10 items:

[A, B, C, D, E, F, G, H, I, J]

The user selects [C, D, E, F, G] and presses the Del key to remove the items. To apply the change, we need to lock the core playlist.

But at that time, another thread was holding the lock to apply some other changes. It removed F and I, and shuffled the playlist:

[E, B, D, J, C, G, H, A]

Once the other thread unlocks the playlist, our lock finally succeeds. Then, we call request_remove([C, D, E, F, G]) (this is pseudo-code, the real function is vlc_playlist_RequestRemove).

Internally, it triggers several calls:

// [E, B, D, J, C, G, H, A]
remove(index = 4, count = 2)   // remove [C, G]
// [E, B, D, J, H, A]
remove(index = 2, count = 1)   // remove [D]
// [E, B, J, H, A]
remove(index = 0, count = 1)   // remove [E]
// [B, J, H, A]

Thus, every client (including the UI from which the user requested the removal) will receive a sequence of 3 on_items_removed events, one for each removed slice.

The slices are removed in descending order for both optimization (it minimizes the number of shifts) and simplicity (the index of a removal does not depend on previous removals).

In practice, it is very likely that the request will apply exactly to the current state of the playlist. To avoid unnecessary linear searches to find the items, these functions accept an additional index_hint parameter, giving the index of the items when the request was created. It should (hopefully) almost always be the same as the index in the current playlist state.
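
For example, removing the user's selection could be written like this (a sketch; the exact prototype of vlc_playlist_RequestRemove lives in the playlist header):

// 'selection' and 'first_index' were captured when the user pressed Del;
// the playlist may have changed since then
static void
remove_selection(vlc_playlist_t *playlist,
                 vlc_playlist_item_t *const selection[], size_t count,
                 ssize_t first_index)
{
    vlc_playlist_Lock(playlist);
    // resolves conflicts against the current state; the hint avoids
    // a linear search when the state did not change
    vlc_playlist_RequestRemove(playlist, selection, count, first_index);
    vlc_playlist_Unlock(playlist);
}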

Random playback

Contrary to shuffle, random playback does not reorder the items within the playlist; it only changes the order in which they are played.

To select the next item to play, we could just pick one at random.

But this is not ideal: some items will be selected several times (possibly in a row) while some others will not be selected at all. And if loop is disabled, when should we stop? After all n items have been selected at least once or after n playbacks?

Instead, we would like some desirable properties that work both with loop enabled and disabled:

  • an item must not be selected twice (until all the items have been selected);
  • the user must be able to navigate back to the previously selected items;
  • insertions and removals must be taken into account at any time.

In addition, if loop is enabled:

  • the random order must be different from one cycle to another;
  • an item must not be selected twice in a row (in particular, the last item of a cycle must not also be the first of the next one).

Randomizer

I wrote a randomizer to select items “randomly” within all these constraints.

To get an idea of the results, here is a sequence produced for a playlist containing 5 items (A, B, C, D and E), with loop enabled (so that it continues indefinitely):

E D A B C E B C A D C B E D A C E A D B A D C E B A B D E C B C A E D E D B C A
E C B D A C A E B D C D E A B E D B A C D C B A E D A B C E B D C A E D C A B E
B A E C D C E D A B C E B A D E C B D A D B A C E C E B A D B C E D A E A C B D
A D E B C D C A E B E A D C B C D B A E C E A B D C D E A B D A E C B C A D B E
A B E C D A C B E D E D A B C D E C A B C A E B D E B D C A C A E D B D B E C A

Here is how it works.

The randomizer stores a single vector containing all the items of the playlist. This vector is not shuffled at once. Instead, steps of the Fisher-Yates algorithm are executed one-by-one on demand. This has several advantages:

  • it avoids shuffling the whole playlist up front whenever random playback is enabled;
  • insertions and removals are straightforward to take into account, since the items not determinated yet have no specific order.

It also maintains 3 indexes:

  • next: one past the index of the current item (the item returned by the last call to Next())[3];
  • head: the end of the determinated range, i.e. the first item not determinated yet;
  • history: the start of the history range, containing the items of the previous cycle.

0              next  head          history       size
|---------------|-----|.............|-------------|
 <------------------->               <----------->
  determinated range                 history range
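
For illustration, here is what a single on-demand Fisher-Yates step could look like (a minimal sketch, not the actual implementation):

#include <assert.h>
#include <stdlib.h>

struct randomizer {
    vlc_playlist_item_t **data; // all the playlist items
    size_t size;
    size_t next, head, history;
};

// Determinate one more item: swap a randomly selected item from the
// non-determinated range into place at 'head', then advance 'head'.
static vlc_playlist_item_t *
randomizer_determine_one(struct randomizer *r)
{
    assert(r->head < r->size);
    // modulo bias ignored for brevity
    size_t selected = r->head + (size_t) rand() % (r->size - r->head);
    vlc_playlist_item_t *tmp = r->data[r->head];
    r->data[r->head] = r->data[selected];
    r->data[selected] = tmp;
    return r->data[r->head++];
}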

Let’s reuse the example I wrote in the documentation.

Here is the initial state with our 5 items:

                                         history
                next                     |
                head                     |
                |                        |
                A    B    C    D    E

The playlist calls Next() to retrieve the next random item. The randomizer picks one item (say, D), and swaps it with the current head (A). Next() returns D.

                                         history
                     next                |
                     head                |
                     |                   |
                D    B    C    A    E
              <--->
           determinated range

The playlist calls Next() one more time. The randomizer selects one item outside the determinated range (say, E). Next() returns E.

                                         history
                          next           |
                          head           |
                          |              |
                D    E    C    A    B
              <-------->
           determinated range

The playlist calls Next() one more time. The randomizer selects C (already in place). Next() returns C.

                                         history
                               next      |
                               head      |
                               |         |
                D    E    C    A    B
              <------------->
            determinated range

The playlist then calls Prev(). Since the “current” item is C, the previous one is E, so Prev() returns E, and next moves back.

                                         history
                          next           |
                          |    head      |
                          |    |         |
                D    E    C    A    B
              <------------->
            determinated range

The playlist calls Next(), which returns C, as expected.

                                         history
                               next      |
                               head      |
                               |         |
                D    E    C    A    B
              <------------->
            determinated range

The playlist calls Next(), the randomizer selects B, and returns it.

                                         history
                                    next |
                                    head |
                                    |    |
                D    E    C    B    A
              <------------------>
               determinated range

The playlist calls Next(), the randomizer selects the last item (it has no choice). next and head now point one item past the end (their value is the vector size).

                                         history
                                         next
                                         head
                                         |
                D    E    C    B    A
              <----------------------->
                 determinated range

At this point, if loop is disabled, it is not possible to call Next() anymore (HasNext() returns false). So let’s enable it by calling SetLoop(), then let’s call Next() again.

This will start a new loop cycle. Firstly, next and head are reset, and the whole vector belongs to the last cycle history.

                 history
                 next
                 head
                 |
                 D    E    C    B    A
              <------------------------>
                    history range

Secondly, to avoid selecting A twice in a row (as the last item of the previous cycle and the first item of the new one), the randomizer will immediately determine another item in the vector (say C) to be the first of the new cycle. The items that belong to the history are kept in order. head and history move forward.

                     history
                next |
                |    head
                |    |
                C    D    E    B    A
              <---><------------------>
      determinated     history range
             range

Finally, it will actually select and return the first item (C).

                     history
                     next
                     head
                     |
                C    D    E    B    A
              <---><------------------>
      determinated     history range
             range

Then, the user adds an item to the playlist (F). This item is added in front of history.

                          history
                     next |
                     head |
                     |    |
                C    F    D    E    B    A
              <--->     <------------------>
      determinated          history range
             range

The playlist calls Next(), the randomizer randomly selects E. E “disappears” from the history of the last cycle. This is a general property: each item may not appear more than once in the “history” (both from the last and the new cycle). The history order is preserved.

                               history
                          next |
                          head |
                          |    |
                C    E    F    D    B    A
              <-------->     <-------------->
             determinated     history range
                range

The playlist then calls Prev() 3 times, which yields C, then A, then B. next is decremented (modulo size) on each call.

                               history
                               |    next
                          head |    |
                          |    |    |
                C    E    F    D    B    A
              <-------->     <-------------->
             determinated     history range
                range

Hopefully, the resulting randomness will match what people expect in practice.

Sorting

The playlist can be sorted by an ordered list of criteria (each one a key and an order).

The implementation is complicated by the fact that item metadata can change asynchronously (for example, while the player is parsing the media), which would make the comparison function inconsistent.

To avoid the problem, a first pass builds a list of metadata for all items, then this list is sorted, and finally the resulting order is applied back to the playlist.

As a benefit, the items are locked only once to retrieve their metadata.
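
A sketch of the three passes, sorting by title (the helpers for reading metadata and applying the order are hypothetical):

#include <stdlib.h>
#include <string.h>

struct sort_entry {
    vlc_playlist_item_t *item;
    char *title; // private copy: cannot change during the sort
};

// hypothetical helpers
static void snapshot_titles(vlc_playlist_t *, struct sort_entry *, size_t);
static void apply_order(vlc_playlist_t *, struct sort_entry *, size_t);

static int
cmp_title(const void *lhs, const void *rhs)
{
    const struct sort_entry *a = lhs, *b = rhs;
    return strcmp(a->title, b->title);
}

static void
sort_by_title(vlc_playlist_t *playlist,
              struct sort_entry *entries, size_t count)
{
    // pass 1: fill entries[i].title in one locked traversal
    snapshot_titles(playlist, entries, count);
    // pass 2: sort the snapshot, with a consistent comparison function
    qsort(entries, count, sizeof(*entries), cmp_title);
    // pass 3: apply the resulting order back to the playlist
    apply_order(playlist, entries, count);
}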

Interaction with the player

For VLC 4, Thomas wrote a new player API.

A player can be used without a playlist: we can set its current media and the player can request, when necessary, the next media to play from a media provider.

A playlist, on the other hand, needs a player, and registers itself as its media provider. They are tightly coupled:

  • the playlist assigns the current media to the player;
  • the player requests its next media from the playlist.

To keep them synchronized:

  • the playlist listens to player events, to expose the current playing item;
  • user actions on the playlist (play, next, previous…) drive the player.

This poses a lock-order inversion problem: for example, if thread A locks the playlist then waits for the player lock, while thread B locks the player then waits for the playlist lock, then thread A and B are deadlocked.

To avoid the problem, the player and the playlist share the same lock. Concretely, vlc_playlist_Lock() delegates to vlc_player_Lock(). In practice, the lock should be held only for short periods of time.
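
A sketch of the delegation, from inside the playlist implementation:

void
vlc_playlist_Lock(vlc_playlist_t *playlist)
{
    // one lock for both objects: no lock-order inversion possible
    vlc_player_Lock(playlist->player);
}

void
vlc_playlist_Unlock(vlc_playlist_t *playlist)
{
    vlc_player_Unlock(playlist->player);
}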

Media source

A separate API (media source and media tree) was necessary to expose what is called services discovery (used to detect media from various sources like Samba or MTP), which was previously managed by the old playlist.

Thus, we could kill the old playlist.

Conclusion

The new playlist and player APIs should help implement UIs properly (spoiler: a new modern UI is being developed), avoid racy bugs, and implement new features (spoiler: gapless playback).

Discuss on reddit and Hacker News.


  [1] Actually, the Android app may continue to implement its own playlist in Java/Kotlin, to avoid additional layers (Java/JNI and LibVLC).

  [2] Even in the case where a core playlist callback is executed from the UI thread, the event must be posted to the event queue, to avoid breaking the order. Concretely, in Qt, this means connecting signals to slots using Qt::QueuedConnection instead of the default Qt::AutoConnection.

  [3] We use next instead of current so that all indexes are unsigned, while current could be -1.

Implementing tile encoding in rav1e

Thursday, April 25, 2019 at 10:15

During the last few months at Videolabs, I added support for tile encoding in rav1e (a Rust AV1 Encoder).

What is this?

AV1 is an open and royalty-free video coding format, competing with HEVC (H.265).

Rav1e is an encoder written in Rust, developed by Mozilla/Xiph. As such, it takes an input video and encodes it to produce a valid AV1 bitstream.

Tile encoding

Tile encoding consists of splitting video frames into tiles that can be encoded and decoded independently in parallel (to use several CPUs), at the cost of a small loss in compression efficiency.

This speeds up encoding and increases decoding frame rate.

[image: 8 tiles (4 columns × 2 rows)]

Preliminary work

To prepare for tiling, some refactoring was necessary.

A frame contains 3 planes (one for each YUV component, possibly subsampled). Each plane is stored in a contiguous array, row after row.

To illustrate it, here is a mini-plane containing 6×3 pixels. Padding is added for alignment (and other details), so its physical size is 8×4 pixels:

[image: a 6×3 plane, padded to a physical size of 8×4]

In memory, it is stored in a single array:

[image: the plane stored in a single contiguous array]

The number of array items separating one pixel from the one below is called the stride. Here, the stride is 8.

The encoder often needs to process rectangular regions. For that purpose, many functions received a slice of the plane array and the stride value:

pub fn write_forty_two(slice: &mut [u16], stride: usize) {
  for y in 0..2 {
    for x in 0..4 {
      slice[y * stride + x] = 42;
    }
  }
}

This works fine, but the plane slice spans multiple rows.

Let’s split our planes into 4 tiles (2 columns × 2 rows):

[image: the plane split into 4 tiles (2 columns × 2 rows)]

In memory, the resulting plane regions are not contiguous:

[image: the resulting plane regions, non-contiguous in memory]

In Rust, it is not sufficient to merely avoid reading/writing the same memory from several threads; it must be impossible to write (safe) code that could do it. More precisely, a mutable reference may not alias any other reference to the same memory.

As a consequence, passing a mutable slice (&mut [u16]) spanning multiple rows is incompatible with tiling. Instead, we need some structure, implemented with unsafe code, providing a view of the authorized region of the underlying plane.

As a first step, I replaced every piece of code which used a raw slice and the stride value by the existing PlaneSlice and PlaneMutSlice structures (which first required making planes generic, after improving the Pixel trait).

After these changes, our function could be rewritten as follows:

pub fn write_forty_two<T: Pixel>(slice: &mut PlaneMutSlice<'_, T>) {
  for y in 0..2 {
    for x in 0..4 {
      slice[y][x] = 42;
    }
  }
}

Tiling structures

So now, all the code using a raw slice and a stride value has been replaced. But if we look at the definition of PlaneMutSlice, we see that it still borrows the whole plane:

pub struct PlaneMutSlice<'a, T: Pixel> {
  pub plane: &'a mut Plane<T>,
  pub x: isize,
  pub y: isize
}

So the refactoring, in itself, does not solve the problem.

What is needed now is a structure that exposes a bounded region of the plane.

Minimal example

For illustration purposes, let’s consider a minimal example solving a similar problem: splitting a matrix into columns.

[image: a matrix to be split into columns]

In memory, the matrix is stored in a single array:

[image: the matrix stored in a single array, with columns interlaced]

To do so, let’s define a ColumnMut type, and split the raw array into columns:

use std::marker::PhantomData;
use std::ops::{Index, IndexMut};

pub struct ColumnMut<'a> {
    data: *mut u8,
    cols: usize,
    rows: usize,
    phantom: PhantomData<&'a mut u8>,
}

impl Index<usize> for ColumnMut<'_> {
    type Output = u8;
    fn index(&self, index: usize) -> &Self::Output {
        assert!(index < self.rows);
        unsafe { &*self.data.add(index * self.cols) }
    }
}

impl IndexMut<usize> for ColumnMut<'_> {
    fn index_mut(&mut self, index: usize) -> &mut Self::Output {
        assert!(index < self.rows);
        unsafe { &mut *self.data.add(index * self.cols) }
    }
}

pub fn columns(
    slice: &mut [u8],
    cols: usize,
) -> impl Iterator<Item = ColumnMut<'_>> {
    assert!(slice.len() % cols == 0);
    let rows = slice.len() / cols;
    (0..cols).map(move |col| ColumnMut {
        data: &mut slice[col],
        cols,
        rows,
        phantom: PhantomData,
    })
}

The PhantomData is necessary to bind the lifetime (in practice, when we store a raw pointer, we often need a PhantomData).

We implemented Index and IndexMut traits to provide operator []:

// via Index trait
let value = column[y];

// via IndexMut trait
column[y] = value;

The iterator returned by columns() yields a different column every time, so the borrowing rules are respected.

Now, we can read from and write to a matrix via temporary column views:

fn main() {
    let mut data = [1, 5, 3, 2,
                    4, 2, 1, 7,
                    0, 0, 0, 0];

    // for each column, write the sum
    columns(&mut data, 4).for_each(|mut col| col[2] = col[0] + col[1]);

    assert_eq!(data, [1, 5, 3, 2,
                      4, 2, 1, 7,
                      5, 7, 4, 9]);
}

Even if the columns are interlaced in memory, from a ColumnMut instance, it is not possible to access data belonging to another column.

Note that cols and rows fields must be kept private, otherwise they could be changed from safe code in such a way that breaks boundaries and violates borrowing rules.

In rav1e

A plane is split in a similar way, except that it provides plane regions instead of columns.

The split is recursive. For example, a Frame contains 3 Planes, so a Tile contains 3 PlaneRegions, using the same underlying memory.

In practice, more structures related to the encoding state are split into tiles, provided both in const and mut versions, so there is a whole hierarchy of tiling structures:

 +- FrameState → TileState
 |  +- Frame → Tile
 |  |  +- Plane → PlaneRegion
 |  +  RestorationState → TileRestorationState
 |  |  +- RestorationPlane → TileRestorationPlane
 |  |     +- FrameRestorationUnits → TileRestorationUnits
 |  +  FrameMotionVectors → TileMotionVectors
 +- FrameBlocks → TileBlocks

The split is done by a separate component (see tiler.rs), which yields a tile context containing an instance of the hierarchy of tiling views for each tile.

Relative offsets

A priori, there are mainly two possibilities to express offsets during tile encoding:

  • offsets relative to the current tile;
  • offsets relative to the whole frame.

The usage of tiling views strongly favors the first choice. For example, it would be confusing if a bounded region could not be indexed from 0:

// region starting at (64, 64)
let row = &region[0]; // panic, out-of-bounds
let row = &region[64]; // ok :-/

Worse, this would not be possible at all for the second dimension:

// region starting at (64, 64)
let first_row = &region[64];
let first_column = row[64]; // wrong, a raw slice necessarily starts at 0

Therefore, offsets used in tiling views are relative to the tile (contrary to libaom and AV1 specification).

Tile encoding

Encoding a frame first involves frame-wise accesses (initialization), then tile-wise accesses (to encode tiles in parallel), then frame-wise accesses using the results of tile-encoding (deblocking, CDEF, loop restoration, …).

All the frame-level structures have been replaced by tiling views where necessary.

The tiling views exist only temporarily, during the calls to encode_tile(). While they are alive, it is not possible to access frame-level structures (the borrow checker statically prevents it).

Then the tiling structures vanish, and frame-level processing can continue.

This schema gives an overview:

                                \
      +----------------+         |
      |                |         |
      |                |         |  Frame-wise accesses
      |                |          >
      |                |         |   - FrameState<T>
      |                |         |   - Frame<T>
      +----------------+         |   - Plane<T>
                                /    - ...

              ||   tiling views
              \/
                                \
  +---+  +---+  +---+  +---+     |
  |   |  |   |  |   |  |   |     |  Tile encoding (possibly in parallel)
  +---+  +---+  +---+  +---+     |
                                 |
  +---+  +---+  +---+  +---+     |  Tile-wise accesses
  |   |  |   |  |   |  |   |      >
  +---+  +---+  +---+  +---+     |   - TileStateMut<'_, T>
                                 |   - TileMut<'_, T>
  +---+  +---+  +---+  +---+     |   - PlaneRegionMut<'_, T>
  |   |  |   |  |   |  |   |     |
  +---+  +---+  +---+  +---+     |
                                /

              ||   vanishing of tiling views
              \/
                                \
      +----------------+         |
      |                |         |
      |                |         |  Frame-wise accesses
      |                |          >
      |                |         |  (deblocking, CDEF, ...)
      |                |         |
      +----------------+         |
                                /

Command-line

To enable tile encoding, parameters have been added to pass the (log2) number of tiles --tile-cols-log2 and --tile-rows-log2. For example, to request 2x2 tiles:

rav1e video.y4m -o video.ivf --tile-cols-log2 1 --tile-rows-log2 1

Currently, we need to pass the log2 of the number of tiles (as in libaom, even though the aomenc options are called --tile-columns and --tile-rows), to avoid any confusion. Maybe we could later find a better option scheme which is both correct, non-confusing and user-friendly.

Bitstream

Now that we can encode tiles, we must write them according to the AV1 bitstream specification, so that decoders can read the resulting file correctly.

Before tile encoding (i.e. with a single tile), rav1e produced a correct bitstream. Several changes were necessary to write multiple tiles.

Tile info

According to Tile info syntax, the frame header signals the number of columns and rows of tiles (it always signaled a single tile before).

In addition, when there are several tiles, it signals two more values, described below.

CDF update

For entropy coding, the encoder maintains and updates a CDF (Cumulative Distribution Function), representing the probabilities of symbols.

After a frame is encoded, the current CDF state is saved to be possibly used as a starting state for future frames.

But with tile encoding, each tile finishes with its own CDF state, so which one should we associate with the reference frame? The answer is: any of them. But we must signal the one we choose, in context_update_tile_id; the decoder needs it to decode the bitstream.

In practice, we keep the CDF from the biggest tile.

Size of tile sizes

The size of an encoded tile, in bytes, is variable (of course). Therefore, we will need to signal the size of each tile.

To gain a few bytes, the number of bytes used to store the size itself is also variable, and signaled by 2 bits in the frame header (tile_size_bytes_minus_1).

Concretely, we must choose the smallest size that is sufficient to encode all the tile sizes for the frame.
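
In code, that choice could look like this (a sketch in C for illustration, even though rav1e is written in Rust):

#include <stdint.h>

// number of bytes needed to write (max_tile_size - 1) in little
// endian, with max_tile_size >= 1; (result - 1) is signaled as
// tile_size_bytes_minus_1
static int tile_size_bytes(uint32_t max_tile_size)
{
    uint32_t value = max_tile_size - 1;
    int bytes = 1;
    while (value >= 0x100) {
        value >>= 8;
        bytes++;
    }
    return bytes;
}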

Tile group

According to General tile group OBU syntax, we need to signal two values when there is more than 1 tile:

  • the first and last tile of the tile group (tg_start and tg_end);
  • the size (in bytes) of each tile, except the last one.

The tile size (minus 1) is written in little endian, using the number of bytes signaled in the frame header.

That’s all. This is sufficient to produce a correct bitstream with multiple tiles.

Parallelization

Thanks to Rayon, parallelizing tile encoding is as easy as replacing iter_mut() by par_iter_mut().

I ran several encodings on my laptop (8 CPUs) to compare encoding performance (this is not a rigorous benchmark, but it gives an idea; you are encouraged to run your own tests). Here are the results:

 tiles     time      speedup
   1    7mn02,336s    1.00×
   2    3mn53,578s    1.81×
   4    2mn12,995s    3.05×
   8*   1mn57,533s    3.59×

Speedups are quite good for 2 and 4 tiles.

*The reason why the speedup is lower than expected for 8 tiles is that my CPU has actually only 4 physical cores. See this reddit comment and this other one.

Limits

Why not 2×, 4× and 8× speedup? Mainly because of Amdahl’s law.

Tile encoding parallelizes only a part of the whole process: there is still single-threaded processing at the frame level.

Suppose that a proportion p (between 0 and 1) of a given task can be parallelized. Then its theoretical speedup is 1 / ((p/n) + (1-p)), where n is the number of threads.

 tiles   speedup   speedup   speedup
         (p=0.9)   (p=0.95)  (p=0.98)
   2      1.82×     1.90×     1.96×
   4      3.07×     3.48×     3.77×
   8      4.71×     5.93×     7.02×
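
These values follow directly from the formula; a few lines of C reproduce the table:

#include <stdio.h>

int main(void)
{
    // theoretical speedup: 1 / ((p / n) + (1 - p))
    const double p[] = { 0.9, 0.95, 0.98 };
    for (int n = 2; n <= 8; n *= 2)
        for (int i = 0; i < 3; i++)
            printf("%d tiles, p=%.2f: %.2fx\n",
                   n, p[i], 1. / (p[i] / n + (1. - p[i])));
    return 0;
}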

Maybe counterintuitively, to increase the speedup brought by parallelization, the non-parallelized code must be optimized (the more threads are used, the larger the share of the total time the non-parallelized code represents).

The (not-so-reliable) benchmark results for 2 and 4 tiles suggest that tile encoding represents ~90% of the whole encoding process.

Fixing bugs

Not everything worked the first time.

The most common source of errors while implementing tile encoding was related to offsets.

When there was only one tile, all offsets were relative to the frame. With several tiles, some offsets are relative to the current tile, but some others are still relative to the whole frame. For example, during motion estimation, a motion vector can point outside tile boundaries in the reference frame, so we must take care to convert offsets accordingly.

The most obvious errors were caught by plane regions (which prevent access outside boundaries), but some others were more subtle.

Such errors could produce interesting images. For example, here is a screenshot of my first tiled video:

[screenshot: a visually corrupted frame from my first tiled video]

One of these offset confusions was quickly caught by barrbrain in intra-prediction. I then fixed a similar problem in inter-prediction.

But the final boss bug was way more sneaky: it corrupted the bitstream (so the decoder was unable to decode it), but not always, and never for the first frame. When an inter-frame could be decoded, it was sometimes visually corrupted, but only for some videos and for some encoding parameters.

After more than one week of investigations, I finally found it. \o/

Conclusion

Implementing this feature was an awesome journey. I learned a lot, both about Rust and video encoding (I didn’t even know what a tile was before I started).

Big thanks to the Mozilla/Xiph/Daala team, who has been very welcoming and helpful, and who does amazing work!

Discuss on r/programming, r/rust, r/AV1 and Hacker News.

Introducing scrcpy

Thursday, March 8, 2018 at 12:00

I developed an application to display and control Android devices connected over USB. It does not require any root access. It works on GNU/Linux, Windows and Mac OS.

[screenshot: scrcpy mirroring an Android device]

It focuses on:

  • lightness (native, displays only the device screen);
  • performance (30~60 fps);
  • quality (1920×1080 or above);
  • low latency (35~70 ms);
  • low startup time (~1 second to display the first image);
  • non-intrusiveness (nothing is left installed on the device).

As with my previous project, gnirehtet, Genymobile agreed to open source it: scrcpy.

You can build, install and run it.

How does scrcpy work?

The application executes a server on the device. The client and the server communicate via a socket over an adb tunnel.

The server streams an H.264 video of the device screen. The client decodes the video frames and displays them.

The client captures input (keyboard and mouse) events, and sends them to the server, which injects them into the device.

The documentation gives more details.

Here, I will detail several technical aspects of the application likely to interest developers.

Minimize latency

No buffering

It takes time to encode, transmit and decode the video stream. To minimize latency, we must avoid any additional delay.

For example, let’s stream the screen with screenrecord and play it with VLC:

adb exec-out screenrecord --output-format=h264 - | vlc - --demux h264

Initially, it works, but quickly the latency increases and frames are broken. The reason is that VLC associates a PTS to frames, and buffers the stream to play frames at some target time.

As a consequence, it sometimes prints such errors on stderr:

ES_OUT_SET_(GROUP_)PCR  is called too late (pts_delay increased to 300 ms)

Just before I started the project, Philippe, a colleague who played with WebRTC, advised me to “manually” decode (using FFmpeg) and render frames, to avoid any additional latency. This saved me from wasting time, it was the right solution.

Decoding the video stream to retrieve individual frames with FFmpeg is rather straightforward.

Skip frames

If, for any reason, the rendering is delayed, decoded frames are dropped so that scrcpy always displays the last decoded frame.

Note that this behavior may be changed with a configuration flag:

mesonconf x -Dskip_frames=false

Run a Java main on Android

Capturing the device screen requires some privileges, which are granted to the shell user.

It is possible to execute Java code as shell on Android, by invoking app_process from adb shell.

Hello, world!

Here is a simple Java application:

public class HelloWorld {
    public static void main(String... args) {
        System.out.println("Hello, world!");
    }
}

Let’s compile and dex it:

javac -source 1.7 -target 1.7 HelloWorld.java
"$ANDROID_HOME"/build-tools/27.0.2/dx \
    --dex --output classes.dex HelloWorld.class

Then, we push classes.dex to an Android device:

adb push classes.dex /data/local/tmp/

And execute it:

$ adb shell CLASSPATH=/data/local/tmp/classes.dex app_process / HelloWorld
Hello, world!

Access the Android framework

The application can access the Android framework at runtime.

For example, let’s use android.os.SystemClock:

import android.os.SystemClock;

public class HelloWorld {
    public static void main(String... args) {
        System.out.print("Hello,");
        SystemClock.sleep(1000);
        System.out.println(" world!");
    }
}

We link our class against android.jar:

javac -source 1.7 -target 1.7 \
    -cp "$ANDROID_HOME"/platforms/android-27/android.jar \
    HelloWorld.java

Then run it as before.

Note that scrcpy also needs to access hidden methods from the framework. In that case, linking against android.jar is not sufficient, so it uses reflection.

Like an APK

The execution also works if classes.dex is embedded in a zip/jar:

jar cvf hello.jar classes.dex
adb push hello.jar /data/local/tmp/
adb shell CLASSPATH=/data/local/tmp/hello.jar app_process / HelloWorld

Do you know an example of a zip containing classes.dex? An APK!

Therefore, it works for any installed APK containing a class with a main method:

$ adb install myapp.apk
…
$ adb shell pm path my.app.package
package:/data/app/my.app.package-1/base.apk
$ adb shell CLASSPATH=/data/app/my.app.package-1/base.apk \
    app_process / HelloWorld

In scrcpy

To simplify the build system, I decided to build the server as an APK using gradle, even if it’s not a real Android application: gradle provides tasks for running tests, checking style, etc.

Invoked that way, the server is authorized to capture the device screen.

Improve startup time

Quick installation

The user does not need to install anything on the device: at startup, the client is responsible for executing the server on the device.

We saw that we can execute the main method of the server from an APK either:

  • installed on the device (adb install);
  • pushed to /data/local/tmp (adb push) and passed to app_process via CLASSPATH.

Which one to choose?

$ time adb install server.apk
…
real    0m0,963s
…

$ time adb push server.apk /data/local/tmp/
…
real    0m0,022s
…

So I decided to push.

Note that /data/local/tmp is readable and writable by shell, but not world-writable, so a malicious application cannot replace the server just before the client executes it.

Parallelization

If you executed the Hello, world! example in the previous section, you may have noticed that running app_process takes some time: Hello, world! is not printed until after some delay (between 0.5 and 1 second).

In the client, initializing SDL also takes some time.

Therefore, these initialization steps have been parallelized.

Clean up the device

After usage, we want to remove the server (/data/local/tmp/scrcpy-server.jar) from the device.

We could remove it on exit, but then, it would be left on device disconnection.

Instead, once the server is opened by app_process, scrcpy unlinks (rm) it. Thus, the file is present for less than 1 second (it is removed even before the screen is displayed).

The file itself (not its name) is actually removed when the last associated open file descriptor is closed (at the latest, when app_process dies).
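
This relies on standard Unix file semantics, which a few lines of C illustrate (a sketch, not the actual scrcpy code):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/data/local/tmp/scrcpy-server.jar", O_RDONLY);
    // removing the name makes the file invisible in the directory…
    unlink("/data/local/tmp/scrcpy-server.jar");
    // …but its content remains readable through fd until the last
    // open file descriptor is closed (at the latest, on process death)
    close(fd);
    return 0;
}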

Handle text input

Handling input received from a keyboard is more complicated than I thought.

Events

There are 2 kinds of “keyboard” events:

  • key events (a key is pressed or released);
  • text input events (some text has been entered).

Key events provide both the scancode (the physical location of a key on the keyboard) and the keycode (which depends on the keyboard layout). Only keycodes are used by scrcpy (it doesn’t need the location of physical keys).

However, key events are not sufficient to handle text input:

Sometimes it can take multiple key presses to produce a character. Sometimes a single key press can produce multiple characters.

Even simple characters may not be handled easily with key events, since they depend on the layout. For example, on a French keyboard, typing . (dot) generates Shift+;.

Therefore, scrcpy forwards key events to the device only for a limited set of keys. The rest is handled as text input events.

Inject text

On the Android side, we cannot inject text directly (injecting a KeyEvent created by the relevant constructor does not work). Instead, we can retrieve a list of KeyEvents to generate for a char[], using getEvents(char[]).

For example:

char[] chars = {'?'};
KeyEvent[] events = charMap.getEvents(chars);

Here, events is initialized with an array of 4 events:

  1. press KEYCODE_SHIFT_LEFT
  2. press KEYCODE_SLASH
  3. release KEYCODE_SLASH
  4. release KEYCODE_SHIFT_LEFT

Injecting those events correctly generates the char '?'.

Handle accented characters

Unfortunately, the previous method only works for ASCII characters:

char[] chars = {'é'};
KeyEvent[] events = charMap.getEvents(chars);
// events is null!!!

I first thought there was no way to inject such events from there, until I discussed with Philippe (yes, the same as earlier), who knew the solution: it works when we decompose the characters using combining diacritical dead key characters.

Concretely, instead of injecting "é", we inject "\u0301e":

char[] chars = {'\u0301', 'e'};
KeyEvent[] events = charMap.getEvents(chars);
// now, there are events

Therefore, to support accented characters, scrcpy attempts to decompose the characters using KeyComposition.

Set a window icon

The application window may have an icon, used in the title bar (for some desktop environments) and/or in the desktop taskbar.

The window icon must be set from an SDL_Surface by SDL_SetWindowIcon. Creating the surface with the icon content is up to the developer. For example, we could decide to load the icon from a PNG file, or directly from its raw pixels in memory.

Instead, another colleague, Aurélien, suggested I use the XPM image format, which is also a valid C source code: icon.xpm.

Note that the image is not the content of the variable icon_xpm declared in icon.xpm: it’s the whole file! Thus, icon.xpm may be both directly opened in Gimp and included in C source code:

#include "icon.xpm"

As a benefit, we directly “recognize” the icon from the source code, and we can patch it easily: in debug mode, the icon color is changed.

Conclusion

Developing this project was an awesome and motivating experience. I’ve learned a lot (I never used SDL or libav/FFmpeg before).

The resulting application works better than I initially expected, and I’m happy to have been able to open source it.

Discuss on reddit and Hacker News.