This repository has been archived by the owner on Mar 19, 2021. It is now read-only.

Headless stream #278

Draft · ritiek wants to merge 2 commits into master from headless-stream
Conversation

@ritiek (Contributor) commented Jul 17, 2020

What platform does your feature request apply to?
Linux (and maybe Windows)

Is your feature request related to a problem? Please describe.
I want to stream only the audio from my PS4 to my headless Raspberry Pi using Chiaki. This will allow me to listen to real-time audio from my PS4 from the external speakers that are connected to my Raspberry Pi, which would be very cool!

Describe the solution you'd like
I want to be able to run something like this on my Raspberry Pi, which would write the audio data to STDOUT so that other tools like ffplay or mpv can play the stream.

$ chiaki-cli audiostream --host=192.168.1.2 --registkey=123abc12 --morning=abcdABCDabcdABCDabcdAB== | ffplay -

This feature would also let users redirect the audio output to a file, making it possible to record audio from the PS4.

Describe alternatives you've considered
It is already possible to listen to the audio stream by running the Chiaki GUI, but this requires an X server to be available and consumes unnecessarily more resources, since the video stream from the PS4 is decoded and displayed too.

Additional context
I've tried to implement something like this in this draft PR myself and I think I'm pretty close. To try out this draft PR:

The option to build the CLI needs to be set to ON in:

option(CHIAKI_ENABLE_CLI "Enable CLI for Chiaki" OFF)
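
Alternatively, the option can be flipped at configure time instead of editing the file (this is standard CMake usage, not something specific to this PR):

$ cmake -DCHIAKI_ENABLE_CLI=ON ..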

Then compile and run:

$ chiaki-cli audiostream --host=192.168.1.2 --registkey=123abc12 --morning=abcdABCDabcdABCDabcdAB==

This should write what I believe is the Opus-encoded audio to stderr. It writes to stderr because currently all the Chiaki log output is written to stdout, so the two would mix if we wrote the audio data to stdout too (it would be a good idea to write logs to stderr instead, which AFAIK is the convention, but that is a thing for another day).
However, when I pipe this audio output from stderr to ffplay or mpv (the 2>&1 > /dev/null routes stderr into the pipe and discards stdout) with:

$ chiaki-cli audiostream --host=192.168.1.2 --registkey=123abc12 --morning=abcdABCDabcdABCDabcdAB== 2>&1 > /dev/null | ffplay -

The player fails to recognize it as valid audio data and nothing can be heard. I'm stuck here and need help understanding why the player doesn't recognize and play the received audio data.

Also, I'm a novice in C and low-level stuff, so there is a good chance this draft PR segfaults in places or allocates unnecessary memory.

cli/src/main.c Outdated
@@ -28,7 +28,8 @@ static const char doc[] =
 	"\v"
 	"Supported commands are:\n"
 	"  discover     Discover Consoles.\n"
-	"  wakeup       Send Wakeup Packet.\n";
+	"  wakeup       Send Wakeup Packet.\n"
+	"  audiostream  Fetch the audiostream.\n";
@thestr4ng3r (Owner):

IMO this should just be called "stream", since you might also want to get video and pipe it somewhere.

@ritiek (Contributor, Author):

I've been thinking about this.

If it is possible to fetch an audio-only stream from the PS4 (meaning the PS4 doesn't deliver the video stream at all, only the audio stream), then IMO it makes some sense to have separate audiostream and stream commands, because then the audiostream command won't download the additional video data only to discard it later.

If the PS4 force-sends us the video stream even when we only need the audio stream, then maybe we should have only the stream command like you said, which would deliver us both video and audio.

@thestr4ng3r (Owner):

I think that even if there were an option to get only an audio stream (which as far as I know there is not, unfortunately), it would still make more sense to have both things under a single stream command and switch with an additional option like --audio-only or something, since both operations would do very similar things for the most part.
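
Hypothetically, usage would then look something like this (--audio-only is just the name suggested above, not an implemented flag):

$ chiaki-cli stream --audio-only --host=192.168.1.2 --registkey=123abc12 --morning=abcdABCDabcdABCDabcdAB== | ffplay -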

chiaki_opus_decoder_set_cb(&opus_decoder, NULL, AudioFrameCb, NULL);

ChiakiAudioSink audio_sink;
chiaki_opus_decoder_get_sink(&opus_decoder, &audio_sink);
@thestr4ng3r (Owner):

Instead of using ChiakiOpusDecoder, which would decode the Opus itself, you will want to implement the ChiakiAudioSink interface yourself and then output the raw Opus data in ChiakiAudioSink.frame_cb. You might also have to craft a header for Ogg or something in ChiakiAudioSink.header_cb, so the reading program will be able to determine how to deal with the data.
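
A minimal sketch of that suggestion; the callback signatures and the way the sink is attached to the session are assumptions based on the names mentioned here, not checked against the actual headers:

// Hypothetical sketch: pass raw Opus frames through instead of decoding them.
// The exact signatures are assumed; see chiaki/audio.h for the real interface.
static void RawHeaderCb(ChiakiAudioHeader *header, void *user)
{
	// This is where a container header (e.g. Ogg) would be crafted and
	// written, so the reading program knows how to interpret the frames.
}

static void RawFrameCb(uint8_t *buf, size_t buf_size, void *user)
{
	fwrite(buf, 1, buf_size, stderr); // one raw Opus packet, unmodified
}

// During setup, instead of going through ChiakiOpusDecoder:
ChiakiAudioSink audio_sink;
audio_sink.user = NULL;
audio_sink.header_cb = RawHeaderCb;
audio_sink.frame_cb = RawFrameCb;
chiaki_session_set_audio_sink(&session, &audio_sink); // assumed setter name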

@ritiek (Contributor, Author):

Alright, thanks! I'll give this a shot.

@ritiek (Contributor, Author):

I can't seem to properly craft the audio stream header. It seems like we already craft the header for the audio stream in the Android build here:

uint8_t opus_id_head[0x13];
memcpy(opus_id_head, "OpusHead", 8);
opus_id_head[0x8] = 1; // version
opus_id_head[0x9] = header->channels;
uint16_t pre_skip = 3840;
opus_id_head[0xa] = (uint8_t)(pre_skip & 0xff);
opus_id_head[0xb] = (uint8_t)(pre_skip >> 8);
opus_id_head[0xc] = (uint8_t)(header->rate & 0xff);
opus_id_head[0xd] = (uint8_t)((header->rate >> 0x8) & 0xff);
opus_id_head[0xe] = (uint8_t)((header->rate >> 0x10) & 0xff);
opus_id_head[0xf] = (uint8_t)(header->rate >> 0x18);
uint16_t output_gain = 0;
opus_id_head[0x10] = (uint8_t)(output_gain & 0xff);
opus_id_head[0x11] = (uint8_t)(output_gain >> 8);
opus_id_head[0x12] = 0; // channel map
//AMediaFormat_setBuffer(format, AMEDIAFORMAT_KEY_CSD_0, opus_id_head, sizeof(opus_id_head));
android_chiaki_audio_decoder_frame(opus_id_head, sizeof(opus_id_head), decoder);
uint64_t pre_skip_ns = 0;
uint8_t csd1[8] = { (uint8_t)(pre_skip_ns & 0xff), (uint8_t)((pre_skip_ns >> 0x8) & 0xff), (uint8_t)((pre_skip_ns >> 0x10) & 0xff), (uint8_t)((pre_skip_ns >> 0x18) & 0xff),
(uint8_t)((pre_skip_ns >> 0x20) & 0xff), (uint8_t)((pre_skip_ns >> 0x28) & 0xff), (uint8_t)((pre_skip_ns >> 0x30) & 0xff), (uint8_t)(pre_skip_ns >> 0x38)};
android_chiaki_audio_decoder_frame(csd1, sizeof(csd1), decoder);
uint64_t pre_roll_ns = 0;
uint8_t csd2[8] = { (uint8_t)(pre_roll_ns & 0xff), (uint8_t)((pre_roll_ns >> 0x8) & 0xff), (uint8_t)((pre_roll_ns >> 0x10) & 0xff), (uint8_t)((pre_roll_ns >> 0x18) & 0xff),
(uint8_t)((pre_roll_ns >> 0x20) & 0xff), (uint8_t)((pre_roll_ns >> 0x28) & 0xff), (uint8_t)((pre_roll_ns >> 0x30) & 0xff), (uint8_t)(pre_roll_ns >> 0x38)};
android_chiaki_audio_decoder_frame(csd2, sizeof(csd2), decoder);

So I mostly copy-pasted the same thing here, but I still can't get the audio working through pipes.
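
For what it's worth, the missing piece may be that the OpusHead blob above is only the first packet of an Ogg Opus stream: players like ffplay expect every packet (OpusHead, an OpusTags packet, then the audio frames) to be framed into Ogg pages, since raw Opus packets carry no lengths or boundaries of their own. A rough sketch using libogg, assuming 48 kHz audio and that adding libogg as a dependency is acceptable:

#include <ogg/ogg.h>
#include <stdio.h>
#include <string.h>

static ogg_stream_state ogg_mux;
static ogg_int64_t ogg_packetno = 0;
static ogg_int64_t ogg_granulepos = 0;

static void ogg_write_pages(int flush)
{
	ogg_page og;
	while(flush ? ogg_stream_flush(&ogg_mux, &og) : ogg_stream_pageout(&ogg_mux, &og))
	{
		fwrite(og.header, 1, og.header_len, stdout);
		fwrite(og.body, 1, og.body_len, stdout);
	}
}

// Feed one packet into the muxer. Call ogg_stream_init(&ogg_mux, some_serial)
// once, then send OpusHead and an OpusTags packet with flush=1 (each header
// must end its own page), then every Opus frame with its sample count at 48 kHz.
static void ogg_put_packet(uint8_t *buf, size_t buf_size, size_t samples, int flush)
{
	ogg_packet op;
	memset(&op, 0, sizeof(op));
	op.packet = buf;
	op.bytes = buf_size;
	op.b_o_s = (ogg_packetno == 0); // first packet marks beginning of stream
	ogg_granulepos += samples;      // 0 for header packets
	op.granulepos = ogg_granulepos;
	op.packetno = ogg_packetno++;
	ogg_stream_packetin(&ogg_mux, &op);
	ogg_write_pages(flush);
}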


static void AudioFrameCb(int16_t *buf, size_t samples_count, void *user)
{
	fprintf(stderr, "%s", (char *)buf);
@thestr4ng3r (Owner):

This does not work: %s is for zero-terminated strings. You will want to use fwrite instead.
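
i.e., roughly the following, assuming samples_count counts individual int16_t values:

static void AudioFrameCb(int16_t *buf, size_t samples_count, void *user)
{
	// Write the raw PCM bytes; fprintf with %s would stop at the first
	// zero byte, which binary audio data is full of.
	fwrite(buf, sizeof(int16_t), samples_count, stderr);
}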

@ritiek force-pushed the headless-stream branch from e66bd2c to 891d221 on July 19, 2020 09:22
@ritiek (Contributor, Author) commented Jul 19, 2020

I got the video working on mpv with next to no lag with:

$ chiaki-cli stream --host=192.168.1.2 --registkey=123abc12 --morning=abcdABCDabcdABCDabcdAB== 2>&1 > /dev/null | mpv --no-cache --untimed --no-demuxer-thread --vd-lavc-threads=1 -

However, this still results in this warning from mpv (and in turn ffmpeg):

[lavf] This format is marked by FFmpeg as having no timestamps!
[lavf] FFmpeg will likely make up its own broken timestamps. For
[lavf] video streams you can correct this with:
[lavf]     --no-correct-pts --fps=VALUE
[lavf] with VALUE being the real framerate of the stream. You can
[lavf] expect seeking and buffering estimation to be generally
[lavf] broken as well.

I wonder if it is possible to write some kind of header information to the pipe so these warnings do not show up?
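
Following the warning's own suggestion might at least quiet it; the 60 here is a guess at the stream's real framerate, not a known value:

$ chiaki-cli stream --host=192.168.1.2 --registkey=123abc12 --morning=abcdABCDabcdABCDabcdAB== 2>&1 > /dev/null | mpv --no-correct-pts --fps=60 --no-cache --untimed --no-demuxer-thread --vd-lavc-threads=1 -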

@ritiek (Contributor, Author) commented Jul 28, 2020

Sorry, I haven't been able to work on this. This stuff is still somewhat on the voodoo side for me. If someone else wants to work on supporting headless mode, feel free to take stuff from this PR!

@thestr4ng3r (Owner):

I think I'll finish it when I have time. The timestamp warning is probably expected given the nature of the video data.

@thestr4ng3r force-pushed the master branch 2 times, most recently from 3693db7 to 9c1dfb6 on March 19, 2021 10:44