How to stream audio from browser to WebRTC native C++ application

I have so far managed to run the following sample:

WebRTC native C++ to browser video streaming example

The sample shows how to stream video from a native C++ application (peerconnection_client.exe) to the browser (I am using Chrome). This works fine and I can see myself in the browser.

What I would like to do is stream audio from the browser to the native application, but I am not sure how. Can anyone give me some pointers please?




Answers

You could use the following example, which implements a desktop client for appRTC:

https://github.com/TemasysCommunications/appRTCDesk

This complements and interops with the web client, Android client, and iOS client provided by the open-source implementation at webrtc.org, giving you a full suite of clients to work against their free server. peerconnection_{client|server} is an old example from the libjingle days (pre-WebRTC) and does not interop with anything else.

I know this is an old question, but I struggled to find a solution myself, so I thought sharing it would be appreciated.

There is a more or less simple way to get an example running that streams from the browser to native code. You need the WebRTC source: http://www.webrtc.org/native-code/development

The two tools you need are the peerconnection server and client. Both can be found in the folder talk/examples/peerconnection.

To get it working, you need to patch the peerconnection client to enable DTLS: apply the patch provided at https://code.google.com/p/webrtc/issues/detail?id=3872 and rebuild the client. Now you are set up on the native side!

For the browser, I recommend the peer2peer example from https://github.com/GoogleChrome/webrtc. After starting peerconnection_server and connecting peerconnection_client, try to connect with the peer2peer example.

A connection constraint may be necessary:

    { "DtlsSrtpKeyAgreement": true }

I'm trying to find a way to stream both video and audio from the browser to my native program. Here is my approach so far.

To stream video from the browser to your native program without a GUI, just follow the example here: https://chromium.googlesource.com/external/webrtc/+/master/webrtc/examples/peerconnection/client/

Use AddOrUpdateSink to add your own VideoSinkInterface, and you will receive the frame data in the callback void OnFrame(const cricket::VideoFrame& frame). Instead of rendering the frame to a GUI as the example does, you can do whatever you want with it.
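For illustration, here is a minimal sketch of such a sink. The class name RawFrameSink is mine, and the include paths vary between WebRTC revisions, so treat this as a template for the cricket::VideoFrame era this answer refers to:

    // Sketch only (not part of the example): a bare-bones sink that
    // receives decoded frames. Adjust the includes to your checkout.
    #include "webrtc/api/mediastreaminterface.h"
    #include "webrtc/media/base/videosinkinterface.h"

    class RawFrameSink : public rtc::VideoSinkInterface<cricket::VideoFrame> {
     public:
      // Called once per decoded frame. Instead of rendering to a GUI as
      // the example's renderer does, hand the pixel data to your own
      // pipeline here (copy the I420 planes out, feed an encoder, etc.).
      void OnFrame(const cricket::VideoFrame& frame) override {}
    };

    // Attach it to the remote video track received in OnAddStream, e.g.:
    //   RawFrameSink sink;
    //   remote_video_track->AddOrUpdateSink(&sink, rtc::VideoSinkWants());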

To stream audio from the browser to your native program without a real audio device, you can use a fake audio device:

  1. Set the variable rtc_use_dummy_audio_file_devices to true in the file https://chromium.googlesource.com/external/webrtc/+/master/webrtc/build/webrtc.gni
  2. Invoke the global static function to specify the filename: webrtc::FileAudioDeviceFactory::SetFilenamesToUse("", "file_to_save_audio"); (see the sketch after this list for where this call belongs)
  3. Patch file_audio_device.cc with the diff below. (As I write this answer, FileAudioDevice has some issues, which may already be fixed.)
  4. Recompile your program, touch file_to_save_audio, and you will see PCM data in file_to_save_audio after the WebRTC connection is established.
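Here is the sketch referenced in step 2. The wrapper function name is mine; what matters is that the call runs before the peer connection factory (and with it the audio device module) is created:

    // Sketch of step 2 only: route audio through the file-backed device.
    #include "webrtc/modules/audio_device/dummy/file_audio_device_factory.h"

    void UseFileBackedAudioDevice() {
      // First argument: file played back as fake microphone input ("" = none).
      // Second argument: file that receives the decoded remote audio as raw PCM.
      webrtc::FileAudioDeviceFactory::SetFilenamesToUse("", "file_to_save_audio");
    }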

patch:

    diff --git a/webrtc/modules/audio_device/dummy/file_audio_device.cc b/webrtc/modules/audio_device/dummy/file_audio_device.cc
    index 8b3fa5e..2717cda 100644
    --- a/webrtc/modules/audio_device/dummy/file_audio_device.cc
    +++ b/webrtc/modules/audio_device/dummy/file_audio_device.cc
    @@ -35,6 +35,7 @@ FileAudioDevice::FileAudioDevice(const int32_t id,
         _recordingBufferSizeIn10MS(0),
         _recordingFramesIn10MS(0),
         _playoutFramesIn10MS(0),
    +    _initialized(false),
         _playing(false),
         _recording(false),
         _lastCallPlayoutMillis(0),
    @@ -135,12 +136,13 @@ int32_t FileAudioDevice::InitPlayout() {
           // Update webrtc audio buffer with the selected parameters
           _ptrAudioBuffer->SetPlayoutSampleRate(kPlayoutFixedSampleRate);
           _ptrAudioBuffer->SetPlayoutChannels(kPlayoutNumChannels);
    +      _initialized = true;
       }
       return 0;
     }

     bool FileAudioDevice::PlayoutIsInitialized() const {
    -  return true;
    +  return _initialized;
     }

     int32_t FileAudioDevice::RecordingIsAvailable(bool& available) {
    @@ -236,7 +238,7 @@ int32_t FileAudioDevice::StopPlayout() {
     }

     bool FileAudioDevice::Playing() const {
    -  return true;
    +  return _playing;
     }

     int32_t FileAudioDevice::StartRecording() {
    diff --git a/webrtc/modules/audio_device/dummy/file_audio_device.h b/webrtc/modules/audio_device/dummy/file_audio_device.h
    index a69b47e..3f3c841 100644
    --- a/webrtc/modules/audio_device/dummy/file_audio_device.h
    +++ b/webrtc/modules/audio_device/dummy/file_audio_device.h
    @@ -185,6 +185,7 @@ class FileAudioDevice : public AudioDeviceGeneric {
       std::unique_ptr<rtc::PlatformThread> _ptrThreadRec;
       std::unique_ptr<rtc::PlatformThread> _ptrThreadPlay;

    +  bool _initialized;
       bool _playing;
       bool _recording;
       uint64_t _lastCallPlayoutMillis;
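The dump is raw 16-bit PCM. As a quick sanity check, the following sketch computes the captured duration from the file size; the 48000 Hz / 2-channel values are assumptions, so verify them against kPlayoutFixedSampleRate and kPlayoutNumChannels in file_audio_device.cc for your checkout:

    // Sketch: estimate the duration of the raw PCM dump from its size.
    #include <cstdint>
    #include <cstdio>

    int main() {
      const long kAssumedSampleRate = 48000;  // assumption, verify in file_audio_device.cc
      const long kAssumedChannels = 2;        // assumption, verify in file_audio_device.cc
      std::FILE* f = std::fopen("file_to_save_audio", "rb");
      if (!f) {
        std::printf("no dump written yet\n");
        return 1;
      }
      std::fseek(f, 0, SEEK_END);
      const long bytes = std::ftell(f);
      std::fclose(f);
      const long frames = bytes / (static_cast<long>(sizeof(int16_t)) * kAssumedChannels);
      std::printf("captured %.2f seconds of audio\n",
                  static_cast<double>(frames) / kAssumedSampleRate);
      return 0;
    }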