Closed Bug 909524 Opened 11 years ago Closed 10 years ago

Monitor end-to-end audio quality in automation

Categories

(Core :: WebRTC: Audio/Video, defect, P1)

22 Branch
defect

Tracking

()

RESOLVED FIXED

People

(Reporter: jesup, Assigned: jmaher)

References

(Blocks 2 open bugs)

Details

(Keywords: ateam-talos-big, Whiteboard: [webrtc][getusermedia][p=4])

Attachments

(11 files, 25 obsolete files)

(deleted), patch
Details | Diff | Splinter Review
(deleted), patch
Details | Diff | Splinter Review
(deleted), audio/wav
Details
(deleted), patch
Details | Diff | Splinter Review
(deleted), patch
Details | Diff | Splinter Review
(deleted), patch
Details | Diff | Splinter Review
(deleted), patch
jmaher
: review+
Details | Diff | Splinter Review
(deleted), patch
snandaku
: review+
Details | Diff | Splinter Review
(deleted), patch
jmaher
: review+
Details | Diff | Splinter Review
(deleted), patch
jmaher
: review+
Details | Diff | Splinter Review
(deleted), patch
Details | Diff | Splinter Review
Find a way to generate statistics similar to http://webrtc-dashboard.appspot.com/perf/audio_perf_chrome.html and especially http://webrtc-dashboard.appspot.com/perf/audio_perf.html

We don't have to have these exact statistics, those are a guide.

Once we can get numbers out of a build (talos, whatever) we can feed those into a graph generator automatically.

Measurement doesn't need to be PESQ; the mechanism should be designed to be replaceable without major surgery.  I'm quite open to any method, but the main idea is to get the framework done.  For now, we can also assume an ideal network (and run both ends in the same browser); we can complicate it later.
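The "replaceable without major surgery" requirement could be sketched as a minimal pluggable metric interface. This is only an illustrative sketch, not code from any attached patch; all class and function names here are hypothetical, and the PESQ backend assumes an external `PESQ` binary is on the path:

```python
import subprocess


class AudioQualityMetric:
    """Pluggable scoring backend; swap implementations without touching the harness."""

    def score(self, reference_wav, degraded_wav):
        raise NotImplementedError


class PesqMetric(AudioQualityMetric):
    """Backend that shells out to an external PESQ binary (path is an assumption)."""

    def __init__(self, pesq_path="PESQ", sample_rate=16000):
        self.pesq_path = pesq_path
        self.sample_rate = sample_rate

    def score(self, reference_wav, degraded_wav):
        cmd = [self.pesq_path, "+%d" % self.sample_rate,
               reference_wav, degraded_wav]
        # Caller is responsible for parsing the prediction line out of stdout.
        return subprocess.check_output(cmd).decode()


class NullMetric(AudioQualityMetric):
    """Trivial stand-in backend, useful for exercising the framework itself."""

    def score(self, reference_wav, degraded_wav):
        return 0.0


def run_quality_check(metric, reference_wav, degraded_wav):
    # The harness depends only on the abstract interface, so PESQ can later
    # be swapped for POLQA or any other scorer without framework changes.
    return metric.score(reference_wav, degraded_wav)
```

Swapping the measurement then means passing a different `AudioQualityMetric` subclass into the harness, which matches the "framework first, metric later" goal above.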

If possible we should be able to use either some type of synthetic audio or speech/music samples played back through a media element (for now, via captureStreamUntilEnded()).  Perhaps in the future we can emulate an actual audio input device in automation (see ted's work).
I'd like to get this done by early Gecko 27, so I'm setting a deadline for this of late September (9/30).
Target Milestone: --- → mozilla27
Randell, could you please point me to ted's work referred to above?
ted: suhas would like a pointer to the camera/mic emulation code we're using in automation
Flags: needinfo?(ted)
That's bug 815002. The Linux test slaves have v4l2loopback and alsa's snd_aloop drivers installed, so you can get loopback audio and video via real devices.
Flags: needinfo?(ted)
Attachment #809007 - Attachment is obsolete: true
Attachment #809012 - Attachment is obsolete: true
Comment on attachment 809013 [details] [diff] [review]
Basic media perf framework with audio-playback pesq evaluation

Review of attachment 809013 [details] [diff] [review]:
-----------------------------------------------------------------

First try at adding media tests to Talos. The ideas behind this patch:
1. To introduce a minimal framework to get started on introducing media tests as part of Talos.
2. This is a WIP patch whose purpose is to drive discussion of the overall direction going forward.
3. This patch adds a basic audio playback test with PESQ evaluation. It depends on the SoX and PESQ tools (added as binaries in the patch).
Attachment #809013 - Flags: review?(rjesup)
Attachment #809013 - Flags: review?(rjesup) → feedback?(rjesup)
Comment on attachment 809013 [details] [diff] [review]
Basic media perf framework with audio-playback pesq evaluation

Review of attachment 809013 [details] [diff] [review]:
-----------------------------------------------------------------

Looks ok to me FWIW as a framework, but f? to ted since I really don't know talos or the test rigs.

We also need to validate: a) can we install these tools; b) will they work the way we want on the test machines; c) do we need any synchronization of recording start, etc., or do we need to pad the recording to ensure we capture the parts we want?

I assume once we have a working framework we can throw a 'call' into the mix using audio from an <audio> tag and audio.captureStreamUntilEnded.  Eventually we'd want to play it out from a wav file through the virtual audio input device, and capture the same way.

::: talos/results.py
@@ +121,5 @@
>          - counter_results : counters accumulated for this cycle
>          """
>  
>          if isinstance(results, basestring):
> +        	# ensure the browser log exists

spurious indent

@@ -175,5 @@
>      format = 'tsformat'
>  
>      def __init__(self, string, counter_results=None):
>          self.counter_results = counter_results
> -

spurious

::: talos/run_tests.py
@@ -164,5 @@
>    tests = useBaseTestDefaults(config.get('basetest', {}), tests)
>  
>    paths = ['profile_path', 'tpmanifest', 'extensions']
>    for test in tests:
> -

all the changes in this file are spurious

::: talos/test.py
@@ -52,5 @@
>  
>  
>  ### ts-style tests (ts, twinopen, ts_cold, etc)
>  ### The overall test number is calculated by excluding the max opening time
> -### and taking an average of the remaining numbers. 

I think all the changes in this file are trailing-space removals.  Let's not do that if we're not touching the file

::: talos/ttest.py
@@ +420,5 @@
>                  #stop the counter manager since this test is complete
>                  if counters:
>                      cm.stopMonitor()
>  
> +				# ensure the browser log exists

indent
Attachment #809013 - Flags: feedback?(ted)
Attachment #809013 - Flags: feedback?(rjesup) → feedback+
Also, to note about my test environment for the above patch: I had installed Soundflower on Mac to loop audio output back to the input while using a headset for testing. The PESQ and SoX binaries, on the other hand, were compiled from source; I assume they can be freshly built on the test machines as well.
Comment on attachment 809013 [details] [diff] [review]
Basic media perf framework with audio-playback pesq evaluation

Review of attachment 809013 [details] [diff] [review]:
-----------------------------------------------------------------

I'm punting this over to jmaher since he knows way more about Talos than I do.
Attachment #809013 - Flags: feedback?(ted) → feedback?(jmaher)
Comment on attachment 809013 [details] [diff] [review]
Basic media perf framework with audio-playback pesq evaluation

Review of attachment 809013 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks for the talos patch for a new test.  This is exciting.  I am really confused why we have to define this outside of test.py.  This appears to be a startup-style test where we just load a single URL and it prints out the data we care about.  Is there a need to record the specific time from launch to close?  If not, I would rather this be a pageloader test where we do setup/teardown in the page and just record the media perf numbers to the pageloader.

I see there is some need for launching a process for setup/finish.  We do this for xperf; I would like to make this generic for both! :)

While this is close, I would like to see this integrated better as a test before a f+.

::: talos/PerfConfigurator.py
@@ +402,5 @@
> +            if noChrome:
> +                test_name_extension += '_nochrome'
> +            if mozAfterPaint:
> +                test_name_extension += '_paint'
> +            self.config['test_name_extension'] = test_name_extension

any reason why this is moved?

@@ +568,5 @@
> +                print '======================\n'
> +                test_class_names = [(test_class.name(), test_class.description())
> +                                    for test_class in mediaperf.tests]
> +                test_class_names.sort()
> +                for name, description in test_class_names:

I am not a fan of putting this in here.

@@ +624,5 @@
>  
>          # ensure tests are available
>          availableTests = test.test_dict.keys()
> +        # Add Media tests to the list
> +        availableTests.extend(mediaperf.test_dict.keys())

we define tests in test.py.  Is there a reason we are not doing that?

@@ +666,5 @@
>              # fix up url
>              url = getattr(test_instance, 'url', None)
>              if url:
>                  test_instance.url = self.convertUrlToRemote(url)
> +            print test_instance.url

extra print here.

@@ +837,5 @@
>  
>          # setup remote device
>          try:
>              self.testAgent = utils.testAgent(deviceip, deviceport)
> +            self.deviceroot = self.testAgent.getDeviceRoot()

this looks like a regression.

::: talos/mediaperf/utils.py
@@ +66,5 @@
> +      		'trim', 0, self.rec_duration]
> +      cmd = [str(s) for s in cmd]
> +      print " Running Record Command: %s" % cmd
> +      ret = subprocess.call(cmd,stdout=subprocess.PIPE,
> +      						stderr=subprocess.PIPE)

I am concerned about calling subprocess from inside a test case; we already have 3 levels of subprocess indirection.

@@ +118,5 @@
> +    			'1', '0.1', '1%', '1', '0.1', '1%']
> +    	cmd = [str(s) for s in cmd]
> +    	print " Silence removal command: %s" % cmd
> +    	ret = subprocess.call(cmd,stdout=subprocess.PIPE,
> +    							stderr=subprocess.PIPE)

another subprocess call.

::: talos/results.py
@@ +199,5 @@
>              self.results.append(result)
>              index += 1
>  
>          # The original case where we just have numbers and no pagename
> +        print string

extra debugging here.

::: talos/ttest.py
@@ +357,5 @@
> +                #perform any test specific setup outside browser
> +                test_class = None
> +                if mediaperf.test_dict.has_key(test_config['name']):
> +                    test_class = mediaperf.test_dict[test_config['name']]
> +                    test_class.setup()

We could make setup() a better routine to apply in general to talos.

@@ +401,5 @@
>  
> +                # Perform any final steps outside browser and get the results
> +                outcome = None
> +                if test_class:
> +                    outcome = test_class.finish()

can't this take place after the test records its values?

@@ +436,5 @@
> +                if outcome:
> +                	print " Append the outcome to the browser log to unify the result parsing."
> +                	with open(browser_log_filename, "a") as b_file:
> +                		for i in outcome:
> +                			b_file.write(str(i))

Indentation is off.  Why not have the outcome go to stdout?
Attachment #809013 - Flags: feedback?(jmaher) → feedback-
Comment on attachment 809013 [details] [diff] [review]
Basic media perf framework with audio-playback pesq evaluation

Review of attachment 809013 [details] [diff] [review]:
-----------------------------------------------------------------

Hello Joel,

Many thanks for the review and your insights on this. I will try to put down my thinking on the design choices with respect to mediatests in this patch.

The idea is to add more tests in the future, and in many cases the MediaTests differ from Startup or Pageload tests in the ways listed below:
1. Pre-processing related to media files, system media tools setup, and device setup that needs to happen outside the browser context, even before the page is loaded.
2. Post-processing, again outside the browser context, of the generated media files and their analysis.
3. Steps 1 and 2 will produce some results that need to be carried over for calculating the final results.
4. Once a test is loaded, more advanced tests will have media processing within the page. Hence a simple startup-test or pageload-test flow might not apply.
5. Also, the results are not expected to fit the INTERVAL or AVERAGES categories.
6. Platform specific dependencies for all the steps in the tests.

I know some of this is hand-waving, since I don't exactly know the nature and complexity of many media tests until we try to implement them.

Keeping all this in mind, I thought it was better to keep the logic for executing media tests on its own, without complicating/messing up existing flows to fit the above requirements.

Lastly, the other part of this work is to integrate with the graph database, about which I don't have much information to go on. Nevertheless, this patch is the first step in setting up a framework and a basic test to get things rolling.

Please let me know your thoughts.

::: talos/PerfConfigurator.py
@@ +402,5 @@
> +            if noChrome:
> +                test_name_extension += '_nochrome'
> +            if mozAfterPaint:
> +                test_name_extension += '_paint'
> +            self.config['test_name_extension'] = test_name_extension

I am not sure of this either. I will get back on this.

@@ +568,5 @@
> +                print '======================\n'
> +                test_class_names = [(test_class.name(), test_class.description())
> +                                    for test_class in mediaperf.tests]
> +                test_class_names.sort()
> +                for name, description in test_class_names:

Since media tests are their own list of tests, I added them separately. Please let me know how I should go about this.

@@ +671,1 @@
>              # fix up tpmanifest

Ah !! Will remove this.

@@ +837,5 @@
>  
>          # setup remote device
>          try:
>              self.testAgent = utils.testAgent(deviceip, deviceport)
> +            self.deviceroot = self.testAgent.getDeviceRoot()

Agreed

::: talos/mediaperf/utils.py
@@ +66,5 @@
> +      		'trim', 0, self.rec_duration]
> +      cmd = [str(s) for s in cmd]
> +      print " Running Record Command: %s" % cmd
> +      ret = subprocess.call(cmd,stdout=subprocess.PIPE,
> +      						stderr=subprocess.PIPE)

Ted/Randell have suggested a way to do the entire thing in JavaScript. I will investigate and update on this.

@@ +118,5 @@
> +    			'1', '0.1', '1%', '1', '0.1', '1%']
> +    	cmd = [str(s) for s in cmd]
> +    	print " Silence removal command: %s" % cmd
> +    	ret = subprocess.call(cmd,stdout=subprocess.PIPE,
> +    							stderr=subprocess.PIPE)

Working on it..

::: talos/results.py
@@ +121,5 @@
>          - counter_results : counters accumulated for this cycle
>          """
>  
>          if isinstance(results, basestring):
> +        	# ensure the browser log exists

Will fix it

@@ +199,5 @@
>              self.results.append(result)
>              index += 1
>  
>          # The original case where we just have numbers and no pagename
> +        print string

Will remove this
As a note, I have been modifying talos a lot to support a common process-management code path (https://github.com/mozilla/mozbase/tree/master/mozprocess/mozprocess).  In doing this, support for bcontroller.py is removed and the xperf setup/cleanup is now defined in test.py as scripts!

While I am still awaiting review (i.e. my patch could change in time), I think what is implemented in bug 923770 will support the need to set up and clean up the audio for this test.  When this lands, I will mention it in this bug and update whatever the latest patch is to work in the new format :)
Thanks, Joel, for the information. I will update it to the new format once you give us the word.
Depends on: 923770
Joel, just confirming that bug 923770 has landed, correct?
Flags: needinfo?(jmaher)
The code is in the tree; it will be deployed today.  Feel free to start making it work; I suspect any documentation that I write will only alter your patch slightly.
Flags: needinfo?(jmaher)
While in the process of moving the currently uploaded patch to the new Talos framework and making it work for the end-to-end PeerConnection audio example, I am hitting two issues:

1. Unable to access mozGetUserMediaDevices() within the Talos framework using SpecialPowers, since mozGetUserMediaDevices is a privileged API. The relevant pieces of code are uploaded here:
1.1 HTML file loaded by Talos: https://github.com/suhasHere/MediaPerf/blob/master/files-tmp/gum_audio.html
1.2 SpecialPowersAPI file with Wrap/Unwrap code: https://github.com/suhasHere/MediaPerf/blob/master/files-tmp/specialpowersAPI.js
1.3 Code that calls mozGetUserMediaDevices : https://github.com/suhasHere/MediaPerf/blob/master/files-tmp/head.js#L88


2. On Randell's suggestion, I tried to use captureStreamUntilEnded() to generate a MediaStream that can be fed into the PeerConnection. It seems to fail due to a hints issue: hints are set for streams coming from gUM, but not in the non-gUM use case in our example.
SpecialPowers is not available in talos; there is an issue with the impact it has on the numbers.  Could you create an addon that handles just the privileged calls you need?
I have opened bug 932845 to deal with using captureStreamUntilEnded() with PeerConnection.

Also, on discussing this with :jib, he suggested that using mozGetUserMedia() to access the loopback device might be an easy way to go about it. :jib suggested that having a preference like media.enable.loopback set to true could instruct gUM to select the loopback device by default, thus avoiding the dependency on SpecialPowers for accessing mozGetUserMediaDevices (which is a chrome-only function).
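As a rough illustration of the pref-based approach, the harness side could be wired up as below. This is only a sketch: media.enable.loopback is a pref name proposed in this comment and does not exist yet, and `write_prefs` is a hypothetical helper, not an existing talos function:

```python
import json


def write_prefs(prefs, path):
    """Write a Firefox user.js-style prefs file from a dict (minimal sketch).

    json.dumps maps Python True/False/strings/numbers onto the literal
    forms user_pref() expects (true, false, "str", 123).
    """
    with open(path, "w") as f:
        for name, value in sorted(prefs.items()):
            f.write('user_pref("%s", %s);\n' % (name, json.dumps(value)))


# Hypothetical prefs for a loopback-device audio test.
test_prefs = {
    "media.navigator.permission.disabled": True,  # skip the gUM permission prompt
    "media.enable.loopback": True,                # proposed pref from this comment
}
```

With something like this, the test page could call plain mozGetUserMedia() and rely on the browser picking the loopback device, keeping all privileged logic out of content JS.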
Flags: needinfo?(rjesup)
Depends on: 932845
Depends on: 934667
Depends on: 933449
Attached file Peer Connection PESQ Talos Automation (obsolete) (deleted) —
This patch automates Peer Connection audio quality measurement within the Talos framework. There are a few open issues with respect to overall design integration within Talos that need to be discussed. Nevertheless, this patch could be a starting point for discussion and direction moving forward.
Attachment #809013 - Attachment is obsolete: true
Attachment #8343897 - Flags: review?(rjesup)
Attachment #8343897 - Flags: review?(jmaher)
Attached file Steps to run the test (obsolete) (deleted) —
Comment on attachment 8343897 [details]
Peer Connection PESQ Talos Automation

Review of attachment 8343897 [details]:
-----------------------------------------------------------------

r+, but most of this is out of my area

::: talos/startup_test/media/audio_playback.html
@@ +1,1 @@
> +<!-- Used by audio_playback_perf to verify audio recorded. -->

MPL2 please

::: talos/startup_test/media/pc_audio_mozcapture.html
@@ +1,1 @@
> +<html>

MPL2 please
Attachment #8343897 - Flags: review?(rjesup) → review+
Comment on attachment 8343897 [details]
Peer Connection PESQ Talos Automation

Might make sense to have jsmith at least look this over
Attachment #8343897 - Flags: feedback?(jsmith)
Flags: needinfo?(rjesup)
Comment on attachment 8343897 [details]
Peer Connection PESQ Talos Automation

Review of attachment 8343897 [details]:
-----------------------------------------------------------------

The big issue not addressed in the specific review comments below is the setup/cleanup.  I believe we could make this work, although I see the problem with needing to have the recording take place during the entire test.  What is the cleanup?  How do we know to terminate the recording?  Is this just done as talos terminates the runner script and effectively closes out the test object?

If we had a better way for setup to launch a process in the background, would that work?  How could we terminate it?
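One possible answer to the setup/teardown question above is a small wrapper that launches the recorder in the background and terminates it explicitly in cleanup. This is a sketch only; the class name is hypothetical and the actual command (pacat/sox pipeline) would be substituted on the test machines:

```python
import subprocess


class BackgroundRecorder:
    """Launch a long-running capture command in setup(), stop it in cleanup()."""

    def __init__(self, cmd):
        self.cmd = cmd
        self.proc = None

    def setup(self):
        # Start the recorder and return immediately so the test can proceed.
        self.proc = subprocess.Popen(self.cmd,
                                     stdout=subprocess.PIPE,
                                     stderr=subprocess.PIPE)

    def cleanup(self):
        # Terminate the recorder explicitly rather than relying on the
        # harness exiting; escalate to kill() if SIGTERM is ignored.
        if self.proc and self.proc.poll() is None:
            self.proc.terminate()
            try:
                self.proc.wait(timeout=5)
            except subprocess.TimeoutExpired:
                self.proc.kill()
                self.proc.wait()
        return self.proc.returncode
```

That would make the termination question explicit: the test object owns the recorder's lifetime instead of leaning on talos tearing down the runner script.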

::: talos/mediaperf/media_utils.py
@@ +69,5 @@
> +_TEST_PATH_              = os.path.join(parent,'startup_test/media')
> +_PESQ_                   = os.path.join(_TOOLS_PATH_, 'PESQ')
> +_INPUT_FILE_             = os.path.join(_TEST_PATH_, 'input16.wav')
> +_RECORDED_FILE_          = os.path.join(_TEST_PATH_, 'record.wav')
> +_RECORDED_NO_SILENCE_    = os.path.join(_TEST_PATH_, 'record_no_silence.wav')

you would need to add these .wav files to the patch as well.  I see input16.wav on the patch.

@@ +101,5 @@
> +  # Set source monitor for recording
> +  # We pick the first monitor for the sink available
> +  def setRecordingDevice(self, device):
> +        self.rec_device = device
> +        # Adjust volime of the sink to 100%, since quality was

s/volime/volume/

@@ +103,5 @@
> +  def setRecordingDevice(self, device):
> +        self.rec_device = device
> +        # Adjust volime of the sink to 100%, since quality was
> +        # bit degraded when this was not done.
> +        cmd = ['pacmd', 'set-source-volume', self.rec_device, '65536']

65536 is hardcoded, can we add a comment? pacmd is not included in this patch.

@@ +105,5 @@
> +        # Adjust volime of the sink to 100%, since quality was
> +        # bit degraded when this was not done.
> +        cmd = ['pacmd', 'set-source-volume', self.rec_device, '65536']
> +        cmd = [str(s) for s in cmd]
> +        print "Running PACMD for Setting Volume %s" % cmd

can we use utils.info here?

@@ +109,5 @@
> +        print "Running PACMD for Setting Volume %s" % cmd
> +        p = subprocess.Popen(cmd, stdout=PIPE, stderr=PIPE)
> +        output, error = p.communicate()
> +        if p.returncode !=0:
> +          print "Failed to set Volume to 100% with error %s" % error

we need to make this a talosError from utils.py.

@@ +115,5 @@
> +
> +  # Start recording.
> +  def run(self):
> +
> +      # PACAT command to record 16 bit singned little endian mono channel

s/singned/signed/

@@ +118,5 @@
> +
> +      # PACAT command to record 16 bit singned little endian mono channel
> +      # audio from the sink self.rec_device
> +      pa_command = ['pacat', '-r', '-d', self.rec_device, '--format=s16le',
> +                    '--rate=16000', '--channels=1']

pacat is not in the patch either.  Also, could the hardcoded 16000 be _SAMPLE_RATE_?

@@ +124,5 @@
> +
> +      # Sox command to convert raw audio from PACAT output to .wav format"
> +      sox_command = ['sox', '-t', 'raw', '-r',_SAMPLE_RATE_,'-sLb', 16,
> +                    '-c', _NUM_CHANNELS_, '-', self.output_file, 'trim', 0,
> +                    self.rec_duration]

nit: please add 'sox' to the patch.

@@ +130,5 @@
> +
> +      # We sleep till the page gets loaded which takes approximiately
> +      # 15-18 seconds. This is needed in order to avoid HUGE delay
> +      # generated by the parec towards the beginning of the file.
> +      time.sleep(12)

Comment says 15-18 seconds, code sleeps for 12.

@@ +134,5 @@
> +      time.sleep(12)
> +
> +      print "Monitor Device is %s" % self.rec_device
> +      print "PACAT Command is %s" % pa_command
> +      print "SOX Command is %s"   %   sox_command

utils.info please.

@@ +145,5 @@
> +        print " Failed to record audio."
> +      else:
> +        print " Done with audio recording"
> +      p1.kill()
> +      # et's indicate the parent process to perform any cleanup

odd character here, I assume it should be "Let's"

@@ +163,5 @@
> +    # on the platform. (Linux Only)
> +    def setupAudioDeviceForRecording(self):
> +      # Use pactl to find the monitor for our Sink
> +      # This picks the first sink available
> +      cmd = ['pactl', 'list']

pactl isn't in the patch.

@@ +164,5 @@
> +    def setupAudioDeviceForRecording(self):
> +      # Use pactl to find the monitor for our Sink
> +      # This picks the first sink available
> +      cmd = ['pactl', 'list']
> +      print "Running PATCL Command %s" % cmd

utils.info please.

@@ +173,5 @@
> +
> +
> +    # Run PESQ on the audio reference file and recorded file
> +    def computePESQScore(self):
> +      print "In Utils.calculatePesq."

this seems like a statement we wouldn't need for production.

@@ +176,5 @@
> +    def computePESQScore(self):
> +      print "In Utils.calculatePesq."
> +      _START_ = '__start_media_report'
> +      _END_   = '__end_media_report'
> +      cmd = [_PESQ_,'+16000', _INPUT_FILE_, _RECORDED_NO_SILENCE_]

16000 == _SAMPLE_RATE_

@@ +177,5 @@
> +      print "In Utils.calculatePesq."
> +      _START_ = '__start_media_report'
> +      _END_   = '__end_media_report'
> +      cmd = [_PESQ_,'+16000', _INPUT_FILE_, _RECORDED_NO_SILENCE_]
> +      print "Running PESQ Command %s" % cmd

utils.info

@@ +184,5 @@
> +      result = re.search('Prediction.*= (\d{1}\.\d{3})\t(\d{1}\.\d{3})', output)
> +      # We need to sleep for the browser_log to be available.
> +      # time.sleep(1)
> +      # delete the recorded file with no silence
> +      #os.remove(_RECORDED_NO_SILENCE_)

sleep and remove are commented out, can we remove these lines or add a comment as to why they don't work and what is needed to make them work?
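As a side note, the score extraction in this hunk could be pulled out into a standalone helper. The sketch below is illustrative only; the exact "Prediction" line format is an assumption inferred from the regex in the patch:

```python
import re


def parse_pesq_output(output):
    """Extract (raw MOS, MOS-LQO) from PESQ stdout; return None if absent.

    Assumes a line like:
      P.862 Prediction (Raw MOS, MOS-LQO):  = 4.124\t4.387
    (format inferred from the regex in the patch under review).
    """
    m = re.search(r'Prediction.*= (\d\.\d{3})\t(\d\.\d{3})', output)
    if not m:
        return None
    return float(m.group(1)), float(m.group(2))
```

Returning a tuple (or None) instead of writing straight into the browser log would also make the failure path easier to turn into a talosError.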

@@ +190,5 @@
> +        pesq_report = _START_ + str(result.group(1)) + ',' + str(result.group(2)) + _END_
> +        if os.path.exists('browser_output.txt'):
> +          try:
> +            print pesq_report
> +            f = open ('browser_output.txt', "w+")

we should pass in browser_config[logfile] instead of the hard coded file name

@@ +193,5 @@
> +            print pesq_report
> +            f = open ('browser_output.txt', "w+")
> +            f.write(pesq_report)
> +          except IOError,ValueError:
> +            print "BAM !!!!!!!!!!!!!!!!!!!!!!!!!!!!"

do we need a talosError here, is this expected?  This looks like a message I would have added :)

::: talos/results.py
@@ +296,5 @@
>  
>      # tokens for the report types
>      report_tokens = [('tsformat', ('__start_report', '__end_report')),
> +                     ('tpformat', ('__start_tp_report', '__end_tp_report')),
> +                     ('mediaformat',('__start_media_report','__end_media_report'))

Could we make this test a startup_test instead of a pageloader test?  We are only producing a single value from the test.  If there is a need in the future to have multiple pages (i.e. tests), let's keep it a pageloader test.  If we do, I would prefer to keep the logging format tpformat instead of mediaformat.

@@ -364,5 @@
>                  value = int(value)
>              except ValueError:
>                  self.error("Could not cast %s to an integer: %s" % (attr, value))
> -            if _last_token < position:
> -                self.error("%s [character position: %s] found before %s [character position: %s]" % (tokens, _last_token, previous_tokens, position))

why is this removed?

::: talos/startup_test/media/audio_playback.html
@@ +34,5 @@
> +
> +    startTest("input16.wav");
> +
> +  </script>
> +</html>

Where is this file used?  I cannot find any references to it.

::: talos/startup_test/media/pc_audio_mozcapture.html
@@ +31,5 @@
> +  let pc1_offer;
> +  let pc2_answer;
> +
> +  function failed(code) {
> +    alert("Something Failed " + code);

Alerts are not that useful for production.  If we want to test this out, I suggest adding a URL param ?auto=1 so we can alert if auto=0 and log messages if auto=1.

@@ +64,5 @@
> +  }
> +
> +  // pc1.setRemote finished, media should be running!
> +  function step6() {
> +    log("HIP HIP HOORAY");

Let's make this a bit more production-friendly :) Is there an end to this?  A finishTest() or end() that needs to be called?

@@ +76,5 @@
> +    pc1.onaddstream = function(obj) {
> +      log("pc1 got remote stream from pc2 " + obj.type);
> +      // we are not playing audio to avoid noise
> +      //pc2audio.mozSrcObject = obj.stream;
> +      //pc2audio.play();

why are these commented out?

@@ +79,5 @@
> +      //pc2audio.mozSrcObject = obj.stream;
> +      //pc2audio.play();
> +    }
> +    pc2.onaddstream = function(obj) {
> +      log("pc2 got remote stream from pc1 " + obj.type);

where does log() go?

@@ +89,5 @@
> +    localaudio.addEventListener('ended', function(evt) {
> +      end(true);
> +    },false);
> +
> +    //localaudio.src = "input.ogg";

input.ogg isn't in this patch.

@@ +105,5 @@
> +
> +  function end(status) {
> +    // let's wait for cleanup to happen before logging
> +    if(window.dump) {
> +        dumpLog('__startTimestamp' + Date.now() + '__endTimestamp\n');

this is a 4 space indent, the rest of the file is 2 space indent.

::: talos/startup_test/tspaint_test.html
@@ +6,5 @@
>  <!-- Pick off begin time as a cgi argument and print it out -->
>  <!-- call this with an arg, e.g. file://foo/startup-test.html?begin=12345678 -->
>  <!-- In-line this to avoid compilation. -->
> +<script language="Javascript" type="text/javascript" src="../specialpowers/chrome/specialpowers/content/specialpowersAPI.js"></script>
> +<script language="Javascript" type="text/javascript" src="../specialpowers/chrome/specialpowers/content/specialpowers.js"></script>

specialpowers proper has caused a great deal of noise in the numbers, we would need to make a very lightweight version if we were to use it.  please remove this.

@@ +13,5 @@
>  <script language="javascript" type="text/javascript">
>  var gBegin = document.location.search.split('=')[1]; // ?begin=nnnnn
>  
> +alert("Hmmmm" + SpecialPowers);
> +alert("Hmmmm" + SpecialPowers.prototype.wrap);

please remove the specialpowers calls here.  I assume this is unrelated to this patch.

::: talos/test.py
@@ +470,5 @@
> +    """
> +    abstract base class for media tests.
> +    """
> +    url_timestamp = False
> +    tpcycles = 1 # number of time to run each page

Normally we run for 20 cycles; sometimes we do fewer cycles if the test does its own management of normalizing the data.  In that case I always suggest at least 5 cycles and reporting the median value.

@@ +486,5 @@
> +    audioUtils = media_utils.AudioUtils()
> +    # input audio file is 10 seconds long. A 5 seconds
> +    # is added extra to make sure we get maximum audio
> +    # recorder
> +    audioUtils.startRecording(15)

what is this for?  can we not use setup/cleanup?
Attachment #8343897 - Flags: review?(jmaher) → review-
Comment on attachment 8343897 [details]
Peer Connection PESQ Talos Automation

Review of attachment 8343897 [details]:
-----------------------------------------------------------------

I don't know too much about talos tests, although I can provide input on the JS code that touches on the WebRTC code. I've added comments for those areas.

::: talos/startup_test/media/audio_playback.html
@@ +34,5 @@
> +
> +    startTest("input16.wav");
> +
> +  </script>
> +</html>

A follow-up question I have is: how is this file being used?

::: talos/startup_test/media/pc_audio_mozcapture.html
@@ +1,3 @@
> +<html>
> +<head>
> +  <title>Simple PeerConnection Audio Test</title>

This file looks like a carbon copy of the pc test code from http://mozilla.github.io/webrtc-landing/pc_test.html. We shouldn't copy & paste the code directly here - instead, let's keep it contextual to what actually needs to be tested and reuse existing frameworks to simplify the code. Can we reuse any of the code from dom/media/tests/mochitest? If not, can we follow a similar test pattern here to allow for creating tests easily?
Attachment #8343897 - Flags: feedback?(jsmith)
Comment on attachment 8343897 [details]
Peer Connection PESQ Talos Automation

Review of attachment 8343897 [details]:
-----------------------------------------------------------------

::: talos/mediaperf/media_utils.py
@@ +69,5 @@
> +_TEST_PATH_              = os.path.join(parent,'startup_test/media')
> +_PESQ_                   = os.path.join(_TOOLS_PATH_, 'PESQ')
> +_INPUT_FILE_             = os.path.join(_TEST_PATH_, 'input16.wav')
> +_RECORDED_FILE_          = os.path.join(_TEST_PATH_, 'record.wav')
> +_RECORDED_NO_SILENCE_    = os.path.join(_TEST_PATH_, 'record_no_silence.wav')

The other files are automatically generated as part of the test and deleted once PESQ scores are computed.

@@ +101,5 @@
> +  # Set source monitor for recording
> +  # We pick the first monitor for the sink available
> +  def setRecordingDevice(self, device):
> +        self.rec_device = device
> +        # Adjust volime of the sink to 100%, since quality was

will change

@@ +103,5 @@
> +  def setRecordingDevice(self, device):
> +        self.rec_device = device
> +        # Adjust volime of the sink to 100%, since quality was
> +        # bit degraded when this was not done.
> +        cmd = ['pacmd', 'set-source-volume', self.rec_device, '65536']

pacmd is part of PulseAudio, which needs to be installed on the platform.

@@ +105,5 @@
> +        # Adjust volime of the sink to 100%, since quality was
> +        # bit degraded when this was not done.
> +        cmd = ['pacmd', 'set-source-volume', self.rec_device, '65536']
> +        cmd = [str(s) for s in cmd]
> +        print "Running PACMD for Setting Volume %s" % cmd

Good idea. I will update all the print statements to use utils.info

@@ +109,5 @@
> +        print "Running PACMD for Setting Volume %s" % cmd
> +        p = subprocess.Popen(cmd, stdout=PIPE, stderr=PIPE)
> +        output, error = p.communicate()
> +        if p.returncode !=0:
> +          print "Failed to set Volume to 100% with error %s" % error

I shall do so

@@ +118,5 @@
> +
> +      # PACAT command to record 16 bit singned little endian mono channel
> +      # audio from the sink self.rec_device
> +      pa_command = ['pacat', '-r', '-d', self.rec_device, '--format=s16le',
> +                    '--rate=16000', '--channels=1']

pacat is also part of libpulse, which should be installed on the Linux platform

@@ +124,5 @@
> +
> +      # Sox command to convert raw audio from PACAT output to .wav format"
> +      sox_command = ['sox', '-t', 'raw', '-r',_SAMPLE_RATE_,'-sLb', 16,
> +                    '-c', _NUM_CHANNELS_, '-', self.output_file, 'trim', 0,
> +                    self.rec_duration]

SoX is a prerequisite library that needs to be installed on the platform.

@@ +130,5 @@
> +
> +      # We sleep till the page gets loaded which takes approximiately
> +      # 15-18 seconds. This is needed in order to avoid HUGE delay
> +      # generated by the parec towards the beginning of the file.
> +      time.sleep(12)

This sleep is tuned to be lower than the 15-18 second page load so that the SoX recorder captures a few seconds of silence at the start of the recorded file; that silence is removed once the test ends. The sleep gives us a silence cushion so that we don't miss any of the actual audio while recording.
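The silence-trimming step itself isn't shown in this hunk; as a rough sketch (the file names and the exact sox `silence` parameters below are assumptions, not the patch's actual values), the leading and trailing silence could be stripped like this:

```python
import subprocess

def build_trim_silence_cmd(infile, outfile):
    # sox 'silence' effect: drop audio until 0.1s rises above 1% amplitude;
    # 'reverse' twice applies the same trim to the tail of the file.
    return ['sox', infile, outfile,
            'silence', '1', '0.1', '1%',
            'reverse', 'silence', '1', '0.1', '1%', 'reverse']

# subprocess.check_call(build_trim_silence_cmd('record.wav',
#                                              'record_no_silence.wav'))
```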

@@ +134,5 @@
> +      time.sleep(12)
> +
> +      print "Monitor Device is %s" % self.rec_device
> +      print "PACAT Command is %s" % pa_command
> +      print "SOX Command is %s"   %   sox_command

Will Do

@@ +163,5 @@
> +    # on the platform. (Linux Only)
> +    def setupAudioDeviceForRecording(self):
> +      # Use pactl to find the monitor for our Sink
> +      # This picks the first sink available
> +      cmd = ['pactl', 'list']

pactl is also part of the PulseAudio library on Linux
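For reference, the monitor-source discovery here boils down to a regex over `pactl list` output; a minimal sketch against a hypothetical output fragment (real output depends on the machine's PulseAudio configuration):

```python
import re

# Hypothetical fragment of `pactl list` output.
pactl_output = """Source #0
\tState: SUSPENDED
\tName: alsa_output.pci-0000_00_1b.0.analog-stereo.monitor
\tDescription: Monitor of Built-in Audio Analog Stereo
"""

# Same pattern the patch uses: pick the first sink monitor listed.
match = re.search(r'\s*Name: (\S*\.monitor)', pactl_output)
monitor = match.group(1) if match else None
```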

@@ +173,5 @@
> +
> +
> +    # Run PESQ on the audio reference file and recorded file
> +    def computePESQScore(self):
> +      print "In Utils.calculatePesq."

Will remove

@@ +176,5 @@
> +    def computePESQScore(self):
> +      print "In Utils.calculatePesq."
> +      _START_ = '__start_media_report'
> +      _END_   = '__end_media_report'
> +      cmd = [_PESQ_,'+16000', _INPUT_FILE_, _RECORDED_NO_SILENCE_]

Good catch, will update
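For context, a sketch of what that regex is meant to match: both MOS figures from the PESQ binary's summary line. The sample output line below is hypothetical; the exact wording varies between P.862 builds.

```python
import re

# Hypothetical final line printed by the PESQ binary.
pesq_stdout = "P.862 Prediction (Raw MOS, MOS-LQO):  = 3.252\t3.496"

match = re.search(r'Prediction.*= (\d\.\d{3})\t(\d\.\d{3})', pesq_stdout)
raw_mos, mos_lqo = (match.group(1), match.group(2)) if match else (None, None)
```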

@@ +184,5 @@
> +      result = re.search('Prediction.*= (\d{1}\.\d{3})\t(\d{1}\.\d{3})', output)
> +      # We need to sleep for the browser_log to be available.
> +      # time.sleep(1)
> +      # delete the recorded file with no silence
> +      #os.remove(_RECORDED_NO_SILENCE_)

Will do

@@ +190,5 @@
> +        pesq_report = _START_ + str(result.group(1)) + ',' + str(result.group(2)) + _END_
> +        if os.path.exists('browser_output.txt'):
> +          try:
> +            print pesq_report
> +            f = open ('browser_output.txt', "w+")

I did try to do this in a cleaner way but wasn't able to find a way to pass browser_config. I need some help here.

@@ +193,5 @@
> +            print pesq_report
> +            f = open ('browser_output.txt', "w+")
> +            f.write(pesq_report)
> +          except IOError,ValueError:
> +            print "BAM !!!!!!!!!!!!!!!!!!!!!!!!!!!!"

:) ... the BAM !!! should go away

::: talos/results.py
@@ +296,5 @@
>  
>      # tokens for the report types
>      report_tokens = [('tsformat', ('__start_report', '__end_report')),
> +                     ('tpformat', ('__start_tp_report', '__end_tp_report')),
> +                     ('mediaformat',('__start_media_report','__end_media_report'))

As of today, I have made it mimic the startup test. Am I missing something here?

::: talos/startup_test/media/audio_playback.html
@@ +34,5 @@
> +
> +    startTest("input16.wav");
> +
> +  </script>
> +</html>

Yes, this file is not used now. The idea is to have another (startup) test for this file, once we figure out the integration with setup.py and cleanup.py and the related Open Issue. I will be sending a mail in this regard.

::: talos/startup_test/media/pc_audio_mozcapture.html
@@ +1,3 @@
> +<html>
> +<head>
> +  <title>Simple PeerConnection Audio Test</title>

Yes, this is indeed a copy of that file, but it uses mozCaptureStreamUntilEnded(), which isn't the case in the original file (unless I have understood it wrong). I tried to reuse mochitest within Talos, but it ended up requiring SpecialPowers and SimpleTest.js dependencies, and I was not sure we should pull another framework into the Talos framework.
So for starters, this option looked feasible.

@@ +31,5 @@
> +  let pc1_offer;
> +  let pc2_answer;
> +
> +  function failed(code) {
> +    alert("Something Failed " + code);

Good point. I will update it to do as suggested

@@ +64,5 @@
> +  }
> +
> +  // pc1.setRemote finished, media should be running!
> +  function step6() {
> +    log("HIP HIP HOORAY");

Agreed. The end to this case is when the local audio element completes playing the file.

@@ +76,5 @@
> +    pc1.onaddstream = function(obj) {
> +      log("pc1 got remote stream from pc2 " + obj.type);
> +      // we are not playing audio to avoid noise
> +      //pc2audio.mozSrcObject = obj.stream;
> +      //pc2audio.play();

These are commented out since we do a one-way audio test. I will probably remove the commented-out lines.

@@ +79,5 @@
> +      //pc2audio.mozSrcObject = obj.stream;
> +      //pc2audio.play();
> +    }
> +    pc2.onaddstream = function(obj) {
> +      log("pc2 got remote stream from pc1 " + obj.type);

This logs to a div element in the page.

@@ +89,5 @@
> +    localaudio.addEventListener('ended', function(evt) {
> +      end(true);
> +    },false);
> +
> +    //localaudio.src = "input.ogg";

This comment must be removed, since we are using input16.wav for now.

@@ +105,5 @@
> +
> +  function end(status) {
> +    // let's wait for cleanup to happen before logging
> +    if(window.dump) {
> +        dumpLog('__startTimestamp' + Date.now() + '__endTimestamp\n');

I shall update this to match the overall file

::: talos/startup_test/tspaint_test.html
@@ +13,5 @@
>  <script language="javascript" type="text/javascript">
>  var gBegin = document.location.search.split('=')[1]; // ?begin=nnnnn
>  
> +alert("Hmmmm" + SpecialPowers);
> +alert("Hmmmm" + SpecialPowers.prototype.wrap);

Agree with you. This file is not part of this patch and will not be updated in the next version of this patch

::: talos/test.py
@@ +486,5 @@
> +    audioUtils = media_utils.AudioUtils()
> +    # input audio file is 10 seconds long. A 5 seconds
> +    # is added extra to make sure we get maximum audio
> +    # recorder
> +    audioUtils.startRecording(15)

This is the open issue about how the media test handles recording and PESQ computation. I will be sending out an email to discuss this further
Attachment #8343897 - Attachment is obsolete: true
Attachment #8343897 - Attachment is patch: false
Attached patch Part 1: Utilties for media operations (obsolete) (deleted) — Splinter Review
Attached file Part 2: Server logic to run media-tests (obsolete) (deleted) —
Attached patch Part 3: JS and HTML Media Tests (obsolete) (deleted) — Splinter Review
Attachment #8350382 - Flags: review?(rjesup)
Attachment #8350382 - Flags: review?(jmaher)
Attachment #8350384 - Flags: review?(rjesup)
Attachment #8350384 - Flags: review?(jmaher)
Attachment #8350385 - Flags: review?(rjesup)
Attachment #8350385 - Flags: review?(jmaher)
Attachment #8350386 - Flags: review?(jmaher)
The set of new patches uploaded splits the work into 4 parts as below:
 Part 1: Utilities for performing media operations such as recording and PESQ calculation
 Part 2: Logic for a standalone httpd server for running media tests and interfacing with the utilities in Part 1 above
 Part 3: JavaScript test logic and the set of tests run by our server in Part 2
 Part 4: Integration of all the above parts into Talos, based on :jmaher's latency work. This part needs some exploration once we decide on the right format for media test results, but for now the current way of reporting seems sufficient.
Attachment #8343901 - Attachment is obsolete: true
Comment on attachment 8350382 [details] [diff] [review]
Part 1: Utilties for media operations

Review of attachment 8350382 [details] [diff] [review]:
-----------------------------------------------------------------

a lot of little nits

::: talos/startup_test/media/media_utils.py
@@ +15,5 @@
> +import re
> +import subprocess
> +import threading
> +import time
> +from subprocess  import Popen, PIPE

you import these specifically, but in line 16 you import subprocess.  I see calls to subprocess.Popen in the source; let's just remove this line and keep line 16.

@@ +21,5 @@
> +
> +# This Directory
> +here = os.path.dirname(os.path.realpath(__file__))
> +# Parent
> +parent = os.path.abspath(os.path.join(here,os.pardir))

lines 22 and 24 are unnecessary comments

@@ +70,5 @@
> +        cmd = ['pacmd', 'set-source-volume', self.rec_device, _VOLUME_100_PERCENT_]
> +        cmd = [str(s) for s in cmd]
> +        talos.utils.debug("Running PACMD for Setting Volume %s", cmd)
> +        p = subprocess.Popen(cmd, stdout=PIPE, stderr=PIPE)
> +        output, error = p.communicate()

if we don't use output or error, then lets just make this line:
p.communicate()

@@ +74,5 @@
> +        output, error = p.communicate()
> +        # we ignore the return code
> +
> +  # Start recording.
> +  def run(self):

we assume rec_duration and rec_device have been set in this function.  While that is safe, I would like to see some checking here.

@@ +94,5 @@
> +
> +      # Run the commands the PIPE them together
> +      p1 = Popen(pa_command, stdout=PIPE)
> +      p2 = Popen(sox_command, stdin=p1.stdout, stdout=PIPE)
> +      retcode = p2.communicate()[0]

do we ever use retcode?

@@ +120,5 @@
> +    def setupAudioDeviceForRecording(self):
> +      # Use pactl to find the monitor for our Sink
> +      # This picks the first sink available
> +      cmd = ['pactl', 'list']
> +      talos.utils.debug("Running PATCL Command %s", cmd)

s/PATCL/PACTL/

@@ +124,5 @@
> +      talos.utils.debug("Running PATCL Command %s", cmd)
> +      output = subprocess.check_output(cmd)
> +      result = re.search('\s*Name: (\S*\.monitor)',output)
> +      if result:
> +	self.recorder.setRecordingDevice(result.group(1))

we are assuming recorder != None (default value in this class)

@@ +130,5 @@
> +
> +    # Run PESQ on the audio reference file and recorded file
> +    def computePESQScore(self):
> +      _START_ = '__start_media_report'
> +      _END_   = '__end_media_report'

as indicated in a previous review, I would like this to be __start_report since this is living as a startup style test (i.e. with no pageloader)
Attachment #8350382 - Flags: review?(jmaher) → review+
Comment on attachment 8350384 [details]
Part 2: Server logic to run media-tests

Review of attachment 8350384 [details]:
-----------------------------------------------------------------

too many little details and a few real questions I need answers to before I r+

::: talos/startup_test/media/media_manager.py
@@ +48,5 @@
> +@mozhttpd.handlers.json_response
> +def parseGETUrl(request):
> +    # Parse the url and invoke appropriate handlers
> +    url_path = request.path.lower()
> +    talos.utils.debug("ParseUrl: Request is %s ", url_path)

too much debugging going on here.

@@ +54,5 @@
> +      return (handleAudioRequest(request))
> +    elif url_path.find('server') != -1:
> +      return (handleServerConfigRequest(request))
> +    else:
> +      return (500, {'ERROR': request.path})

return parseUnknownUrl(request)?

@@ +55,5 @@
> +    elif url_path.find('server') != -1:
> +      return (handleServerConfigRequest(request))
> +    else:
> +      return (500, {'ERROR': request.path})
> +

4 space indent the entire function

@@ +62,5 @@
> +      return (500, {'ERROR': request.path})
> +
> +# Handler for Server Configuration Commands
> +def handleServerConfigRequest(request):
> +    talos.utils.debug("Config Request %s ", request.path)

too much debugging going on

@@ +78,5 @@
> +      return (parseAudioRecorderRequest(request))
> +    elif request.path.find('pesq') != -1:
> +     return(parsePESQRequest(request))
> +    else:
> +      return (500, {'ERROR': request.path})

4 space indent.

@@ +102,5 @@
> +def parsePESQRequest(request):
> +    if request.path.find('compute') != -1:
> +      results = ObjectDb.audio_utils.computePESQScore();
> +      talos.utils.debug("PESQ Results %s", results)
> +      return (200, {'PESQ-SCORE' : results}) 

Nit: remove the whitespace at the end of the line.  Also, this mixes 2- and 4-space indentation; please keep it at 4 spaces.

@@ +117,5 @@
> +                                                     { 'method'   : 'GET',
> +                                                       'path'     : '/server/?',
> +                                                       'function' : parseGETUrl } ])
> +    talos.utils.info("Server %s at %s:%s" ,
> +      httpd_server.docroot, httpd_server.host, httpd_server.port);

remove, trailing semicolon

@@ +124,5 @@
> +
> +# Kick-off the firefox process with passed in profile
> +# talos_results_url is appended to the URL loaded if passed in
> +# during invocation. This enables the test page to dump the
> +# results to Talos MozHttpdServer.

can you put this comment in a """ block?

@@ +128,5 @@
> +# results to Talos MozHttpdServer.
> +def open_browser(browser, profile):
> +    url = __TEST_HTML_PAGE__
> +    if ObjectDb.talos_results_url:
> +      url =  url + "?" + "results="+ObjectDb.talos_results_url

4 space indentation.

@@ +131,5 @@
> +    if ObjectDb.talos_results_url:
> +      url =  url + "?" + "results="+ObjectDb.talos_results_url
> +
> +    talos.utils.debug("Test Page Url : %s", url)
> +    command = [ browser, '-profile', profile, url]

add a '-no-remote' in here as well.

@@ +136,5 @@
> +    command = [str(s) for s in command]
> +    talos.utils.debug("Browser Launch Command: %s ", command)
> +    browser_proc = talos.talosProcess.talosProcess(command,
> +                                                   env=os.environ.copy())
> +    browser_proc.run()

please include a timeout in here as well as a try/except around this so we can handle timeouts and other errors.

@@ +161,5 @@
> +    talos.utils.debug("Profile Path %s " , options.profile)
> +    talos.utils.debug("Browser Path %s " , options.browser)
> +    talos.utils.debug("Browser Log File %s " , options.logfile)
> +    talos.utils.debug("Talos Path %s " , options.talos_path)
> +    talos.utils.debug("Talos Results URL %s " , options.talos_results)

these debug messages are a bit too much, I would ignore them for now.  Also, can we support a json file with the data you care about?  We do that in xperf:
http://hg.mozilla.org/build/talos/file/2bcf422011d1/talos/xtalos/xtalos.py

the config file is generated here:
http://hg.mozilla.org/build/talos/file/tip/talos/utils.py#l241

@@ +167,5 @@
> +    ObjectDb.talos_results_url = options.talos_results
> +
> +    # 1. Create handle to the AudioUtils
> +    ObjectDb.audio_utils = media_utils.AudioUtils()
> +    ObjectDb.audio_utils.setBrowserLogFile(options.logfile)

I don't understand ObjectDb - is this an instance of the class?  A global variable?

@@ +172,5 @@
> +
> +    # 2. Kick off the browser
> +    open_browser(options.browser, options.profile)
> +
> +    # 3. Start the httpd server

why do we start the server after the browser?
Attachment #8350384 - Flags: review?(jmaher) → review-
Comment on attachment 8350385 [details] [diff] [review]
Part 3: JS and HTML Media Tests

Review of attachment 8350385 [details] [diff] [review]:
-----------------------------------------------------------------

this looks pretty sane.
Attachment #8350385 - Flags: review?(jmaher) → review+
Comment on attachment 8350386 [details]
Part 4: Integration into Talos based on latency-benchmark work

Review of attachment 8350386 [details]:
-----------------------------------------------------------------

I would like to see this cleaned up a bit before r+, very close and looking good!

::: talos/run_tests.py
@@ +135,5 @@
> +  params = '?'.join(request.uri.split('?')[1:])
> +  params = urllib.unquote(params).decode('utf-8')
> +  data = json.loads(params)
> +  with open(results_log, 'w') as fhandle:
> +    fhandle.write("__start_media_report\n");

Trailing semicolon; and please use __start_report, not __start_media_report.  Also, I recall seeing this output from the media_utils file - why do we have it in two places?

@@ +142,5 @@
> +    fhandle.write("__end__media_report\n");
> +    #TODO: figure out this in more detail - I want it to be more accurate
> +    fhandle.write('__startTimestamp%s__endTimestamp\n' % int(time.time()*1000));
> +    fhandle.write("__startBeforeLaunchTimestamp%d__endBeforeLaunchTimestamp\n" % int(time.time()*1000))
> +    fhandle.write("__startAfterTerminationTimestamp%d__endAfterTerminationTimestamp\n" % int(time.time()*1000))

I would like the startTimestamp to at least be prior to the process execution; please fix that one and we can figure out __startBeforeLaunchTimestamp later.

@@ +146,5 @@
> +    fhandle.write("__startAfterTerminationTimestamp%d__endAfterTerminationTimestamp\n" % int(time.time()*1000))
> +
> +  return (200, { 'called': 1,
> +                 'data': {},
> +                 'query': request.query })

this file is 4 space indent, please adhere to the style.

::: talos/test.py
@@ +470,5 @@
> +    """
> +    abstract base class for media tests.
> +    """
> +    url_timestamp = False
> +    tpcycles = 1 # number of time to run each page

tpcycles isn't used for startup test style runs.

@@ +473,5 @@
> +    url_timestamp = False
> +    tpcycles = 1 # number of time to run each page
> +    preferences = {'media.navigator.enabled': True, 'media.peerconnection.enabled': True,'media.navigator.permission.disabled': True}
> +
> +class media_tests(TmBase):

why do you create TmBase?  If you have >1 test in the future it will make sense- if you do that please add:
 desktop = True
 mobile = False

@@ +481,5 @@
> +    tpcycles = 1
> +    desktop = True
> +    mobile = False
> +    # Question to :jmaher --> How to get browser_config[browser_log] in here
> +    url = 'python ${talos}/startup_test/media/media_manager.py -p ${profile} -b ${firefox} -l browser_output.txt -t ${talos} -r http://localhost:15707/results'

You are assuming this is localhost and port 15707.  Also, why are you hardcoding browser_output.txt and a results server in here?

Please modify http://hg.mozilla.org/build/talos/file/tip/talos/utils.py#l120 if you need to add additional variables.
Attachment #8350386 - Flags: review?(jmaher) → review-
Comment on attachment 8350384 [details]
Part 2: Server logic to run media-tests

Review of attachment 8350384 [details]:
-----------------------------------------------------------------

I will fix the indent and spacing nits to be uniform across all the patches.

::: talos/startup_test/media/media_manager.py
@@ +117,5 @@
> +                                                     { 'method'   : 'GET',
> +                                                       'path'     : '/server/?',
> +                                                       'function' : parseGETUrl } ])
> +    talos.utils.info("Server %s at %s:%s" ,
> +      httpd_server.docroot, httpd_server.host, httpd_server.port);

ah !! will remove it

@@ +124,5 @@
> +
> +# Kick-off the firefox process with passed in profile
> +# talos_results_url is appended to the URL loaded if passed in
> +# during invocation. This enables the test page to dump the
> +# results to Talos MozHttpdServer.

Yes

@@ +131,5 @@
> +    if ObjectDb.talos_results_url:
> +      url =  url + "?" + "results="+ObjectDb.talos_results_url
> +
> +    talos.utils.debug("Test Page Url : %s", url)
> +    command = [ browser, '-profile', profile, url]

Will do so

@@ +136,5 @@
> +    command = [str(s) for s in command]
> +    talos.utils.debug("Browser Launch Command: %s ", command)
> +    browser_proc = talos.talosProcess.talosProcess(command,
> +                                                   env=os.environ.copy())
> +    browser_proc.run()

Sure, will do so

@@ +167,5 @@
> +    ObjectDb.talos_results_url = options.talos_results
> +
> +    # 1. Create handle to the AudioUtils
> +    ObjectDb.audio_utils = media_utils.AudioUtils()
> +    ObjectDb.audio_utils.setBrowserLogFile(options.logfile)

ObjectDb is a global used to hold all the needed object handles (such as the browser process and log file name) in one structure, rather than defining multiple global variables.

@@ +172,5 @@
> +
> +    # 2. Kick off the browser
> +    open_browser(options.browser, options.profile)
> +
> +    # 3. Start the httpd server

The server is started after the browser because we start the server in blocking mode, which blocks further execution.
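The ordering constraint can be illustrated with a stdlib stand-in for mozhttpd (this sketch uses `http.server`, with a thread in place of the real browser process): anything that must run concurrently has to be launched before entering the blocking serve call.

```python
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'ok')

    def log_message(self, fmt, *args):
        pass  # keep request logging quiet

server = HTTPServer(('127.0.0.1', 0), Handler)
port = server.server_address[1]

# 1. Launch the "browser" first -- a thread here, Firefox in the real test.
responses = []
browser = threading.Thread(
    target=lambda: responses.append(
        urlopen('http://127.0.0.1:%d/' % port).read()))
browser.start()

# 2. Only then enter the server's blocking call.  handle_request() returns
#    after one request; a serve-forever loop (as mozhttpd's blocking mode
#    uses) would never return.
server.handle_request()
browser.join()
```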
Comment on attachment 8350386 [details]
Part 4: Integration into Talos based on latency-benchmark work

Review of attachment 8350386 [details]:
-----------------------------------------------------------------

::: talos/run_tests.py
@@ +135,5 @@
> +  params = '?'.join(request.uri.split('?')[1:])
> +  params = urllib.unquote(params).decode('utf-8')
> +  data = json.loads(params)
> +  with open(results_log, 'w') as fhandle:
> +    fhandle.write("__start_media_report\n");

I will remove the output from the media_utils file. I had kept it in case we don't go with the collectResults-based approach. I will also modify this to be __start_report

@@ +142,5 @@
> +    fhandle.write("__end__media_report\n");
> +    #TODO: figure out this in more detail - I want it to be more accurate
> +    fhandle.write('__startTimestamp%s__endTimestamp\n' % int(time.time()*1000));
> +    fhandle.write("__startBeforeLaunchTimestamp%d__endBeforeLaunchTimestamp\n" % int(time.time()*1000))
> +    fhandle.write("__startAfterTerminationTimestamp%d__endAfterTerminationTimestamp\n" % int(time.time()*1000))

I can send the startTimestamp in the JSON GET request, taken before the tests are run. Then we can just dump the timestamp as we dump the results.
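That round trip can be sketched with the stdlib (the BEGIN_TIME_STAMP key matches the patch; the PESQ value is made up): the test page encodes its results dict as JSON in the query string, and the collector decodes it the same way run_tests.py does.

```python
import json
import time
from urllib.parse import quote, unquote

# Client side: take the timestamp before the test runs, ship it with results.
payload = {'BEGIN_TIME_STAMP': int(time.time() * 1000), 'PESQ-SCORE': '3.252'}
uri = '/results?' + quote(json.dumps(payload))

# Server side, mirroring collectResults() in run_tests.py:
params = '?'.join(uri.split('?')[1:])
data = json.loads(unquote(params))
begin_time_stamp = int(data['BEGIN_TIME_STAMP'])
```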

@@ +146,5 @@
> +    fhandle.write("__startAfterTerminationTimestamp%d__endAfterTerminationTimestamp\n" % int(time.time()*1000))
> +
> +  return (200, { 'called': 1,
> +                 'data': {},
> +                 'query': request.query })

Agreed. I will follow the 4 space indent style.
Comment on attachment 8350386 [details]
Part 4: Integration into Talos based on latency-benchmark work

Review of attachment 8350386 [details]:
-----------------------------------------------------------------

::: talos/test.py
@@ +470,5 @@
> +    """
> +    abstract base class for media tests.
> +    """
> +    url_timestamp = False
> +    tpcycles = 1 # number of time to run each page

Agreed. It is residue from my earlier attempt at this. I will remove it

@@ +473,5 @@
> +    url_timestamp = False
> +    tpcycles = 1 # number of time to run each page
> +    preferences = {'media.navigator.enabled': True, 'media.peerconnection.enabled': True,'media.navigator.permission.disabled': True}
> +
> +class media_tests(TmBase):

I probably don't need TmBase; if we need more JS tests, they will go in startup_test/media/html. Let me remove TmBase and make my test a child of Ts, as you do for the latency benchmark

@@ +481,5 @@
> +    tpcycles = 1
> +    desktop = True
> +    mobile = False
> +    # Question to :jmaher --> How to get browser_config[browser_log] in here
> +    url = 'python ${talos}/startup_test/media/media_manager.py -p ${profile} -b ${firefox} -l browser_output.txt -t ${talos} -r http://localhost:15707/results'

Given that results_log will be used in collectResults() in run_tests.py, I can get rid of browser_output.txt.

The results server URL (http://localhost:15707/results) is needed to let our JS script log the Talos results eventually.

I probably did not understand your question here...
Attachment #8350382 - Flags: review?(rjesup) → review+
Attachment #8350385 - Flags: review?(rjesup) → review+
Comment on attachment 8350384 [details]
Part 2: Server logic to run media-tests

Review of attachment 8350384 [details]:
-----------------------------------------------------------------

I really don't need to be a reviewer on this.
Attachment #8350384 - Flags: review?(rjesup)
Attachment #8350386 - Attachment is obsolete: true
Attachment #8350386 - Attachment is patch: false
Comment on attachment 8350824 [details] [diff] [review]
Part 4: Integration into Talos based on latency-benchmark work

Review of attachment 8350824 [details] [diff] [review]:
-----------------------------------------------------------------

Incorporates the following changes based on suggestions from :jmaher
1. Added startTimeStamp as part of the JSON request in collectResults
2. Removed the dependency on the mozHttpd port by substituting the right URL in PerfConfigurator
3. Made the media test a direct subclass of TsBase
4. Minor nits
Attachment #8350824 - Flags: review?(jmaher)
Attachment #8350384 - Attachment is obsolete: true
Attachment #8350384 - Attachment is patch: false
Attached file Part 2: Server logic to run media-tests (obsolete) (deleted) —
Attachment #8350833 - Attachment is obsolete: true
Attachment #8350833 - Attachment is patch: false
Attached patch Part 2: Server logic to run media-tests (obsolete) (deleted) — Splinter Review
Comment on attachment 8350836 [details] [diff] [review]
Part 2: Server logic to run media-tests

Review of attachment 8350836 [details] [diff] [review]:
-----------------------------------------------------------------

This patch incorporates the following inputs:
1. Make indentation 4 spaces instead of 2
2. Remove the dependency on the browser log file, since Part 4 handles logging via collectResults()
3. Add a timeout for the browser process
4. Fix minor nits overall
5. Remove excessive debugging
Attachment #8350836 - Flags: review?(jmaher)
Attached patch PESQ Binary built for Ubuntu (deleted) — Splinter Review
Attachment #8350841 - Flags: review?(jmaher)
Attached patch JS and HTML Media Tests (obsolete) (deleted) — Splinter Review
Taking r+ forward from :jmaher and :jesup. It also adds sending the timestamp to Talos
Attachment #8350385 - Attachment is obsolete: true
Attached file Part 2: Server logic to run media-tests (obsolete) (deleted) —
Attachment #8350836 - Attachment is obsolete: true
Attachment #8350836 - Flags: review?(jmaher)
Attachment #8351101 - Attachment filename: Bug909524_2.patch → Part 2: Server logic to run media-tests
Attachment #8351101 - Attachment description: Bug909524_2.patch → Part 2: Server logic to run media-tests
Attachment #8351101 - Attachment filename: Part 2: Server logic to run media-tests → Bug909524_2.patch
Attachment #8351101 - Attachment is obsolete: true
Attachment #8351101 - Attachment is patch: false
Attached patch Part 2: Server logic to run media-tests (obsolete) (deleted) — Splinter Review
Attached file Part 2: Server logic to run media-tests (obsolete) (deleted) —
Attachment #8351102 - Attachment is obsolete: true
Attachment #8351104 - Attachment is obsolete: true
Attachment #8351104 - Attachment is patch: false
Attached patch Part 2: Server logic to run media-tests (obsolete) (deleted) — Splinter Review
Comment on attachment 8351106 [details] [diff] [review]
Part 2: Server logic to run media-tests

Update on the earlier patch that incorporates comments raised
 + Indent styling with 4 spaces
 + Error Handling
 + Remove dependency on browser_log
 + minor nits
Attachment #8351106 - Flags: review?(jmaher)
Attached patch Part 1: Utilties for media operations (obsolete) (deleted) — Splinter Review
Taking R+ forward
Attachment #8350382 - Attachment is obsolete: true
Attachment #8351110 - Attachment description: Bug909524_1.patch → Part 1: Utilties for media operations
Attached patch Part 3: JS and HTML Media Tests (deleted) — Splinter Review
Taking R+ forward
Attachment #8350845 - Attachment is obsolete: true
Attached audio Input Audio .wav file for tests. (deleted) —
Attachment #8350841 - Flags: review?(jmaher)
Comment on attachment 8351106 [details] [diff] [review]
Part 2: Server logic to run media-tests

Review of attachment 8351106 [details] [diff] [review]:
-----------------------------------------------------------------

everything is good except many of the return values need to remove the extra parens.

you have:
return (errorMessage(message))

and it should be:
return errorMessage(message)

please remove the parens for all return values that it can be removed from.

::: talos/startup_test/media/media_manager.py
@@ +58,5 @@
> +def parseGETUrl(request):
> +    # Parse the url and invoke appropriate handlers
> +    url_path = request.path.lower()
> +    if url_path.find('audio') != -1:
> +        return (handleAudioRequest(request))

why do you have () around the return value?

@@ +112,5 @@
> +            return (200, {'PESQ-SCORE' : message})
> +        else:
> +            return (errorMessage(message))
> +    else:
> +        return (rrorMessage(request.path))

s/rrorMessage/errorMessage/

@@ +140,5 @@
> +    url = __TEST_HTML_PAGE__
> +    if ObjectDb.talos_results_url:
> +        url =  url + "?" + "results="+ObjectDb.talos_results_url
> +
> +    command = [ browser, '-profile', profile, url, '-no-remote']

put -no-remote before the url
Attachment #8351106 - Flags: review?(jmaher) → review+
Comment on attachment 8350824 [details] [diff] [review]
Part 4: Integration into Talos based on latency-benchmark work

Review of attachment 8350824 [details] [diff] [review]:
-----------------------------------------------------------------

all of these issues are minor and mostly style related

::: talos/PerfConfigurator.py
@@ +275,5 @@
> +        'plugins.update.url': 'http://127.0.0.1/plugins-dummy/updateCheckURL',
> +        'media.navigator.enabled': True,
> +        'media.peerconnection.enabled': True,
> +        'media.navigator.permission.disabled': True,
> +        'media.capturestream_hints.enabled': True

these media prefs- will they affect anything else?  We have a lot of pages with a lot of random content in them.

@@ +699,5 @@
> +        # test's url input. results url will be used in tests
> +        # to log results to be collected by the Talos internal
> +        # mozHttpd server.
> +        if 'http://localhost/results' in url:
> +            replace_url = 'http://%s/%s' % (webserver, 'results')

replace_url = 'http://%s/results' % webserver

::: talos/run_tests.py
@@ +134,5 @@
> +  """
> +  params = '?'.join(request.uri.split('?')[1:])
> +  params = urllib.unquote(params).decode('utf-8')
> +  data = json.loads(params)
> +  begin_time_stamp = None

I would initialize this with int(time.time()*1000)

@@ +141,5 @@
> +    for item in data:
> +        if item == 'BEGIN_TIME_STAMP':
> +            begin_time_stamp = int(data[item])
> +        else:
> +            fhandle.write("%s,%s\n" % (item, data[item]));

2 space indent.
remove trailing semicolon!!!

@@ +145,5 @@
> +            fhandle.write("%s,%s\n" % (item, data[item]));
> +    fhandle.write("__end__report\n")
> +    # fix up time_stamp if not provided
> +    if not begin_time_stamp:
> +        begin_time_stamp = int(time.time()*1000)

remove this condition

@@ +174,5 @@
>    port = url.port
>  
>    if port:
>      import mozhttpd
> +    return mozhttpd.MozHttpd(host=url.hostname, port=int(port), docroot=here, 

remove trailing whitespace at the end of the line.

@@ +309,5 @@
>    utils.startTimer()
>    utils.stamped_msg(title, "Started")
>    for test in tests:
>      testname = test['name']
> +    test['browser_log'] = browser_config['browser_log']

why do we need to assign the browser_log to the test object?

::: talos/test.py
@@ +481,5 @@
>  tests = [ts_paint, ts, tsvg, tdhtml,
>           tspaint_places_generated_max, tspaint_places_generated_med,
>           tp4m, tp5n, tp5o, tpaint, tresize,
>           trobopan, tcheckerboard, tprovider, tcheck2, tcanvasmark,
> +         dromaeo_css, dromaeo_dom, v8_7, kraken, media_tests

you need a trailing ,
Attachment #8350824 - Flags: review?(jmaher) → review+
Comment on attachment 8351106 [details] [diff] [review]
Part 2: Server logic to run media-tests

Review of attachment 8351106 [details] [diff] [review]:
-----------------------------------------------------------------

::: talos/startup_test/media/media_manager.py
@@ +58,5 @@
> +def parseGETUrl(request):
> +    # Parse the url and invoke appropriate handlers
> +    url_path = request.path.lower()
> +    if url_path.find('audio') != -1:
> +        return (handleAudioRequest(request))

I will go ahead and remove the extra parentheses; they came in due to my old habits :)

@@ +140,5 @@
> +    url = __TEST_HTML_PAGE__
> +    if ObjectDb.talos_results_url:
> +        url =  url + "?" + "results="+ObjectDb.talos_results_url
> +
> +    command = [ browser, '-profile', profile, url, '-no-remote']

Will do so
Comment on attachment 8350824 [details] [diff] [review]
Part 4: Integration into Talos based on latency-benchmark work

Review of attachment 8350824 [details] [diff] [review]:
-----------------------------------------------------------------

::: talos/PerfConfigurator.py
@@ +275,5 @@
> +        'plugins.update.url': 'http://127.0.0.1/plugins-dummy/updateCheckURL',
> +        'media.navigator.enabled': True,
> +        'media.peerconnection.enabled': True,
> +        'media.navigator.permission.disabled': True,
> +        'media.capturestream_hints.enabled': True

These are very WebRTC-specific and should not affect other tests.

@@ +699,5 @@
> +        # test's url input. results url will be used in tests
> +        # to log results to be collected by the Talos internal
> +        # mozHttpd server.
> +        if 'http://localhost/results' in url:
> +            replace_url = 'http://%s/%s' % (webserver, 'results')

Ah! Will update it as suggested.

::: talos/run_tests.py
@@ +134,5 @@
> +  """
> +  params = '?'.join(request.uri.split('?')[1:])
> +  params = urllib.unquote(params).decode('utf-8')
> +  data = json.loads(params)
> +  begin_time_stamp = None

Agreed. Will update

@@ +141,5 @@
> +    for item in data:
> +        if item == 'BEGIN_TIME_STAMP':
> +            begin_time_stamp = int(data[item])
> +        else:
> +            fhandle.write("%s,%s\n" % (item, data[item]));

Coding in JS and Python at the same time has some unwanted side effects :-D ... I will get rid of the semicolon. Thanks!

@@ +145,5 @@
> +            fhandle.write("%s,%s\n" % (item, data[item]));
> +    fhandle.write("__end__report\n")
> +    # fix up time_stamp if not provided
> +    if not begin_time_stamp:
> +        begin_time_stamp = int(time.time()*1000)

Sure

@@ +174,5 @@
>    port = url.port
>  
>    if port:
>      import mozhttpd
> +    return mozhttpd.MozHttpd(host=url.hostname, port=int(port), docroot=here, 

Will do

::: talos/test.py
@@ +481,5 @@
>  tests = [ts_paint, ts, tsvg, tdhtml,
>           tspaint_places_generated_max, tspaint_places_generated_med,
>           tp4m, tp5n, tp5o, tpaint, tresize,
>           trobopan, tcheckerboard, tprovider, tcheck2, tcanvasmark,
> +         dromaeo_css, dromaeo_dom, v8_7, kraken, media_tests

Will do
Taking r+ forward to checkin
Attachment #8351110 - Attachment is obsolete: true
Attachment #8355091 - Attachment description: Bug909524_1.patch → Part 1: Utilties for media operations
Attachment #8355091 - Flags: checkin?(jmaher)
Attachment #8351106 - Attachment is obsolete: true
Comment on attachment 8355092 [details] [diff] [review]
Part 2: Server logic to run media-tests

Review of attachment 8355092 [details] [diff] [review]:
-----------------------------------------------------------------

Taking R+ forward to checkin
Attachment #8355092 - Flags: checkin?(jmaher)
Comment on attachment 8351111 [details] [diff] [review]
Part 3: JS and HTML Media Tests

Review of attachment 8351111 [details] [diff] [review]:
-----------------------------------------------------------------

Taking R+ forward
Attachment #8351111 - Flags: checkin?(jmaher)
Attachment #8350824 - Attachment is obsolete: true
Attachment #8355093 - Attachment description: Bug909524_4.patch → Part 4: Integration into Talos based on latency-benchmark work
Comment on attachment 8355093 [details] [diff] [review]
Part 4: Integration into Talos based on latency-benchmark work

Review of attachment 8355093 [details] [diff] [review]:
-----------------------------------------------------------------

Incorporated comments. Taking R+ forward
Attachment #8355093 - Flags: checkin?(jmaher)
Attachment #8350841 - Flags: checkin?(jmaher)
Comment on attachment 8351112 [details]
Input Audio .wav file for tests.

Input audio file used in the tests.
Attachment #8351112 - Flags: checkin?(jmaher)
Target Milestone: mozilla27 → ---
Testing on try server; once I get success there, I will land on the main talos branch.

Once landed, we need to adjust configs to run this; it might take some work as this is linux-only, but not too difficult.
I am not able to get this running on try server, nor locally. It appears to run the tests (locally, with a couple of small modifications), but it fails to report results. I am using the 6 patches on this bug.
Hello Joel,

   Do you see a browser_output.txt with results from the 2 tests, something similar to http://pastebin.mozilla.org/3921668, being created?
Updated PESQ results reporting per :jmaher's suggestion to better integrate with the results reporting. With this change, ts.txt has appropriate results generated, as shown below:

START
VALUES
qm-pxp01,media_tests,,c0f85061c7d3,20131223145122,1388712436
0,3.45,audio_playback_quality_pesq_lqo
1,3.43,audio_playback_quality_pesq_mos
2,1.79,audio_peer_connection_quality_pesq_lqo
3,2.18,audio_peer_connection_quality_pesq_mos
END
Attachment #8355394 - Flags: review?(jmaher)
Attachment #8355395 - Flags: review?(jmaher)
Attached patch Part 8: Fix up logging file renaming (obsolete) (deleted) — Splinter Review
Comment on attachment 8355396 [details] [diff] [review]
Part 8: Fix up logging file renaming

Review of attachment 8355396 [details] [diff] [review]:
-----------------------------------------------------------------

This patch fixes the results_log renaming (to browser_output.txt) that was missing from the earlier patch set.
Attachment #8355396 - Flags: review?(jmaher)
Comment on attachment 8355395 [details] [diff] [review]
Part 7: Fixup PESQ results generation to enable better results reporting

Review of attachment 8355395 [details] [diff] [review]:
-----------------------------------------------------------------

::: talos/startup_test/media/html/pc_audio_quality.js
@@ +10,5 @@
>  // mozCaptureStreamUntilEnded() is used to replace GUM to feed
>  // audio from an input file into the Peer Connection.
>  // Once played out, the audio at the speaker is recorded to compute
>  // PESQ scores
> + 

nit: remove blank space
Attachment #8355395 - Flags: review?(jmaher) → review+
Comment on attachment 8355396 [details] [diff] [review]
Part 8: Fix up logging file renaming

Review of attachment 8355396 [details] [diff] [review]:
-----------------------------------------------------------------

I believe you need another copy of results_log to browser_log in the normal flow of ttest.
Attachment #8355396 - Flags: review?(jmaher) → review-
Attachment #8350841 - Flags: checkin?(jmaher)
Attachment #8351111 - Flags: checkin?(jmaher)
Attachment #8351112 - Flags: checkin?(jmaher)
Attachment #8355091 - Flags: checkin?(jmaher)
Attachment #8355092 - Flags: checkin?(jmaher)
Attachment #8355093 - Flags: checkin?(jmaher)
Attachment #8355394 - Flags: review?(jmaher)
Attached file Part 8: Fix up logging file renaming (obsolete) (deleted) —
Attachment #8355396 - Attachment is obsolete: true
Comment on attachment 8355492 [details]
Part 8: Fix up logging file renaming

Review of attachment 8355492 [details]:
-----------------------------------------------------------------

Added the missing results_log renaming fix. Thanks, Joel, for pointing it out.
Attachment #8355492 - Flags: review?(jmaher)
Attachment #8355492 - Attachment is obsolete: true
Attachment #8355492 - Attachment is patch: false
Attachment #8355492 - Flags: review?(jmaher)
Attached patch Part 8: Fix up logging file renaming (obsolete) (deleted) — Splinter Review
Attachment #8355503 - Attachment is obsolete: true
Comment on attachment 8355505 [details] [diff] [review]
Part 8: Fix up logging file renaming and command invocation

Review of attachment 8355505 [details] [diff] [review]:
-----------------------------------------------------------------

Fixes the log file rename issue and some nits.
Attachment #8355505 - Flags: review?(jmaher)
This patch works for me locally; it incorporates fixes and cleanups for various commands. I still have problems running this in a larger loop (>5 iterations), and I always get 0.0 on my local computer. One step closer though!
Attachment #8355505 - Attachment is obsolete: true
Attachment #8355505 - Flags: review?(jmaher)
Attachment #8355543 - Flags: review?(snandaku)
Comment on attachment 8355543 [details] [diff] [review]
part 8: glue to make audio tests work (1.0)

Review of attachment 8355543 [details] [diff] [review]:
-----------------------------------------------------------------

One small question about the media_utils.py change

::: talos/startup_test/media/media_utils.py
@@ +134,1 @@
>              return True, "Recording Device: %s Set" % result.group(1)

Wondering what the change is here; the diffs look alike.
Attachment #8355543 - Flags: review?(snandaku) → review+
I replaced a tab with some spaces.

The last remaining item is to fix the local Python webserver to start up in this case. I don't know the best way to do this; there are a few choices. What makes this hard is that in automation we specifically run talos with '--webServer,localhost', which turns off all --develop-based options. I am waiting to see what other changes are required to get this green on try server before recommending a final solution here.
Attached patch Bug909524_9.patch (obsolete) (deleted) — Splinter Review
Attachment #8355628 - Attachment is obsolete: true
Attachment #8355629 - Attachment is obsolete: true
Comment on attachment 8355630 [details] [diff] [review]
Part 9: Enable multiple runs with graceful test timeout handling

Review of attachment 8355630 [details] [diff] [review]:
-----------------------------------------------------------------

This patch enables graceful test timeouts in the case of test failure and also enables multiple cycles of the test.
A sample run of 12 cycles can be found here: http://pastebin.mozilla.org/3930340
Attachment #8355630 - Flags: review?(jmaher)
Attachment #8355635 - Flags: review?(jmaher)
Comment on attachment 8355630 [details] [diff] [review]
Part 9: Enable multiple runs with graceful test timeout handling

Review of attachment 8355630 [details] [diff] [review]:
-----------------------------------------------------------------

::: talos/startup_test/media/html/media_tests.js
@@ +126,5 @@
>    }
>    var test = tests[testIndex];
> +  // Occasionally <audio> tag doesn't play media properly
> +  // and this results in test case to hang. Since it
> +  // never error's out, test.forceTimeout gaurds against this.

s/gaurds/guards/

@@ +132,4 @@
>    test.test();
>  };
>  
> +// Test timedout, go through the next test.

s/timedout/timed out/

@@ +135,5 @@
> +// Test timedout, go through the next test.
> +var checkTimeout = function(test) {
> +  if(!test.finished) {
> +    test.finished = true;
> +    log("test " + test.name + "timedout");

s/timedout/timed out/

::: talos/test.py
@@ +176,5 @@
>  class media_tests(TsBase):
>      """
>      Media Performance Tests
>      """
> +    cycles = 5

ideally I would like this much higher (20); for now, while we work on getting this into the tree, let's stick with 5.
Attachment #8355630 - Flags: review?(jmaher) → review+
Attachment #8355635 - Flags: review?(jmaher) → review+
Taking r+ forward
Depends on: 956957
Landed on the talos repository:
https://hg.mozilla.org/build/talos/rev/27eb2c6ea066

We are still far away, but this gives us the ability to run media_tests locally.

Still to do for turning this on in automation:
1) figure out how to get this running on the machines (I suspect we need to install sox)
2) add support to graphs.mozilla.org
3) add support to turn on this test for linux only (probably buildbot patches for this)
4) test this for stability over time
5) make tweaks, turn it on

We can now focus on small tweaks and making this fit into the production system.
Depends on: 957374
Depends on: 957671
Depends on: 959673
Depends on: 960154
This bug has sort of stalled. Are the next steps here to get the new PESQ replacement tool into a new location and then put up a patch?
This doesn't work on linux32; pesq is not a valid binary there. I have validated that this toolchain and test work on linux64.
What are the next steps here? I have been waiting for the PESQ replacement tool. I haven't seen this treated as a priority; who specifically wants to see this running, and can they help elaborate on the next steps?
Flags: needinfo?(rjesup)
We need to get this landed with SNR to start at least; we have a pile of other bugs in the queue that want to be able to use this framework to add more measurements of delay, video framerate, etc.
Flags: needinfo?(rjesup)
What is SNR? I know that PESQ doesn't work on 32-bit linux, so are we looking at 64-bit linux only for testing? Sorry if my questions are in left field; I am in the dark on WebRTC and audio stuff.
SNR: signal-to-noise ratio. Not a very good measure, but it will flag gross failures/bugs.
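
For the curious, a gross-failure SNR check can be as simple as the sketch below (illustrative only, not the harness code; assumes the reference and degraded signals are already time-aligned sample sequences):

```python
import math

def snr_db(reference, degraded):
    """Signal-to-noise ratio in dB between two aligned sample sequences.

    A gross-failure detector only: it says nothing about perceptual quality
    the way PESQ does, but silence or heavy distortion scores very low.
    """
    signal_power = sum(s * s for s in reference)
    noise_power = sum((s - d) ** 2 for s, d in zip(reference, degraded))
    if noise_power == 0:
        return float('inf')  # identical signals
    return 10 * math.log10(signal_power / noise_power)
```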
Assignee: snandaku → rjesup
Assignee: rjesup → jib
Blocks: 970426
backlog: --- → Fx32+
Whiteboard: [webrtc][getusermedia] → [webrtc][getusermedia][p=5]
Target Milestone: --- → mozilla32
backlog: Fx32+ → Fx32?
Priority: -- → P1
Whiteboard: [webrtc][getusermedia][p=5] → [webrtc][getusermedia][p=1]
Hi Joel, we (Jib, Jesup, and I) believe you're going to be landing this bug as part of getting the WebRTC media metrics stood up in Talos, so I'm assigning this to you. If that's incorrect, please let us know. Thanks for all your help with this!
Assignee: jib → jmaher
This is taking longer than I thought to fix. I am getting 50% success rates and need to do more debugging. I made this p=4 as it will probably take me a couple of days to figure this out, plus all the related docs, patches, and landing/deployment items.

Luckily I can reproduce this on my loaner slave at the same rate as I get on try.
Whiteboard: [webrtc][getusermedia][p=1] → [webrtc][getusermedia][p=4]
Blocks: 1023648
No longer blocks: 970426
backlog: Fx32? → ---
Target Milestone: mozilla32 → ---
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED