Closed Bug 717005 Opened 13 years ago Closed 13 years ago

Store logs in mysql instead of on the filesystem

Categories

(Tree Management Graveyard :: TBPL, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

Status: RESOLVED FIXED

People

(Reporter: justin.lebar+bug, Assigned: Swatinem)

Attachments

(1 file, 4 obsolete files)

<jlebar> edmorley, is there a trick to getting the tbpl summaries to load before I turn 50?
<edmorley> jlebar: not really, I just run lots of tabs in parallel

I seem to recall that, in the past, clicking on a bunch of orange numbers would kick off a bunch of log-parsing requests.  When I'd go back and click on the first number I'd clicked, its log summary would be there.

But now it seems that when I click on two orange numbers, the second click seems to cancel the first log-parsing request.  This of course greatly increases the latency of starring the tree.

I don't know if I'm remembering correctly how it used to work, but can we please go back to how I remember?
I presume there's a separate bug about parsing these logs on the server-side, which would obviate this entire problem, and also make TBPL approximately 100x more fun to use.
Or we could have a button which says "retrieve all teh data!".

What's particularly painful about the current incarnation is that I have to babysit it.  I click on one orange, then wait 30s, then click on the next one, then wait...  It's maddening.
I don't think PHP has any choice about continuing to run when the request gets cancelled, so I'd think doing what you want would mean keeping the XHR alive when you click on, and then off, a particular letter, so the server-side of tbpl would keep fetching the log and requesting bug suggestions even though the client-side has already been told not to show the results. I sort of hope we've always cancelled it, and we didn't actually work the way it seemed like we did before, since keeping them silently alive seems like a risky scheme, given how often I get into the bug 682749 state even without having even more invisible requests running.

But, other than the fact that it might accelerate the time when IT will notice that we never ever delete anything and turn bug 703967 into a P1 blocker, it might be nice if tbpl just fetched and cached the log for every failed job, pretending that someone will actually look at them eventually. Probably a sort of dicey thing to implement, though, since I don't think that there's any promise made about whether or not the log has been uploaded already at the time that the results are stuck into builds-4hr.js.

And that only "might be nice" because nobody actually knows what part of the operation is slow when the whole thing is slow, so having already fetched the log might make zero difference, and having already parsed out the failure lines might make zero difference, and it's possible that the entire slowness is down to slowness querying Bugzilla, and the only way to speed it up would be to have a cache of suggested bugs that I would then start screaming about the first time it refused to show something because it was filed between the time the job finished and the time I wanted to star something.
(In reply to Phil Ringnalda (:philor) from comment #3)
> I don't think PHP has any choice about continuing to run when the request
> gets cancelled,

Actually it does!  The way the old behavior that Justin describes in comment 0 worked was like this:

1. You would click on an orange, which would trigger an XHR.  The server-side PHP script would start to fetch the log in a loop, post-processing it.  You would then cancel the XHR (say, by clicking on another orange), but the PHP script would continue fetching the data, and when it finished downloading and post-processing the log it would write the result to a file.  When you clicked on the first orange again, a second PHP script would start the same fetch; during its fetch loop it would check for the existence of the local cache file on every iteration, so when the first PHP job wrote the file, the second one would see it and serve it as its response, abandoning the download it had in progress.
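A rough sketch of that polling pattern (illustrative only; the path scheme, the postProcess() helper, and the $runId/$type/$logUrl variables are assumptions, not the actual ParallelFileGenerating.php code):

// Each request streams the log, but on every loop iteration it checks whether
// another request has already written the cache file; if so, it serves that
// file and abandons its own download.
$cacheFile = "/cache/" . $runId . "_" . $type . ".gz";
$remote = fopen($logUrl, "r");
$data = "";
while (!feof($remote)) {
  if (file_exists($cacheFile)) {     // some other request finished first
    fclose($remote);
    readfile($cacheFile);            // serve its result instead
    return;
  }
  $data .= fread($remote, 8192);
}
fclose($remote);
file_put_contents($cacheFile, postProcess($data));  // postProcess() is hypothetical
readfile($cacheFile);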

> so I'd think doing what you want would mean keeping the XHR
> alive when you click on, and then off, a particular letter, so the
> server-side of tbpl would keep fetching the log and requesting bug
> suggestions even though the client-side has already been told not to show
> the results. I sort of hope we've always cancelled it, and we didn't
> actually work the way it seemed like we did before, since keeping them
> silently alive seems like a risky scheme, given how often I get into the bug
> 682749 state even without having even more invisible requests running.

Yeah, maybe, but I think your solution below is much better (and is what I had planned to do a long time ago but never got around to).

> But, other than the fact that it might accelerate the time when IT will
> notice that we never ever delete anything and turn bug 703967 into a P1
> blocker, it might be nice if tbpl just fetched and cached the log for every
> failed job, pretending that someone will actually look at them eventually.
> Probably a sort of dicey thing to implement, though, since I don't think
> that there's any promise made about whether or not the log has been uploaded
> already at the time that the results are stuck into builds-4hr.js.

I think this would be a very good solution.  Even if the logs won't be there in time, we can just re-ask for them later.

> And that only "might be nice" because nobody actually knows what part of the
> operation is slow when the whole thing is slow, so having already fetched
> the log might make zero difference, and having already parsed out the
> failure lines might make zero difference, and it's possible that the entire
> slowness is down to slowness querying Bugzilla, and the only way to speed it
> up would be to have a cache of suggested bugs that I would then start
> screaming about the first time it refused to show something because it was
> filed between the time the job finished and the time I wanted to star
> something.

My memory is dusty here, but IIRC the slow part was retrieving the logs.  The Bugzilla queries should be blazing fast compared to the sucky speed of loading the logs.  (Note that we at least used to download the post-processed log, which is the slow one to load.)

Also, IIRC most of the wait time was us waiting for a response from the HTTP server.
We could probably add some instrumentation to clear up these questions and find out where the real bottleneck is.
Prefetching the logs (and bugzilla data?) sounds good to me.  Feel free to morph this bug.
Summary: TBPL should allow more than one log-parsing request at a time → TBPL should pre-download logs and generate cached orange bug suggestions
Very rough wallclock timing exercise, using the Web Console's times:

* click on an Android orange (red, purple) which is almost certain to have an empty summary and a small log from failing before it ran anything, and note how long both the type=tinderbox_print request takes (which should be the one which fetches the log), and how long the type=annotated request takes (which should be the one which waits for ParallelFileGenerating.php to create the cached version, rather than fetching). Right now I'm seeing around 1.5 seconds for tinderbox_print and 3 seconds for annotated.
* click on a Windows debug mochitest-4, which is bound to be a leak, so we'll parse out a failure, but not search for bugs. Right now, I'm seeing around 3 seconds for tinderbox_print, and 10 seconds for annotated.
* click on a failure for which we will really fetch a suggestion, for just one thing. I'm seeing around 20 seconds for annotated.

So I'd expect that decent instrumenting would tell us that fetching isn't really that much of a problem, but we're doing something with parsing that falls over with our grotesquely bloated logs (buildbot chops them off at 50MB, but that's per-step, so the max for a mochitest-oth would be 200MB, and it's not unusual for debug M1 and M4 to flirt with 50MB).

One thing I'm pretty sure is not going terribly well is when http://hg.mozilla.org/users/mstange_themasta.com/tinderboxpushlog/file/962932eba448/php/inc/GzipUtils.php#l38 does $lineArray = array_values($lineArray); - that's where, after going to all the trouble of doing an install on Dreamhost, the very first orange I clicked on hit their memory limit, because apparently that creates a new array from the array_values, and then assigns it, so we wind up with two 50MB arrays for a while.

I'm less sure how to feel about 10 seconds for fetching suggestions - obviously, bug 711639 says we sometimes completely break fetching suggestions, but I don't have any feeling for how long it _ought_ to take.
So then I stared at the numbers for the nearly empty Android failures, and played with a few more, and played with some on tbpl-dev, which had a much higher likelihood of taking very nearly the exact same time for both, and assuming I asked the right question (a rather big assumption, given my usual communications with IT), what we do is we fetch the log from ftp.m.o, write it somewhere like node79.generic_cluster/tbpl.mozilla.org/cache/, and we fetch the log from ftp.m.o, write it to somewhere like node89.generic_cluster/tbpl.mozilla.org/cache/, and in fact nothing that we think about the workings of /cache/ is correct, because it's actually 100 different directories.

Which would certainly explain bug 699711, where we assume that we can find the log in /cache/ rather than trying to find it and creating it if we can't - unless it's an orange that lots of people have loaded, there's only a 1 in 100 chance of a leak-analysis request happening on a server which has the log cached. (And it might explain why bug 703967 hasn't yet turned into a P1 blocker at IT's behest, since we aren't spewing logs into just one directory and never removing them, we're actually spewing them into 100 different directories.)
(In reply to Phil Ringnalda (:philor) from comment #8)
> and in fact nothing that we
> think about the workings of /cache/ is correct, because it's actually 100
> different directories.

Oh! What now? :(
So the cause of this bug might be a totally different one:
- Click on a run to load the summary
- Click another one to push the first into the background
- The first one completes, but is not cached clientside
- Click on the first run again
  -> The request is delivered to a server that does not have the same filesystem cache as the server that served the first request, so it does the same processing all over again?!?
Yeah, I guess when they said "we'll put it on the generic cluster" we were supposed to say "oh, we weren't expecting that, can you put /cache/ on NFS while we rewrite things to store logs as blobs in the db?"
(In reply to Phil Ringnalda (:philor) from comment #11)
> Yeah, I guess when they said "we'll put it on the generic cluster" we were
> supposed to say "oh, we weren't expecting that, can you put /cache/ on NFS

I would say that this is a failure in communication on their side.  :(

> while we rewrite things to store logs as blobs in the db?"

Out of curiosity, how hard would that be?  It's been ages since I've looked at the TBPL code, but if this is something fairly simple, I'm willing to spend some time to fix it, cause it drives me (and everybody else using TBPL) nuts every day!
You honestly want to save multi-megabyte blobs in mysql? (or gzipped blobs)
Well, I have no experience with this kind of workload, as I have always avoided it. Can someone say how it does performance-wise?

But even in that case we would have a slight problem synchronizing this. I could imagine that we could just write a NULL row in the DB when we start parsing the log, then write a string (even if empty) when that is finished.
Other clients would just busy-wait (query + sleep) until a non NULL value arrives.

Ideas?
If we can come up with a key to access the blob, and create a "blob" column for the cached data, and don't do stupid things like free text search over the blob column, we should not face any perf problems.
That is to say, MySQL is perfectly capable of handling multi-megabyte fields efficiently.
(In reply to Arpad Borsos (Swatinem) from comment #13)
> But even in that case we would have a slight problem synchronizing this. I
> could imagine that we could just write a NULL row in the DB when we start
> parsing the log, then write a string (even if empty) when that is finished.
> Other clients would just busy-wait (query + sleep) until a non NULL value
> arrives.
> 
> Ideas?

Hmm not sure I understand the problem.. using transactions, the other clients won't see anything until the transaction is committed. Once committed, the data would be available to all clients at the same time.

Why have the client busy-wait instead of simply returning that there is no data (the client could retry, automatically or user-prompted, if it knows that the log should or will be there shortly)?
(In reply to Ehsan Akhgari [:ehsan] from comment #14)
> If we can come up with a key to access the blob, and create a "blob" column
> for the cached data, and don't do stupid things like free text search over
> the blob column, we should not face any perf problems.

(In reply to Ehsan Akhgari [:ehsan] from comment #15)
> That is to say, MySQL is perfectly capable of handling multi-megabyte
> fields efficiently.

I agree that this is feasible, although something like Elastic Search would certainly be less of a SPOF, and allow for searching which might be useful.

MySQL is more mature and easier for IT to support (which is the same reason TBPL moved from mongo to it when it came to live at tbpl.m.o)

I'd suggest trying MySQL out for this purpose first since the infra is in place, and if ES sounds interesting to anyone we could experiment with it and introduce it incrementally (it could be used as a cache and search engine in front of MySQL, for example).
(In reply to Robert Helmer [:rhelmer] from comment #16)
> (In reply to Arpad Borsos (Swatinem) from comment #13)
> > But even in that case we would have a slight problem synchronizing this. I
> > could imagine that we could just write a NULL row in the DB when we start
> > parsing the log, then write a string (even if empty) when that is finished.
> > Other clients would just busy-wait (query + sleep) until a non NULL value
> > arrives.
> > 
> > Ideas?
> 
> Hmm not sure I understand the problem.. using transactions, the other
> clients won't see anything until the transaction is committed. Once
> committed, the data would be available to all clients at the same time.
> 
> Why have the client busy-wait instead of simply returning that there is no
> data (the client could retry, automatically or user-prompted, if it knows
> that the log should or will be there shortly)?

Re-reading and thinking about the way it's implemented now, is the concern that the clients might compete to *write* the same log? I assumed that "pre-download logs" implied this would be moved to a background process and not be on-demand anymore, if that's not the case then sure, a synchronization method is in order here for writes.
So my idea is as follows:
New table logs with:
runid, int
logtype, either varchar or enum for `raw`, `annotated` and the other types
content, gzipped blob or NULL (see the SQL sketch below)


- The first client gets a request to display log X
- It will try to fetch the log from the db
  - if the row does not exist
    - it will create a row with content NULL and start the processing
  - if content is NULL
    - it will either busy-wait until the log is available
    - or return to the client, which will display a "try again later"
  - if the content is not NULL, just pipe it through

Ideas?
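For concreteness, a minimal SQL sketch of such a table (the table/column names here anticipate the runs_logs table used by the eventual patch; exact types and index names are assumptions):

CREATE TABLE runs_logs (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- AI primary, like the other tables
  buildbot_id INT UNSIGNED NOT NULL,                    -- the run the log belongs to
  type VARCHAR(32) NOT NULL,                            -- 'raw', 'annotated', ...
  content LONGBLOB NULL,                                -- gzipped log; NULL while processing is in progress
  UNIQUE KEY buildbot_id_type (buildbot_id, type)
);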
But yes, a dedicated background process that just does the processing is a better idea. How best to implement it? PHP or Python? Cronjob or a dedicated service?
While we definitely want comment 20, it's a larger project, so I think we also want comment 19 in the meantime.

Note that we should also store the creation date of each row in the table, since we can use that to clean up the old logs periodically for example.
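For example (a hedged sketch; the column name and retention period are arbitrary):

ALTER TABLE runs_logs ADD COLUMN created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;

-- and a periodic cleanup job could then run something like:
DELETE FROM runs_logs WHERE created < NOW() - INTERVAL 30 DAY;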
Attached patch database table for logs (obsolete) (deleted) - Splinter Review
I went with a varchar for the type field. This should be more flexible.
Attachment #588362 - Flags: review?(ehsan)
Attachment #588362 - Flags: feedback?(rhelmer)
Attached patch WIP (obsolete) (deleted) - Splinter Review
This is some early work in progress. This is running (or more precisely: failing) on tbpl.swatinem.de

I see the following problems: I still get races (they manifest as duplicate key violations), so maybe we should use LOCK TABLE or something like that?
In all other cases, pma (phpMyAdmin) says the BLOB is 0 B, so I’m not sure this works correctly at all.

I’m a bit tired now, so if anyone wants to take over, feel free to do so.
Attachment #588396 - Flags: feedback?(rhelmer)
Attachment #588396 - Flags: feedback?(ehsan)
Attachment #588396 - Flags: feedback?(mstange)
Attachment #588362 - Flags: review?(ehsan) → review+
Comment on attachment 588362 [details] [diff] [review]
database table for logs

Hmm, the `id' field seems to be unused.  Can it be removed?
We had this discussion before when migrating to mysql. The consensus was to give all tables an AI (auto-increment) primary key, even if it is unused, as is the case for the runs_notes and builders_history tables.
Comment on attachment 588396 [details] [diff] [review]
WIP

Review of attachment 588396 [details] [diff] [review]:
-----------------------------------------------------------------

::: php/inc/GzipUtils.php
@@ +34,5 @@
> +    $stmt->bindParam(":type", $log['type']);
> +    $stmt->execute();
> +    $stmt->bindColumn(1, $blob, PDO::PARAM_LOB);
> +    $stmt->fetch(PDO::FETCH_BOUND);
> +    fpassthru($blob);

Do you need to close the $blob stream somehow?  (I don't know the PDO APIs very well :-)

@@ +52,5 @@
> +    $stmt->bindParam(":type", $log['type']);
> +    $stmt->execute();
> +    $stmt->bindColumn(1, $blob, PDO::PARAM_LOB);
> +    $stmt->fetch(PDO::FETCH_BOUND);
> +    $content = gzdecode(stream_get_contents($blob));

Ditto.

@@ +63,1 @@
>      }

Hmm, I wonder if this could be done more efficiently, something like:

$lines = preg_split("/([^\n]*\n)/", $content, -1, PREG_SPLIT_NO_EMPTY | PREG_SPLIT_DELIM_CAPTURE);

::: php/inc/ParallelFileGenerating.php
@@ +30,5 @@
> +          SET buildbot_id = :id, type = :type;");
> +        $stmt->execute(array(
> +          ':id' => $log['_id'],
> +          ':type' => $log['type']
> +        ));

I think you should handle the failure of this INSERT gracefully.  It is possible that you query for an existing log and don't find it, and before you get here another process inserts the log entry, which will cause this to fail because of violation of the unique index constraint.  I think the correct way of handling this would be to sleep and continue if the statement fails.
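One hedged way to tolerate that race (a sketch, not the patch itself; it assumes PDO is configured with ERRMODE_EXCEPTION and reuses the $db/$log names from the patch):

try {
  $stmt = $db->prepare("INSERT INTO runs_logs SET buildbot_id = :id, type = :type;");
  $stmt->execute(array(':id' => $log['_id'], ':type' => $log['type']));
  $weOwnTheRow = true;            // this request is responsible for generating the log
} catch (PDOException $e) {
  if ($e->getCode() != 23000)     // 23000 = integrity constraint violation (duplicate key)
    throw $e;
  $weOwnTheRow = false;           // another request got there first; sleep and poll for content
}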

@@ +49,5 @@
> +      SELECT content IS NULL as `inprogress`
> +      FROM runs_logs
> +      WHERE buildbot_id = :id AND type = :type;");
> +    $stmt->execute(array(":id" => $log["_id"], ":type" => $log["type"]));
> +    $run = $stmt->fetch(PDO::FETCH_ASSOC);

queryLog doesn't return anything, so $exists above will always be null.  Then you attempt to insert into the DB using an id/type pair which already exists, which fails because of the unique index.

::: php/inc/gzdecode.php
@@ +1,2 @@
> +<?php
> +// taken from: http://at2.php.net/manual/en/function.gzdecode.php#82930

I don't know a lot about the gzip file format, so I'll just trust this code.  :-)
Attachment #588396 - Flags: feedback?(ehsan) → feedback-
(In reply to Arpad Borsos (Swatinem) from comment #25)
> We had this discussion before when migrating to mysql. The consensus was to
> give all tables a AI primary, even if it is unused, as is the case for the
> runs_notes and builders_history tables.

I didn't know that.  In that case, it's fine by me.  :-)
(In reply to Ehsan Akhgari [:ehsan] from comment #26)
> ::: php/inc/GzipUtils.php
> @@ +34,5 @@
> > +    $stmt->bindParam(":type", $log['type']);
> > +    $stmt->execute();
> > +    $stmt->bindColumn(1, $blob, PDO::PARAM_LOB);
> > +    $stmt->fetch(PDO::FETCH_BOUND);
> > +    fpassthru($blob);
> 
> Do you need to close the $blob stream somehow?  (I don't know the PDO APIs
> very well :-)
> 
> @@ +52,5 @@
> > +    $stmt->bindParam(":type", $log['type']);
> > +    $stmt->execute();
> > +    $stmt->bindColumn(1, $blob, PDO::PARAM_LOB);
> > +    $stmt->fetch(PDO::FETCH_BOUND);
> > +    $content = gzdecode(stream_get_contents($blob));
> 
> Ditto.

I will take care of that.

> @@ +63,1 @@
> >      }
> 
> Hmm, I wonder if this could be done more efficiently, something like:
> 
> $lines = preg_split("/([^\n]*\n)/", $content, -1, PREG_SPLIT_NO_EMPTY |
> PREG_SPLIT_DELIM_CAPTURE);

Thanks for the tip

> ::: php/inc/ParallelFileGenerating.php
> @@ +30,5 @@
> > +          SET buildbot_id = :id, type = :type;");
> > +        $stmt->execute(array(
> > +          ':id' => $log['_id'],
> > +          ':type' => $log['type']
> > +        ));
> 
> I think you should handle the failure of this INSERT gracefully.  It is
> possible that you query for an existing log and don't find it, and before
> you get here another process inserts the log entry, which will cause this to
> fail because of violation of the unique index constraint.  I think the
> correct way of handling this would be to sleep and continue if the statement
> fails.
> 
> @@ +49,5 @@
> > +      SELECT content IS NULL as `inprogress`
> > +      FROM runs_logs
> > +      WHERE buildbot_id = :id AND type = :type;");
> > +    $stmt->execute(array(":id" => $log["_id"], ":type" => $log["type"]));
> > +    $run = $stmt->fetch(PDO::FETCH_ASSOC);
> 
> queryLog doesn't return anything, so $exists above will always be null. 
> Then you attempt to insert into the DB using an id/type pair which already
> exists, which fails because of the unique index.

Missing return statement, that's a classic. I was wondering why I got duplicate key errors all the time :D

> ::: php/inc/gzdecode.php
> @@ +1,2 @@
> > +<?php
> > +// taken from: http://at2.php.net/manual/en/function.gzdecode.php#82930
> 
> I don't know a lot about the gzip file format, so I'll just trust this code.
> :-)

As I understand the docs, the difference here is whether we handle gzipped "files" (with headers and footers) or gzipped "strings".

gzwrite (and gzfile) needs a real file, and as I read the docs, it does not work on a generic file pointer like the one PDO returns.
gzencode creates the contents of a gzip file but returns them as a string; gzdecode is mentioned in the docs, but it is not included in any released PHP version.

So yes, I am a bit confused about all this myself.
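For what it's worth, a tiny sketch of the string-based pair (gzdecode() only became a built-in in PHP 5.4, which is why the userland php/inc/gzdecode.php is needed here; the variable names are made up):

$raw = "some log content\n";
$gzipped = gzencode($raw);        // gzip-format data, returned as a string
$roundTrip = gzdecode($gzipped);  // inverse; supplied by php/inc/gzdecode.php on older PHP
var_dump($roundTrip === $raw);    // bool(true)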
In case I was not clear enough, I'm fine with the gzdecode implementation as long as it works in your tests.  :-)
Attached patch store logs in mysql (obsolete) (deleted) - Splinter Review
This now works correctly on tbpl.swatinem.de as far as I can tell.
I hit a bug in the version of PHP running on my server where PDO returns strings instead of file handles.

So please test.
And what is the deployment strategy for this? I guess it should live on a staging server first before being deployed to tbpl.m.o.
Assignee: nobody → arpad.borsos
Attachment #588362 - Attachment is obsolete: true
Attachment #588396 - Attachment is obsolete: true
Status: NEW → ASSIGNED
Attachment #588362 - Flags: feedback?(rhelmer)
Attachment #588396 - Flags: feedback?(rhelmer)
Attachment #588396 - Flags: feedback?(mstange)
Attachment #588619 - Flags: review?(ehsan)
Attachment #588619 - Flags: feedback?(rhelmer)
Attachment #588619 - Flags: feedback?(mstange)
So this seems to work as I described in comment 0 -- if I click a few builds then go back to the first one, the comment is there.  Which is awesome!

But is there any reason we can't aggressively prefetch all the logs, as soon as the build finishes?  Or, if not, just the non-green logs, as soon as we know the build is non-green?

If not, a "fetch all teh orange!" button might be helpful.  We can, of course, do this as a separate bug.
(In reply to Justin Lebar [:jlebar] from comment #31)
> So this seems to work as I described in comment 0 -- if I click a few builds
> then go back to the first one, the comment is there.  Which is awesome!
> 
> But is there any reason we can't aggressively prefetch all the logs, as soon
> as the build finishes?  Or, if not, just the non-green logs, as soon as we
> know the build is non-green?
> 
> If not, a "fetch all teh orange!" button might be helpful.  We can, of
> course, do this as a separate bug.

I think we can do all of that, but they're different bugs...
Summary: TBPL should pre-download logs and generate cached orange bug suggestions → TBPL should generate cached orange bug suggestions
Comment on attachment 588619 [details] [diff] [review]
store logs in mysql

Review of attachment 588619 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good to me!
Attachment #588619 - Flags: review?(ehsan) → review+
(In reply to Arpad Borsos (Swatinem) from comment #30)
> Created attachment 588619 [details] [diff] [review]
> store logs in mysql
> 
> This now works correctly on tbpl.swatinem.de as far as I can tell.
> I hit a bug in the version of php running on my server that pdo does not
> return file handles but strings.
> 
> So please test.
> And how is the deployment strategy for this? I guess it should live in a
> staging area first before it is deployed to tbpl.m.o.

There used to be a TBPL staging server, I think.  IIRC philor knows about it.
There is, at https://tbpl-dev.allizom.org/, where everything you commit to the repo gets deployed within 15 minutes. Ping me for the user/pass if you don't have it, tell me how we can make sure every tbpl committer knows it exists if you have an idea, because that's where you make sure your push doesn't break anything before we deploy.
Along with wanting to test that we aren't breaking leak-analysis (we are), we'll want to test that we aren't breaking reftest analysis.
> I think we can do all of that, but they're different bugs...
> Summary: TBPL should pre-download logs and generate cached orange bug suggestions → TBPL should generate cached orange bug suggestions

Well, if you change the bug summary...  :D

This change is now bug 718632.
Blocks: 718632
While we are at it, let's change the summary to reflect the actual work going on in this bug :-)
Blocks: 699711
Summary: TBPL should generate cached orange bug suggestions → Store logs in mysql instead of on the filesystem
(In reply to Phil Ringnalda (:philor) from comment #35)
> There is, at https://tbpl-dev.allizom.org/, where everything you commit to
> the repo gets deployed within 15 minutes. Ping me for the user/pass if you
> don't have it, tell me how we can make sure every tbpl committer knows it
> exists if you have an idea, because that's where you make sure your push
> doesn't break anything before we deploy.

Mentioning it in the README file would be a good first step I think: <http://hg.mozilla.org/users/mstange_themasta.com/tinderboxpushlog/rev/dde5be0a9d65>
Attached patch refactor leak-analysis (obsolete) (deleted) - Splinter Review
This is basically just copy-pasted over to the new structure.
It is completely untested, as I am not at home right now. I will push it to my server and take another look at it later today.
Attachment #589215 - Flags: review?(ehsan)
Comment on attachment 589215 [details] [diff] [review]
refactor leak-analysis

LogParser::generate is not called AFAICT.  Am I missing something?
(In reply to Ehsan Akhgari [:ehsan] from comment #41)
> Comment on attachment 589215 [details] [diff] [review]
> refactor leak-analysis
> 
> LogParser::generate is not called AFAICT.  Am I missing something?

ParallelLogGenerating::ensureLogExists calls it. But I just noticed the class has the wrong name. I will post a revised patch (and test it) later.
Attached patch store logs in mysql (deleted) - Splinter Review
Sorry for the delay; looks like I really have a serious League of Legends addiction :-D

Works now: http://tbpl.swatinem.de/php/getLeakAnalysis.php?id=8621706
I’ve folded it in with the previous patch and added some more safety measures to ParallelLogGenerating.
Attachment #588619 - Attachment is obsolete: true
Attachment #589215 - Attachment is obsolete: true
Attachment #588619 - Flags: feedback?(rhelmer)
Attachment #588619 - Flags: feedback?(mstange)
Attachment #589215 - Flags: review?(ehsan)
Attachment #589458 - Flags: review?(ehsan)
Comment on attachment 589458 [details] [diff] [review]
store logs in mysql

LGTM.  It would be a good idea for mstange to look at it as well I think.
Attachment #589458 - Flags: review?(ehsan) → review+
Comment on attachment 589458 [details] [diff] [review]
store logs in mysql

It would, sure. But I don't know how much time Markus has right now.
Attachment #589458 - Flags: review?(mstange)
Comment on attachment 589458 [details] [diff] [review]
store logs in mysql

Looks great, thanks!
Attachment #589458 - Flags: review?(mstange) → review+
Depends on: 722886
Pushed: http://hg.mozilla.org/users/mstange_themasta.com/tinderboxpushlog/rev/f86f3e6360af
Let’s get this tested on staging!
Shipping it - could just be coincidence, but it fetched a couple of bug 711639 all-stars on the very first attempt, and I'll take any momentary glimmer of hope for that.
Depends on: 725220
jlebar's going to stay justifiably bitter, since we fixed what we wanted to fix instead of bug 718632, but hey, "analyze the leak" works again, and the bug 711639 things seem to be carrying over to working on prod too, so I'll take it :)
Status: ASSIGNED → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Blocks: 759313
Product: Webtools → Tree Management
Product: Tree Management → Tree Management Graveyard