Closed
Bug 121069
(bz-transactions)
Opened 23 years ago
Closed 17 years ago
Implement database transactions across Bugzilla
Categories
(Bugzilla :: Bugzilla-General, enhancement, P1)
RESOLVED
FIXED
Bugzilla 3.2
People
(Reporter: CodeMachine, Assigned: mkanat)
Details
(Whiteboard: [roadmap: 3.2])
Attachments
(1 file)
patch (deleted), review+ from mkanat
Spun off of bug #98304.
Some databases support the notion of transactions, where an operation is
considered to be atomic. Apparently PostgreSQL has good support for them -
MySQL I believe supports them but not well.
Transactions have two main benefits that I see:
- While a transaction containing a write operation is occurring, all other
transactions will see the database either as it was at the start, or as it was
at the end. Failures of any sort will not result in a "corrupted" database
where half a transaction was applied.
Bug #104589 makes this problem serious, but even if we fix that, transactions are
still a good idea for robustness purposes. Computers can crash and software can
have bugs.
- Read-only operations should never be denied - they can always use the last
copy of the database from before any other transactions began, which is
guaranteed to be consistent by the definition of transactions. This means the
shadow database is unnecessary as a performance improvement.
I don't see that introducing transactions into the codebase would be very
difficult. The API should be rather simple.
You need the calls BeginTransaction and CommitTransaction. A shorthand call
PerformTransaction that performs a list of SQL statements would also be good
when you know up front what you want to do.
Essentially I think we should be able to add these calls in a bit at a time. If
the database in use does not support transactions there is no great loss. In
this case the calls will essentially be no-ops and there is no loss over what we
had before.
I don't provide a RollbackTransaction call for this reason - not
all databases will support it and we must not expect them to.
I also think BeginTransaction and CommitTransaction provide another benefit - it
is a nice solution to bug #104589, users closing the window shouldn't terminate
scripts. A script should NEVER terminate during a transaction. We can
implement this notion of "transaction" even in databases that don't support
transactions. Indeed, I'm not sure we need to prevent script termination
outside of transactions at all.
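The no-op fallback described above could look roughly like this (a hypothetical sketch, written in Python with sqlite3 for brevity rather than Bugzilla's Perl; the class and method names are invented):

```python
import sqlite3

class BugzillaDB:
    """Hypothetical sketch of the proposed API: transaction calls
    degrade to no-ops when the backend has no transaction support."""

    def __init__(self, conn, supports_transactions=True):
        self.conn = conn
        self.supports_transactions = supports_transactions

    def begin_transaction(self):
        if self.supports_transactions:
            self.conn.execute("BEGIN")

    def commit_transaction(self):
        if self.supports_transactions:
            self.conn.commit()

    def perform_transaction(self, statements):
        # Shorthand: run a known-up-front list of (sql, params)
        # statements as one atomic unit.
        self.begin_transaction()
        for sql, params in statements:
            self.conn.execute(sql, params)
        self.commit_transaction()
```

On a backend without transaction support, begin_transaction and commit_transaction do nothing, so callers behave exactly as they did before.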
There is also the issue of locking.
Write locks are still probably useful in databases with transactions - without
them a transaction can fail because it clashes in the typical "multiple writers"
situation, and the transaction would need to be reapplied. We need to decide
whether the complexity of retrying is better than the possibility of deadlocks
that comes with locking. The former could be hidden away in a call such as
PerformTransaction, but would need to be considered for CommitTransaction.
Read locks are unnecessary for transactions in PostgreSQL, since it will
maintain an old copy of the data for the reader. So if a traditional read lock
was done on such a database, it would block writers when there is no particular
need to. I'm not sure whether PostgreSQL would block writers or just ignore
the locks. If the former, we need to make sure read locks are only done
for non-transaction mode.
So the essential question here is: is locking something that BeginTransaction
worries about? If read locks would block writers, it needs to be, as it can
decide whether to do the read locks. If we want to do retrying sometimes
instead of write locks, the same applies. The locks could be passed as a
parameter to BeginTransaction and it would decide what to do.
Unfortunately it isn't feasible for BeginTransaction to automatically compute
what to lock at the start if necessary since it doesn't know what is going to
happen in the transaction - PerformTransaction is not constrained in that way
and so perhaps could.
Comment 1•23 years ago
> Indeed, I'm not sure we need to prevent script termination
> outside of transactions at all.
Only if all dbs support transactions. Since we currently only support mysql, and
you need a different, semi-experimental table type, I'd rather not rely on that.
OTOH, that TERM bug is likely to be pushed out from 2.16 unless someone has a
patch. It's possible that we could merge in the idea of checking whether the
user exited by doing so at the end of a transaction.
> Read locks are unnecessary for transactions in PostgreSQL, since it will
> maintain an old copy of the data for the reader.
This is untrue. See my comments in the pgsql bug, giving examples where this is
not the case.
It is possible to avoid some read locks - for example, in mysql, tables used in
a query must either be all locked, or none locked. This is not the case in
postgresql. Some of the write locks can be turned into SELECT ... FOR UPDATE OF
<tablename>, so that we have row-level locking. The token stuff can probably
lose most, if not all, table-level locking that way.
After 2.16, and the templatisation which moves towards relationSet, + getting
rid of bug_form.pl, and so on, I want to audit our locking. Currently, it is
theoretically possible for show_bug.cgi, for example, to show a different topic
in the <title> than on the page, if a process_bug happened between the two
queries. Moving to Bug.pm for things like this, where everything is selected at
once (and in postgres, an individual SELECT is always self-consistent), would
remove this problem.
Other differences include mysql reordering tables in the LOCK statement - the app
has to do so itself when locking tables. Also, if you want different types of
locks, they must be in different statements in postgresql, although 7.2 can lock
multiple tables with the same type of lock in that case.
Other comments:
We do not want to do retrying. I foresee some kind of LockTables() call, which
takes an array of {tablename, locktype} pairs, in the order they are to be
locked, locktype would not only be read/write, but would degrade to READ/WRITE
under mysql. See http://developer.postgresql.org/docs/postgres/sql-lock.html.
Currently some of our write locking relies on SELECTs blocking, but that is too
heavy for lots of cases - I'd want to distinguish with:
READ - share
WRITE - share row exclusive
EXCLUSIVE - exclusive mode, same as WRITE for mysql.
I think we can get away w/o ACCESS EXCLUSIVE if we use select for update. mysql
3.23 understands that syntax, and since I believe we're upping the requirement
post-2.16, we can just add those calls to the SQL directly. However, mysql
doesn't support the OF <tablename> syntax, to only lock a subset of the tables
used in the query, so maybe we will need a wrapper function.
We may want a "READ if !($db->{'supports_SelectForUpdate'})" level, too.
It may be possible to avoid exclusive locks entirely - we'll have to try it to
find out.
(BTW, I'm aiming for a DB::mysql and DB::postgres type thing, which would have
functions and variables - postgres allows ON DELETE CASCADE, so the code which
deletes groups from everywhere could be if(!$db->{'supportsCascade'}). Or
something like that.)
Comment 2•23 years ago
You have to prevent others from reading if you are going to write anything that
depends on the data you read that might change as a result of what you write.
For example, on databases where we don't have autoincrement and we don't have a
sequence counter, if you're going to get the next bug ID you have to lock
everyone else out of reading from the bugs table until you determine what the
next available bug ID is and snag it for yourself, otherwise another process
might try to grab the same ID.
Comment 3•23 years ago
Right- that was my comment in the pgsql bug.
This brings up another issue (aside from 'why are we having technical
discussions in bugzilla again'...). In postgres, sequence values are always
allocated, and never rolled back (to avoid blocking a transaction while waiting
for a commit). This means that it is possible for holes to appear, either if you
rollback (which we're not going to do), or if there's an error in a later
statement in the same transaction. Since the number of INSERTs we do to the bugs
or attachments tables (which are the only important ones which use ids) are
small, does anyone see a problem with either locking the bugs table and using
the MAX trick, or using a separate table which is locked? The latter is probably
better, but may be a pain to implement in a cross-db fashion.
Anyone know what mysql does for that?
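The separate-counter-table idea could be sketched as follows (hypothetical Python/sqlite3 sketch; the bug_seq table and next_bug_id function are invented for illustration):

```python
import sqlite3

def next_bug_id(conn):
    """Hypothetical sketch: allocate the next bug id from a separate
    counter table, rather than computing MAX(bug_id)+1 on the bugs
    table itself, so allocating an id only touches one small table."""
    conn.execute("UPDATE bug_seq SET last_id = last_id + 1")
    return conn.execute("SELECT last_id FROM bug_seq").fetchone()[0]
```

In a database with row-level locking, the UPDATE serializes concurrent allocators on the single counter row without blocking readers of the bugs table.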
Reporter
Comment 4•23 years ago
>> Indeed, I'm not sure we need to prevent script termination
>> outside of transactions at all.
> Only if all dbs support transactions.
Like I said, for MySQL we can still suppress TERM within a logical transaction
even if we can't use a database transaction.
>> Read locks are unnecessary for transactions in PostgreSQL, since it will
>> maintain an old copy of the data for the reader.
> This is untrue. See my comments in the pgsql bug, giving examples where this
> is not the case.
I'm not sure what you mean here.
> Currently, it is theoetically possible for show_bug.cgi, for example,
> to show a different topic in the <title> than on the page, if a
> process_bug happened between the two queries.
Isn't this what read transactions fix?
> you have to lock everyone else out of reading from the bugs table until you
> determine what the next available bug ID is and snag it for yourself,
> otherwise another process might try to grab the same ID.
OK, I believe two clashing transactions of this type would cause one to need to
be retried, which is where you might want to write lock the number, but I think
it would be better to pass this to something like PerformTransaction and then
you don't need to lock since it can handle retrying transparently.
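The transparent-retry idea could look something like this (hypothetical Python/sqlite3 sketch; the function name, retry count, and backoff policy are all invented):

```python
import sqlite3
import time

def perform_transaction(conn, statements, max_retries=3):
    """Hypothetical sketch of retry-on-conflict: if the database aborts
    the transaction because of a concurrent writer, roll back and
    reapply the whole statement list instead of locking up front."""
    for attempt in range(max_retries):
        try:
            conn.execute("BEGIN")
            for sql in statements:
                conn.execute(sql)
            conn.commit()
            return
        except sqlite3.OperationalError:
            conn.rollback()
            if attempt == max_retries - 1:
                raise
            time.sleep(0.1 * (attempt + 1))  # back off before retrying
```

Because the caller hands over the full statement list, the retry loop can be hidden entirely inside the call, which is the transparency argued for above.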
Reporter
Comment 5•23 years ago
I think a separate table for seqnums is good. I'm not sure how a separate table
is not cross-db compatible, it would seem the opposite to me. It also doesn't
require locking the whole record in MySQL, so wouldn't block bug updates for
example.
Comment 6•23 years ago
>> This is untrue. See my comments in the pgsql bug, giving examples where this
>> is not the case.
>I'm not sure what you mean here.
http://bugzilla.mozilla.org/show_bug.cgi?id=98304#c19 and followups.
>> Currently, it is theoetically possible for show_bug.cgi, for example,
>> to show a different topic in the <title> than on the page, if a
>> process_bug happened between the two queries.
> Isn't this what read transactions fix?
Yes, but we don't actually do that for show_bug, and since that has to read from
the main db, I don't think we want to. That point was more the "we need to audit
locking" rather than to do with transactions directly, though.
>OK, I believe two clashing transactions of this type would cause one to need to
>be retried, which is where you might want to write lock the number, but I think
>it would be better to pass this to something like PerformTransaction and then
>you don't need to lock since it can handle retrying transparently.
We want to avoid retrying. You'd have to pass a sub in (and then we'd use
serialised transactions, not read committed), and we'd just waste processing
time which a simple lock could avoid. There may be cases where this is
needed; I can't think of any offhand.
Comment 7•23 years ago
postgresql actually does use a separate table - the serial datatype is a
shortcut for an int field with a default value of the next value from a
sequence. See
http://developer.postgresql.org/docs/postgres/datatype.html#DATATYPE-SERIAL
I don't know what mysql does with auto_increment in transactions.
Reporter
Comment 8•23 years ago
> http://bugzilla.mozilla.org/show_bug.cgi?id=98304#c19 and followups.
OK, I suppose if you do row-level locking this is true, I was thinking from a
table-level where it would be a write lock instead.
>> Isn't this what read transactions fix?
> Yes, but we don't actually do that for show_bug
Why on Earth not? That's the point of transactions.
> We want to avoid retrying. You'd have to pass a sub in,
I think it depends how you get seqnums.
I was thinking about a record on a separate table, where you could use
PerformTransaction, which would take a list of SQL statements. I think we could
do that - can you say in SQL to add 1 to a number?
If you used the MAX technique, you probably couldn't do it that way.
> and we'd just waste processing time which a simple lock could avoid.
I suppose this is true, but I was mainly thinking of situations where this could
actually save time. If the situation you are locking against is rare, it would
be quicker on average to retry.
Comment 9•23 years ago
>>> Isn't this what read transactions fix?
>> Yes, but we don't actually do that for show_bug
>Why on Earth not? That's the point of transactions.
well, we'd want a serialized transaction then. See
http://developer.postgresql.org/docs/postgres/transaction-iso.html and the
following two pages (actually, we want repeatable read, but postgres doesn't
support that). Since show_bug doesn't change anything, this would be OK.
I do want to stress that that example had nothing to do with transactions,
though, and was more to point out that currently there is no read lock, even for
mysql. Of course, adding that lock would mean locking all the other tables, so...
This implies that BeginTransaction would have to take a flag variable.
>> We want to avoid retrying. You'd have to pass a sub in,
>I think it depends how you get seqnums.
For seqnums we don't, in general we would (eg process_bug, which does lots of stuff)
>> and we'd just waste processing time which a simple lock could avoid.
>I suppose this is true, but I was mainly thinking of situations where this
>could actually save time. If the situation you are locking against is rare, it
>would be quicker on average to retry.
Hmm. Good point. I'll have to think about that.
Reporter
Comment 10•23 years ago
> This implies that BeginTransaction would have to take a flag variable.
Do you mean for the transaction type? I forgot to mention that, yes that could
possibly be one parameter to BeginTransaction.
I believe there are four different types and PostgreSQL supports two of them
although I'm not sure what they are.
To me everything should use the strongest type. I suppose the other types are
for better performance. I would suggest to use the strongest type by default
but allow it to be overridden where we're really sure it is unnecessary.
Although even then someone might modify the transaction so it is necessary - so
we need to be careful of that, even though it's a "bug" that bites us no more
than having no transactions.
PerformTransaction could, I imagine, work out what type is needed up front.
If we took transaction type as a parameter (or created different begin subs),
would we support just the types that PostgreSQL supports, or all the types in
case other databases support them? I understand you could always choose the
next strongest if a type isn't available, and PostgreSQL supports the strongest.
Comment 11•23 years ago
Well, there are 4 types according to ANSI sql. postgres supports 2, and the
mysql docs mention all 4. I don't know if all 4 are only supported in 4.0, or if
3.23+BDB tables support them all. Yeah, going up to the next highest is possible.
However, any further discussion of this is going to have to wait until we've
tried it out - I think we're getting ahead of ourselves without having actually
tried this.
> PerformTransaction could, I imagine, work out what type is needed up front.
Ug. No, it should be explicit. Maybe with a default, but bugzilla shouldn't try
to scan sql to work out what we want.
Reporter
Comment 12•23 years ago
>> PerformTransaction could, I imagine, work out what type is needed up front.
> Ug. No, it should be explicit. Maybe with a default, but bugzilla shouldn't
> try to scan sql to work out what we want.
I assume your concerns here are about performance. We could definitely set the
default to 'auto'. The question would therefore be: which is the greater
performance drain, the computation of the transaction mode, or the cost of
having too strict a transaction? Neither is very obvious, and the latter is
possibly difficult to measure entirely when you consider there might be some
sort of cache effects. And the above tradeoff could be different for different
databases.
Reporter
Comment 13•23 years ago
Some more thoughts:
To fully support transactionless installations, as a part of the API we still
need to allow specification of the locks that get done when transactions aren't
on. So essentially all the locking and transaction logic is hidden in the
transaction API.
Now, bbaetz's sequential number example before raises a distinction here - the
difference between an essential lock and a transactionless lock. In the first
type, the lock must get done regardless of transactions. This type is quite
rare, I believe. The second type only needs to be done when you're not using
transactions, as MVCC will take care of your problems.
So, do we allow specification of the difference between these lock types? If we
pass something similar to our current system of "table1 READ, table2 WRITE", it
might be a little hard to parse. We could instead pass this as a hash of table
to type.
But given the rareness of the essential lock situation, perhaps a better
solution is in these situations to use SendSQL directly and not use the
transaction API.
Allow me to contradict my earlier statement that we can't in general determine
the transaction type we want. If we do pass locks for transactionless
installation, we can possibly compute the transaction type using this data. I'd
need to learn more about the transactions to know how accurate this process
could be. We can certainly easily discover whether a transaction is read-only,
write-only or read-write.
If this data was a hash, processing of this information could be relatively
quick. The reason I am so interested in automatic computation is I would prefer
to avoid the class of bugs where the transaction type is too weak - I see this
as being too easy to do. It's possible we could move this checking into the
testing suite, but this would not be easy.
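Passing the locks as a hash of table to lock type, and computing the transaction type from that data, could be sketched as (hypothetical Python sketch; the function name and lock labels are invented):

```python
def transaction_type(locks):
    """Hypothetical sketch: given a dict mapping table name to lock
    type ('READ' or 'WRITE'), classify the transaction so the caller
    could pick an appropriate transaction mode automatically."""
    types = set(locks.values())
    if types <= {"READ"}:
        return "read-only"
    if types <= {"WRITE"}:
        return "write-only"
    return "read-write"
```

A dict like this is cheap to scan, which is the point made above about processing the information relatively quickly.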
Reporter
Comment 14•23 years ago
I think full transaction support should mean:
- We have all the necessary documentation and code to support transactions on
both MySQL and PgSQL.
- On all databases, regardless of whether they support transactions, an
administrator should be able to turn off transactions.
Reporter
Updated•23 years ago
Priority: -- → P2
Target Milestone: --- → Bugzilla 2.18
Assignee
Comment 15•21 years ago
Now that we're moving on PgSQL support again:
Where are we on this? I think that we should at least have a BeginTransaction()
and EndTransaction() function, and then we can play around with the details when
we discover actual performance problems, or modify the functions when we run
into some actual locking issue (in the code, of course, not in testing :-) ).
(Also, I assume that this is "nobody", since there hasn't really been any
movement on it.)
Assignee: justdave → nobody
Comment 16•21 years ago
Instead of having BeginTransaction() and EndTransaction() routines, I think a better solution is to have a
single DoTransaction() which takes a coderef as an argument. I use something like this all the time with
Class::DBI. This is how it would look from a usage point of view:
Simple:
  DoTransaction({
      SendSQL(...);
      SendSQL(...);
  });
For more complex usage, you can also pass in values for the coderef and return values from it:
  my($var3) = DoTransaction({
      my($tvar1,$tvar2) = @_;
      SendSQL(...);
      if ($tvar1) { ... }
      SendSQL(...);
      return($tvar2 * 5);
  },$var1,$var2);
Possible implementation:
  sub DoTransaction {
      my($code,@args) = @_;
      my @return_values = ();
      my $dbh = ...
      # Localize database handle attributes.
      local $dbh->{RaiseError} = 1; # if not the default
      local $dbh->{AutoCommit} = 0; # or $dbh->begin_work
      eval {
          @return_values = ($code->(@args));
          $dbh->commit;
      };
      if ($@) {
          my $error = $@;
          eval { $dbh->rollback; };
          if ($@) {
              my $rollback_error = $@;
              croak("Transaction aborted: $error; Rollback failed: $rollback_error\n");
          } else {
              croak("Transaction aborted (rollback successful): $error\n");
          }
      }
      return(@return_values);
  }
Comment 17•21 years ago
The way that Class::DBI, PHP, and so on handle the 'simple' sql wrappers doesn't
work for complex actions. I don't think there's any reason to complicate the code
using coderefs, rather than just calling the DBI bits around begin/commit/rollback.
Comment 18•21 years ago
(In reply to comment #17)
> The way that Class::DBI, PHP, and so on handle the 'simple' sql wrappers doesn't
> work for complex actions.
I believe the above will work with any arbitrarily complex database action. I may have gotten the idea
from Class::DBI, but I've used it with plain DBI applications as well.
> I don't think theres any reason to complicate the code
> using coderefs, rather than just calling the DBI bits arround begin/commit/rollback.
I agree coderefs complicate the Perl coding somewhat, but the approach has advantages. It encapsulates the
transaction in a very useful way that prevents programming mistakes and unconsciously promotes
keeping transactions short and tight. You never have a problem with someone forgetting to
EndTransaction() or doing a BeginTransaction() after having already done one, etc.
By the way, replace "DoTransaction({" with "DoTransaction(sub {" in comment #16. My apologies for the
syntax error.
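For comparison, the same commit-on-success, rollback-on-error shape from comment 16 can be expressed as a context manager (hypothetical Python/sqlite3 sketch, not part of the proposed Bugzilla API):

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def transaction(conn):
    """Hypothetical Python analog of DoTransaction: the body runs
    inside a transaction, committing on success and rolling back
    on any exception before re-raising it."""
    try:
        conn.execute("BEGIN")
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
```

As with the coderef version, the caller cannot forget to end the transaction, and the transaction body stays short and visually delimited.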
Comment 19•21 years ago
These bugs appear to be abandoned. Retargeting to 2.20.
Target Milestone: Bugzilla 2.18 → Bugzilla 2.20
Reporter
Comment 20•20 years ago
I think coderefs are probably worthwhile in the small number of cases where we
can't just pass in a list of statements. I can't see any cases they wouldn't be
able to handle.
Assignee
Comment 21•20 years ago
So, we have support for this in bug 237862.
I really think that we should require MySQL 4 before we start doing transactions
and depending upon them. That is, I want to eliminate table locks (I suspect
they are causing some significant performance problems in Bugzilla, see my
analysis on bug 282199) and replace them almost entirely with transactions, if
possible.
So we'd actually *require* transaction support to do that.
Severity: normal → enhancement
Depends on: 281360
Comment 22•20 years ago
(In reply to comment #21)
> So, we have support for this in bug 237862.
>
> I really think that we should require MySQL 4 before we start doing transactions
> and depending upon them. That is, I want to eliminate table locks (I suspect
> they are causing some significant performance problems in Bugzilla, see my
> analysis on bug 282199) and replace them almost entirely with transactions, if
> possible.
>
> So we'd actually *require* transaction support to do that.
Trouble is that nothing is as simple as it seems :-). MySQL in version 4 does
support transactions (even 3.23.17 has limited support for transactions, see
http://dev.mysql.com/doc/mysql/en/commit.html), but only on transaction-safe
database engines (http://dev.mysql.com/doc/mysql/en/storage-engines.html). We
are currently using MyISAM, which is NOT transaction-safe.
I hit this when I was trying to add support for transactions in MySQL in the DB
layer based on MySQL version.
One problem is how to identify what engine we are running on. Simply
creating tables specifying InnoDB is not sufficient (see
http://sql-info.de/mysql/database-definition.html#2_4).
So the best approach I can see right now is to switch bugzilla to ask for InnoDB
tables (which means if InnoDB is not supported, we'll get MyISAM or something
else), and at runtime, check the engine and MySQL version and use transactions
only if we can.
Which means we need to create a locking/transaction interface (or extend the
existing one we have in the DB compat module) which will handle both - it will use
transactions, if supported, or it will fall back to locking as we have it now.
BTW, are there places in bugzilla where we'll need locking even inside of a
transaction (e.g. not to allow using a value while references to it are being
deleted from all tables to prevent orphans etc.)?
Assignee
Updated•20 years ago
Assignee: nobody → Tomas.Kopal
Target Milestone: Bugzilla 2.20 → Bugzilla 2.22
Comment 23•20 years ago
Another funny thing (not really): MySQL supports full text search only on MyISAM
databases (http://dev.mysql.com/doc/mysql/en/fulltext-search.html). So you can
have full text search, or you can have transactions, but not both (unless you
run two engines in parallel, which seems to be what e.g. Wikipedia is using,
according to one of the comments below that doco page).
You just have to love MySQL database...
Assignee
Comment 24•20 years ago
OK, I've set the URL to be a good description of what MediaWiki does.
To handle the "can't know if we're InnoDB" problem, we can change the table type
and then, with a clever use of SHOW TABLE STATUS, figure out whether or not we
actually changed it.
We should fail hard if we don't have InnoDB support -- MySQL 4.0+ has it turned
on by default, and that's what we'll be requiring for Bugzilla 2.22.
Admins can turn it on if they have disabled it.
Assignee
Comment 25•20 years ago
Also, FWIW, MySQL deals with AUTO_INCREMENT columns the same way PostgreSQL
deals with SERIAL columns, inside of a transaction:
From http://dev.mysql.com/doc/mysql/en/innodb-auto-increment-column.html --
"Note that you may see gaps in the sequence of values assigned to the
AUTO_INCREMENT column if you roll back transactions that have gotten numbers
from the counter."
Assignee
Comment 26•20 years ago
We will also need to explicitly set the transaction isolation level on either
PostgreSQL or MySQL:
"In PostgreSQL READ UNCOMMITTED is treated as READ COMMITTED, while REPEATABLE
READ is treated as SERIALIZABLE"
Whereas in MySQL the default is REPEATABLE READ.
The behavior needs to be consistent across the two DBs.
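One way to keep the behavior consistent is to map each requested ANSI isolation level to the weakest level the backend supports that is at least as strong (hypothetical Python sketch; the supported-level lists are illustrative):

```python
# ANSI isolation levels, weakest to strongest.
LEVELS = ["READ UNCOMMITTED", "READ COMMITTED", "REPEATABLE READ", "SERIALIZABLE"]

def effective_level(requested, supported):
    """Hypothetical sketch: return the weakest supported level that is
    at least as strong as the requested one, upgrading when the backend
    does not implement the requested level directly."""
    start = LEVELS.index(requested)
    for level in LEVELS[start:]:
        if level in supported:
            return level
    raise ValueError("no supported level is strong enough")
```

Upgrading to the next stronger level is always safe for correctness; the cost is only the extra concurrency overhead of the stronger level.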
Comment 27•19 years ago
The trunk is now frozen to prepare Bugzilla 2.22. Only bug fixes are accepted, no enhancement bugs. As this bug has pretty low activity (especially from the assignee), it's retargeted to ---. If you want to work on it and you think you can have it fixed for 2.24, please retarget it accordingly (to 2.24).
Target Milestone: Bugzilla 2.22 → ---
Updated•18 years ago
QA Contact: mattyt-bugzilla → default-qa
Assignee
Updated•18 years ago
Assignee: Tomas.Kopal → mkanat
Assignee
Updated•18 years ago
Whiteboard: [roadmap: 3.2]
Assignee
Updated•18 years ago
Priority: P2 → P1
Assignee
Updated•18 years ago
Alias: bz-transactions
Target Milestone: --- → Bugzilla 3.2
Comment 28•18 years ago
Max, is this bug fixed now? We support transactions, even if it's not used everywhere.
Assignee
Comment 29•18 years ago
(In reply to comment #28)
> Max, is this bug fixed now? We support transactions, even if it's not used
> everywhere.
No, this is the general transactions tracking bug, now. That is, we need to move from table locks to transactions everywhere, and then this bug will be resolved.
Summary: Support database transactions. → Implement database transactions across Bugzilla
Comment 30•17 years ago
There is only one file left using bz_lock_tables(): process_bug.cgi! The very last patch removing it as well as bz_unlock_tables() (which is also appearing in Error.pm) should come here. Marking this bug as a blocker for 3.1.3 as it would be a shame to keep a single call to bz_lock_tables() in our next release (also, I want it removed entirely before 3.2 RC1). manu, are you working on it?
Flags: blocking3.1.3+
Comment 31•17 years ago
I've discussed this with mkanat on IRC and he feels that bug #367914 blocks this one.
Depends on: bz-process_bug
Comment 32•17 years ago
(In reply to comment #31)
> I've discussed this with mkanat on IRC and he feels that bug #367914 blocks
> this one.
Why?
Assignee
Comment 33•17 years ago
(In reply to comment #32)
> Why?
Because longdescs has to be updated outside of transactions, and I don't want to (and probably can't, right now) move that code around in process_bug.cgi until we're done.
Comment 34•17 years ago
Attachment #299592 -
Flags: review?(mkanat)
Assignee
Comment 35•17 years ago
Comment on attachment 299592 [details] [diff] [review]
patch, v1
Beautiful. Put parens after the bz_start_transaction and bz_commit_transaction on checkin, though--that's how we do it everywhere else.
Attachment #299592 -
Flags: review?(mkanat) → review+
Assignee
Updated•17 years ago
Flags: approval+
Comment 36•17 years ago
Checking in process_bug.cgi;
/cvsroot/mozilla/webtools/bugzilla/process_bug.cgi,v <-- process_bug.cgi
new revision: 1.401; previous revision: 1.400
done
Checking in Bugzilla/Constants.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/Constants.pm,v <-- Constants.pm
new revision: 1.88; previous revision: 1.87
done
Checking in Bugzilla/DB.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/DB.pm,v <-- DB.pm
new revision: 1.109; previous revision: 1.108
done
Checking in Bugzilla/Error.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/Error.pm,v <-- Error.pm
new revision: 1.23; previous revision: 1.22
done
Checking in Bugzilla/DB/Mysql.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/DB/Mysql.pm,v <-- Mysql.pm
new revision: 1.56; previous revision: 1.55
done
Checking in template/en/default/global/code-error.html.tmpl;
/cvsroot/mozilla/webtools/bugzilla/template/en/default/global/code-error.html.tmpl,v <-- code-error.html.tmpl
new revision: 1.104; previous revision: 1.103
done
Assignee
Comment 37•16 years ago
Added to the Bugzilla 3.2 release notes in bug 432331.
Keywords: relnote