Commit messages

* Moved notification sending from Notice::saveReplies to the distrib queue handler, so it'll pull from the reply set we've saved regardless of how we got it.
* Set up gettext infrastructure for command-line scripts; gets localization of mail notifications etc. working from background queues.
* Adjusted locale switching: common_switch_locale() works at runtime for background scripts and forces a message catalog update.
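A rough sketch of the gettext setup a command-line script needs at runtime (illustrative only: the function name, text domain, and locale path below are assumptions, not the actual common_switch_locale() implementation):
function switch_locale_for_cli($locale)
{
    // Point the process at the requested language.
    putenv("LANGUAGE=$locale");
    putenv("LC_ALL=$locale");
    setlocale(LC_ALL, $locale . '.utf8', $locale);

    // Re-bind the message catalog so _() picks up the new language even
    // though the script was started under a different locale.
    bindtextdomain('statusnet', '/path/to/locale');   // path is an assumption
    bind_textdomain_codeset('statusnet', 'UTF-8');
    textdomain('statusnet');
}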
before distribution. Move saveGroups after saveTags when saving notices; groups may save additional tags, so saveGroups needs to run afterward for the duplicate check to actually work.
Conflicts:
actions/confirmaddress.php
seeing @-replies from them, or them subbing to you in future)
* some translator documentation added
mail messages: fixed some bad gettext usage, added translator documentation comments.
mail messages: fixed some bad gettext usage, added translator documentation comments.
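As an illustration of the kind of change this describes (the string and function here are made up for the example, not taken from the actual mail templates): use sprintf() placeholders plus a translator comment instead of concatenating untranslatable fragments.
function new_follower_subject($nickname)
{
    // TRANS: Subject for an email notifying a user of a new subscriber.
    // TRANS: %s is the subscriber's nickname.
    return sprintf(_('%s is now following you'), $nickname);
}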
the packed box loaded at insert time, so we can simply unpack it and check before doing the update query.
Should help with dupes that come in when inbox distrib jobs die and get restarted, etc.
Conflicts:
classes/Inbox.php
Looks like this was implemented on master recently and not copied up to testing. Merging to my version on testing as I've added some doc comments and extracted a couple functions for future ease of use.
the packed box loaded at insert time, so we can simply unpack it and check before doing the update query.
Should help with dupes that come in when inbox distrib jobs die and get restarted, etc.
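An illustrative sketch of that duplicate check, not the actual Inbox.php code; it assumes the notice IDs are packed as big-endian 32-bit integers:
function inbox_contains($packed_ids, $notice_id)
{
    if ($packed_ids === null || $packed_ids === '') {
        return false;
    }
    // Unpack the box that was loaded at insert time and look for the ID
    // before bothering with the UPDATE query.
    $ids = unpack('N*', $packed_ids);
    return in_array($notice_id, $ids);
}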
to (profile_id, id) instead of (profile_id, created, id).
It's been falling back to PRIMARY instead, which is really
very inefficient for a profile that hasn't posted in a few
months. Even though forcing the index will cause a filesort,
it's usually going to be better. Even for large profiles it
seems much faster than the badly-indexed query.
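Roughly the query shape being forced here (table and index names are illustrative, and the real query is built elsewhere in the codebase):
function stream_query_for_profile($profile_id, $limit = 20)
{
    // Force the (profile_id, id) index rather than letting MySQL fall back
    // to PRIMARY; the filesort this causes is still cheaper in practice.
    return 'SELECT * FROM notice FORCE INDEX (notice_profile_id_idx) ' .
           'WHERE profile_id = ' . (int)$profile_id . ' ' .
           'ORDER BY id DESC LIMIT ' . (int)$limit;
}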
to (profile_id, id) instead of (profile_id, created, id).
It's been falling back to PRIMARY instead, which is really
very inefficient for a profile that hasn't posted in a few
months. Even though forcing the index will cause a filesort,
it's usually going to be better. Even for large profiles it
seems much faster than the badly-indexed query.
This reverts commit a09b27ff41df41a86fdb0abae14239907d5ee6ec.
This reverts commit 650074c648d98f81674c6e2b0ebf052c473ada6e.
Conflicts:
plugins/Blacklist/BlacklistPlugin.php
into queries. The comment can then be seen in the process list and in slow query logs on the server, aiding in tracking down unexpected slow queries.
SELECT /* queuedaemon.php Ostatus_profile->processPost */ * FROM notice WHERE ( notice.uri = 'http://stormcloud.local/mublog2/notice/479' )
INSERT /* POST Notice::saveNew */ INTO notice (profile_id , content ....
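A minimal sketch of how such an annotation could be prepended (not the actual implementation; the function name is made up):
function annotate_query($sql, $caller)
{
    // Keep the annotation from breaking out of its /* ... */ block.
    $caller = str_replace(array('/*', '*/'), '', $caller);
    // Slip the caller in right after the verb so it shows up in
    // SHOW PROCESSLIST and the slow query log.
    return preg_replace('/^\s*(SELECT|INSERT|UPDATE|DELETE)\b/i',
                        "$1 /* $caller */", $sql, 1);
}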
Conflicts:
lib/attachmentlist.php
profiles and remove them.
notice during a session override.
This was being triggered by welcomebot messages created at account creation time, then propagated through replies.
notice during a session override.
This was being triggered by welcomebot messages created at account creation time, then propagated through replies.
DB_DataObjects, instead of failing silently.
The magic __call() method is used to implement a getter and setter interface, and simply didn't bother to throw an error for things it didn't recognize.
This may expose a number of existing errors where mistyped method names are called and we're not noticing that they're failing.
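The pattern being described, as a standalone sketch (class and storage details are illustrative, not the actual DB_DataObject internals): recognized getters and setters still work, but anything else now fails loudly.
class Record
{
    private $data = array();

    public function __call($method, $args)
    {
        if (preg_match('/^get([A-Z]\w*)$/', $method, $m)) {
            $key = strtolower($m[1]);
            return isset($this->data[$key]) ? $this->data[$key] : null;
        }
        if (preg_match('/^set([A-Z]\w*)$/', $method, $m)) {
            $this->data[strtolower($m[1])] = $args[0];
            return true;
        }
        // Previously this fell through silently; now a mistyped method name
        // throws instead of quietly doing nothing.
        throw new Exception(get_class($this) . "::$method() is not defined");
    }
}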
working if it was in the middle of a query loop, even if the cloned object falls out of scope and triggers its destructor.
This bug was hitting a number of places where we had the pattern:
$dbo->find();
while ($dbo->fetch()) {
    $x = clone($dbo);
    // do anything with $x other than storing it in an array
}
The cloned object's destructor would trigger on the second run through the loop, freeing the database result set -- not really what we wanted.
(Loops that stored the clones into an array were fine, since the clones stay in scope in the array longer than the original does.)
Detaching the database result from the clone lets us work with its data without interfering with the rest of the query.
In the unlikely event that somebody is making clones in the middle of a query, then trying to continue the query with the clone instead of the original object, well, they're gonna be broken now.
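A sketch of the kind of fix this describes, assuming DB_DataObject-style internals where each object holds a handle into a shared result set; the property name here is an assumption:
class SafeRecord
{
    public $_DB_resultid = null;   // handle into the shared result set (assumed name)

    function __clone()
    {
        // The clone keeps its copy of the fetched columns but drops its handle
        // on the live result set, so the clone's destructor can't free a result
        // the original object is still iterating over.
        $this->_DB_resultid = null;
    }
}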
* Subscription::start was sometimes passing users instead of profiles to hooks, which broke OStatus subscription notifications; now normalizing to profiles for processing.
* H-card parsing would trigger a lot of PHP warnings and notices in hKit. Now suppressing warnings and notices for the duration of the call to keep them out of output when display_errors is on.
* H-card parsing would trigger a PHP fatal error if the source page was not well-formed XML and Tidy was not present on the system. Switched normalization to use the PHP DOM module which is always present, as we have no need for Tidy's extra features here.
* Trying to fetch avatars from Google profiles failed and triggered a PHP warning due to the relative URL not being resolved during h-card parsing. Now passing profile page URL into hKit by sneaking a <base> tag in while we normalize the HTML source.
* Profile pages without a "Link" header could trigger PHP notices due to a bad NULL -> array(NULL) conversion in LinkHeader::getLink(). Now checking that there was a return value before converting single return value into array.
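For the last item, a simplified sketch of the guard (not the exact LinkHeader::getLink() code): only wrap the header value in an array when one was actually returned.
function link_header_values($raw)
{
    if ($raw === null) {
        // No Link header at all: return an empty list rather than array(null).
        return array();
    }
    return is_array($raw) ? $raw : array($raw);
}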