CZ Talk:Bots

Comments on the initial draft

This is a reasonable start. I notice that Wikipedia has done a thorough job; its policy also covers things like "minor edits", "approval", and so on. How many of these do we want to use? Do we need to reinvent the wheel?

I think the main thing is to make sure anyone who is affected will have a chance to give input. Shouldn't the onus be placed on the bot producer to advise those the bot is going to affect, rather than expecting the affected people to check in here to make sure we aren't going to mess with their article? D. Matt Innis 23:54, 20 September 2009 (UTC)

There is no "their article" here, I think, and any edit in this wiki should be regarded as a Be bold edit unless it causes some sort of damage (to page content, formatting or contextualization, or to other users). Of course, undoing bot edits is tedious, and the onus should be on the bot operator to do it if necessary; precautions can be taken to make this easier (e.g. bot-specific categories). But if it is clear that failure to do so would result in a ban, we are unlikely to see many such cases. There is no need to reinvent the wheel, and I think the bot policy at the English Wikipedia is a useful starting point (that is why I had put it in), but if we do differ from them, our policy should be more liberal than theirs: the main reason for their strictness is that they do not vet new accounts, so bots could, technically, be operated from multiple bot-created accounts and cause massive damage. Here, only one account per user is possible (or a few more, should bot accounts ever be permitted), so keeping misuse at bay really should not be a problem. --Daniel Mietchen 20:24, 28 September 2009 (UTC)
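As an illustration of the point about undoing bot edits, here is a minimal sketch of how an operator might mass-undo a bot account's recent edits through the standard MediaWiki action API; the endpoint URL and bot name are placeholders, login handling is omitted, and the meta=tokens call assumes a more recent MediaWiki release than CZ ran at the time:

  import requests

  API = "https://en.citizendium.org/api.php"  # placeholder endpoint
  session = requests.Session()                # assumes the operator is already logged in

  def undo_recent_bot_edits(bot_name, limit=50):
      # List the bot account's most recent contributions.
      contribs = session.get(API, params={
          "action": "query", "list": "usercontribs",
          "ucuser": bot_name, "uclimit": limit, "format": "json",
      }).json()["query"]["usercontribs"]
      # Fetch an edit (CSRF) token; tokens are session-wide and reusable.
      token = session.get(API, params={
          "action": "query", "meta": "tokens", "format": "json",
      }).json()["query"]["tokens"]["csrftoken"]
      for c in contribs:
          # action=edit with "undo" reverts one revision, like the on-wiki undo link.
          session.post(API, data={
              "action": "edit", "title": c["title"], "undo": c["revid"],
              "token": token, "format": "json",
          })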
Well, I would suggest that we start strict and loosen the reins as we see how it works. I don't think it is reasonable to expect that the bot programmer can think of all the potential problems their bot might produce, so right off the bat we need to ensure that bots are documented and tested before they are run. How can this be done? Could we require that the bot be tested on, say, 50 articles or so, then review the changes before moving on? I assume this is something that you already do. All we need to do is document it. Is this already done? If so, where? D. Matt Innis 02:07, 29 September 2009 (UTC)
Three points:
  1. I would think it's more appropriate if we start out less strict and tighten things up if need be. Otherwise, we might end up with a system like that for proposals, which broke because it was too bureaucratic.
  2. So far, documentation has existed only in the edit summaries of the test edits, but I will now also link to the test edits and the first "real runs" from the documentation page.
  3. Usually, one test edit is enough to see whether something important is wrong with the code (keep in mind that most of the testing takes place in non-edit mode anyway, i.e. it is invisible to anyone but the tester; a sketch of such a dry run follows below), but more subtle bugs may only come up after a while. For instance, the inactive editor script had originally not taken into account editors who registered more than three months ago and never made a single edit, but when that situation actually occurred, it was visible in the bot logs, and I could fix the problem. In the meantime, the bot had performed hundreds of on-target edits, which, I think, justified its operation.
--Daniel Mietchen 09:23, 29 September 2009 (UTC)
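To make the non-edit testing in point 3 concrete, here is a minimal dry-run sketch, assuming a Python bot script; the transform() rule and the save_page routine are hypothetical stand-ins, not any existing bot's actual code:

  import difflib

  DRY_RUN = True  # flip to False only once the printed diffs look right

  def transform(text):
      # Hypothetical stand-in for the bot's real edit logic.
      return text.replace("teh ", "the ")

  def process_page(title, old_text, save_page):
      new_text = transform(old_text)
      if new_text == old_text:
          return  # nothing to do on this page
      if DRY_RUN:
          # Non-edit mode: print the would-be change instead of saving it,
          # so the test run stays invisible to everyone but the tester.
          print(f"Would edit [[{title}]]:")
          print("\n".join(difflib.unified_diff(
              old_text.splitlines(), new_text.splitlines(), lineterm="")))
      else:
          save_page(title, new_text, "Bot edit; see CZ:Bots")  # hypothetical save routine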
Responses:
  1. Less strict, good point. Let's find a happy medium that allows people to experiment without having to read a thousand pages of instructions and requirements, but requires that they do some basic research on their proposed bot before they make others take the time to approve or disapprove it.
  2. I'm not only worried about code; I am concerned about changes a bot might make that irritate other users. We don't allow authors to make significant changes without discussing them on talk pages, so why should we allow a bot to?
  3. Testing should include a phase of real edits in articles from all workgroups. Ideally, there would be a quick way for the editors in those workgroups to report any adverse events.
We need a place where all requests, with their documentation and feedback, can be easily reviewed.
D. Matt Innis 14:03, 29 September 2009 (UTC)
Maybe something like this, only better ;-) D. Matt Innis 14:59, 29 September 2009 (UTC)
The idea would be that, once the author has made it through all the links in the table, all the "bot approvers" would have to do is look over the links in the table real quick. Also, putting a link to this table in the Account request would make it real easy for a constable to look it over and approve the account (or not). D. Matt Innis 15:08, 29 September 2009 (UTC)
Subpagination Bot | Purpose | Documentation | Script | Bot test results | Approval history | Community input
I like this basic layout and will fill it in for the existing bots when I restart them on Monday. The "purpose" field could also be used by non-operators to request bot actions. --Daniel Mietchen 19:12, 29 September 2009 (UTC)

User:Subpagination Bot

The documentation that went along with the Subpagination Bot is probably a good example of what we need to start with. Notice the tests prior to full-scale running and the logs of what the bot had done... Daniel, do we have this type of information available for the User:Related Articles Bot request? D. Matt Innis 18:51, 6 October 2009 (UTC)

Subpagination Bot | Purpose | Documentation | Script | Bot test results | Approval history | Community input
I'll stop here for now because I'm not totally happy with this solution yet, and await feedback. D. Matt Innis 19:29, 6 October 2009 (UTC)
For User:Related Articles Bot (for the moment still residing here), we have the purpose and the script (which in my view is better than verbal documentation), and there were test edits, though these were not logged on pages but marked in the edit summaries. I asked Jitse for the Subpagination Bot's code last week but haven't received a reply yet. Having that code would spare me having to write the logging part anew. --Daniel Mietchen 19:42, 6 October 2009 (UTC)
Looking at the script as a constable who has to approve an account, I don't think we can count on anyone understanding it. Drew made a comment on my talk page that made sense: having the code visible to everyone might invite vandalism. He might have a good point. I think the test process (and seeing that the bot is approved by someone after a sample run) is more important; actually, just that it was approved by a process with reasonable checkpoints and safeguards. Did you see the bottom part of Log1 on the Subpagination Bot, where it notes how the bot was tested, what the results were, and how problems were fixed? There is also some of this on the talk pages of the different tasks, e.g. task1. D. Matt Innis 23:54, 6 October 2009 (UTC)
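On the logging point, a minimal sketch of per-edit logging for a Python bot, so that a reviewer can audit a run afterwards; the file name and message format are illustrative, not the Subpagination Bot's actual scheme:

  import logging

  logging.basicConfig(
      filename="bot_edits.log",  # illustrative log location
      format="%(asctime)s %(levelname)s %(message)s",
      level=logging.INFO,
  )

  def log_edit(title, summary, error=None):
      # Record every attempted edit so the run can be reviewed later.
      if error is None:
          logging.info("edited [[%s]] -- %s", title, summary)
      else:
          logging.error("failed on [[%s]] -- %s", title, error)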

Bot threshold

5. Bots should be run such that they can be undone by an existing bot, the command for which would have to be specified upon application. For scripts, this is probably too much to demand, so they are limited to single runs or to fewer than 500 edited pages over the course of one month (note: this number is defined at CZ:Bot threshold).

This seems high for something that may have to be fixed by hand. I can see 20 or so, but fixing 500 pages by hand if something goes wrong seems impossible. D. Matt Innis 22:27, 16 January 2010 (UTC)
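For reference, a hedged sketch of how a script operator might check the 500-pages-per-month cap before a run, using the standard MediaWiki action API; the endpoint is a placeholder and the 30-day window approximates "one month":

  from datetime import datetime, timedelta, timezone
  import requests

  API = "https://en.citizendium.org/api.php"  # placeholder endpoint
  THRESHOLD = 500                             # per CZ:Bot threshold

  def pages_edited_last_month(bot_name):
      cutoff = (datetime.now(timezone.utc)
                - timedelta(days=30)).strftime("%Y-%m-%dT%H:%M:%SZ")
      pages, params = set(), {
          "action": "query", "list": "usercontribs", "ucuser": bot_name,
          "ucend": cutoff,  # contributions are listed newest first; ucend bounds the past
          "uclimit": "max", "format": "json",
      }
      while True:  # follow API continuation until the 30-day window is exhausted
          data = requests.get(API, params=params).json()
          pages.update(c["title"] for c in data["query"]["usercontribs"])
          if "continue" not in data:
              return len(pages)
          params.update(data["continue"])

  def under_threshold(bot_name):
      # Count distinct edited pages, matching the "500 edited pages" wording.
      return pages_edited_last_month(bot_name) < THRESHOLD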

Summary of a talk on bot policies at WP

I thought this may be of interest here. --Daniel Mietchen 23:47, 27 March 2010 (UTC)

I agree with the premise that bots need to be a reflection of what the society wants, rather than society being subservient to bot mechanics. D. Matt Innis 12:53, 28 March 2010 (UTC)