Peter Cooper Jr. on Thu, 28 Jun 2007 16:12:03 -0700 (MST)


Re: [s-d] oracle report 27/06/07 (nday 3)

"Daniel Lepage" <dplepage@xxxxxxxxx> writes:
> The two humans are just part of the test. The test is to determine
> whether the machine can pretend to be a person. The judge or the human
> conversationalist doesn't "pass" the turing test any more than the
> proctor of a school exam "passes" the exam, or the driving instructor
> giving a driving test "passes" that.
> That said, we have enough argument about the Turing Test that you
> could probably get away with defining it however you want. At least,
> Primo seems to have done so, and they're not even eligible to be the
> proctor of a Turing Test.

Based on my limited reading of the history of the Turing Test, it
derives from "The Imitation Game", which clearly had three players. A
definition of the Turing Test that allows humans to pass it seems
somewhat reasonable, and, I might add, possibly best for the current
health of the game.

> The problem with the Statute of Limitations is that it broke all kinds
> of things whenever somebody made a mistake. Originally it was "any
> action, after 10 ndays, becomes retroactively legal unless somebody
> objects", and Uin nearly repealed rule 10 by sticking "I repeal rule
> 10!" in his signature.

Surely needing to recalculate the current gamestate because of a
mistake made an ndecade ago that wasn't caught until recently is also
poor.

Maybe we can define Checkpoints of some sort that definitively declare
a particular gamestate to be correct, even if the actions leading up
to it weren't all legal. We could even restrict the creation of new
Checkpoints to proposals, if you're worried about abuse.
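To make the idea concrete, here's a minimal sketch of how a Checkpoint
mechanism might work; everything here (the class, the single "score"
value, the method names) is hypothetical and just illustrates the
principle that recomputation replays only from the last declared-correct
state, never from the beginning of the game.

```python
class Game:
    """Hypothetical gamestate with snapshot-style Checkpoints."""

    def __init__(self):
        self.checkpoint = {"score": 0}  # last state declared correct
        self.actions = []               # actions taken since that Checkpoint

    def act(self, delta):
        self.actions.append(delta)

    def state(self):
        # Recompute from the Checkpoint, not from the start of the game,
        # so mistakes older than the Checkpoint can never force a full replay.
        score = self.checkpoint["score"]
        for delta in self.actions:
            score += delta
        return {"score": score}

    def declare_checkpoint(self):
        # Adopted (say) by proposal: the current derived state becomes
        # ground truth, even if some earlier action later turns out to
        # have been illegal.
        self.checkpoint = self.state()
        self.actions.clear()


g = Game()
g.act(5)
g.act(3)
g.declare_checkpoint()  # {"score": 8} is now definitive by fiat
g.act(2)
print(g.state())        # {'score': 10}
```

The point of the sketch: once a Checkpoint is declared, discovering that
the earlier `g.act(3)` was illegal changes nothing, because the
Checkpoint, not the action history, is the authoritative baseline.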

Peter C.
spoon-discuss mailing list