Arie Dirkzwager (email@example.com)
Fri, 23 Oct 1998 11:28:52 +0000
Below is an example of an e-mail exchange in COLTS so you can
judge how it works. I'd like to have your opinion: do you like this system,
and if not, why not?
This is also a repeated invitation to participate in an experiment
with a COLTS (Collaborative Learning and Teaching System) I am managing
in the form of an e-mail discussion. As an important side effect I'll
collect exact data on this experiment and eventually report on them to
evaluate this "COLTS" experimentally. The topic taught and studied is
assessment.
In a COLTS every participant is sometimes a "teacher", giving
explanations in short statements, and sometimes a "student", asking critical
questions, also in the form of short statements serving as "hypotheses".
A "lesson" is a collection of such statements, with the assignment to judge
for each statement its truth-value, expressed as the (personal) probability
that it is true, according to the following directions:
For each given statement, think up as many arguments pro and contra that
statement as you can, and then give your judgement about each statement as an
estimate of its p(true), according to your knowledge of what it is
about. Of course, when you think a statement is most likely false, your
reported p(true) should be less than .50.
If you are interested in participating in this experiment, let me know
and I'll send you the first "lesson". Below is a first "discussion" of
this lesson, as an example of what you can expect when you participate.
Participation will take you very little time (I will not send you all
"discussions", only the lessons, each containing 20 statements to be judged,
and a personal reply to your judgements), so please participate. I need some
data to evaluate this system.
Thanks in advance!
COLTS discussion regarding lesson 1:
Dear COLTS participant,
Thanks for your reaction. Would you please fill in the percentages I
left blank in this message and send them as a reply to me at aried@xs$all.nl
(NOT to the whole list!) Thank you.
At 22:39 22-10-98 +0200, you wrote:
>Answers to lesson 1 statements:
>>(7) Any assessment results in an evaluation that can be expressed as a
>>grade (A, B, C, D, or E), or as a (real) number, the larger this
>>number, the better the performance.
>> p(true)= 50%
>(7e) An assessment or evaluation in general need not result in an
>ordinal scale - let alone an ordinal scale with numbers that are
>proportional to performance (it could indeed be the opposite: larger
>numbers representing a greater number of errors, for instance).
>However, you may be using the words assessment and evaluation in a
>restricted, technical sense of which I am not aware. Without further
>clarification I cannot answer this question; hence 50%.
Isn't that a little bit nit-picking? All (educational) assessment
works this way, and a number-of-errors score is just a reversed scale.
>>(8) Grades can be assigned the numbers 5, 4, 3, 2, and 1 on an
>>ordered scale, adding or multiplying them (e.g. to compute average
>>grades) generally has no meaning. With real numbers those operations
>>might have meaning, depending on what they stand for.
>> p(true)= 25%
>(8-1) A question of definition. If by ordered scale you mean a ranking
>order, where each of the numbers 5..1 is assigned to one individual
>(or to one group of individuals), then I agree with your statement.
>However, when you use the word *grades* I would expect these to be at
>least at interval-scale level - and then I disagree.
(8.1) Grades are only on an interval scale when the difference between A and
B is the same as between e.g. D and E. It is not.
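The point in (8) and (8.1) can be made concrete. Here is a minimal Python sketch (my own illustration, not part of the lesson): on an ordinal scale, any order-preserving relabeling of the grade numbers is equally valid, yet such a relabeling can change which student has the higher average, so averages of ordinal grades carry no stable meaning.

```python
# Illustration (mine, not from the COLTS lesson): averaging ordinal grades
# is not invariant under order-preserving relabelings of the scale.

grades_x = [5, 1]   # student X: one top grade, one bottom grade
grades_y = [3, 3]   # student Y: two middle grades

# A monotone relabeling of 5..1 -- just as legitimate for an ordinal scale,
# since only the order of the labels is meaningful.
relabel = {1: 1, 2: 2, 3: 3, 4: 4, 5: 10}

def avg(grades):
    return sum(grades) / len(grades)

print(avg(grades_x), avg(grades_y))  # 3.0 3.0 -> the students are tied
print(avg([relabel[g] for g in grades_x]),
      avg([relabel[g] for g in grades_y]))  # 5.5 3.0 -> now X is "better"
```

The tie under one labeling and X's lead under the other cannot both reflect a real difference in performance, which is exactly why adding or averaging ordinal grades generally has no meaning.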
>>(9) Psychometrics is the main science to study assessment
>>so this course is a course in psychometrics.
>> p(true)= 50%
>(9e) I have insufficient data: I do not (yet) know what direction this
>course will take! However, if you will allow me to rephrase your
>statement: Psychometrics is (...), so this course should involve
>psychometrics - then I agree.
OK, I'll change that.
>>(11) The backbone of a collaborative course is explanations in the
>>form of statements and continuous assessment by evaluating those statements.
> p(true)= 50%
>>(11c) You couldn't think up any arguments in favor of this statement nor
>>any opposing it, so you really had no idea whether it was true or false.
> p(true)= 75%
>>(11e) Basically, I simply have no idea!
>>(12) Given statements are assumed to be either true or false, to be
>>proven true or false in the future.
>> p(true)= 40%
>(12-1) (12) is only true provided statements are very carefully
>phrased - which is not often the case, unfortunately.
(12+1) (12) is only true provided statements are very carefully
phrased - which is often the case, fortunately.
>*Lightning kills* is nearly always untrue (!);
>*Lightning kills humans when it hits them* is often true, but not
>always; the statement *Lightning can kill humans* is true, but
>contains very little information; the statement *Lightning kills
>humans when it hits them, in more than 95% of the cases* is true (I
>guess).
>>(15) The probability p(correct) should be equal to p(true) when the
>>statement is true and equal to p(false) when the statement is false.
>> p(true)= 50%
>(15e) - I didn't (and still don't) understand the question.
(15+2) The subjective estimate of p(correct) is equal to the subjectively
estimated p(true) iff this is larger than 50%, and equal to the subjective
p(false) when p(true) < 50%, as p(false) = 100% - p(true).
(15+1) When the subjective p(true) > 50%, the statement is considered most
likely to be true, and this judgement is "correct" iff the statement is
in fact true.
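The relation in (15+1) and (15+2) can be sketched in a few lines of Python (my own illustration, not part of the COLTS materials): the verdict is whichever side of 50% the reported p(true) falls on, and the confidence in that verdict is p(true) itself above 50%, or p(false) = 1 - p(true) below it.

```python
# Sketch (my illustration, not from the lesson) of the (15+1)/(15+2) relation
# between a reported subjective p(true) and the subjective p(correct).

def p_correct(p_true: float) -> float:
    """Subjective probability that one's verdict about the statement is correct:
    p(true) when the statement is judged probably true, else p(false)."""
    return p_true if p_true > 0.5 else 1.0 - p_true

def verdict_is_correct(p_true: float, actually_true: bool) -> bool:
    """The verdict (true iff p_true > 0.5) matches the statement's actual truth."""
    return (p_true > 0.5) == actually_true

print(p_correct(0.75))  # 0.75: "probably true", held with 75% confidence
print(p_correct(0.40))  # 0.6:  "probably false", held with 60% confidence
```

So a reported p(true) of 40% and one of 60% express equally confident verdicts, just on opposite sides of the statement.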
>>(17) Multiple Choice is the best assessment method.
>> p(true)= 0%
>(17+1) MC is relatively cheap, efficient and fast, when compared to
>open questions; certainly when assessing large groups.
(17-1) ME (Multiple Evaluation) is relatively cheap, efficient and fast,
when compared to MC.
>(17+2) MC allows even amateurs to perform all kinds of analysis,
>thereby giving it an illusion of mathematical accuracy.
(17-2) MC gives only an *illusion* of mathematical accuracy.
>(17+3) MC gives quite good results, in practice - for many
>applications it is better than open questions.
(17-3) ME gives quite good results, in practice - for many
applications it is better than MC.
>(17-1) In a developing field, nothing can be considered *the best
>method*; the best you can hope for is *the best method available* or,
>limiting this even further, *the best method I know*.
>>(20) It is not possible to reach consensus on these twenty statements
>>as an outcome of this course.
>> p(true)= 75%
> You disagree. >> I don't! But maybe my answer doesn't
>correspond to your key? <<
(20.1) It is possible to reach consensus on these twenty statements
as an outcome of this course.
>(20-1) If a short course like this would suffice to reach consensus
>within a group of experts with diametrically opposing views, it would
>be little short of a miracle. I am not that optimistic.
(20+1) This course should go on until consensus is reached.
> Thank you for your cooperation and your reply. Now you see the
>system at work, would you reconsider your response to (20)? >> No. See above.
Thanks for your contribution!
Educational Instrumentation Technology,
Computers in Education.
1402 AE Bussum,
"When reading the works of an important thinker, look first for the
apparent absurdities in the text and ask yourself how a sensible person
could have written them." T. S. Kuhn, The Essential Tension (1977).
Accept that some days you are the statue, and some days you are the bird.
This archive was generated by hypermail 2.0b3 on Thu Dec 23 1999 - 09:01:53 EST