User: Rrhardy


Name: Rrhardy

Joined: Monday 20 July 2009 19:46:35 (UTC)

Last seen: Thursday 23 July 2009 11:33:04 (UTC)

Email (public): reed.hardy [at] gmail.com

Website: Not specified

Location: Green Bay, United States

Rrhardy has been credited 0 times

Rrhardy has an average rating of:

0.0 / 5

(0 ratings in total)

for their items

I am a retired professor who continues to be interested in the sciences, especially psychology, biology, physics, and computer science (specifically AI). I have a strong interest in computer simulations of learning and development based on layered neural networks. The most important insight I bring to this work is that these systems need sensory data and perceptual systems at the outset; from there, all the system needs is experience on which to base learning and development.


My approach to evolution theory is first and foremost based on the recognition that natural selection NEVER favors a feature. Evolution works only by deselecting features whose owners simply can't survive or can't reproduce, or which burden the organism that carries them so heavily that it can't survive or reproduce. Evolution theorists are in the habit of talking and thinking about adaptation as a process that involves "favoring" a particularly useful trait. This is just a bad habit, rooted in our own tendency to favor things and thereby make them happen more often than chance alone would dictate. When we see that evolutionary change is based only on the deselection of the most costly traits, we gain a clearer understanding of why evolution is such a slow process.

I believe it would be wise for those attempting to use "genetic algorithms" or other iterative, natural-selection-like computer programs to produce novel solutions to a range of "problems" to adopt a deselective approach. I have done this with the computer program described by Richard Dawkins in his Science article. His was a simulation designed to "evolve" the Shakespeare phrase "Methinks it is like a weasel...". Dawkins' program, using a process that "favored" any new "mutation" more similar to the end product than the previous outcome, produced the phrase in fewer than 200 mutation/selection events. My program, which "deselected" outcomes that were a lot LESS like the target, took several tens of thousands of mutation/selection events before reaching the target string. Which approach do you think more closely simulates natural selection?
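Here is a minimal Python sketch of the two kinds of run. The target phrase, alphabet, per-character mutation rate, brood size, and the exact deselection rule (a child is culled only when it matches the target less well than its parent) are illustrative assumptions, not a reconstruction of either Dawkins' program or mine.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(candidate):
    # Number of characters already matching the target phrase.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Copy the parent, replacing each character with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def favoring_weasel(brood=100):
    # Dawkins-style run: of `brood` offspring per generation,
    # keep whichever one is MOST like the target.
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        parent = max((mutate(parent) for _ in range(brood)), key=score)
        generations += 1
    return generations

def deselecting_weasel():
    # Deselection-only run: a child is culled only when it is LESS like
    # the target than its parent; anything else, including a tie, survives.
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    events = 0
    while parent != TARGET:
        child = mutate(parent)
        if score(child) >= score(parent):
            parent = child
        events += 1
    return events

if __name__ == "__main__":
    print("favoring run:", favoring_weasel(), "generations")
    print("deselecting run:", deselecting_weasel(), "mutation/selection events")
```

With these settings the favoring run typically finishes in far fewer steps than the deselection-only run, which is the gap between the two counts quoted above.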


Other contact details:

Not specified

Interests:

Personal Growth
Evolution Theory
Singularity Theory
Computer-Based Learning
Comparative Psychology
Human Development

Field/Industry: College Professor

Occupation/Role(s): Teaching/Research in Psychology/Human Development/Evolution/Personal Growth

Organisation(s):

Not specified

No news
