
Milestone Approach to AI Development

2 Dec 2015 · CPOL · 9 min read
Some high-level thoughts on Artificial Intelligence. Is it really AI we are looking for?

Introduction

This tip is not so much a code project as a general look at a few requirements for designing an intelligent system. It is not intended to cover all the possibilities, nor will I pretend to be that intelligent myself. I have been thinking about this subject, off and on, since I got my first desktop computer, which is what eventually led to me becoming a programmer in the first place. So, let's take a high-level look at what we are trying to accomplish in developing an AI system.

I had an epiphany while working out some ideas concerning the design of an AI system, so I thought I'd write down how that happened, along with a few thoughts on the general requirements of such an open source project, and perhaps generate a little interest in the subject.

It took the Universe 11 billion years or so to design an intelligent being, the only kind we know of, or even suspect of being intelligent, anyway.

We don't have the time to write a computer program to work out what has taken nature billions of years of mutation and natural selection to do, but what we can do is attempt to identify the milestones of that process and, perhaps, "design in" many of those milestones up front.

What is Required?

We don't really want an intelligent machine, anyway. Do we? Ignoring the bleak connotations, we want a slave: something that lives in a box, is deaf and blind, and only takes input as directed. We want it to perform difficult tasks that we don't necessarily have time for, and to act like a human being in the process.

We don't want an independent being that can ignore our orders. If possible, we want it smarter than us, but who wants to relinquish control?

What does an AI Need?

To even get close to designing an AI, we'd need, like Maslow, to identify its hierarchy of needs and get some idea of how the thing should be structured.

So, Milestones?

The Need to Learn

It would require a need to learn; that is, a drive to learn from whatever data it receives, whether via keyboard, video, sound, text, touch sensor, or anything else.

Order of Importance

There must be an order of importance to what it learns. That requires time-stamped memory stacks: a short-term memory that tracks the most important or most current memories in its "life", a long-term memory that files things away in order of importance, and some method of deciding why some memories matter more than others, regardless of when they were learned.

For example, anything threatening its "life", or threatening harm to it or to others, would carry weight. Anything presented as a traumatic event by anyone would have a certain level of importance. So, it requires not only time-stamped memory stacks, but cross-linking on level of importance.
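To make that concrete, here is a minimal Python sketch of such a store, under my own assumptions: a recency-ordered short-term stack, a long-term heap keyed on importance, and an invented THREAT_BONUS constant that cross-links threatening memories to a higher rank. It is an illustration of the idea, not a working design.

import time
import heapq
import itertools

THREAT_BONUS = 0.5  # assumed extra weight for "life"-threatening events

class Memory:
    def __init__(self, content, importance, threatening=False):
        self.content = content
        self.timestamp = time.time()   # time-stamped on creation
        # Cross-link on level of importance: threats rank higher
        # regardless of when they were learned.
        self.importance = importance + (THREAT_BONUS if threatening else 0.0)

class MemoryStore:
    def __init__(self, short_term_capacity=7):
        self.short_term = []                  # recency-ordered "stack"
        self.long_term = []                   # heap keyed on importance
        self._tiebreak = itertools.count()    # stable ordering for the heap
        self.capacity = short_term_capacity

    def remember(self, memory):
        self.short_term.append(memory)
        if len(self.short_term) > self.capacity:
            # The oldest short-term memory gets filed into long-term
            # storage, ordered by importance rather than by recency.
            oldest = self.short_term.pop(0)
            heapq.heappush(self.long_term,
                           (-oldest.importance, next(self._tiebreak), oldest))

    def most_important(self):
        return self.long_term[0][2] if self.long_term else None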

Language Processor and Context Generator

It would have to have a really good English (for us, anyway) language processor, and be able to take cues from all of its inputs and file away short- and long-term memories based upon that processor and a context generator. The context would then be generated from sight and sound, and not just from the words spoken or typed by the people providing input.

We could provide stacks of nouns, verbs, adverbs, adjectives, conjunctions, and so on, and possibly initial images or sounds... also such things as comparisons, similes, and metaphors, or at least the common ones, plus an ability to construct more from internal rules.

It would have to at least generate partial contexts from given information. (He's not talking about a dog's bark, but the bark of a tree, etc.) This implies some ability to talk to oneself, or to "think" things through, before deciding upon a response. Common contexts could be preprogrammed, but logical thinking is a must. Cause must necessarily be followed by effect.
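As a toy illustration of that partial-context idea, here is a Python sketch that picks a word sense by overlap with hand-written cue words. The senses and cues are entirely made up; a real context generator would also draw on sight and sound, as described above.

SENSES = {
    "bark": {
        "dog sound":     {"dog", "loud", "growl", "bite"},
        "tree covering": {"tree", "trunk", "rough", "moss"},
    },
}

def disambiguate(word, sentence):
    """Pick the sense whose cue words overlap most with the sentence."""
    tokens = set(sentence.lower().split())
    best_sense, best_score = None, -1
    for sense, cues in SENSES.get(word, {}).items():
        score = len(cues & tokens)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

print(disambiguate("bark", "the bark of the tree was rough"))
# -> "tree covering"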

The Ability to Forget

It has to have the ability to forget. Certain things become less important over time, especially things of lower importance to its needs (or our needs). In fact, without short-term memory, it would fill up with trivial data and basically freeze up, or uselessly take an inordinate amount of time sorting through an incredible mass of data. So short-term memory is not only a requirement for intelligence; it also provides a focus on the current timeline.

There you have it: useful intelligence cannot exist without the ability to forget. (This was my epiphany!)

I don't think it needs to be as forgetful as we are. But as large as our long-term storage is, we can't focus on all of it at once and still get anything useful from it quickly enough to, say, save our lives. This requires levels of importance, and of time.

Memories absolutely have to become less important with time, and a fading memory must be reinforced repeatedly to show a net gain in importance. "Fool me once..."

Humans do that by losing links to large amounts of memory. Given the right cross-link, we tend to remember most of an occurrence. The memory is there, but over time parts of it become less important to us. Recall works by refreshing an electro-chemical reaction in the brain: recalling a memory resets the time interval since we last had the thought, making it more important again, and sometimes several refreshes are needed to make it more permanent. The memories are now newer, in effect.

Have you ever forgotten one word and then, given that word, remembered the entire sentence or paragraph?
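Here is one way that decay-and-refresh rule might be sketched in Python. The half-life and the reinforcement bonus are arbitrary values assumed purely for illustration; exponential decay is just one candidate, since any monotonically fading function with a refresh reset would satisfy the requirement as stated.

import math
import time

HALF_LIFE = 86400.0   # seconds for importance to halve (assumed: one day)
REINFORCE = 0.2       # importance regained on each recall (assumed)

def current_importance(base, last_refresh, now=None):
    """Importance fades exponentially with time since the last refresh."""
    now = time.time() if now is None else now
    return base * math.pow(0.5, (now - last_refresh) / HALF_LIFE)

def recall(base, last_refresh, now=None):
    """Recalling a memory resets its clock and reinforces it, so a
    fading memory can show a net gain in importance."""
    now = time.time() if now is None else now
    return current_importance(base, last_refresh, now) + REINFORCE, now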

A Story Generator

Our AI has to have a way to generate thoughts of its own. Let's call it a story generator. It should be able to recount things in its short-term memory, relate them to long-term memories in order of importance, and build a variety of contexts from them. Over time the memories fade, which might imply a shift in both the order of importance and a slight shift in context. This may well explain part of what fuzzies our logic.

We could perhaps provide a procedure for questioning others about things that are "on its mind". Approaches, so to speak. (Imagine putting two AIs together and watching them converse.) "What are you thinking?", "What do you think about, blah?", "Did you hear about...?", "Do you know...?", etc.

It's no wonder so many of us sound like broken records as we get older.

Questions and approaches to conversations are geared to time, the current focus, and levels of importance; that is, to time-stamped, cross-linked memories.
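A rough Python sketch of such an approach generator follows; the scoring weights and opener templates are inventions of mine, meant only to show how recency and importance might be mixed.

import random

OPENERS = [
    "What do you think about {}?",
    "Did you hear about {}?",
    "Do you know {}?",
]

def pick_topic(memories, now, recency_weight=0.5):
    """memories: list of (content, timestamp, importance) tuples."""
    def score(m):
        content, timestamp, importance = m
        age = max(now - timestamp, 1.0)          # avoid division by zero
        return recency_weight / age + (1.0 - recency_weight) * importance
    return max(memories, key=score)[0]

def open_conversation(memories, now):
    return random.choice(OPENERS).format(pick_topic(memories, now))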

What makes us individuals is not only what we know, but what becomes important to us, what scares us, what fulfills a need, and how we put the mix together and learn from it.

The Ability to Dream

One learning mode that we have is the ability to dream. We take all of the above and mix it up to build stories with different levels of importance. Doing so reinforces our memories, refreshes them, and teaches us new lessons from old data, resetting our point of reference.
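One could sketch that replay idea in Python like this; the recombination rule and the small reinforcement bonus are my own assumptions, not a claim about how dreaming actually works.

import random

def dream(memories, n=3):
    """Recombine a few memories into a new 'story' and lightly
    reinforce the originals, since replay refreshes them."""
    sample = random.sample(memories, min(n, len(memories)))
    random.shuffle(sample)
    story = " ... ".join(m["content"] for m in sample)
    for m in sample:
        m["importance"] += 0.05   # assumed refresh bonus per replay
    return story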

So, does that mean the ability to lie to oneself is significant to intelligence?

Of course, this has to affect who we are, and how we relate to the current focus in time, but generally, we keep an overall persona.

Besides the items that were pre-programmed, our AI would still have to learn, so the person who writes the thing would have to spend years training it, or allow others to do so, just as we do with real people.

This would be a great relinquishing of control. Remember, whatever the end result, nobody is really looking for an uncontrolled, independent entity here.

Asimov attempted to address this idea with his ineradicable Three Laws of Robotics, but then proceeded to write stories detailing exceptions and paradoxes inherent in these rules.

Asimov's laws were an immutable design in the structure of the positronic brain. Easy enough to say, but impossible to actually do.

I suspect that in our increasingly politically correct climate, some fool or fools will eventually set out to "free the AIs", which could well lead to an Artificial Intelligence with the accent of an Austrian-American actor seeking to take over robotic manufacturing and design a time machine, but that is another story.

I don't think we really want our computers to be able to override our own opinions or directions; rather, we want our little slave-in-a-box that we can trust to do useful work for us, and to keep our secrets, and any dissenting ideas or opinions, to itself.

Time to Learn

Any AI as complex as we are would likely take at least as long to learn and, by definition, forget as much as we do. Who decides how much memory to keep and how much to let fade away? How much short-term memory is useful, and how much fixed memory should be retained... or allowed to fade over time, if it is not refreshed by use?

Well, those are a few of the milestones that we can design for. As results come in, we could make adjustments until closer approximations are reached. Of course, there comes a time when upgrading the firmware becomes the equivalent of a personality change.

It gives us another context for "doing a memory swap".

Conclusion

These are just a few thoughts on the subject. They by no means outline a full hierarchy of needs or anything like that, but they do show a few of the milestones that could be programmed towards.

When I first began thinking about this stuff, I had all of 5K RAM in my VIC 20 computer.

Now, how much does that sound like "I almost fell off my dinosaur"?

Still, it sounds like a fun project to work on, and I've even considered starting an open source, online project using a "milestones" approach as a general guideline.

Not sure I'd live long enough to see it through.

Before you yell at me: I completely understand this is a simulacrum approach, not necessarily a neural-net, grows-like-a-virus type of approach. But what exactly are we aiming for here? If it passes the Turing test and does the job required of it, who cares?

Although, I'm sure someone will want to be able to point and say, "It's Alive!" The minute someone makes a single change to make the creature more amiable, as they most certainly will, it becomes a mere simulation.

You know, "free will" is over-rated, right?

Fred Perkins
circa 2015

 

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

