
Milestone Approach to AI Development

2 Dec 2015 · CPOL · 9 min read
Some high-level thoughts on Artificial Intelligence. Is it really AI we are looking for?

Introduction

This tip is not so much a code project as a general look at a few requirements for designing an intelligent system. It is not intended to cover all the possibilities, nor shall I pretend to be that intelligent myself. I have been thinking about this subject, off and on, since I got my first desktop computer, which is what eventually led to me becoming a programmer in the first place. So, let's take a high-level look at what it is we are trying to accomplish in developing an AI system.

I had an epiphany while working out some ideas concerning the design of an AI system, so I thought I'd write down how that happened, along with a few thoughts on the general requirements of such an open source project, and perhaps generate a little interest in the subject.

It took the Universe 11 billion years or so to design an intelligent being, and we are the only ones we know of, or even suspect, to be intelligent.

We don't have the time to write a computer program that works out what has taken nature billions of years of mutation and natural selection, but what we can do is attempt to identify the milestones of that process and, perhaps, "design in" many of those milestones up front.

What is Required?

We don't really want an intelligent machine anyway. Do we? Ignoring the bleak connotations, we want a slave: something that lives in a box, is deaf and blind, and only takes input as directed. We want it to perform difficult tasks that we don't necessarily have time for, and to act like a human being in the process.

We don't want an independent being that can ignore our orders. If possible, we want it smarter than us, but who among us wants to relinquish control?

What does an AI Need?

To even get close to designing an AI, we would, like Maslow, need to identify its hierarchy of needs, and get some idea of how the thing should be structured.

So, Milestones?

The Need to Learn

It would require a need to learn: a drive to learn from whatever data it receives, via keyboard, video, sound, textual input, touch sensor, or anything else.

Order of Importance

There must be an order of importance to what it learns. That requires time-stamped memory stacks: a short-term memory to track the most important or most current memories in its "life", a long-term memory to file things away in order of importance, and some method of determining why some memories matter more than others, regardless of when they were learned.

For example, anything threatening its "life" or well-being, or the lives or well-being of others, would be important. Anything presented as a traumatic event by anyone would carry a certain level of importance. So, it requires not only time-stamped memory stacks, but cross-linking by level of importance.
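As a rough illustration only, the time-stamped, importance-ranked store with cross-links described above might be sketched like this in Python. Every name, size, and score here is my own invention, a sketch of the idea rather than a proposed implementation:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    """One time-stamped memory with an importance score and cross-links."""
    content: str
    importance: float                        # 0.0 (trivial) .. 1.0 (life-threatening)
    timestamp: float = field(default_factory=time.time)
    links: set = field(default_factory=set)  # cross-links to related memories, by content key

class MemoryStore:
    def __init__(self, short_term_size=5):
        self.short_term = []                 # most recent memories, newest last
        self.long_term = {}                  # filed away, keyed by content
        self.short_term_size = short_term_size

    def remember(self, content, importance, links=()):
        m = Memory(content, importance, links=set(links))
        self.short_term.append(m)
        # overflow from the short-term stack gets filed into long-term storage
        while len(self.short_term) > self.short_term_size:
            old = self.short_term.pop(0)
            self.long_term[old.content] = old
        return m

    def most_important(self, n=3):
        """Rank all memories by importance, regardless of when they were learned."""
        pool = self.short_term + list(self.long_term.values())
        return sorted(pool, key=lambda m: m.importance, reverse=True)[:n]
```

Note that `most_important` deliberately ignores the timestamps: a high-importance memory stays retrievable no matter how long ago it was learned, which is exactly the cross-linking-by-importance idea.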

Language Processor and Context Generator

It would have to have a really good English (for us) language processor, and be able to take clues from all of its inputs and file away short- and long-term memories based on that language processor and a context generator. The context would then be generated from sight and sound, and not just from the words spoken or typed by the people providing input.

We could provide stacks of nouns, verbs, adverbs, adjectives, conjunctions, etc., and possibly initial images or sounds. Also such things as comparisons, similes, and metaphors, or at least the common ones, with an ability to construct more from internal rules.

It would have to at least generate partial contexts from given information. (He's not talking about a dog's bark, but the bark of a tree, etc.) This implies some ability to talk to oneself, or to "think" things through before deciding upon a response. Common contexts could be preprogrammed, but logical thinking is a must: cause must necessarily be followed by effect.
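A toy version of such a context generator could score each sense of an ambiguous word against the surrounding words. This sketch (the senses and cue lists are purely illustrative) disambiguates "bark" the way the parenthetical above describes:

```python
# Each sense of an ambiguous word carries a bag of context cue words.
SENSES = {
    "bark": {
        "dog sound": {"dog", "howl", "loud", "puppy", "growl"},
        "tree skin": {"tree", "trunk", "oak", "rough", "branch"},
    },
}

def resolve(word, sentence):
    """Pick the sense whose cue words overlap most with the surrounding context."""
    context = set(sentence.lower().split())
    senses = SENSES.get(word, {})
    if not senses:
        return None
    # score each sense by how many of its cues appear in the sentence
    return max(senses, key=lambda s: len(senses[s] & context))
```

For example, `resolve("bark", "the rough bark of the old oak tree")` picks "tree skin", while a sentence about a loud dog picks "dog sound". A real context generator would of course draw on far more than word overlap, including the sight and sound channels mentioned above.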

The Ability to Forget

It has to have the ability to forget. Certain things become less important over time, especially things of lower importance to its needs (or our needs). In fact, without a disposable short-term memory, it would fill up with trivial data and basically freeze up, or waste an inordinate amount of time sorting through an incredible mass of data. So short-term memory is not only a requirement for intelligence; it also provides a focus on the current timeline.

There you have it: useful intelligence cannot exist without the ability to forget. (This was my epiphany!)

I don't think it needs to be as forgetful as we are. But as large as our long-term storage is, we can't focus on all of it at once and extract anything useful from it quickly enough to, say, save our lives. This requires levels of importance, and of time.

Memory absolutely has to become less important with time, and must be reinforced repeatedly in order for a fading memory to show a net gain in importance. "Fool me once..."

Humans do that by losing links to large amounts of memory. Given the right cross-link, we tend to remember most of an occurrence. The memory is there, but over time parts of it become less important to us. Recall works by refreshing an electro-chemical reaction in our brains, which makes the memory more important because it resets the time interval since we last had that thought; sometimes several refreshes are needed to make it more permanent. The memories are, in effect, made new again.
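The decay-and-refresh idea can be put in very simple terms: importance falls off with time since the last recall, and each recall resets the clock and nudges the base importance upward, so repeated recollection yields a net gain. Here is a minimal sketch, with an entirely arbitrary one-week half-life:

```python
HALF_LIFE = 7 * 24 * 3600.0  # importance halves after a week without refresh (illustrative)

def current_importance(base_importance, last_refresh, now):
    """Effective importance decays exponentially with time since the last recall."""
    age = now - last_refresh
    return base_importance * 0.5 ** (age / HALF_LIFE)

def refresh(base_importance, boost=0.1):
    """Recalling a memory resets its clock and nudges its base importance upward,
    so a fading memory that keeps being recalled shows a net gain ("fool me once...")."""
    return min(1.0, base_importance + boost)
```

A memory recalled every few days never decays far and steadily gains base importance; one left alone fades toward zero and eventually loses out in any importance-ranked retrieval, which is the "ability to forget" without ever explicitly deleting anything.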

Have you ever forgotten one word and, given that word, remembered the entire sentence or paragraph?

A Story Generator

Our AI has to have a way to generate thoughts of its own. Let's call it a story generator. It should be able to recount things in its short-term memory, relate them to long-term memories in order of importance, and build a variety of contexts from them. Over time the memories fade, which might imply a shift in both the order of importance and a slight shift in context. This may well explain part of what fuzzies our logic.
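A bare-bones story generator along these lines might start from the most important short-term memory and chase its cross-links into long-term storage, stringing the results together by importance. The data layout here is hypothetical:

```python
def generate_story(short_term, long_term, max_links=2):
    """Start from the most important recent memory, then pull in cross-linked
    long-term memories in order of importance to build a small narrative."""
    if not short_term:
        return ""
    seed = max(short_term, key=lambda m: m["importance"])
    linked = [long_term[k] for k in seed.get("links", ()) if k in long_term]
    linked.sort(key=lambda m: m["importance"], reverse=True)
    parts = [seed["content"]] + [m["content"] for m in linked[:max_links]]
    return " ... which brings to mind: ".join(parts)
```

As the decay mechanism erodes importance scores, the same call would seed from a different memory and follow different links, giving the drifting "order of importance and slight shift in context" described above.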

We could perhaps provide a procedure for questioning others about things that are "on its mind". Conversational approaches, so to speak. (Imagine putting two AIs together and watching them converse.) "What are you thinking?", "What do you think about blah?", "Did you hear about...?", "Do you know...?", etc.

It's no wonder so many of us sound like broken records as we get older.

Questions and approaches to conversations are geared upon time, the current focus, and levels of importance; that is, time-stamped, cross-linked memories.

What makes us individuals is not only what we know, but what becomes important to us, what scares us, what fulfills a need, and how we put the mix together and learn from it.

The Ability to Dream

One learning mode that we have is the ability to dream. We take all of the above and mix it up to build stories with different levels of importance. Doing that reinforces our memories, refreshes them, teaches us new lessons from old data, and resets our point of reference.

So, does that mean the ability to lie to oneself is significant to intelligence?

Of course, this has to affect who we are, and how we relate to the current focus in time, but generally, we keep an overall persona.

Besides the items that were pre-programmed, our AI would still have to learn, so whoever builds the thing would have to spend years training it, or allow others to do so, as we do with real people.

This would be a great relinquishing of control. Remember, whatever the end-result, nobody is really looking for an uncontrolled, independent entity, here.

Asimov attempted to address this idea with his ineradicable Three Laws of Robotics, but then proceeded to write stories detailing exceptions and paradoxes inherent in these rules.

Asimov's laws were an immutable design in the structure of the positronic brain. Easy enough to say, but impossible to actually do.

I suspect that in our increasingly politically correct climate, some fool or fools will eventually set out to "free the AIs", which could well lead to an Artificial Intelligence with the accent of an Austrian-American actor seeking to take over robotic manufacturing and design a time machine, but that is another story.

I don't think we really want our computers to be able to override our own opinions or directions; rather, we want our little slave-in-a-box, one that we can trust to do useful work for us, and to keep our secrets, and any dissenting ideas or opinions, to itself.

Time to Learn

Any AI as complex as we are would likely take at least as long to learn and, by definition, would forget as much as we do. Who decides how much memory to keep and how much to let fade away? How much short-term memory is useful, and how much fixed memory should be retained... or allowed to fade over time if it is not refreshed by use?

Well, those are a few of the milestones that we can design for. As results come in, we could make adjustments until closer approximations are reached. Of course, there comes a time when upgrading the firmware can be the equivalent of a personality change.

It gives a whole new context to "doing a memory swap".

Conclusion

These are just a few thoughts on the subject, and they by no means outline a hierarchy of needs or anything like that, but they do show a few of the milestones that could be programmed towards.

When I first began thinking about this stuff, I had all of 5K RAM in my VIC 20 computer.

Now, how much does that sound like "I almost fell off my dinosaur"?

Still, it sounds like a fun project to work on, and I've even considered starting an open source, online project using a "milestones" approach as a general guideline.

Not sure I'd live long enough to see it through.

Before yelling at me: I completely understand this is a simulacrum approach, and not necessarily a neural-net, grows-like-a-virus type of approach. But what exactly are we aiming for here? If it passes the Turing test and does the job required of it, who cares?

Although, I'm sure someone will want to be able to point and say, "It's alive!" But the minute someone makes a single change to make the creature more amicable, as they most certainly will, it becomes a mere simulation.

You know, "free will" is over-rated, right?

Fred Perkins
circa 2015

 

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Fred Perkins, Software Developer (Senior), United States
A full-stack (if I can use the term) ecommerce applications and database programmer with 20+ years of programming experience, working mostly with T-SQL and .NET languages.
Comments and Discussions

 
Question: Some Further Thoughts...
C Grant Anderson, 4-Dec-15 11:45

Answer: Re: Some Further Thoughts...
Fred Perkins, 4-Dec-15 15:53
Thanks Grant!

2K views and only 2 votes. I was wondering if the article was a little too confusing. A very smart fellow (James Foley), who was also a psychology major, helped to set me on this course of thought.

I couldn't agree with you more, although I'm sure there are some very smart people working on variations of artificial intelligence. I don't think anything really brings them all together into one place. It's a vast and daunting subject, and one of those things left for last, you know? An engineer designs a cool bot, then at the last minute makes it do a weak simulation. A geneticist realizes that it's all a giant programming task (genetics, that is) and can only scratch the surface of writing some code to do it or to analyze it. I'm not smart enough to knock them, though. I'm sure they have accomplished a lot from very little, just in my lifetime alone. It shows that programmers will be in demand for years to come.

Some people work on a simulation of intelligence, and others, I think, want to create life.

I think the real trick in AI is in making a context generator work. Everything else kind of revolves around that.

You know, the long-term/short-term memory thing is kind of significant. If you think of short-term memory as a kind of floating focus on the most current or significant long-term thoughts, then you see it doesn't take massive memory stores to make that happen. It's a recursive, rehashing thing.

About 10 years before the PC, I remember a teacher showing us a program written on a punch-card system that could write poetry from random phrases. I was impressed.

Someone once said there are at least seven different independent intelligences operating in the human mind (why seven? Who knows).

Imagine the calculations required to catch a baseball in three-dimensional space (Jim's words, not mine), yet we do it without consciously thinking about it. We coordinate our bodies and move our muscles into the correct position based upon our stereoscopic sight; well, I miss a lot, but some can do it. What is amazing is that we keep our hearts beating on time, breathe at regular intervals, attempt to avoid danger, and do a myriad of other things simultaneously while we are doing it. All of it controlled and processed by the mind.

I suspect our memories aren't as simple as single-track refreshes. I'm pretty sure we make multiple tracks every time we think about something. Just the fact that memories drop off makes it different each time around the loop.

You might have noticed this was my first contribution here. I've knocked the idea around for a while, and not very seriously, so I thought it was time to share some thoughts on it.

Well, you almost have me writing another article.

Of course I'm interested.

Like you, I'd love to see some coordination of the effort, and not all of it high-brow. I mean, calculus is cool, but there are many levels to such a venture. It would be pretty cool for a mechanical engineer to have a place to go to drum up some cool code for his latest Arduino project. It's a large Internet; I'm sure there's something out there.

I still think the milestone approach has some merit to it.

But who am I? I've only been talking about it.
- Fred

