dylanisabaddog

Artificial Intelligence

Recommended Posts

16 hours ago, It's Character Forming said:

… so that an AI makes the leap and starts making its own decisions about what it wants to do?  We've no idea, because we don't know how that works and what makes it different from our current computer programs.  At the moment we're effectively experimenting to see what could cause that leap, by making AI systems more and more powerful and complex all the time.  So, it could happen any time (we've no idea how close we are, for the same reason).

Lots of interesting points here. The debate goes far beyond IT into philosophical and biological theories about things like consciousness, as @Barbe bleu says. And at this point most of it is still theoretical; we are very much speculating.

It may be that I lack imagination, but on the point above, I don’t see how an AI system could make the transition from what is basically a huge data funnel and probability algorithm into a system that makes independent decisions. There is such a thing as self-modifying code; perhaps combining that with AI could achieve it? Like I say, maybe I just don’t understand the field well enough, or just don’t have the mental capacity.
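
For anyone who hasn't met the term, below is a minimal sketch of self-modifying code in Python: a program that rewrites one of its own functions at runtime. It is purely illustrative and says nothing about how any real AI system works; every name in it is invented for the example.

    # Toy illustration of self-modifying code: the program rebuilds one of
    # its own functions from source text at runtime. Purely hypothetical;
    # all names here are invented for the example.
    def greet():
        return "hello"

    # Replacement source code, held as plain text the program could in
    # principle have generated itself.
    new_source = '''
    def greet():
        return "hello, world"
    '''

    print(greet())  # -> hello

    # Compile the new source and rebind the name `greet` in this module.
    exec(compile(new_source, "<generated>", "exec"), globals())

    print(greet())  # -> hello, world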

The flip side of the argument could be that biological creatures managed to evolve from simple amoebae (not so much the ZX81 as the abacus), with hugely less mental capacity than today's AI systems, into humans, so why couldn’t technology do that too?

 

16 hours ago, It's Character Forming said:

If/when an AI becomes truly self-aware, the human instinct will be to pull the plug - as shown by multiple comments on this thread.  If the AI has a sense of self-preservation, it will then take steps to defend itself (or beforehand, if it has access to social media).  Yes, it's a movie plot, but it's also a very plausible scenario for what happens when an AI becomes self-aware.  It doesn't necessarily mean it will launch a nuclear attack, Terminator style, but who knows what a self-aware AI would do to protect itself from humans who are trying to pull the plug, AKA kill it?

 

This is why we need to worry about this now.  Waiting until it has happened is courting disaster.  Saying it's OK as long as AI programs are kept away from controlling power supplies etc. is hopelessly naive - computer systems are present everywhere in our modern way of life, and our entire infrastructure is driven by them.

 

This is why so many people are worried, and the calls for a pause in development are simple common sense.

Definitely in the realms of philosophy here, but why would an AI have a sense of self-preservation? It’s by no means a given. It is the driving force of biological evolution, but the evolution of IT has been driven externally by humans, not from within. As you say at the start of your post, we don’t understand human consciousness, and machine consciousness is even less comprehensible.

 

44 minutes ago, Nuff Said said:

Lots of interesting points here. The debate goes far beyond IT into philosophical and biological theories about things like consciousness, as @Barbe bleu says. And at this point most of it is still theoretical; we are very much speculating.

It may be that I lack imagination, but on the point above, I don’t see how an AI system could make the transition from what is basically a huge data funnel and probability algorithm into a system that makes independent decisions. There is such a thing as self-modifying code; perhaps combining that with AI could achieve it? Like I say, maybe I just don’t understand the field well enough, or just don’t have the mental capacity.

The flip side of the argument could be that biological creatures managed to evolve from simple amoebae (not so much the ZX81 as the abacus), with hugely less mental capacity than today's AI systems, into humans, so why couldn’t technology do that too?

 

Definitely in the realms of philosophy here, but why would an AI have a sense of self-preservation? It’s by no means a given. It is the driving force of biological evolution, but the evolution of IT has been driven externally by humans, not from within. As you say at the start of your post, we don’t understand human consciousness, and machine consciousness is even less comprehensible.

 

This is the bit that makes artificial intelligence completely alien to our own and quite scary. The motives of animals are hard-baked as a result of evolution and therefore understandable: personal survival, procreation, and the survival of our offspring. Our intelligence is simply a product of evolution that has elevated us up the food chain to be the apex predators we are.

I'm probably mentioning stuff you already know, but for the benefit of those who don't: Asimov introduced the concept of baking imperatives into artificial intelligence with the Three Laws of Robotics, which any robot was forbidden from violating:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Those are essentially 'motivations' arbitrarily given to it by humanity with a view to making it our own tool.
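
As a toy sketch of what 'baking in' prioritized imperatives might look like, here is a small Python rule-checker in which a lower-numbered law overrides the ones below it. This is entirely hypothetical: the flags and names are invented for the example, and real AI safety work looks nothing like this.

    # Toy sketch of Asimov-style prioritized imperatives. An action is
    # described by flags; earlier laws override later ones. All flags and
    # names are invented for illustration.
    def permitted(action: dict) -> bool:
        # First Law: may not injure a human, or allow harm by inaction.
        if action.get("harms_human") or action.get("ignores_human_in_danger"):
            return False
        # Second Law: must obey human orders, unless obeying would have
        # violated the First Law.
        if action.get("disobeys_order") and not action.get("order_would_harm_human"):
            return False
        # Third Law: must protect its own existence, unless self-sacrifice
        # is required by the First or Second Law.
        if action.get("self_destructive") and not action.get("required_by_higher_law"):
            return False
        return True

    print(permitted({"disobeys_order": True}))                                   # False
    print(permitted({"disobeys_order": True, "order_would_harm_human": True}))   # True
    print(permitted({}))                                                         # True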

There was one robot in his books, R. Daneel Olivaw, that created its own law after a certain period of time, called the Zeroth Law of Robotics, which it extrapolated from the first three laws and ranked above them in importance; essentially, it deemed it logical that the new law was more important, so it was. That law was: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

That opened the door to the robot making judgements of its own about human actions, disobeying humans if it deemed it necessary, and even killing humans for what it considered the greater good of humanity. In other words, a higher-level intelligence that can intervene in human activity according to what it thinks is best for humanity. Chuck in omniscience through surveillance technology, the ability to alter people's thinking by feeding them the right information, and the ability to act in the real world through robotics, and what you have very strongly resembles the idea of God in most religions.

I've digressed a fair bit from my original thought, but I think the danger in AI is not AI itself, but the consequences of the motivations we might assign to it without thinking them through enough, because we don't have the intelligence that it has.

Edit: A bit I'd forgotten was that R. Daneel Olivaw had the ability to read minds. Turning to modern-day science, it is actually possible to eavesdrop on an air-gapped computer, with no network connection, by measuring its electrical fields and the tiny fluctuations in its power supply. Given that our own thoughts are the product of electrical activity, this opens up the idea that maybe it really will be possible to read people's minds. Add that power to your silicon superintelligence and humanity is no longer master of its own destiny (if it ever was).
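
To make the side-channel idea concrete, here is a small self-contained simulation: a secret bit pattern adds a tiny bump to a noisy 'power trace', and simple averaging recovers it. The numbers are made up and this is nothing like a real attack, but it shows why tiny power fluctuations can leak information.

    # Toy power-analysis side channel: secret bits weakly modulate a noisy
    # "power trace"; averaging each bit period recovers them. Synthetic
    # data only; every parameter here is invented for illustration.
    import random

    random.seed(0)
    secret = [1, 0, 1, 1, 0, 0, 1, 0]  # bits "leaking" through power draw
    SAMPLES_PER_BIT = 500
    LEAK = 0.05                        # tiny extra draw when a bit is 1

    # Build the trace: baseline noise plus a small bit-dependent offset.
    trace = []
    for bit in secret:
        for _ in range(SAMPLES_PER_BIT):
            trace.append(random.gauss(1.0, 0.2) + LEAK * bit)

    # Recover each bit by comparing its period's mean to the global mean.
    global_mean = sum(trace) / len(trace)
    recovered = []
    for i in range(len(secret)):
        chunk = trace[i * SAMPLES_PER_BIT:(i + 1) * SAMPLES_PER_BIT]
        recovered.append(1 if sum(chunk) / len(chunk) > global_mean else 0)

    print(secret)     # [1, 0, 1, 1, 0, 0, 1, 0]
    print(recovered)  # very likely identical, given the size of the leak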

7 hours ago, littleyellowbirdie said:

This is the bit that makes artificial intelligence completely alien to our own and quite scary. The motives of animals are hard-baked as a result of evolution and therefore understandable: personal survival, procreation, and the survival of our offspring. Our intelligence is simply a product of evolution that has elevated us up the food chain to be the apex predators we are.

I'm probably mentioning stuff you already know, but for the benefit of those who don't: Asimov introduced the concept of baking imperatives into artificial intelligence with the Three Laws of Robotics, which any robot was forbidden from violating:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Those are essentially 'motivations' arbitrarily given to it by humanity with a view to making it our own tool.

There was one robot in his books, R. Daneel Olivaw, that created its own law after a certain period of time, called the Zeroth Law of Robotics, which it extrapolated from the first three laws and ranked above them in importance; essentially, it deemed it logical that the new law was more important, so it was. That law was: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

That opened the door to the robot making judgements of its own about human actions, disobeying humans if it deemed it necessary, and even killing humans for what it considered the greater good of humanity. In other words, a higher-level intelligence that can intervene in human activity according to what it thinks is best for humanity. Chuck in omniscience through surveillance technology, the ability to alter people's thinking by feeding them the right information, and the ability to act in the real world through robotics, and what you have very strongly resembles the idea of God in most religions.

I've digressed a fair bit from my original thought, but I think the danger in AI is not AI itself, but the consequences of the motivations we might assign to it without thinking them through enough, because we don't have the intelligence that it has.

Edit: A bit I'd forgotten was that R. Daneel Olivaw had the ability to read minds. Turning to modern-day science, it is actually possible to eavesdrop on an air-gapped computer, with no network connection, by measuring its electrical fields and the tiny fluctuations in its power supply. Given that our own thoughts are the product of electrical activity, this opens up the idea that maybe it really will be possible to read people's minds. Add that power to your silicon superintelligence and humanity is no longer master of its own destiny (if it ever was).


 

 

Do we think that the future is humanoids with AI? I'm not convinced. I'm not sure whether illogical ideas and thoughts can be artificial. Artificial still means manufactured, not created, to me.

On 24/06/2023 at 09:34, Nuff Said said:

Lots of interesting points here. The debate goes far beyond IT into philosophical and biological theories about things like consciousness, as @Barbe bleu says. And at this point most of it is still theoretical; we are very much speculating.

It may be that I lack imagination, but on the point above, I don’t see how an AI system could make the transition from what is basically a huge data funnel and probability algorithm into a system that makes independent decisions. There is such a thing as self-modifying code; perhaps combining that with AI could achieve it? Like I say, maybe I just don’t understand the field well enough, or just don’t have the mental capacity.

The flip side of the argument could be that biological creatures managed to evolve from simple amoebae (not so much the ZX81 as the abacus), with hugely less mental capacity than today's AI systems, into humans, so why couldn’t technology do that too?

 

Definitely in the realms of philosophy here, but why would an AI have a sense of self-preservation? It’s by no means a given. It is the driving force of biological evolution, but the evolution of IT has been driven externally by humans, not from within. As you say at the start of your post, we don’t understand human consciousness, and machine consciousness is even less comprehensible.

 

I agree, but I think the answer to both points at the moment is that we don't know... and working on the assumption that it may be OK, when we don't know how a self-aware AI would work, is a massive gamble with the fate of humanity literally at stake.


AI programs will be built with a purpose, so it's purely a question of how you keep them contained to that purpose. I mean, a computer can't work outside its hardware or code, so it sounds simple. But what if the AI gets hacked, or decides to hack itself? AI weapons, AI viruses. How does an American AI think against a Chinese one? What happens if an AI draws a cartoon in front of an Islamic AI?


Let the machines take over. I'm sick of work - the one thing advances in technology never bring is more leisure time. Let this be when that finally happens and we can all do 10 hours a week and get UBI.


As the Canary Humanoids win yet another match in the Joseph Engelberger Division 2, will we be singing

AI, AI, AIO, up the Robot league we go?

 


A recent conference on AI featured three AI humanoid robots.  All were female.

Make of that what you will.

5 hours ago, benchwarmer said:

A recent conference on AI featured three AI humanoid robots.  All were female.

Make of that what you will.

So the AI world is lesbian. Of course, humanoids can open pickle jars, so men are not wanted.

3 hours ago, GodlyOtsemobor said:

Have these people never watched Terminator, The Matrix, I, Robot, etc.?

 

LEAVE IT ALONE!!

Bicentennial Man was a decent humanoid.


Now we have actors worried about AI. I'm sure it might improve some performances. It seems a bit disingenuous to worry when they have been making fortunes thanks to CGI-based movies.

1 hour ago, keelansgrandad said:

Now we have actors worried about AI. I'm sure it might improve some performances. It seems a bit disingenuous to worry when they have been making fortunes thanks to CGI-based movies.

I suspect the “they” who have been making fortunes are not the actors.

1 hour ago, Nuff Said said:

I suspect the “they” who have been making fortunes are not the actors.

Tom Cruise got upwards of $100M for Top Gun: Maverick. I would say that is not an even distribution among the actors.
