A.I. is an acronym for Artificial Intelligence, a form of programmed and/or machine-learning technology. It has been integrated into many real-world applications, including climate prediction, disease control and cures, and applied sciences addressing current and future real-world challenges for humanity.
Talking of acronyms, the robotics universe is absolutely full of them. I find this quite funny in a way, given how efficient they make explanations, if you know what I mean!
Knowledge-based classification in A.I. works in many different and complex ways and is further divided, or compartmentalized, into what is referred to as factoring. For example, there is direct input data programmed for responses to questions (knowledge-based inductive learning). In its most basic form, this takes a series of inputs such as the question “What is that blue color over there?”; the answers could comprise many options, such as “The Sky”, “The Sea”, or something more direct and unique like “In this particular case, it is because of Cherenkov radiation”. We can see that even with such a simple question, many outputs can be decided upon.
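To make that idea concrete, here is a minimal sketch of the pre-programmed, knowledge-based style of question answering described above: a hand-built lookup table maps one question to several candidate answers. The table contents and function names are my own invention for illustration, not any particular system's API.

```python
# A minimal sketch of knowledge-based (pre-programmed) question answering.
# One input question maps to many possible outputs; the knowledge base
# below is invented purely for illustration.

KNOWLEDGE_BASE = {
    "what is that blue color over there?": [
        "The Sky",
        "The Sea",
        "In this particular case, it is because of Cherenkov radiation",
    ],
}

def candidate_answers(question: str) -> list[str]:
    """Return every stored answer for a question, or an empty list."""
    return KNOWLEDGE_BASE.get(question.strip().lower(), [])

print(candidate_answers("What is that blue color over there?"))
```

Even this toy version shows the key point: the system only ever returns what was programmed into it, which is what separates this direct, rule-based style from the adaptive forms discussed next.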
With some variations of AI, there is a form of intelligence that can be applied and then adapted to gain new skills through referencing and creating new subsets of data. These subsets are used as a launch platform to decide upon the given outputs, which could include input factors such as the questioner’s current emotions, environmental surroundings, choice of routes in a maze and so on – this is the form of deductive knowledge.
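A hedged sketch of how such extra input factors might steer the same question toward different outputs: each candidate answer is scored against simple context keywords (standing in for mood, surroundings and so on), and the best-scoring answer wins. The keywords, weights and answers here are invented for illustration, not a real system.

```python
# Sketch: context factors (surroundings, mood, etc.) steering output
# selection. The scoring scheme is invented purely for illustration.

def choose_answer(candidates: list[str], context: dict[str, int]) -> str:
    """Pick the candidate whose text best matches weighted context keywords."""
    def score(answer: str) -> int:
        return sum(
            weight
            for keyword, weight in context.items()
            if keyword.lower() in answer.lower()
        )
    return max(candidates, key=score)

candidates = ["The Sky", "The Sea", "Cherenkov radiation in the reactor pool"]

# Standing outdoors on a clear day:
print(choose_answer(candidates, {"sky": 2, "sea": 1}))
# Touring a research reactor:
print(choose_answer(candidates, {"reactor": 3}))
```

The design choice worth noticing is that the candidate answers stay fixed while the context changes the selection, which is one simple way to picture how the same question can yield different outputs for different questioners.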
There is a difference between A.I. and machine learning, but they can also be combined to work together, and seemingly most scientific and forward-thinking projects of this sort do just that, mixing pre-programmed and real-time processing. Websites such as https://thispersondoesnotexist.com and https://www.eviebot.com allow us mere mortals to access advanced AI data streams and services. Front-facing, publicly accessible tech such as this is all well and good, but what about the AI tech that we don’t see, hear about or cannot access?
According to a source on the internet that has now disappeared, Facebook set up a data center and servers loaded with A.I. data-sets. It is not clear what the actual end product or vision for the system was, but apparently, some time after the servers and integrated AI systems were set up, they started ‘talking’ with each other in regular coded exchanges. Suddenly, without warning and without any extra data being merged into the system, it started creating new sub-routines (as was expected), but chunks of this data were unrecognizable to the very programmers and engineers who had created the system first-hand. As time progressed, more and more of this data, including the actual packets streaming across the network, became inaccessible. It then became clear that the system had created its own language and was communicating with itself and other linked, integrated systems.
Apparently the engineers and admins got so freaked out by a system they could not understand that they shut down and decommissioned the whole experimental package!
If this urban myth is true, the power of some advanced AI systems is almost relentless in its advances and could ultimately be unstoppable too. One plus point for Facebook, maybe, was that this experimental data center was not connected to the surface web. If it had been, would this self-modifying coded system have ‘escaped’ and be in the wild now? It could have accessed other systems, including Internet of Things devices and so on… Makes you think, doesn’t it?
To see front-facing experimental research products and services from Facebook, see their AI official website at: https://ai.facebook.com