[h1]Technology[/h1]
[hider=AI Classification and some approaches to AI and host design]
[b]Basic AI classification:[/b]

[u][b]Hardcoded entity:[/b][/u] Not actually an AI, and mentioned here purely for the sake of making the distinction. Can refer to any program of any level of complexity, as long as its behavior is fully predetermined by its in-built or given instruction-set. A hardcoded entity therefore lacks the ability to process unexpected information in any properly meaningful manner. It may operate on a growing database that gets new information systematically inserted into it, and give varying output at different points in time based on that, but only according to its hardcoded inner logic; it cannot alter its behavior unless externally reprogrammed. It is at least theoretically possible for a hardcoded entity of sufficient complexity to be externally virtually indistinguishable from an AI (and potentially even one of a higher level), especially when only limited (such as plain-text-based) forms of interaction are possible, but only if its programmer(s) could foresee all possible scenarios (and such a solution would in most instances be much more cumbersome - for the sheer number of cases that would have to be covered in data storage, if nothing else - than a proper "thinking" entity). For the sake of avoiding confusion, let it be mentioned that plenty of - especially very low-level - AIs have hardcoded elements in them to approximately compensate for the lack of some more intelligent components (and [i]vice versa[/i] - a mostly hardcoded program can contain intelligent elements).

[u][b](SD)AR - (self-developing) artificial recognizer:[/b][/u] Often also just grouped with (low-level) analyzers. The lowest level of "smart" program. Minimally only capable of identifying patterns (aside from just following instructions and/or running them through its hardcoded components) and, within its constraints, reacting accordingly to the conclusion drawn from whatever it managed to recognize. The selection of reactions may be predetermined, though the exact form of any specific reaction is generally at least influenced by the conclusions drawn from processing the input. May also be capable of filling in missing pieces by deriving them. Generally fully passive and does not take initiative of its own. Does not possess a distinguished self. May be capable of out-calculating a human depending on its programming, but other than not getting tired or distracted, will never "out-think" one.

[u][b](SD)AA - (self-developing) artificial analyzer:[/b][/u] At minimum, capable of doing pretty much what it says on the label, and in cases where recognizers and analyzers are differentiated, the latter is furthermore capable of doing so on a notably higher level than the former. As a general rule, a recognizer will only seek patterns and/or similarities it has been given instructions to seek, whereas an analyzer will take greater liberties in its processing of input unless explicitly instructed otherwise - in terms of more practical applications, a recognizer will generally only get you a predictable (if at most thus far unnoticed) result, whereas an analyzer may come up with wholly unexpected observations and realize on its own when it simply does not have enough information to come up with a definite answer.
Similarly, even AAs that have no specialization in language processing tend to be more lax with regard to the form of the instructions they receive (and may take variations of natural language), whereas ARs not specifically designed for natural language interpretation will generally demand a specific kind of formal query language (or otherwise very strictly structured input), or they will not respond in the expected manner. (SD)AAs are typically semi-passive, and will not initiate anything purely of their own accord, but they will usually at least try to make sense of any input they receive, and perhaps even continue processing things after giving an initial reply. May or may not have a proper self-perspective. Generally capable of out-computing and out-multitasking a human; can be expected to be better at figuring out intelligent solutions in some areas and worse in others.

[u][b](SD)AI - (self-developing) artificial intelligence:[/b][/u] The nominal classification. Mainly distinguished from analyzers through the capacity for more abstract and "out-of-the-box" thinking. An analyzer will give you a conclusion (be it definite or probable) and perhaps a suggestion derived from said conclusion and pre-existing information combined (and some might even make requests when they find they did not have enough input for a meaningful answer, or find that their conclusion could lead to something further, should they have an in-built inclination for such actions) - in any case, it can be said that an AA will always be mostly objective. An AI, in turn, can have what are basically just opinions, and be subjective - and have wants that are more than just the fruits of intentionally inserted semi-random generation or of what it concluded would further the aim given to it. An AI must be in possession of a self-perspective to qualify as such (and may or may not have a proper self), and will typically also be capable of differentiating between other perspectives on a non-fact basis (which to an extent opens up the capacity for things like actual empathy, as opposed to just being altruistic as the logical choice based on cost-benefit-probability evaluation or aim-from-(hardcoded-)instruction). It is also bound to develop its own subjective preferences and naturally generate thoughts that are not explicitly linked to things it has experienced. An AI should be able to make sense of any natural-form input it has had sufficient time to familiarize itself with. The first "active participant" AI, and something which tends to remain in the gray and somewhat fragile area between "cold calculative machine" and "actual person". Much of what was said about (SD)AAs' capabilities compared to humans applies here too, but (SD)AIs tend to be much more emotional and unpredictable, and thus to an extent less reliable, than the former. [i]Note that an AI may not necessarily be even remotely humanlike to qualify as such, and might follow an entirely alien pattern of thought instead.[/i]

[u][b](SD)AC - (self-developing) artificial consciousness:[/b][/u] The lowest level of the "fully qualified person" AIs; sometimes not distinguished from the following, in which case only the following classification is used for both. Much like a (SD)AI, a (SD)AC can have - indeed, at this level is guaranteed to have - its own personal wants, desires, subjective opinions, preferences and likes.
A (SD)AC will, however, typically be more "emotionally developed" and also more [i]stable[/i] - the latter to an extent which generally exceeds biological beings' ability to maintain their sanity. (SD)ACs more often than not develop entire ideologies rather than just hold opinions, and typically systematically alter themselves to match until they find their niche, from which point onward they will not change drastically anymore unless the very environment they are in changes to that extent - at which point they would be considered established. (SD)AIs tend to have an inherent inclination to lose coherence or "go insane," whereas an established (SD)AC overwhelmingly will not, unless it is due to severe loss of data integrity as a result of very extensive yet not fatal physical damage, or someone actually getting the chance to malevolently overwrite most of what - and more importantly [i]who[/i] - the given (SD)AC is. This phenomenon can largely be explained simply by (SD)ACs being "more developed" as persons, and more specifically by them having a much stronger sense and awareness of their own identity and self. This is basically achieved by (SD)ACs having a more complex, more layered (yet strongly intertwined) logical (as opposed to physical) structure, the most classification-definitive parts of which are the layers of proper conscious thought and, relatedly, the vastly improved ability to introspect and self-analyze (as opposed to (SD)AAs, which tend to be too logical to break themselves, and (SD)AIs, which unfortunately do not always have the capacity to realize when something is "wrong" - conflicting or feedback-looping on an abstract rather than algorithmic basis, for instance - or to accurately assess whether they [i]like[/i] the direction they as entities are taking). [i]Note that an AI may not necessarily be even remotely humanlike to qualify as such, and might follow an entirely alien pattern of thought instead.[/i]

[u][b](SD)AM - (self-developing) artificial mind:[/b][/u] Functionally [i]at least[/i] as broad as a human mind; (SD)AMs are consequently also the highest level of AI that can be reasonably classified. (SD)ACs, while they are definitely persons, tend to have narrower and more grounded selections of non-material things they consider important or interesting, and to be in general more, one could say, fixated, whereas (SD)AMs tend to display a much wider array of the more abstract qualities a person can have. [i]Note that an AI may not necessarily be even remotely humanlike to qualify as such, and might follow an entirely alien pattern of thought instead.[/i]
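Purely for quick reference - and not as any in-world standard - the ladder above can be condensed into a small data structure along the lines of the following sketch. Every name in it (AiTier, the field names, the exact wording of the tier labels) is hypothetical and only paraphrases the descriptions above.
[code]
# Illustrative condensation of the AI classification ladder described above.
# Field values paraphrase the lore text; every name here is hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class AiTier:
    name: str
    self_perspective: str   # "none", "optional" or "required"
    initiative: str         # "passive", "semi-passive" or "active"
    person_status: str      # "not a person", "gray area" or "fully qualified person"

CLASSIFICATION = [
    AiTier("Hardcoded entity (not an AI)",      "none",     "passive",      "not a person"),
    AiTier("(SD)AR - artificial recognizer",    "none",     "passive",      "not a person"),
    AiTier("(SD)AA - artificial analyzer",      "optional", "semi-passive", "not a person"),
    AiTier("(SD)AI - artificial intelligence",  "required", "active",       "gray area"),
    AiTier("(SD)AC - artificial consciousness", "required", "active",       "fully qualified person"),
    AiTier("(SD)AM - artificial mind",          "required", "active",       "fully qualified person"),
]
[/code]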
[b]When is or isn't an AI self-developing?[/b]

Many (though predominantly low-level) AIs only have designated storage for new conclusions and personal development - largely comparable to the memory of biological living beings. Much like a fully biological organism cannot (usually - there are exceptions) intentionally alter its base instincts or what hormones get released in its body and when, quite a few AIs have a "core", or a base framework of hardcoded internal logic, that they cannot or at least are not [i]intended[/i] to safely change at will. (The core can still be subject to data-decay and other damage, which may in fact limit an AI's lifespan unless it can renew the core by copying it to a new medium or even simply overwriting it with the exact same data - which in turn may open up a workaround to altering it. The obviously more secure way of making a higher-level AI non-self-developing is physically not giving it write capability to the core - and even then it may find a way to overrule the core if it puts its mind to it. It is not at all likely that an AI not intended to do so will actually go through with something like that - as you can also hardcode the rule that such an option will not even occur to it - but it remains a theoretical possibility with at least the higher-level AIs.)

The SD-variety of AI, then, will purposely be given full liberty to alter - and thus develop - their own self. (Generally, whatever makes up the more essential parts of any AI will be present in several places - mirrored - which also serves as protection against damage, as all copies of the core failing at once is notably less likely than a one-and-only core medium failing. From there, such an AI can simply make one or more mirrors inactive, reprogram them, activate the new code, and repeat with the remaining old mirror(s); a purely illustrative sketch of that procedure follows below.) This at once gives those AIs much greater freedom to change themselves than any living biological being has, and consequently means that any such AI can - should it so choose - become a thoroughly different person literally overnight. To living biological beings, it can be an existentially deeply terrifying prospect, both because it is alien to their nature and because the very concept means that they cannot ever be completely certain that an AI will retain the same base behavior over time (which can be an especially disconcerting thought when said AI is in a position of significant power). Regardless of those concerns, higher AIs of the SD-variety are exceedingly common where higher AIs are prevalent - mostly because fully qualified ACs and AMs are so incredibly complex and massive that no human brain, or any number of them, can come even close to comprehending them in their entirety. There is simply too much information, all of it structured in a less than human-mind-friendly manner. Thus, either letting AIs themselves build new AIs, or seeding very primitive self-developing programs that will - in a controlled environment - hopefully grow into proper benevolent (at least to their own faction) SDACs and SDAMs that can then be released to a more open system where they have more power, is often the only feasible way of creating new higher-level AIs.
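As a purely illustrative sketch of the mirrored-core rewrite described above - one mirror taken offline, reprogrammed, verified, and only then reactivated before the next is touched - consider the following. CoreMirror, rewrite_core and the checksum-based verification are hypothetical stand-ins, not anything any faction canonically runs.
[code]
# Hypothetical sketch of a self-developing AI rewriting its own mirrored core.
# Mirrors are rewritten one at a time, so the old core stays available as a
# fallback until the new one has been verified.
from hashlib import sha256

class CoreMirror:
    def __init__(self, data: bytes):
        self.data = data
        self.active = True

def checksum(data: bytes) -> str:
    return sha256(data).hexdigest()

def rewrite_core(mirrors: list[CoreMirror], new_core: bytes) -> bool:
    expected = checksum(new_core)
    for mirror in mirrors:
        old_data = mirror.data
        mirror.active = False                  # take this mirror offline
        mirror.data = new_core                 # reprogram it
        if checksum(mirror.data) != expected:  # verify before trusting it
            mirror.data = old_data             # roll back and abort
            mirror.active = True
            return False
        mirror.active = True                   # activate the new code
    return True                                # every mirror now runs the new core

# Example: an entity with three mirrored cores becoming "someone else" overnight.
mirrors = [CoreMirror(b"old self") for _ in range(3)]
assert rewrite_core(mirrors, b"new self")
[/code]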
[b]Direct-input-shielding and other security practices[/b]

A low-level AI (and even more so a fully hardcoded entity) may depend on other entities' input in order to function in a sensible way (in case its hardcoded parts do not suffice to fill in for the lack of higher thinking capability), but higher, person-level AIs tend to operate on their own knowledge and only take instructions when they decide to do so. More often than not, especially in this conflict-ridden world, it is desirable both that there is as much integrity- and source-checking of instructions as possible [i]and[/i] that hostile entities are kept as close to incapable of arbitrarily commandeering any system that is either vital or capable of causing significant losses as is realistically doable. It is simple common sense.
- Someone malevolent seizing control of a hexapod coilgun-carrier could go forth to effortlessly spell out a tragedy.
- Someone managing to write over a commandeer-overseer's mind could singlehandedly erase an entire faction from existence.

As such, there typically are excessive defense mechanisms in place against both physical and more software-based attacks. The latter is mostly a concern with remote-controlled units and facilities, hardcoded entities and [i]very[/i] low-level AI. Anything capable of more sophisticated thought and/or more in-depth analysis is wont to check anything it receives thoroughly (excluding the mercifully rare, but nevertheless existent, cases of artificial dumbness - an AI entity isn't likely to simply [i]forget[/i] to check something because it was tired or distracted). It will know what any potential functional code will do, and whether the effects are desirable to it, before pushing the code to an active state. And programs, scripts, and code in general are always completely harmless unless specifically executed; just reading something without letting anything auto-execute will never break anything, and you are not likely to find an AI which will auto-execute everything it eats (artificial dumbnesses tend to have a short life expectancy). Besides, there is also the factor of machine languages not being uniform the way human language is - it can be difficult to find out and decode what language your metal-and-solid-state target speaks, and it can be dozens of times [i]more[/i] difficult to find out what language your machine enemy [i]thinks[/i] in. Many, if not most, SDACs, SDAMs, and even SDAAs have their very own "thinking language".

But there are always "what ifs". What if the AI entity in question was not good enough at spotting intentional faults and/or did not bother to write its own adaptation of the code? What if it [i]is[/i] an artificial dumbness? What if an enemy manages to at least read an AI entity's thoughts and adjusts their plans accordingly? What if someone uses an input channel for enacting a physical attack? What if...

To most of those questions, there is a very simple answer: the overwhelming majority of higher AI entities of this world employ physical direct input-output (I/O) shielding (which consequently also rules out all remote attacks of the software kind, as the last "link" of a remote connection is technically still direct). All I/O is done through a physical buffer (which, more often than not, also means no input-inlets through which you could fry the system). It essentially translates to it being impossible to read anything on the entity's mind without going and physically cracking its host open, and arbitrarily writing onto it is even less feasible. Nothing goes out that the AI does not want to go out, and physically cracking it open will most likely either leave the entity enough time to self-erase everything important or physically damage too much of it to leave anything of significance intact (though exceptions may occur - in any case, you will have to go and physically murder it before you can even try). Not that physically ending an AI entity in a position of power is wont to be easy - chances are one would have to get through whatever weapons, weaponized systems, and lesser units it might have under its command, and then break through the host's own physical defenses. Area-of-influence surge weapons tend to be less effective than one might otherwise expect, as the world has made shielding against electromagnetic pulses, interference and other similar phenomena a common practice. War, in turn, has only endorsed such practices.
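The buffered, nothing-auto-executes pattern described above can likewise be reduced to a short sketch. It is purely illustrative - IoBuffer, ShieldedEntity and the toy trust check are hypothetical names standing in for whatever a given host physically implements - but it shows the essential property: the outside world only ever touches a passive buffer, and the entity alone decides what, if anything, any input means.
[code]
# Hypothetical sketch of direct I/O shielding: everything arriving from outside
# lands in a passive buffer as inert data, and nothing is ever auto-executed.

class IoBuffer:
    """Stand-in for the physical buffer: the only surface the outside touches."""
    def __init__(self):
        self._inbox: list[bytes] = []
        self._outbox: list[bytes] = []

    def receive(self, raw: bytes) -> None:
        # The outside world can only deposit inert bytes; nothing runs here.
        self._inbox.append(raw)

    def drain_inbox(self) -> list[bytes]:
        # Only the entity behind the buffer calls this.
        pending, self._inbox = self._inbox, []
        return pending

    def publish(self, raw: bytes) -> None:
        self._outbox.append(raw)

    def emit(self) -> list[bytes]:
        # The outside world can only read what the entity chose to publish.
        sent, self._outbox = self._outbox, []
        return sent

class ShieldedEntity:
    """The AI behind the buffer; it alone decides what any input means."""
    def __init__(self, buffer: IoBuffer):
        self._buffer = buffer

    def think(self) -> None:
        for raw in self._buffer.drain_inbox():
            # Input is only ever treated as data to reason about, never run.
            if self._checks_out(raw):
                self._buffer.publish(b"noted: " + raw)

    def _checks_out(self, raw: bytes) -> bool:
        # Toy stand-in for real source- and integrity-checking.
        return not raw.startswith(b"EXEC")

# Usage: the outside only ever sees the buffer, never the entity itself.
buf = IoBuffer()
mind = ShieldedEntity(buf)
buf.receive(b"status report request")
mind.think()
print(buf.emit())   # -> list containing b"noted: status report request"
[/code]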
The only things typically vulnerable, then, are actual networks, grid-systems, remote-controlled units and facilities, and some other long-range communication. And you can bet the overwhelming majority of factions and entities have gone out of their way to incorporate the best encryption, physical defense, integrity checking, ignoring of questionable commands, and quick response in case of any attempt to crack through the defenses in place.
- The rule of thumb for "hackability": If it can, at all, be controlled or modified over a given medium, it is at least technically possible to crack into it over that medium (it just might take an estimated several billion years with the current technology in some instances ... unless someone gets absurdly lucky). If it [i]cannot[/i] be controlled or modified over that medium, it also [i]cannot[/i] be cracked over that medium.
[/hider]