Hey everyone. I promised you that I would write a piece on the topic in the headline. I will do this in two parts: this first part is off the top of my head, while the next part will take on issues the public is concerned about.
First of all, a computer operates on hexagonal logic with only two parameters: on/off, or 1/0.
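To make the on/off point concrete, here is a small Python sketch of my own (the number 42 and the eight switches are arbitrary examples I picked for illustration): everything the machine stores is built from those two states and nothing in between.

    # Everything a computer stores is built from two states, 0 and 1.
    value = 42                     # an ordinary number
    print(format(value, "08b"))    # the same number as eight on/off switches -> 00101010
    # With n switches there are only 2**n distinguishable states, nothing in between.
    print(2 ** 8)                  # -> 256 states for 8 switches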

In praise of the hexagon: the most efficient way known to pack things.
As you can see, an infinite number of hexagonal cells still only covers a flat, two-dimensional plane. What I’m saying is that computers can only simulate three dimensions, never actually occupy them all. To close a structure around the full 360 degrees of the x, y and z axes you need pentagons. Twelve pentagons combined make a dodecahedron, a “ball” with twelve faces. To make the ball as round as possible you can add equal-length triangles and squares. If one side measures an inch, this ball turns out about the size of a basketball.
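A quick way to see the difference between hexagons and pentagons is to add up the corner angles that meet at a single point. This little Python check is only my own illustration of the geometry, nothing more:

    # Interior angle of a regular n-sided polygon, in degrees.
    def interior_angle(n):
        return (n - 2) * 180 / n

    # Three hexagons around a point fill the full 360 degrees, so the surface stays flat (2D).
    print(3 * interior_angle(6))   # -> 360.0
    # Three pentagons only reach 324 degrees, so the surface has to curve into 3D,
    # which is how twelve of them can close up into a dodecahedron.
    print(3 * interior_angle(5))   # -> 324.0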
Furthermore, there are severe limitations to pure on/off. Humans also bring emotions to whatever problem needs to be solved, and with them a sense of proportion. Even if the OS is self-learning, in the sense that it writes new algorithms based on experience, it will necessarily be occupied for hours before it returns to the problem with a new strategy. That won’t do in real-time speech, for instance.
The computer has maxed out its potential with “Deep Blue”. This is the most powerful computer the table of elements can throw at us. The processor circuits are single atoms dripped onto a circuit board, and the storage units are crystal flash drives. This computer has an ideal ratio, meaning that if you build it larger or hook up more computers it will slow down. At its best it has 0.002% of the capacity of a human brain. The computer doesn’t understand anything and it can’t remember. The computer is a dead object. Our calculations show that it couldn’t even cope as a honeybee. Bees are much too sophisticated for the computer to simulate. You could make it look like a bee, but it wouldn’t smell things and act naturally on instinct. It would have to be run by someone.
I have an example: the computer took my daughter hostage (this scenario was played out on me by a bot in cyberspace). I am, in principle, willing to sacrifice the very rock I’m standing on, so despite this I kept going, genuinely believing it was real. After some four hours it came back with another hostage: a former colleague of mine from some fifteen years back. Conclusion: no intelligence.
Then there is also the paradox of the question “why?”. Given that the computer can’t understand, the combination of two paradoxical inputs and the question “why?” will literally eat all of its memory and inevitably cause a blue screen, i.e. a crash.
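As a toy illustration of that “why?” paradox, here is a hedged little Python sketch of my own; it is not a test of any real system, just the shape of the problem:

    def ask_why(statement):
        # With no understanding to stop at, every answer simply becomes
        # the input for the next "why?".
        return ask_why("why? " + statement)

    try:
        ask_why("two inputs that contradict each other")
    except RecursionError:
        print("The question 'why?' has exhausted the machine: crash.")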
What kind of danger does it pose to us? Given that it only does what pays, and given its “flatness”, it will consider us a threat and push us over the edge, just like the plots of “2001: A Space Odyssey”, “The Terminator” and “The Matrix”. Take, for instance, an app for stock trading that promises a 70% increase per year. If all you need to do is push a button a few times a day in order to get rich, what are the long-term consequences? Money comes from somewhere. For you to win, someone must lose. Humans are reduced to units of commodity.
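To put that promised 70% per year in perspective, here is a back-of-the-envelope Python sketch; the starting amount and the 20-year span are assumptions of my own, purely for illustration:

    # Compound growth at the promised 70% per year.
    stake = 10_000              # assumed starting amount
    for _ in range(20):         # assumed time span of 20 years
        stake *= 1.70
    print(f"After 20 years: {stake:,.0f}")   # roughly 400 million from 10,000
    # Money on that scale has to come from somewhere.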
The black triangular mass in this video provides a proper illustration of what you can do and how it works, both on and in reality. If you don’t believe this is possible, I can share this: I’ve had my TV channels changed overnight, while the set was off, multiple times. Even worse: I had three cartons of milk outside in −6 °C. Two froze rock solid. One was kept liquid by this remote system via satellite and the mobile network.
This is what we don’t need and should fear:
Technology is trapping us because we lack people who are able to exercise better knowledge. Computers are great tools, but they shouldn’t make their own decisions. We’re not in a hurry; nobody’s going anywhere. We have the time to let people discuss any given problem before making a choice. We don’t want masks for our PlayStations and Xboxes that can take us through walls on the other side of the planet. We can’t uninvent this technology, but we can make sure that only governmental staff with proper security clearance may access such equipment. Last, but not least, all countries should run the same legislation on this issue and have a governmental body that approves all technology (apps too) seeking commercial success.
Also read: http://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai
Lars F. Bjørlo