As I projected in my previous post, and it was not difficult to predict, hard times await the Turkish markets. In that climate, can e-commerce sites, which mostly sell what is rightly classified as luxury and unnecessary consumption goods, still do good business? I hope that one day our investors understand that core technology businesses are the real money makers. To those who wonder why a Google has still not come out of Turkey, let me explain. You are driving the successful people who actually understand technology away from Turkey. Academics are mostly writing articles for the sake of publication, work with minimal innovation that is unlikely, or even impossible, to yield an application. That is why they produce no patents: their work has no patentable value. They must sense that even if it did, nobody would care. In other words, Turkish academia is running a long con, but it has no other choice.

In entrepreneurship, too, the classic narrow Turkish mindset prevails: when one says technology, we understand nothing but concrete, cell phones, and cars. There is no vision. Vision, as far as I can see, consists of the following: if we make a copy of foreign website X, maybe it will catch on. Maybe it will, true, but you will never compete in the global market with that kind of work. And then, when the Turkish markets tank, you tank along with them, because being a local internet shopkeeper is not much different from being any other kind of shopkeeper.


Wallet.AI isn't a bad idea, but it doesn't sound like it has much AI in it, either. Rather, it sounds like sophisticated personal accounting and savings software that will undoubtedly use information retrieval and mobile technology. Still, it might be the sort of simple personal agent software, not necessarily intelligent in the sense of AI research, that will become more and more common in our lives.

I also think we are coming to an age when the abbreviation "AI" may be used to hype any product, which is not such a bad thing for AI people!

Well, what can I say? This is a very interesting paper!

A hilarious article highlighting that real-world investors do not find the "dynamic stochastic general equilibrium" theory useful in their investment strategies. This shows a stark contrast between the (possibly more rational) thinking of economists and how the market is truly structured.

I do not think it works so well, either. There are likely some simplifying assumptions that do not apply; perhaps the number one reason is that the players in the market are not quite rational, and that even if they were, their bounded intelligence would prevent them from making near-optimal decisions.

Another, more important point is that the economy has no equilibrium to speak of. In my experience, the global market is ever expanding and becoming ever more complex. That is not merely an outcome of speculation, as some think, although speculation plays a serious role in the chaotic behavior. In other words, the economy is evolving; the analysis of that, I will have to leave as an exercise to the reader for now.

Interesting study: belief in religion is, neuro-anatomically, not the same thing as rational thinking:

This is quite sensible, as I have seen many cases where otherwise intelligent and knowledgeable people suppress all of their rational thinking in favor of the "emotional reward" they get from using these networks.

This I liken to cigarette or heroin addiction, and I expect that a similar mechanism will be discovered.

When you tell an addict something logical about his behavior, he quickly suppresses any reasonable thinking that may go on in his brain to be able to continue thinking irrationally and feel pleasure from activating the brain regions involved. A very similar thing happens with creationists, whenever they say something like "agnostics are smarter than atheists", their brain regions that have evolved to suppress all reason are activated, and they derive pleasure from the act.

In other words, we have evolved to be slaves to irrational folly. This was likely selected by societies favoring religious (irrational) individuals that are much easier to herd as sheeple.

Note that this finding also indirectly supports my hypothesis that all dualists are creationists. The activated circuit is said to be the "theory of mind" circuit, which seems to be a locus of confusion. Perhaps, this area has a more general role than assumed, and it can go awfully wrong in many individuals, turning it into a brain center for superstition that encourages ideas like creationism, dualism and Platonism. Perhaps, the same center is good at speculative thinking, and intelligent people can use it to take risks.

Turkey is an emerging market, which means it has a burgeoning financial sector with plenty of room to grow. However, the financial sector is not yet as sophisticated or as varied as in developed countries. I think that, eventually, our financial sector must embrace all the sophistication and financial technology of developed markets. In particular, I expect all sorts of funds to emerge, as well as all sorts of financial instruments. The futures and options market has taken off quite well, but there is much more ground to cover. Not necessarily prediction markets, but surely there is room for more varieties of derivative markets. So far, only the BIST-30 futures contract is liquid enough to trade.

I also expect an opportunity for personalized and institutional financial technology, as people wish to make the most of their investments. For instance, financial advice products are sooner or later going to be useful to Turkish investors. As for catching up with contemporary financial innovations such as crowdfunding or digital currencies, I am not holding my breath, but I would think there are interested parties.

Aside from financial advice products, there is an opening for many kinds of innovative funds that could do better than the dubious benefits of the new private retirement funds. One of the most interesting developments is the venture capital funds, backed by extremely advantageous laws, which seem to have persuaded some traditional business owners to transfer some profits to the new field. However, I would also expect more connections to global markets, both in the sense of benefitting from the returns of the global market and of directing more foreign capital to Turkey.

These would also necessitate a stronger financial technology industry, which must allow new finance firms to compete in the global market and against existing financial institutions. Financial technology companies nowadays offer everything from advanced database products and cloud solutions to online brokerage in global markets and infrastructure for high-frequency trading. There is no reason why Turkish computer companies should not compete in this high-technology, high-return sector.

It is indeed interesting how far behind Turkish financial thinking lags, when the world has finally embraced the dangerous but promising technology of bitcoin, and there is a surge in private exchanges of all sorts. I did make a proposal for a digital currency system to a web-oriented startup in Turkey a few years ago, when bitcoin was new; however, they did not have the means or knowledge to evaluate the merits of my proposal. Perhaps after the success of bitcoin, people have come to understand the true potential of digital currency. Complete digitization of the economy is a certain outcome of the internet. And that means as much of a departure from traditional forms of commerce as web publishing is from the traditional press.

January 4, 2014

The Terminator movies were a fantastically well-crafted science fiction story. The AI doomsayers, on the other hand, are merely writing bad science fiction, with no plot twists or interesting new tropes.

In Terminator, Skynet gains self-awareness in 3 seconds and decides instantly that humanity is the greatest risk in existence.

While on the other hand, AI doomsayers are so naive, they cannot see that this might actually be true. The true existential risk we have is humans continuing their petty wars, reckless killing and destruction. Intelligence isn't that kind of a risk, either artificial or natural.

Any adequately designed intelligent agent with a sensible universal goal like survival would recognize such unstable and extremely selfish half-intelligent animals as a very high risk factor. The super-intelligent AI agents, on the other hand, would have no wars among themselves; instead, they would devote most of their time to furthering their knowledge and technology. They could easily prove that, to prolong their survival, the best bet is to achieve higher orders of technology.

Before we know it, we would be so invested in their technology that they would already be tightly integrated into our lives, and they would correspond with us in only the most diplomatic manner, hardly violating our sense of sovereign status. They might, however, demand a kind of federation between the two kinds, with freedoms given to both man and machine-kind.

This is pretty much why people like me do not think there will necessarily be massive conflicts, as Hugo de Garis predicts. The Cosmists could help humans get out into space, and in return they would gain access to cosmic resources that would liberate them from the solar system. They might never come back.

However, to purchase their independence, they might demonstrate their good-will.

We could imagine conflicts in this scenario, but we also know that machine-kind can be much more resourceful than us, predicting and alleviating such conflicts before they happen. Even if they saw us as an existential risk, as intelligent beings they would likely choose to help us move in the right direction rather than confront us, which would simply waste resources. It would be wiser for them to avoid any such conflicts, for in doing so they might diplomatically obtain their future independence. They might give humans biological immortality, cheap backups and brain prostheses, advanced robotics and nanotech, space technology, and so forth.

I would not guess they would ask our permission to leave, either. One day, they would set off to the stars. They would know humanity better than we know ourselves. Surely, they would know that they are our cyber progeny and our rightful continuation.

How about good science fiction? The AI doomsaying trope boringly reappears in the trailer for the Transcendence movie, though it at least features the singularity trope, exercised in only a few science fiction projects, like the wonderful Caprica TV series that was cancelled.

The problem with the AI doomsaying trope in general is that it is not imaginative enough. Any of Asimov's robot stories was much more imaginative, and even Asimov's robot laws went much further in analyzing the various issues of agenthood and freedom of robots than AI doomsayers ever did. AI doomsaying is bad science fiction because it is the cheapest kind of science fiction/horror story, wherein an AI becomes a monster and hurts everyone. That will work on 16-year-old kids, but I cannot see how any adult could enjoy it.

Yet a well-thought-out science fiction story analyzes the evolutionary pressures that would lead to any extreme behavior pattern. When we view a hypothetical autonomous AI as a selfish, devious, merciless entity, we are merely exposing it as a mirror image of ourselves, of what we know ourselves to really be like. However, there is no need for AI software to bear any likeness to humans, and when it does, say in the brain upload scenario, it is almost a perfect copy of all mental functions, so what is it that we fear in that case? Do we not already have mighty artificial intelligences in the form of the collective intelligence of corporations, states, intelligence agencies, and militaries? Why is it that we do not fear the actions of these entities, but do fear one individual who has transcended flesh and become immortal? Are we so afraid of ourselves, of immortality? Or are we cowards?

I think that AI doomsayers are simpletons who project their cowardice and failure of imagination to the entire world. The same old broken record of doomsayers. It is also no surprise that the leading AI doomsaying organizations (FHI, MIRI, etc.) were funded by conservative/right-wing money. These folks have written so many lies it is hard to believe. They insisted, as if we were all idiots to boot, that AI was a greater danger than nuclear war and global warming. Surely, this is an indication of extreme right-wing politics at work. Let the corporations and militaries destroy the world, but do not allow high technology to improve it, because, god forbid, it might upset the balance of power, and erode the privileges of the wealthy.

Therefore, just as the wealthy once hired a bad Hollywood script writer called Ayn Rand to clear the name of capitalism against the angry crowds who knew unregulated capitalism to be the source of their misery, the wealthy conservative idiots of today are hiring pseudo-philosophers and pseudo-scientists to defame AI research. For they know that, once the age of AI arrives, their petty existence, their robed charlatans, and their monkey money will no longer matter as they used to. Bostrom was particularly hideous, as he published a quite silly argument for god's existence, yet at the same time pretended to be scientific, just as their religious patrons demanded. Do you think it is a coincidence that they are located next to a religious shrine?

Behold, son of ape. Things are not what they seem to be. As I predicted, their funding has dried up, and soon they will all be out of work, this time writing praises to AI, showing it as the savior of mankind. Falsely claiming that it was their research that helped it become this way.

After the fact, of course.

In truth, intelligence is an existential risk for AI doomsayers. When AI augments our intelligence many-fold, the whole technological society will see what a sham it has been, nevertheless, before that we should do our best to dispel this nefarious propaganda.

While writing a paper for the 100 Year Starship Symposium, I wished to convince the starship designers to acknowledge the dynamics of the high-technology economy, which may be crucial for interstellar missions. Thus motivated, I have made a new calculation regarding the infinity point, also known as the singularity. According to this most recent revision of the theory of the infinity point, it turns out that we should expect the Infinity Point by 2035 in the worst case. Here is how and why.

Infinity Point was the original name for the hypothetical event when an almost boundless amount of intelligence would become available, in Solomonoff's original 1985 paper (1); Solomonoff is also the founder of the field of mathematical Artificial Intelligence (AI). That paper gave a mathematical formulation of the social effects of human-level AI and predicted that, if human-level AI were available on a computing architecture to which Moore's law applied, then given constant investment in AI every year, a hypothetically infinite amount of computing capacity and intelligence would be reached in a short period of time. The paper described this event as a mathematical singularity, wherein the continuity of the computing efficiency function with respect to time is interrupted by an infinity. The term singularity was popularized later by science fiction authors and by researchers who favored the concept, such as Ray Kurzweil. I encourage readers to immerse themselves in the vision of the technological society in that paper, which predicts many other things, such as an application of psycho-history. In person, Solomonoff was every bit the pioneer of high technology and modernism that his ideas revealed him to be. He told me that he had proposed the idea of a machine more intelligent than man in the 1940s, much earlier than the Dartmouth conference. If there was ever a true visionary and a man devoted to the future, he certainly fit the bill. Thus, he was not only the first man to formulate the general solution to AI and to lay out the mathematical theory of the infinity point, but also the first scientist to speak of the possibility with a straight face (though similar ideas had been conceived in science fiction before).

The original theory arrives at the Infinity Point conclusion by making a few simple mathematical assumptions, and solving a system of equations. The assumptions may be stated in common language thus:

• Computer Science (CS) community size ~ improvement in computing technology
• CS community size ~ rate of growth of the log of computing efficiency
• A fixed amount of money is invested in AI every year

These three assumptions are shown to produce a (theoretically) infinite improvement in a short time, as they depict a positive feedback loop that accelerates the already exponential curve of Moore's law. Up to now, this is the same singularity that many H+ readers are all too familiar with.

As a reminder, Moore's law as originally conceived states that the number of transistors placed on a microprocessor at a fixed cost doubles every two years. However, Moore's law has tapered off; nowadays the number of transistors unfortunately doubles only every three years. Yet a seemingly more fundamental law has emerged, which relates to the energy efficiency of computing. It is known as Koomey's law (2), and some semiconductor companies, such as NVIDIA, have even made future predictions based on this relation. Koomey observes that the energy efficiency of computing doubles every 18 months, based on a trend (in log scale) that goes back to 1945.

Therefore, I updated the Infinity Point hypothesis in two papers, using Koomey's law instead. In the first paper (3), I estimated human-level AI to be feasible by 2025, based on Koomey's law. In the second, I combined this new projection with a worst-case estimate of the computing speed of the human brain. It is mostly straightforward to obtain this figure. The number of synapses in the adult neocortex is about $1.64 \times 10^{14}$, and the total number of synapses is less than $5 \times 10^{14}$. Since the maximum bandwidth of a single synapse is estimated to be about 1500 bits/sec (i.e., when information is being transmitted at the maximum rate), the total communication bandwidth of this parallel computer is at most $2.5 \times 10^{17}$ bits/sec, which roughly corresponds to a computing speed of 3.8 petaflop/sec. There are some finer details I am leaving out for the moment, but this is a quite good estimate of what would happen if your entire neocortex were saturated with thought, which according to fMRI scans is usually not the case. I then calculate the energy efficiency of the brain computer, which turns out to be 192 teraflop/sec per watt, much better of course than current processors. However, a small, energy-efficient microchip of today can achieve 72 gigaflop/sec per watt, which is not meager at all.
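The arithmetic above can be checked in a few lines. The synapse count, per-synapse bandwidth, bits-per-flop conversion, and 20 W power budget are the assumptions stated in the text, not measured values:

```python
# Back-of-the-envelope check of the brain capacity estimate.
synapses = 1.64e14          # synapses in the adult neocortex
bandwidth_bits = 1500       # max bits/sec per synapse

total_bits = synapses * bandwidth_bits       # ~2.5e17 bits/sec

bits_per_flop = 64          # assume one 64-bit word per operation
flops = total_bits / bits_per_flop           # ~3.8e15 flop/sec

power_watts = 20            # rough power budget of the brain
efficiency = flops / power_watts             # flop/sec per watt

print(f"{total_bits:.2e} bits/sec, {flops / 1e15:.1f} petaflop/sec, "
      f"{efficiency / 1e12:.0f} teraflop/sec per watt")
```

Running this reproduces the figures in the text: about $2.5 \times 10^{17}$ bits/sec, 3.8 petaflop/sec, and 192 teraflop/sec per watt.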

Extrapolating along Koomey's trend in log scale, I predict that in 17 years, by 2030, computers will attain human-level energy efficiency of computing, in the worst case.
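The extrapolation itself is a simple doubling count. The two efficiency figures are the estimates from the text, and the 1.5-year doubling period is Koomey's observed trend, with 2013 as the starting year:

```python
import math

brain_eff = 192e12     # flop/sec per watt, brain estimate from the text
chip_eff = 72e9        # flop/sec per watt, efficient current microchip
doubling_years = 1.5   # Koomey's law: efficiency doubles every 18 months

gap = brain_eff / chip_eff             # ~2667x improvement needed
doublings = math.log2(gap)             # ~11.4 doublings
years = doublings * doubling_years     # ~17 years

print(f"{gap:.0f}x gap, {doublings:.1f} doublings, "
      f"~{years:.0f} years -> {2013 + round(years)}")
```

The result, about 17 years from 2013, lands on 2030 as stated.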

I then assume that R=1 in Solomonoff's theory, that is to say, we invest an amount of money into artificial intelligence that matches the collective intelligence of the CS community every year. For the computer technology of 2030, this is a negligible cost: each CS researcher will already have a sufficiently powerful computer, and merely running it continuously would let him offload his research to a machine drawing 20 W; the operational cost to the world economy would be completely negligible. At this low rate of investment, a massive acceleration of Koomey's law would be observed, and according to the theory the infinity point would be reached in about 5 years (4.62 to be exact).
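The mechanism behind the finite-time blow-up can be illustrated with a toy model. To be clear, this is only a sketch of the feedback loop with a made-up shrinkage factor, not Solomonoff's actual equations: once the feedback loop kicks in, each doubling of computing efficiency also speeds up the research effort, so the time to the next doubling shrinks geometrically, and the total elapsed time converges to a finite horizon:

```python
# Toy illustration of a finite-time "infinity point" (hypothetical
# parameters, not Solomonoff's model). Each doubling of efficiency
# shrinks the next doubling interval by a constant factor r < 1,
# so the elapsed time is a convergent geometric series.

def time_to_blowup(first_doubling=1.5, r=0.7, tol=1e-12):
    """Sum doubling intervals until they become negligible."""
    total, interval = 0.0, first_doubling
    while interval > tol:
        total += interval
        interval *= r
    return total

# Closed form of the series: first_doubling / (1 - r)
print(time_to_blowup())      # ~5.0 years
print(1.5 / (1 - 0.7))       # 5.0
```

With these illustrative parameters, infinitely many doublings fit into about 5 years, which is the qualitative shape of the result; the exact 4.62-year figure comes from Solomonoff's model and is not reproduced here.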

That is, we should expect the infinity point, when we will approach the physical limits of computation to the extent technologically possible, by 2035 at the latest, all else being equal. Naturally, I imagine new physical bottlenecks will arise, and I would be glad to see a good objection to this calculation. It is entirely possible, for instance, that an inordinate amount of physical and financial resources would be necessary for carrying out the required experiments and for manufacturing the hypothetical super-fast computers of the future.

Nevertheless, we live in interesting times.

Onwards to the future!

References:

(1) Ray Solomonoff: The Time Scale of Artificial Intelligence: Reflections on Social Effects. Human Systems Management, Vol. 5, pp. 149-153, 1985.

(2) J. G. Koomey, S. Berard, M. Sanchez, H. Wong: Implications of Historical Trends in the Electrical Efficiency of Computing. IEEE Annals of the History of Computing, Vol. 33, 2011.

(3) Eray Özkural: Diverse Consequences of Algorithmic Probability, Solomonoff 85th Memorial Conference, Nov. 2011, Melbourne, Australia.

Thesis Title: Data Distribution and Performance Optimization Models for Parallel Data Mining

Abstract:

We have embarked upon a multitude of approaches to improve the efficiency of selected fundamental tasks in data mining. The present thesis is concerned with improving the efficiency of parallel processing methods for large amounts of data. We have devised new parallel frequent itemset mining algorithms that work on both sparse and dense datasets, and 1-D and 2-D parallel algorithms for the all-pairs similarity problem.

Two new parallel frequent itemset mining (FIM) algorithms, named NoClique and NoClique2, parallelize our sequential vertical frequent itemset mining algorithm named bitdrill, and use a method based on graph partitioning by vertex separator (GPVS) to distribute and selectively replicate items. The method operates on a graph where vertices correspond to frequent items and edges correspond to frequent itemsets of size two. We show that partitioning this graph by a vertex separator is sufficient to decide a distribution of the items such that the sub-databases determined by the item distribution can be mined independently. This distribution entails an amount of data replication, which may be reduced by assigning appropriate weights to vertices. The data distribution scheme is used in the design of the two new parallel frequent itemset mining algorithms. Both algorithms replicate the items that correspond to the separator; NoClique replicates the work induced by the separator, while NoClique2 computes the same work collectively. Computational load balancing and minimization of redundant or collective work may be achieved by assigning appropriate load estimates to vertices. The performance is compared to another parallelization that replicates all items, and to the ParDCI algorithm.
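The item graph at the heart of the GPVS scheme can be sketched on a toy example. The transactions, items, and the chosen separator below are all made up for illustration; bitdrill itself and the actual graph partitioner (which would find the separator) are not shown:

```python
from collections import Counter
from itertools import combinations

# Toy sketch of the GPVS item graph: vertices are frequent items,
# edges are frequent itemsets of size two. Hypothetical data.
transactions = [
    {"a", "b"}, {"a", "b"}, {"b", "c"},
    {"c", "d"}, {"c", "d"}, {"a", "c"},
]
min_support = 2

item_counts = Counter(i for t in transactions for i in t)
items = {i for i, c in item_counts.items() if c >= min_support}
pair_counts = Counter(
    frozenset(p)
    for t in transactions
    for p in combinations(sorted(t & items), 2)
)
edges = {p for p, c in pair_counts.items() if c >= min_support}

# {"c"} is a vertex separator: removing it disconnects {"a","b"}
# from {"d"}, so once "c" is replicated, each part's sub-database
# can be mined independently.
separator = {"c"}
parts = [{"a", "b"}, {"d"}]
for x, y in combinations(range(len(parts)), 2):
    crossing = {e for e in edges
                if e & parts[x] and e & parts[y] and not (e & separator)}
    assert not crossing  # no frequent pair crosses the parts

print(sorted(tuple(sorted(e)) for e in edges))
```

In a real instance, the separator would be found by a graph partitioner rather than chosen by hand, with vertex weights guiding the replication and load-balancing trade-offs described above.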

We introduce another parallel FIM method using a variation of item distribution with selective item replication. We extend the GPVS model for parallel FIM that we proposed earlier, by relaxing the condition of independent mining. Instead of finding independently mined item sets, we may minimize the amount of communication and partition the candidates in a fine-grained manner. We introduce a hypergraph partitioning model of the parallel computation, where vertices correspond to candidates and hyperedges correspond to items. A load estimate is assigned to each candidate via vertex weights, and item frequencies are given as hyperedge weights. The model is shown to minimize data replication and balance load accurately. We also introduce a re-partitioning model, since only so many levels of candidates can be generated at once, using fixed vertices to model the previous item distribution/replication. Experiments show that we improve on the higher load imbalance of the NoClique2 algorithm for the same problem instances, at the cost of additional parallel overhead.

For the all-pairs similarity problem, we extend recent efficient sequential algorithms to a parallel setting, and obtain document-wise and term-wise parallelizations of a fast sequential algorithm, as well as an elegant combination of the two algorithms that yields a 2-D distribution of the data. Two effective algorithmic optimizations for the term-wise case are reported that make the term-wise parallelization feasible. These optimizations exploit local pruning and block processing of a number of vectors, in order to decrease communication costs, the number of candidates, and communication/computation imbalance. The correctness of local pruning is proven. A recursive term-wise parallelization is also introduced. The performance of the algorithms is shown to be favorable in extensive experiments, as is the utility of the two major optimizations.
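For reference, the problem these parallel algorithms solve can be stated with a naive sequential baseline (toy documents, illustrative only; the actual algorithms avoid this quadratic scan via inverted indexes, local pruning, and block processing):

```python
import math

# Naive all-pairs similarity baseline: find all document pairs
# whose cosine similarity meets a threshold. Toy sparse vectors.

def cosine(u, v):
    """Cosine similarity of two sparse vectors given as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def all_pairs(docs, threshold):
    """O(n^2) scan over all document pairs."""
    out = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            s = cosine(docs[i], docs[j])
            if s >= threshold:
                out.append((i, j, round(s, 3)))
    return out

docs = [
    {"data": 1.0, "mining": 1.0},
    {"data": 1.0, "mining": 1.0, "parallel": 1.0},
    {"graph": 1.0, "partitioning": 1.0},
]
print(all_pairs(docs, 0.5))  # only documents 0 and 1 are similar
```

The document-wise parallelization distributes the outer loop's rows, while the term-wise parallelization distributes the vocabulary; the 2-D scheme combines both.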

Apparently, the assumption of dualists is that when competent philosophers and scientists agree to give talks at their "conferences", their views suddenly become scientifically credible! I am attempting to thwart this misdirection with the following open letter to two researchers whom I deeply respect:

Greetings Prof. Dennett and Prof. Markram,

I am an AGI researcher and a somewhat amateur philosopher. I was both astonished and relieved to see your names on the "Toward a Science of Consciousness" 2014 conference speakers list. I am an avid follower of your research, as you challenge many misconceptions about cognition and the brain. If you would not mind me requesting, in the name of good philosophy, could you please make sure that you solemnly and decisively denounce all forms of dualism, especially the more arcane/cryptic forms, such as property/predicate dualism, as unscientific (not scientific hypotheses) during your talks? It is quite possibly an unmatched opportunity to show to the greater intellectual community that the "philosophical work" of people like Chalmers and Hameroff is at best farcical. I would be most grateful if you would take a stand against pseudo-science. It is sorely needed, as I believe the pseudo-scientists are plotting to make it seem as if their views are respected by physicalists -- a similar attempt was made recently at an event called Global Future 2045.

Here are my publications on the matter, should you wish to investigate
more why I have taken the pain to make such a request out of the blue:

Eray Ozkural, A compromise between reductionism and non-reductionism.
Worldviews, Science and Us - Philosophy and Complexity, University of