On the future of computer science
Note
The latest version of this document can be found online at https://dr-knz.net/on-the-future.html. Alternate formats: Source, PDF.
Prologue
A famous researcher once said that a scientist should not work more than 5-6 years in the same research area. The argument had to do with losing the "fresh eye" of a newcomer and coming to accept established boundaries as inescapable. As I have now worked in processor architecture research for 6 years, I feel my time to move on may be coming. However, before I turn this page, I feel compelled by my duty and responsibility as a researcher to report a seemingly new understanding in my field, one I have not yet seen occur to my peers.
Theses
1. The definition of computing models is exactly the way humans
capture and externalize their thought patterns;
2. It is the human activity of defining or refining computing models that gives humans the incentive to build computers and runnable software, i.e. artefacts of the physical world that can simulate (imitate) those models;
3. The effective teaching of computing models from one human to
another requires working computers and software that illustrate
these models in a way that can be observed without the physical
presence of the designer or programmer;
4. There exists a fundamental process, hereafter named "innovation",
carried out by humans, of assembling artefacts together in an
initially meaningless way, and then observing the results in the
physical world, and then deriving new meaning and possibly new
models, in this order over time;
5. This form of "innovation" in turn requires artefacts that can be
composed in an initially meaningless way; conversely, this form
of innovation is inhibited by rules that require all combinations
of artefacts to have a meaning according to a predefined, known
thought system;
6. There exists a system of background culture, accepted models, prevailing ideas and tacit knowledge, established by CS experts in the period 1950-2015, that limits how new computing artefacts can be assembled and thus fundamentally limits further innovation;
7. This rule system is predicated on the Church-Turing thesis and
the requirement that each new computing artefact must simulate
thought patterns that could be those of a single human person;
("there is no function that can be described by human language
that cannot be computed by a Turing-equivalent machine")
8. A population of two or more people together forms a complex system whose collective thought patterns cannot be comprehended nor modeled by any of them individually [1]; however, they can together build an artefact that simulates their collective pattern of thoughts (this statement derives from the observation that Heisenberg's uncertainty principle interferes with the possibility of an effectively computable simulation of the passage of time in a parallel system);
9. The computing models corresponding to human collectives and the artefacts built from them by human collectives are strictly more powerful than those produced within the cultural mindset described in #6, insofar as they enable humans to express and communicate new higher-order understandings of the world around us, understandings that would be impossible to express or communicate using individual patterns of thought or machines built to simulate Turing-equivalent models;
10. It is the role and duty of those computer scientists who have
combined expertise in programming language design, operating
system design and computer architecture to explore the nature,
characteristics and programmability of this new generation of
computing models.
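Thesis #7 invokes the Church-Turing thesis as the boundary of the prevailing rule system. To make concrete what kind of model that boundary refers to, here is a minimal sketch of a Turing-machine simulator; the transition table is a toy example of my own (a unary increment), not drawn from the text, and serves only to illustrate the single-agent computing model the thesis describes.

```python
# A minimal one-tape Turing-machine simulator (illustrative sketch).
# The INCREMENT table below is a hypothetical toy machine: it walks
# right over a unary number written as 1s and appends one more 1.

def run_tm(tape, transitions, state="start", accept="halt"):
    """Run a one-tape Turing machine; tape maps position -> symbol."""
    pos = 0
    while state != accept:
        symbol = tape.get(pos, "_")            # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return tape

# Toy transition table: skip over 1s, write a 1 on the first blank.
INCREMENT = {
    ("start", "1"): ("start", "1", "R"),   # move right over the input
    ("start", "_"): ("halt",  "1", "R"),   # append one more 1, halt
}

tape = {i: "1" for i in range(3)}          # unary 3: "111"
result = run_tm(tape, INCREMENT)
print("".join(result[i] for i in sorted(result)))  # → 1111
```

Every program in this style simulates a step-by-step thought pattern that a single person could in principle carry out with pencil and paper, which is precisely the property theses #7-#9 identify as the limitation.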
Accompanying thoughts
I am, obviously, keenly interested in further interactions with peers to evaluate and discuss the validity of these hypotheses and the means to test them. In addition to the 10 points above, I disclose the following beliefs, which have accompanied me in the past few years and helped shape the points above, but whose truth I take less for granted:
1. Hypothesis #5 may be an indirect corollary of Gödel's incompleteness theorem.
2. Parallel "accelerators", recently seen as the means to achieve a
"next generation" of computer systems, merely provide
quantitative improvements in artefact performance and do not
require the invention of new computing models. As such, they will
not yield any fundamental new understanding of human
thought patterns.
3. Bitcoin and other distributed computing models of currency and
monetary exchange, in contrast, have provided mankind with a
fertile ground to discuss and develop new collective
understandings of power and incentive models.
4. Fully distributed multi-core processors and Cloud infrastructure, due to the decentralized nature of their control structures in hardware and the physical impossibility of simulating them accurately using an abstract Turing machine, may also be fertile ground for mankind to reach a new level of understanding; however, any full realization of this will remain out of the grasp of any individual human.
5. (from #5) Programming languages where every program is only valid
if its correctness can be decided a priori within the language's
formal semantics are sterile ground for innovation; this
includes Haskell.
6. By #3, the process of innovation using artefacts built by the collective mind cannot be taught by a single teacher to students working individually.
7. (from the point above) The prevailing mentor-mentee relationship in PhD trajectories makes them an inappropriate means to train future innovators.
8. Statements #1, #7 and #8 that accompanied my PhD thesis in 2012 (see link below) were the forerunners of this document; their truth would be a consequence of the hypotheses above.
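Belief #5 above contrasts languages that demand a-priori decidable correctness with ones that allow initially meaningless combinations. A small sketch in Python (my own illustrative example, not from the text) shows the permissive side of that contrast: the heterogeneous combination below has no predefined shared meaning, and a language that requires every program to be well-typed before it runs, such as Haskell, would reject it outright, yet it runs and can be observed, which is the mode of "innovation" described in theses #4 and #5.

```python
# Illustrative sketch: in a dynamically checked language, an initially
# "meaningless" combination of artefacts is still a valid program; its
# meaning is only discovered by running it and observing the result.

def compose(parts):
    """Glue arbitrary artefacts together by their printed form."""
    return " ".join(repr(p) for p in parts)

# A number, a string, and a function object: no predefined shared
# meaning, yet the combination runs and can be observed afterwards.
artefacts = [42, "models", len]
print(compose(artefacts))
```

Whether such permissiveness is a virtue is exactly what belief #5 asserts; the sketch only makes the distinction tangible.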
Bibliography
* Peter van Emde Boas. Machine models and simulations. In Handbook
of theoretical computer science (vol. A), pages 1-66. MIT Press,
Cambridge, MA, USA, 1990. ISBN 0-444-88071-2. (copies available
upon request)
* Scott Aaronson. Why philosophers should care about computational
complexity. Technical Report arXiv:1108.1791 [cs.CC],
August 2011.
* Raphael Poss. Research statements accompanying the PhD thesis.
Cover to "On the realizability of hardware microthreading".
September 2012. ISBN 978-94-6108-320-3. DOI 11245/2.109484.
Raphael 'kena' Poss is a computer scientist and software engineer specialized in compiler construction, computer architecture, operating systems and databases.
Published
Sep 2014
Last Updated
Sep 3, 2018
Content licensed under Creative Commons Attribution 4.0 International
License.