I present now my story, full of mystery and intrigue - rich in irony, and most satirical.
Once again, one of the more intelligent advocates of Trump's moderately localist course appeared on Tucker's show.
I think I understood what he said just fine, but he seems to be open to the idea that banks should be able to issue their own currencies, which I don't think would be such a terrific idea, since, as it is, there is at least some control over issuance through the depositing of collateral (e.g. Treasury bonds) at the Federal Reserve Banks.
Still, this is very instructive and mostly common sense. The whole accounting aspect of it is of course incongruous from the start: "You accept my conditions and I vouch for you" is the nature of the deal, and such an arrangement doesn't translate neatly into numbers.
In my opinion it's better to handle the advancement of trust without a profit motive, because what trust is there among those who seek profit? What are the schemes that they would vow not to engage in amongst themselves? Also, if the trust is warranted: Would it be given more readily by someone motivated by profit? And consequently: Do we need institutions to prevent trust from being given too readily?
If Trump were to pursue this, deregulating issuance and demanding investment in new technologies instead, he would on the one hand stay in line with the role of the banks in the generative cycle of the age of works, and on the other, as with the tariffs, replace global dynamics with local responsibility. That is all fine as such, but in order to escape the dynamic of our time it is not enough to assert responsibility: if instruments are used without reflecting on their consequences, we become slaves to the dynamics they cause, yet those instruments have themselves developed within a larger dynamic. If you turn a blind eye to a fissure in a dam, it will break, but it is the pressure of too much water that causes the fissures, this one and potentially many others.
The reason we need institutions to prevent trust from being given too readily is to prevent tribal conflicts, in the words of the Revelation: "the third part of the waters became wormwood; and many men died of the waters, because they were made bitter". That is to say, we have to act professionally; and the entirety of the global control grid grows out of the same necessity, which, at this point, is divorcing itself from the good it seeks to protect.
Still, what is an administration supposed to do? In the absence of a social movement it can only regulate private affairs. Also, I don't believe in the communist approach of trusting pain to shape acceptance: Acceptance is not to be shaped, but dug up, cleansed of all the obstacles that prevent it; yet some of them, like hubris, may demand pain, and hence pain cannot be denounced categorically.
Thus things can go wrong in many ways at this juncture, but fortune may indeed favour the bold, as a successful march towards a new paradigm within the existing institutional framework may already have begun.
By the way, concerning research and artificial intelligence: the actual research practice of seeking connections between the most recent results and proven stepping stones could undoubtedly be pursued more aptly by an artificial intelligence, as Eric Schmidt suggested. But this practice is another example of an instrument creating its own dynamic, which is mirrored in the complaints of dilettantes bemoaning science's apparent stagnation; such complaints arise almost regularly, given that these connections cannot easily be communicated to people outside their field and hence remain unnoticed.
Artificial intelligence would undoubtedly be better, that is, worse, in that regard too, but the real question is whether science's applicability is correlated with the questions that people intuitively pursue. For instance, nobody not mesmerised by the existence of an apparent continuum would come up with something like
A topological space consists of a space X and a topology O on it, consisting of so-called open subsets of X, fulfilling the conditions
- X is open, the empty set is open,
- any union of open sets is open,
- any intersection of finitely many open sets is open,
and a topological space X is called connected, if X and the empty set are the only open sets whose complement is also open.
No matter how intelligent the artificial intelligence is, its chances of coming up with this equal those of a bunch of monkeys playing with typewriters coming up with a Shakespeare, unless it had an inbuilt notion of the connectedness of areas consisting of infinitely many constituents, which are separated by a distance greater than zero from one another, which it was trying to define.
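For the record, here are the same definitions in symbols, a minimal LaTeX rendering of nothing beyond the axioms stated above:
\documentclass{article}
\usepackage{amssymb}
\begin{document}
% A topology O on a set X: the family of its open subsets.
A topology $\mathcal{O} \subseteq \mathcal{P}(X)$ on a set $X$ satisfies
\begin{itemize}
  \item $X \in \mathcal{O}$ and $\emptyset \in \mathcal{O}$,
  \item $\bigcup_{i \in I} U_i \in \mathcal{O}$ for every family $(U_i)_{i \in I}$ of sets $U_i \in \mathcal{O}$,
  \item $U_1 \cap \dots \cap U_n \in \mathcal{O}$ for any finitely many $U_1, \dots, U_n \in \mathcal{O}$.
\end{itemize}
% Connectedness as stated above: only X and the empty set are both open and closed.
$(X, \mathcal{O})$ is connected if the only $U \in \mathcal{O}$ with $X \setminus U \in \mathcal{O}$
are $U = X$ and $U = \emptyset$.
\end{document}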
To allow a scientist to pursue his intuitive questions of course becomes ever more wasteful, because those questions tend to be the same among people and thus have an increasing likelihood of having already been asked and answered. Hence the current practice of research shuns them. But at the same time, if there is a correlation between them and the applicability of research, this steers the latter away from the former.
Well, this blog is full of intuitive questions of mine, and Eric Schmidt, explaining why artificial intelligence should be good at mathematics, points to the limitedness of its vocabulary. He does not mention the term normal form, though. I already stated two years ago that a normal form for human thought would push the intellectual abilities of the already existing large language model artificial intelligences beyond our own. So much for the applicability of my intuitive question of what the nature of our thinking is: it is quite literally the holy grail of artificial intelligence.
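To recall what a normal form is in the formal sense (my own illustration, not Schmidt's): in propositional logic every formula can be rewritten into an equivalent conjunctive normal form, a conjunction of disjunctions of possibly negated variables, for instance:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The biconditional rewritten into conjunctive normal form (two clauses).
\[
  p \leftrightarrow q \;\equiv\; (\neg p \vee q) \wedge (p \vee \neg q)
\]
\end{document}
A normal form for human thought would play an analogous role for arbitrary reasoning: a canonical shape into which any thought can be rewritten and then processed mechanically.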
Labels: 41, formalisierung, geschichte, gesellschaftsentwurf, gesetze, institutionen, intelligenz, programmierung, rezension, sehhilfen, wahrnehmungen, zeitgeschichte, ἰδέα, φιλοσοφία