
Quorten Blog 1

First blog for all Quorten's blog-like writings

  • Yeah, I see the “creative possibilities” of 3D printers being advertised, but you know what the reality of the technology is? The vast majority of work in the economy is two-dimensional in nature, so it is unlikely that most people will end up having a 3D printer at home, despite the low costs. Not unless it can be used to competitively sell consumer merchandise through alternate channels faster, easier, and more conveniently.

  • Again, I reiterate, because this is important! The reality of artwork: it hinges heavily on promotion and publication. Yes, there’s lots of artwork and other work available for much lower cost; it’s just that it is not promoted in a way that gets noticed by most people. Yes, it’s there, but finding it takes a bit more human effort.

  • Now here’s a big problem with humans. We have observed that pure digital sales of books have done poorly, so paperback books are still being sold. But the problem with paperbacks? Yes, the advantage is that they take up physical space and so stay noticeable, but that is also what causes them to get lost so easily.

  • Also, complaining about messy-looking house pictures? Well, duh, hello? That’s what we have 3D scanning and object management technology for.

  • And what about signs? They’re there to help people. And then when people make mistakes and bad things happen? They didn’t read the sign? How is anything supposed to be accomplished in an economical and efficient manner if people don’t pay attention to signs?

    • The fact is that technology complacency is a trait of most people. And honestly, only the person themself can make themself more efficient when it comes to technology use.

    • Come on, I recommend simple uses of technology to family members. When they use them? The results are very good, and they are glad that they listened. When they don’t? Well, then over time they realize they were wrong, and now they are clearly “poorer” than they would have been had they made better use of technology.

  • And how much I hate this: that honestly, failing better logical reasoning, humans are really just slaves to the principles of human psychology.

  • And, worst of all, lost knowledge corresponds to poorer decision-making skills in the future, namely sub-optimal behavior “cycles” due to the lack of memory.

  • So, we must conclude. There are some activities that require a very high degree of precision that humans simply are not fit to do. Who is to solve the problem?

  • Don’t be embarrassed about what is being photographed or scanned. It will not be marketed and promoted to be visible to humans in general. Rather, its main visibility will be to computational engines that process the data for analytics. In fact, this has to be the case, because humans are very inefficient when it comes to observing and processing data.

  • Even if you think you don’t want the information back, you might find that you need it back. Again, this is where computers are much better at managing information effectively, especially in large volumes.

  • But the other thing we learned, from the Apollo 13 movie. Fed up with the “silly ways of the Western World,” the astronauts rip off their health monitoring equipment. Then NASA Mission Control says, “You’re no longer sending health monitoring signals.” “Oh yeah, that was intentional. I don’t need to wear that thing.” So, the lesson learned, especially as the related technology gets cheaper, smaller, and more compact: this is why it will not be adopted on the mass market, except when people are really going through great changes, such as during pregnancy.

    • And unfortunately, this also applies to complacency in the intellectual domain. “Well, it seems like the decision is good, why should I need to double-check every single decision I make with a computer?”

  • What, the study of art? Art is composed of lines and texture? No, these are not properties of the objects; these are really properties of the human psyche. Neuroarchitecture is what causes those properties to be so observable to humans and to weigh so heavily in “art.” Really, it’s more the study of the human visual system. Well, a hybrid of that and the primitive technologies that match up with it.

  • Important! The nature of artificial neural networks, and how they differ from the human brain, especially the human visual system. Artificial neural networks operate directly on the image pixels, whereas the human eyes send a pre-processed signal, with 90% data reduction, to the brain for processing. Thus, artificial neural networks are capable of seeing many nuances that the human brain cannot. On the other hand, due to the precision of their matching, they require thousands of images of cats for training, whereas a 2-year-old only needs to see 3 2D pictures of a cat to get the idea.

    • So yes, indeed, some forms of data compression come down to matching a computational model that mirrors the human brain’s neuroarchitecture. (A toy sketch in that spirit follows this list.)

    • Also, another interesting observation. Artificial neural networks, when fed with thousands of images, can eventually.

    • But also, the other realization. Anywhere that a computer cannot reasonably collect a thousand times more data than a human would normally work with, artificial neural networks do not work. Take, for example, the software development profession. Progress and the rate of change have been extremely rapid over the course of the computing profession’s lifetime, and in its earliest days, very few people worked in the profession and they produced very little software. Thus, the only practical method to capture a large amount of data in the early days would have been to stretch the data collection out over a long period of time, for lack of breadth in data availability over short periods. Yet that would mean the data was already obsolete, from a different sector entirely! Yeah, so one might be led to believe that the early computing profession is simply a sector that is inaccessible to the new technology; only humans working directly can fare in that environment. But even then, the intellectual demands of the early computing profession were such that not just any random person could participate. Oh, no, only the smartest were good enough. So that’s yet another issue.

      But we’re past that point in time; it’s interesting from a scientific point of view. As for modern times, there are so many more software developers on board that even with the rapid pace of change, it is still possible to collect a sufficiently large amount of data over a short period of time to prime artificial neural networks.

      • In fact, that is how modern-day Internet search engines function. They rely on huge, seemingly redundant repositories of data in order to function well and give the user a good search experience. (A toy sketch of the core index structure also follows this list.)

        Also, this is the reason why search functions on intranets have failed to keep up with the quality of search functions on the Internet. There’s simply not enough data available on an intranet, within the bounded limits of cultural change, for the artificial neural network technology to work well.

      • Oh yeah! And it also reinforces your argument that the most popular technicalities are those with the most data behind them. Without sufficient data, a technicality is highly prone to obsolescence.
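
A minimal sketch of the perception-matched compression idea from above, assuming a JPEG-style approach: transform an image block, then keep only the low-frequency detail that a model of human vision says matters most. The function names and the keep=10 budget here are illustrative, not anything specified in this post.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2D discrete cosine transform of a square block
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    # inverse 2D DCT
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

def compress_block(block, keep=10):
    # Keep only the `keep` lowest-frequency coefficients, ordered by
    # distance from the top-left (DC) corner, and zero out the rest;
    # human vision is least sensitive to the discarded high frequencies.
    coeffs = dct2(block)
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda ij: ij[0] + ij[1])
    kept = np.zeros_like(coeffs)
    for i, j in order[:keep]:
        kept[i, j] = coeffs[i, j]
    return kept  # `keep` numbers now stand in for n*n pixels

rng = np.random.default_rng(0)
block = rng.normal(128.0, 20.0, size=(8, 8))  # stand-in 8x8 image block
approx = idct2(compress_block(block))
print("max pixel error:", np.abs(block - approx).max())
```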
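
And to make the search-engine point concrete, a minimal sketch of the inverted index at the core of text search, over a toy three-document corpus. Real engines layer crawling and learned ranking on top of such an index, and it is the ranking stage that feeds on the huge repositories of data; all names here are illustrative.

```python
from collections import defaultdict

def build_index(docs):
    # Map each term to the set of document ids containing it.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    # Return ids of documents containing every query term (AND search).
    sets = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*sets) if sets else set()

docs = {
    1: "neural networks need large data",
    2: "search engines index large repositories",
    3: "intranets hold far less data",
}
index = build_index(docs)
print(search(index, "large data"))  # -> {1}
```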

  • Yeah, so it’s as you were saying: the practical limitation of artificial neural networks. Multiply the data quantity required for a human to “get the idea” by a factor of 1000, and that is how much data the machine technology needs before it will actually function well.

  • But once the machine has the “critical mass” of data stored, adding new data has very little storage cost, as the intelligent software can reconstruct the new data from the old data using parameters plus a few “manual fix-up” details. Since the manual fix-up details are few and small, they can be compressed highly effectively, thus making additional object storage scale logarithmically with the size of the source data rather than linearly. (A toy sketch of this idea closes out the post.)

    • Also, a similar pattern is observed in human psychology, in the rate at which infants learn new words. Since the early words require priming the biological neural network with the specific technical modalities of a local language and culture, the process takes longer in the beginning. But once the vast majority of those arbitrary technical details are understood, the small variations that distinguish the different words itemized in the language can be focused on in a more memory-efficient manner, and the rate of learning, measured in words, increases dramatically.
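
A minimal sketch of the “parameters plus a few manual fix-up details” idea, assuming a simple store-the-delta scheme: a new record is kept as a compressed difference against data already stored. The xor delta and the names here are illustrative stand-ins for a real model-based predictor.

```python
import os
import zlib

def delta(base: bytes, new: bytes) -> bytes:
    # Byte-wise xor of `new` against `base`; a near-duplicate record
    # yields a delta that is almost all zero bytes.
    padded = base.ljust(len(new), b"\0")[: len(new)]
    return bytes(a ^ b for a, b in zip(new, padded))

def store(base: bytes, new: bytes) -> bytes:
    # Compress the delta: the few small fix-ups squeeze down to almost
    # nothing, so keeping the new record costs far less than a full copy.
    return zlib.compress(delta(base, new), 9)

base = os.urandom(4096)             # stand-in for data already stored
new = bytearray(base)
new[100:104] = b"\x01\x02\x03\x04"  # a few small "manual fix-up" details
new = bytes(new)

print("full copy:", len(zlib.compress(new, 9)), "bytes")
print("as delta: ", len(store(base, new)), "bytes")
```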