Bruce Sterling’s speech from NEXT Berlin is a blast of cold air on the themes of startup life, disruption, and global collapse. Bruce excoriates the startup world for its complicity with the conspiracy of the global investor class to vastly increase the wealth of a tiny minority, and describes the role that “design fiction” has in changing this.
via Boing Boing
Learn how to walk. Literally. When I observe how people walk, I notice that everyone walks differently. This is great: it lets you recognize friends from far away. But it is also bad: some ways of walking and some postures are worse for you than others, especially because you will probably walk the same way your whole life.
We are never really taught how to walk. We learn on our own; it is one of the first things we learn. And everyone around us is so happy that they forget to help us improve our walk. So, from the first steps we happened to take by chance, we extrapolate to walking and get used to it. For a long time our body tolerates any way we use it, but after years we might suddenly discover that it cannot anymore. By then it is too late.
While others help us with other things we learn (talking, for example: you get feedback if you speak incomprehensibly or too loudly), with walking we are on our own. So stop just repeating the steps you took as a child and start walking your grown-up walk.
I do not know much about speech recognition, nor what the state of the art is. But years ago I played with it a bit, and I would like to throw an idea out there; maybe somebody picks it up, maybe it turns out to be useful, or maybe it is already being used. Please tell me. It could apply not just to speech recognition but to any audio pattern recognition, or any signal pattern recognition in general.
The basic idea comes from observing how human hearing works: the cochlea first performs a frequency transform physically. Hairs of different lengths resonate with the frequencies present in the audio input; the stronger a particular frequency is in the input, the stronger the signal for the corresponding hair. In neurons, a stronger signal does not mean a larger amplitude of the action potential, but more action potentials. So a stronger signal for a particular frequency means more impulses travel over that neuron, and more impulses mean a higher frequency of those impulses. The brain therefore has to learn not from the input audio directly, but from changes in the impulse rate of the signal for each frequency in the input audio. If the brain recognizes patterns from that, we should too.
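A minimal sketch of what such features might look like, assuming nothing beyond NumPy (the function names and parameters are my own illustration, not any standard pipeline): split the signal into overlapping frames, compute per-band magnitudes as a crude cochlea model, and then use the frame-to-frame *changes* of each band, rather than the magnitudes themselves, as the features a recognizer would learn from.

```python
import numpy as np

def band_magnitudes(signal, frame_size=256, hop=128):
    """Split the signal into overlapping windowed frames and compute
    the magnitude of each frequency band per frame. Each FFT bin
    stands in for one hair resonating at one frequency."""
    window = np.hanning(frame_size)
    frames = [
        signal[start:start + frame_size] * window
        for start in range(0, len(signal) - frame_size + 1, hop)
    ]
    # Rows are time frames, columns are frequency bands.
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def band_deltas(signal, frame_size=256, hop=128):
    """The proposed features: not each band's strength itself, but
    how each band's strength changes from one frame to the next."""
    mags = band_magnitudes(signal, frame_size, hop)
    return np.diff(mags, axis=0)

# Example: a tone that jumps from 440 Hz to 880 Hz halfway through.
rate = 8000
t = np.arange(rate) / rate
tone = np.where(t < 0.5,
                np.sin(2 * np.pi * 440 * t),
                np.sin(2 * np.pi * 880 * t))
deltas = band_deltas(tone)
# The largest frame-to-frame changes cluster around the moment the
# frequency jumps, which is exactly what these features highlight.
jump_frame = int(np.abs(deltas).sum(axis=1).argmax())
```

While a steady tone produces near-zero deltas, the transition shows up as a sharp spike, so the representation emphasizes change over steady state, much like neurons adapting to a constant stimulus.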
Can somebody create a filter which extracts only clean water from soda and other drinks? No sugar, no colorings, no additives; just pure water. Bottled water seems to be more expensive than soda, so let's just use a filter.
Re-decentralizing the web and Internet-based technologies is gaining momentum. Just recently there was a Decentralized Web Summit where many projects were represented. But while people contemplate the importance of decentralized technologies and build alternatives, it seems to me we are forgetting one important lesson from the past: we should be building layered technological stacks, not vertically integrated ones. This is how you encourage diverse implementations (critical for the stability of decentralized technologies) and experimentation at each layer, without having to reimplement the whole stack. I can understand why this is happening. It is already hard enough to develop decentralized technologies, so it feels easier to at least control the stack vertically. But we should not be building decentralized technologies because it is easy, but because it is hard.