My local government has a program to reimburse 100% of the cost of residential rain barrels, and since it’s hard for me to ever argue with “free,” I signed up. The only requirements are that each participant attend a lecture about rain barrels and related subjects (which I did), and that each participant also show a government inspector that they’ve properly installed their rain barrel (which I haven’t yet).
The presentation was given by environmental people from local agencies and nonprofits, and they explained that the primary benefit of rain barrels was to reduce storm water runoff and the attendant problems with flash flooding and fish kills. Roads and driveways are covered in motor oil and other chemicals, and lawns and farms are covered in pesticides and fertilizers. When it rains heavily, these chemicals are washed into waterways all at once, which kills aquatic life and also makes the waterways unsafe for humans for days.
A rain barrel helps mitigate this problem by storing the water that falls onto your house’s roof. You put the barrel next to your downspout and do some simple cutting and crimping of the metal downspout to connect it to a hole in the top of the barrel. During storms, the rain that falls on your roof flows into the rain barrel and stays there, reducing local water runoff by some minuscule amount. Presumably, if every house and building had a rain barrel, there would be a meaningful reduction in flooding and fish kills (the presenters unfortunately had no estimates, so I made some of my own below).
Before the presentation ended, the problem with the rain barrel concept became clear to me: they require routine maintenance. It’s up to homeowners to keep track of how full their rain barrels are and to periodically drain them (productive uses like washing cars or watering gardens were suggested), or else they’ll fill to the brim after a few storms and thereafter overflow each time it rains, defeating their purpose. Homeowners also have to check on them to make sure they aren’t clogged up with dead leaves or full of mosquito larvae.
Call me a cynic, but I think even this small amount of diligence is too much for most people, and rain barrels will function best if they automatically empty themselves of water. The simplest (and probably best) solution might be to screw a cap with a tiny hole in the middle over the rain barrel’s faucet. The hole would only let water escape in a slow trickle, a much lower flow rate than the unobstructed downspout’s. The rain barrel would fill during storms and then slowly discharge its load over several days. Keep in mind that it’s not the amount of rain that causes the problem, but the suddenness of the rain, so discharging all the water in your rain barrel won’t contribute to flooding or fish kills if it happens very gradually. Once-yearly maintenance might consist of cleaning the dead leaves out of the barrel and installing a new cap, which might cost $2.00 at Home Depot. That sounds doable for average people.
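For a rough sense of scale, here’s the drain-rate arithmetic sketched in Python. The 55-gallon barrel size comes from later in this post, and the three-day drain window is my own assumption:

```python
# Rough check of the drip-cap idea: what flow rate empties a full
# 55-gallon barrel over a few days? (Barrel size from the post;
# the 3-day window is an assumption, not a measurement.)

BARREL_GALLONS = 55
LITERS_PER_GALLON = 3.785
DRAIN_DAYS = 3

liters = BARREL_GALLONS * LITERS_PER_GALLON   # ~208 L
seconds = DRAIN_DAYS * 24 * 3600              # 259,200 s
ml_per_second = liters * 1000 / seconds

print(f"Required flow: {ml_per_second:.2f} mL/s "
      f"({ml_per_second * 60:.0f} mL/min)")   # ~0.80 mL/s (48 mL/min)
```

That works out to roughly 48 mL per minute, so the hole would need to pass a steady trickle rather than the occasional drop to keep up with real storms.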
I’m going to call this idea the “Russian engineering solution.”
The spigot at the base of my rain barrel
In lieu of making a cap, I’ve screwed a 4′ long extension hose into the spigot, and pointed the hose away from my house to prevent discharged water from flowing towards its foundations. Last Saturday night, my area got its first major rainfall since I installed the barrel, and to my surprise, it filled to the brim in a few hours (FYI, 700 square feet of roof feed into the downspout that is connected to the rain barrel). I opened the spigot and emptied out the tank on Sunday. However, it didn’t rain for the rest of that day or the next, and it occurred to me that the rain barrel’s utility as a storm water runoff and flood control device would be optimized if its discharges took rainfall forecasts into account and were timed to occur when the ground was as dry as possible.
In other words, because it rained on Saturday night, the ground was still soaked on Sunday, its absorbency was reduced, and the water I discharged from my barrel that day might have added to the runoff problem. It would have been better if I had instead drained the barrel on Monday since the ground would have been more absorbent thanks to the extra day of drying out, but I didn’t know that since I didn’t check the weather forecast.
My 55 gallon rain barrel filled almost to the brim after just one night of moderate rain.
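A quick back-of-the-envelope check shows why it filled so fast: runoff volume is just roof area times rainfall depth. Using the 700-square-foot figure above and the fact that a gallon is about 0.1337 cubic feet:

```python
# Why did the barrel fill in just a few hours? Runoff volume equals
# roof area times rainfall depth, so solve for the depth that fills
# a 55-gallon barrel from a 700 sq ft roof.

ROOF_SQFT = 700
BARREL_GALLONS = 55
CUFT_PER_GALLON = 0.1337

barrel_cuft = BARREL_GALLONS * CUFT_PER_GALLON   # ~7.35 cu ft
depth_inches = barrel_cuft / ROOF_SQFT * 12

print(f"{depth_inches:.2f} inches of rain fills the barrel")   # 0.13 inches
```

In other words, about an eighth of an inch of rain fills the barrel, so overflowing during any real storm is the norm, not the exception.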
Checking weather forecasts to time the barrel discharges requires unrealistic diligence from people, so automation would be necessary. And if we’re designing a truly “smart” rain barrel, why not try to fully optimize it by programming it to consider all pertinent variables? These include:
The amount of water in the barrel (easily done with a float)
Absorbency of the soil (estimated based on recent rainfall and barrel discharges)
Rainfall forecast for the next 72 hours (including amount and timing of rainfalls; would require wireless access to an internet weather service)
Conversion factor that uses the rainfall forecast to predict how much new water will flow into the barrel (the barrel could formulate its own conversion factor by comparing past rainfall events with corresponding increases to its own load)
And of course, the smart rain barrel would need internal features that would let it discharge itself without human help, and I think copying the tried-and-true toilet tank setup would be fine. A chain could connect the float to some type of simple machine, and the float’s rise and fall along with the water level would apply tension to the chain, which the machine would somehow store as potential energy (a mousetrap or a revolver’s hammer give clues as to how this can be done). When signaled by the smart rain barrel’s computer, the machine would use that stored potential energy to mechanically lift the “toilet flapper” at the bottom of the barrel, letting the water flow out.
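To make the idea concrete, here’s a minimal sketch of the controller logic in Python. Every name and threshold here is hypothetical, and a real version would pull its forecast from an internet weather service as described above:

```python
# A minimal sketch of the "American engineering solution" decision
# loop. All names and thresholds are hypothetical; the 24-hour
# dry-out rule is a crude stand-in for estimating soil absorbency.

def should_discharge(fill_fraction, rain_expected_gallons,
                     hours_since_last_rain, capacity_gallons=55):
    """Discharge when the forecast inflow won't fit in the barrel,
    but otherwise prefer to wait until the ground has dried out."""
    free_space = capacity_gallons * (1 - fill_fraction)
    ground_is_dry = hours_since_last_rain >= 24
    if rain_expected_gallons > free_space:
        return True                                # must make room before the storm
    return fill_fraction > 0.5 and ground_is_dry   # opportunistic drain onto dry soil

# Example: barrel 80% full, 30 gallons of inflow forecast
print(should_discharge(0.8, 30, 12))   # True: 30 gal won't fit in ~11 gal of space
```

The flapper-and-float mechanism would only need a single binary signal from this logic: open the flapper or leave it closed.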
I’m going to call this the “American engineering solution.”
Ha ha! So which do we prefer?
Russian engineering solution: Simple, cheap, non-optimal but good enough
American engineering solution: Complex, expensive, optimal
Call me unpatriotic, but I’m inclined towards the former. Glory to Russia!
And lastly, how much would rain barrels of either sort help mitigate storm water runoff and flash flooding? It’s impossible to say for sure, but this should be the starting point of any estimate:
‘In the United States alone, pavements and other impervious surfaces cover more than 43,000 square miles—an area nearly the size of Ohio—according to research published in the 15 June 2004 issue of Eos, the newsletter of the American Geophysical Union. Bruce Ferguson, director of the University of Georgia School of Environmental Design and author of the 2005 book Porous Pavements, says that a quarter of a million U.S. acres are either paved or repaved every year. Impervious surfaces can be concrete or asphalt, they can be roofs or parking lots, but they all have at least one thing in common—water runs off of them, not through them. And with that runoff comes a host of problems.
…According to the nonprofit Center for Watershed Protection, as much as 65% of the total impervious cover over America’s landscape consists of streets, parking lots, and driveways—what center staff refer to as “habitat for cars.”’ SOURCE
That means that, in the U.S., up to 35% of impervious surfaces are the roofs of buildings and houses. If we make the very optimistic assumptions that 1) every roofed structure in the country had a smart rain barrel system, 2) the gutters and downspouts of each structure shunted 100% of the rain falling on their roofs to the barrels, and 3) the barrels were big enough to never overfill except during extreme instances like hurricanes, then smart rain barrels would presumably reduce the runoff problem by as much as 35%, which is nothing to sneeze at.
Of course, all of that assumes 100% participation rates and 100% efficiency rates, neither of which is realistic unless we’re thinking about the distant future, when humanity is much better off and has worked its way very far down the “Global Problems List.”
More realistic assumptions would set everything at 50%: 50% of structures have rain barrels, the average rain barrel collects 50% of the rain that falls on the roof (that’s true of my own setup), and the average rain barrel doesn’t overfill during 50% of rain events. In that case, the storm water runoff reduction is only 4.375%. [Frownie face.]
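For the record, the arithmetic behind that figure is just the product of the four fractions:

```python
# Checking the arithmetic on the "everything at 50%" scenario.
ROOF_SHARE = 0.35        # roofs as a share of all impervious surfaces
participation = 0.5      # structures with a barrel
capture = 0.5            # share of roof runoff reaching the barrel
not_overfilled = 0.5     # share of rain events the barrel can hold

reduction = ROOF_SHARE * participation * capture * not_overfilled
print(f"{reduction:.4%}")   # 4.3750%
```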
I thought the movie Prometheus was awful, and rather than waste my time ranting about all the things I hated, I’ll just say I agree with the critics who collectively bashed the confused and scientifically flawed storyline, shallow and unlikable characters, and inexplicable/unrealistically stupid behavior of the characters. I love the first three Alien films, but everything since has been disastrous. Enough said.
Instead of spending any time writing about the flawed plot (IMDB has a summary here: http://www.imdb.com/title/tt1446714/synopsis), I’ll jump straight to an analysis of the vision of the future depicted in the film, which is set in 2093.
We will have proof that humans evolved from or were engineered by aliens. Prometheus is premised on the notion that ancient aliens seeded the Earth with life and repeatedly returned to direct the genetic and cultural evolution of humans. The theory that intelligent aliens influenced the rise of the human species is debunked by the fossil record, by comparative DNA analyses of humans and other hominids, and by human biochemistry. Together they prove we are indigenous to Earth and that we slowly evolved from simpler species. By 2093, we will not have “new evidence” that contradicts this story of our origins, though there will probably still be many uneducated and/or mentally ill people who believe in this and other conspiracy theories. It is at least slightly plausible that life began on Earth billions of years ago thanks to panspermia (i.e. – an asteroid containing simple organic matter fell to Earth), but I don’t see how we could ever prove the hypothesis since time has destroyed any evidence that may have existed.
Some robots will be indistinguishable from humans. One of the main characters is “David,” an artificially intelligent robot who looks and acts like a human. Since David is modeled after humans, he is a special type of robot called an “android,” and note the literal translation of the word from Greek is “man-like” (andr- + -oid). I think androids like David will exist by 2093, and they will be capable of an impressive range of behaviors and functions that will make them seem very human-like. In fact, they’ll be so refined that we might not be able to tell them apart from humans at all, or only be able to do so on rare occasions (ex – some of their responses to questions might not make sense). Whether they will be truly conscious and creative like humans is a different matter.
Left: A human crew member. Right: David the android.
The hyper-realistic sculptures made by artists like Ron Mueck, and advanced animatronics like Garner Holt Productions’ Abraham Lincoln, convince me that we could build robot bodies today that look 95% the same as real humans. Eking out that last 5% to cross the Uncanny Valley should be easy to accomplish long before 2093. The much harder part is going to be endowing the machines with intelligence, with the ability to walk and stay balanced on two feet, and with other forms of physical deftness and coordination that will allow them to safely and efficiently work alongside humans and to do so without appearing “mechanical” in their movements.
Sculpture by Ron Mueck
Machines will do surgery on people, unassisted. There’s a gruesome and silly scene in Prometheus where the female main character realizes she is pregnant with a rapidly growing alien-human hybrid. She runs into the space ship’s infirmary, lies down in a coffin-sized surgery pod, and orders the machine to surgically remove the fetus. Several robotic arms bearing laser scalpels and claws do it in about a minute. I think surgery will be completely automated by 2093, along with all or almost all other types of jobs. Replacing high-paid human doctors with robot doctors that work for free will make healthcare dramatically cheaper and easier to access (with positive effects on human life expectancy and quality of life), though mass unemployment will also reduce the amount of money people have to pay for things like healthcare.
There will be space ships that can travel faster than the speed of light. The Prometheus space ship is capable of faster than light space travel, and the movie’s events take place in a different star system. Our current understanding of physics informs us that there is no way to exceed the speed of light, and propelling something as big as the Prometheus to just 10% of that speed would require impractically large amounts of energy. While mass figures for the fictional ship are unavailable, let’s assume it weighed about as much as the Space Shuttle, which was 2,000,000 kg. This kinetic energy calculator indicates it would require 9 x 10^20 Joules of energy to accelerate it to 10% of light speed (30,000,000 meters/second). That’s as much energy as the entire United States generates in nine years.
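The figure is easy to reproduce with the classical kinetic energy formula, KE = ½mv², using the Shuttle-scale mass and 10%-of-light-speed velocity assumed above:

```python
# Reproducing the kinetic-energy figure: KE = 1/2 * m * v^2 for a
# Space-Shuttle-mass ship at 10% of light speed.

mass_kg = 2_000_000
c = 3.0e8                 # speed of light, m/s
v = 0.1 * c               # 30,000,000 m/s

kinetic_energy = 0.5 * mass_kg * v ** 2
print(f"{kinetic_energy:.1e} J")   # 9.0e+20 J
```

At 10% of light speed the relativistic correction to this classical formula is under 1%, so the simple version is close enough here.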
While science is by nature always open to revision, I think it’s a bad idea to base one’s vision of the future on the assumption that well-tested pillars of science like the Theory of Relativity will just go away. In short, I don’t think faster than light space travel is likely to exist in 2093–or perhaps ever–so we’ll still be confined to our solar system then.
FWIW, the space ships flying around our solar system by that year will be considerably larger and more advanced than what we have now, and it’s likely that space ships of similar size and technology (sans light speed drives) as the Prometheus will be plying interplanetary space.
There will be instantaneous gene-sequencing machines. In Prometheus, the humans find a severed alien head inside a wrecked alien structure, and they bring it back to their space ship for examination. The alien belongs to an advanced species nicknamed “The Engineers,” and the head’s features are very human-like. As part of the examination, the humans take a DNA sample from the head and put it in a gene sequencing machine, which determines it shares 99% of its genome with humans. The cost of sequencing a full human genome has plummeted at a rate exceeding Moore’s Law, and well before 2093, the service will become trivially cheap (e.g. – the same price as routine blood tests or vaccinations) and will take a few hours.
FYI, today it costs less than $5,000 to sequence a human genome, and the machines can do the work in about 24 hours. But since we can only decipher a minuscule fraction of the genetic information, it’s still not worth it for healthy people to get their genomes sequenced. Within 20 years, the price will get low enough and the medical utility will get high enough to change that.
Paper-thin, ultra-high-res display screens will be in common use. Computer monitors and TVs with these qualities are shown throughout the film. Many of them are also integrated into translucent glass, so clear windows can also serve as touchscreens. This will be a very old, mature technology by 2093.
A set of display interfaces
Wall-sized display monitors will be common. Early in Prometheus, there’s a scene where David is watching a film on a TV screen that covers an entire wall of a room in the space ship. This should be very old technology by 2093, and given current trends, floor-to-ceiling TVs will become available to average-income Americans in the 2030s. Since standard-sized doorways are too small to fit enormous TVs through them, the TVs will also need to be paper-thin and rollable into tubes, or capable of being assembled from a grid of many smaller pieces.
David watching “Lawrence of Arabia” on a wall-sized TV
Suspended animation pods will exist. During the multi-year space journey from Earth to the alien planet where the film’s events happen, the human crew members are kept in a state of suspended animation in coffin-sized pods. The mechanism through which their physiological functions are suspended (i.e. – Deep cold? Preservative fluids injected into their bodies? Something else?) is never made clear, but one crew member is shown to be dreaming in her pod, indicating that her brain is still active, and by necessity, her metabolism (even if it is dramatically slowed). That being the case, the “hypersleep” depicted in Prometheus is fundamentally different from today’s human preservation methods, which involve freezing dead people whose biochemical and brain activity have ceased in liquid nitrogen.
Frankly, I can’t say whether suspended animation will exist in 2093 because there isn’t any trendline for the technology like Moore’s Law that I can put on a graph and extrapolate. The best I can do is to note that our ability to preserve human organs meant for transplantation is improving as time passes, we do not appear to be close to the limit of what is scientifically possible, credible scientists have proposed ways to improve the relevant technologies, and whole-body human cryopreservation and revival is theoretically possible.
Machines will be able to read human thoughts and create digital representations of those thoughts that other people can watch. At the start of the movie, the Prometheus is still en route to the alien planet, all of the humans are in cryosleep pods, and David the android is the only crew member awake. During the montage that shows how he spends his time as the ship’s custodian, he takes a moment to check on the status of a female member of the crew. David puts a virtual reality visioning device on his face, and through it he is able to see a dream that the person is having at that moment, as if he were watching live-action film footage. I think this technology will exist by 2093, but its capabilities will be more limited than shown in the film.
Human thought is not a magical phenomenon; it happens thanks to biochemical and bioelectric events happening inside of our brains. Currently, we don’t understand the linkages between specific patterns of brain activity and specific thoughts, and our technologies for monitoring brain activity are coarse, but there’s no reason to assume both won’t improve until we have machines that can decipher thoughts from brain activity. To quote Microsoft Co-Founder Paul Allen, “An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort.”
Unlike faster than light space travel, mind reading machines don’t violate any laws of physics, nor is there reason to believe the machines would require impractically large amounts of energy. In fact, crude versions of the technology have already been built in labs using fMRI machines and brain implants. In all cases, the machines first recorded the participants’ brain activity during training sessions where the humans were made to do scripted physical or mental tasks. The machines learned which patterns of brain activity correlated with which human thoughts or physical actions, enabling them to do things like decipher simple sentences the humans were thinking of with high accuracy. In other lab experiments of this nature, physically disabled people were able to command robot arms to move around and grab things by thought alone.
However, I think the accuracy of mind reading machines will be hampered by the fundamentally messy, plastic nature of the human mind. Scientists commonly refer to the human brain as an example of “wetware” due to the fusion of its hardware and software, and to its ever-shifting network of internal connections. As a result, if I close my eyes and try to envision an apple, there will be a discrete pattern of brain activity. If I do this again in a few minutes, the activity pattern will be slightly different. Contrast this with a computer, where the image of an apple exists as a discrete software file that never changes. Because of this, even if a brain scanning machine had perfect, real-time information about all brain activity, its interpretation of what the activity meant would always have some margin of error.
The cinematic dream footage that David sees in virtual reality.
Returning to the movie’s specific depiction of mind reading technology, let me add that if we could see the same mental images that a person sees while dreaming, I doubt they would look sharp or well-detailed, or that the sequence of events would follow a logical order for more than a few seconds before the dream transformed into something different. It would be like watching a fuzzy, low-resolution art film comprised of disjointed images and sounds, occasionally peaking in intensity and coherence enough for you to discern something of meaning, before dissolving into the equivalent of human brain “static.” So while it’s plausible that, in 2093, you could use machines to read someone else’s thoughts, I think the output you would see would be much less accurate and less detailed than it was in Prometheus.
There will be small, flying drones that can do many things autonomously, like mapping places and finding organic life. After landing on the alien planet, the crew of the Prometheus travels overland to a mysterious alien structure and goes inside. The interior is a long series of dark, twisting corridors and strange rooms. To speed up their exploration, one crewman releases two volleyball-sized flying drones, which zip down the corridors while beaming red, contorting lasers at everything. As they float along, the drones transmit live data back to the Prometheus that is compiled to build a 3D volumetric map of the alien structure’s interior spaces.
Simpler examples of this technology already exist and are used for mapping, farming and forestry (one of many commercial examples is “Drone Deploy” https://youtu.be/SATijfXnshg; another is “Elios,” which is enmeshed in a spherical cage as protection against collisions in tight spaces). Sensor miniaturization, better motors and batteries, better AI, and cost reductions to every type of technology will allow us to build scanning drones that are almost identical to those in the movie decades before 2093. The only parts of the movie’s depiction I disagree with are 1) the use of red lasers for sensing (passive sensors and LIDAR beams that are invisible to human eyes are likelier) and 2) the use of some type of magical antigravity technology to fly (recognizable means of propulsion like spinning rotors and directed jets of exhaust will probably still be in use, though they will be smaller but more powerful thanks to improved technology). Small, cheap, highly versatile flying drones will have enormous implications for mass surveillance, espionage, environmental monitoring, and warfare.
There will be 3D volumetric displays. The bridge of the Prometheus has a large table that can project detailed, 3D volumetric images above it. The crew uses it to view an architectural diagram of the alien structure they find on the planet. Crude versions of this technology already exist, and can make simple images that float in midair by focusing laser beams on discrete points in space called “voxels” (volumetric pixels), heating the air there to such high temperatures that it turns into glowing plasma. If enough voxels are simultaneously illuminated, 3D objects can be constructed in the same way that pixels on a digital watch face can arrange into numbers if lit up in the right sequence.
Volumetric display of the alien complex, 2093
Today’s volumetric displays produce ozone gas and excessive noise thanks to air ionization, but it’s plausible the problems could be solved or at least greatly reduced by 2093. For certain applications, the displays would be very useful, though I think holographic displays (i.e. – a flat screen TV doesn’t make voxels but uses other techniques to fool your eye into thinking its images are popping out of the screen) and virtual reality glasses will fulfill the same niche, possibly at lower cost. Intelligent machines might also be so advanced that they won’t need to look at volumetric displays to grasp spatial relationships as humans have to.
State of the art volumetric display, 2011
Some disabled people and old people will use powered exoskeletons instead of wheelchairs. The space mission depicted in the film is funded by an elderly tech tycoon named “Peter Weyland.” Unbeknownst to most of the crew, he secretly embarked with them from Earth, and is sleeping in a suspended animation pod in a locked room while the first 3/4 of the film’s events unfold. At that point, David awakens him, and it is revealed to the surviving crewmen that Weyland supported the mission in the hopes that the aliens would give him a cure for his own mortality. They get into their space suits for a final trip to the alien structure, and Weyland’s outfit includes a light, powered exoskeleton for his lower body, which allows him to walk much faster than he normally could given his age.
Weyland, right, is wearing an articulated exoskeleton around his legs and lower back.
Exoskeletons for the disabled and the elderly already exist, a recent example being the “Phoenix” unit made by the “suitX” company. Unfortunately, Phoenix is $40,000 (a typical electric wheelchair is only $2,000) and requires a somewhat heavy battery backpack. I suspect that Phoenix’s high cost is due to patents and R&D costs being amortized over a small production run, not to the suits being made of expensive or exotic materials. Prices for Phoenix-like exoskeletons will only decline as relevant patents expire, copycats arise, and batteries get lighter and cheaper. It’s hard to see how these kinds of exoskeletons won’t be ubiquitous among mobility-impaired people by 2093 (as electric wheelchairs are today), if not decades before.
That being said, I don’t think they’ll make electric wheelchairs completely obsolete because some disabled and old people will find it too physically taxing to stand upright, even if supported by a prosthesis. Some users might also find it too time-consuming to put on and take off exoskeletons each day (note the large number of straps in the photo below).
The Phoenix exoskeleton
There will be lots of 100+ year old people. Piggybacking off the last point, Mr. Weyland is 103 years old, though since he spent the space journey in suspended animation, his aging process was probably slowed down, making his “biological age” slightly lower than his chronological age. Though living to 100 has a kind of mythic aura, it’s actually only a little beyond the current life expectancy in rich countries, and, making conservative assumptions about future improvements to healthcare, living to 100 will probably be common in 2093 (doing the math, you could someday be in this group).
Today, a wealthy white male who is diligent about his diet and exercise (as Weyland probably had been throughout his life) can expect to live to about 90. In fact, that’s a low estimate since it assumes the state of medical technology will stay fixed at 2017 levels for his entire life. In reality, we’re certain to develop new medicines, prostheses, and therapies that extend lifespan farther between now and 2093. A 10 year bump to average life expectancy in the next 76 years–which would put Weyland over the century mark–is entirely possible, and note that U.S. life expectancy actually grew more than that in the 76 years preceding 2017, so there’s recent historical precedent for lifespan increases of this magnitude.
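The historical comparison checks out, using approximate public-health figures (U.S. life expectancy at birth was roughly 64 years in 1941 and about 78.6 in 2017):

```python
# Rough check on the historical precedent for a 10-year lifespan
# bump over 76 years. Figures are approximate, from public U.S.
# health statistics.

life_exp_1941 = 64.0
life_exp_2017 = 78.6
years_elapsed = 2017 - 1941   # 76 years

gain = life_exp_2017 - life_exp_1941
print(f"+{gain:.1f} years over {years_elapsed} years")   # +14.6 years over 76 years
```

A gain of about 14 years comfortably exceeds the 10-year bump needed to push a healthy 90-year expectancy past the century mark.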
In 2093, “100 will be the new 80,” and indefinite extensions to human lifespan might even be on the horizon.
What was missing from the movie’s depiction of 2093:
The fusion of man and machine. Where were the Google glasses? Google contact lenses? Google eye implants? Google brain implants? Go-Go Gadget Legs? (Bionic limbs) By 2093, it will be common for humans to have wearable and body-implanted advanced technologies.
Not enough automation and robots on the space ship. Computers and machines will be doing way more of the work, reducing the need for resource-hogging humans.
‘Professor Simon Blackmore, head of engineering, argues that an even bigger benefit could be in providing fleets of small, light robots, perhaps aided by drones, to replace vast and heavy tractors that plough the land mainly to undo the damage done by soil compaction caused by . . . vast and heavy tractors. Robot tractors could be smaller mainly because they don’t have to justify the wages of a driver, and they could work all night, and in co-ordinated platoons.’ http://www.rationaloptimist.com/blog/robot-farm-machinery/
Many of the action scenes in the superhero movie Logan were done using incredibly lifelike CGI simulacra of the actors. Will actors someday make money by licensing the use of their digital likenesses in movies instead of physically starring in them? https://youtu.be/TxWu5Brx_As
I definitely agree with this. I think Elon Musk’s statements about AI and other things are part of a deliberate strategy to keep the public’s attention focused on him. As a businessman dependent upon the faith of his investors to keep his enterprises afloat, he needs to constantly stay in the spotlight and to project an image of being smarter than everyone else to keep the dollars flowing. https://www.bloomberg.com/news/articles/2017-09-19/google-s-ai-boss-blasts-musk-s-scare-tactics-on-machine-takeover
‘Distributed ledgers are useful technology, just like banks. As they become a larger part of finance, the temptation to abuse them will be just as great. History instructs that no governance is perfect, and humans are reliably awful.’ https://www.economist.com/blogs/freeexchange/2017/09/not-so-novel
Our cells are full of organic nanomachines which support our most basic life functions, and since Richard Feynman’s 1959 lecture on the subject, it has been recognized that this was proof of concept that fully synthetic nanomachines were also feasible. Yet only recently have scientists managed to build crude but functional nanomachines in labs. http://blogs.sciencemag.org/pipeline/archives/2017/09/25/building-our-own-molecular-machines
Seeing as how so many people vote on the basis of falsehoods, prejudice, or party affiliation and commonly vote against their own self-interests without realizing it, might democracy benefit if machines tried to reason with voters one-on-one before elections? http://www.bbc.com/news/uk-politics-40860937
Chinese scientists have edited the genomes of human embryos to fix single-point DNA mutations (yes, this appeared in a peer-reviewed journal). http://www.bbc.com/news/health-41386849
Two years ago, Russia and Turkey were at each other’s throats over the latter’s shootdown of the former’s attack plane, and there was talk of all NATO being dragged into war over it. Recently, Turkey turned its back on the alliance to buy an advanced antiaircraft system from Russia. https://www.nytimes.com/2017/09/12/world/europe/turkey-russia-missile-deal.html
Britain is repainting its Challenger tanks with a camouflage scheme similar to what it had for its West Berlin-based tank unit in 1982. What’s old is new, and visual concealment methods don’t seem to have improved in 35 years. https://warisboring.com/new-urban-camo-wont-save-british-tanks/
A video that clearly and simply describes the operation of Mazda’s new, high-efficiency gas engine, which operates like a diesel part of the time. https://youtu.be/9KhzMGbQXmY
I got my hands on several years’ worth of Consumer Reports Buying Guides, and thought it would be useful to compare the 2006 and 2016 editions to broadly examine how consumer technologies have changed and stayed the same. Incidentally, the 2006 Guide is interesting in its own right since it provides a snapshot of a moment in time when a number of transitional (and now largely forgotten) consumer technologies were in use.
Comparing the Guides at a gross level, I first note that the total number of product types listed in the respective Tables of Contents declined from 46 to 34 from 2006 to 2016. Some of these deletions were clearly just the results of editorial decisions (ex – mattresses), but some deletions owed to entire technologies going obsolete (ex – PDAs). Here’s a roundup of those in the second category:
DVD players (Totally obsolete format, and the audiovisual quality difference among the Blu-Ray player models that succeeded them is so negligible that maybe Consumer Reports realized it wasn’t worth measuring the differences anymore)
MP3 players (Arguably, there’s still a niche role for small, cheap, clip-on MP3 players for people to wear while exercising, but that’s it. Smartphones have replaced MP3 players in all other roles. The classic iPod was discontinued in 2013, and the iPod Nano and Shuffle were discontinued last month.)
Cell phones (AKA “dumb phones.” The price difference between a cheap smartphone and a 2006-era clamshell phone is so small and the capabilities difference so great that it makes no sense at all to buy the latter.)
Consumer Reports recommended the Samsung MM-A700 in 2006
Cordless phones
PDAs (Made obsolete by smartphones and tablets)
Scanners (Standalone image and analog film scanners. These were made obsolete by printer-scanner-copier combo machines and by the death of 35mm film cameras.)
Here’s a list of new product types added to the Table of Contents between 2006 and 2016, thanks to advances in technology and not editorial choices:
Smartphones
Sound bars
Streaming Media Players (ex – Roku box)
Tablets
As an aside, here are my predictions for new product types that will appear in the 2026 Consumer Reports Buying Guide:
4K Ultra HD players (Note: 8K players will also be commercially available, but they might not be popular enough to warrant a Consumer Reports review)
Virtual/Augmented Reality Glasses
All-in-one Personal Assistant AI systems (close to the technology shown in the movie Her)
Streaming Game Consoles (resurrection of the OnLive concept)–it’s also possible this capability could be standard on future Streaming Media Players
Single device (perhaps resembling a mirrorless camera) that merges camcorders, D-SLRs, and larger standalone digital cameras. This would fill the gap between smartphone cameras and professional-level cameras.
It’s also interesting to look at how technology has (not) changed within Consumer Reports product types conserved from 2006 to 2016:
Camcorders. Most of the 2006 models still used analog tapes or mini-DVDs, and transferring the recordings to computers or the internet was complicated and required intermediary steps and separate devices. The 2016 models all use flash memory sticks and can seamlessly transfer their footage to computers or internet platforms. The 2016 models appear to be significantly smaller as well.
For a brief time, there were camcorders that recorded footage onto internal DVD discs.
Digital cameras. Standalone digital cameras have gotten vastly better, but also less common thanks to the rise of smartphones with built-in cameras. The only reason to buy a standalone digital camera today is to take high-quality artistic photos, which few people have a real need to do. Coincidentally, I bought my first digital camera in 2006–a mid-priced Canon slightly smaller than a can of soda. Its photos, which I still have on my PC hard drive, still look completely sharp and are no worse than photos from today’s best smartphone cameras. Digital cameras are a type of technology that hit the point of being “good enough for all practical purposes” long ago, and picture quality has experienced very little meaningful improvement since. Improvements have happened to peripheral qualities of cameras, such as weight, size, and photo capacity. At some point, meaningful improvements in those dimensions of performance will top out as well.
TV sets. Reading about the profusion of different picture formats and TV designs in 2006 hits home what a transitional time it was for the technology: 480p format, plasma, digital tuners, CRTs, DLPs. Ahhh…brings back memories. Consumers have spoken in the intervening years, however, and 1080p LCD TVs are the standard. Not mentioned in Consumer Reports is the not-entirely-predictable rejection of 3D TVs over the last decade, and a revealed consumer preference for the largest possible TV screen at the lowest possible cost. It turns out people like to keep things simple. I also recall even the best 2006-era digital TVs having problems with motion judder, narrow frontal viewing angles, and problems displaying pure white and black colors (picking a TV model back then meant doing a ton of research and considering several complicated tradeoffs).
DLP TVs were not as thick or as heavy as older CRT TVs, but that wasn’t saying much. They briefly competed with flatscreen TVs based on LCD and plasma technology before being vanquished.
Dishwashers. The Ratings seem to indicate that dishwashers got slightly more energy efficient from 2006-16 (which isn’t surprising considering the DOE raised the energy standards during that period), but that’s it, and the monetized energy savings might be cancelled out by an increase in mean dishwasher prices. The machines haven’t gotten better at cleaning dirty dishes, their cleaning cycles haven’t gotten shorter, and they’re not quieter on average while operating.
Clothes washers. Same deal as dishwashers: Slight improvement in energy efficiency, but that’s about it.
Clothes dryers. Something strange has happened here. “Drying performance” and “Noise” don’t appear to have improved at all in ten years, but average prices have increased by 30 – 50%. I suspect this cost inflation is driven by induced demand for non-value-added features like digital controls, complex permutations of drying cycles that no one ever uses, and the bizarre consumer fetish for stainless steel appliances. Considering dishwashers, clothes washers, and dryers together, we’re reminded of how slowly technology typically improves when it isn’t subject to Moore’s Law.
Autos. This section accounts for almost half the page count in both books, so I don’t have enough time to even attempt a comparison. That being said, I note that “Electric Cars/Plug-In Hybrids” are listed as a subcategory of vehicles in the 2016 Buying Guide, but not in the 2006 Buying Guide.
I had to swing by the local pharmacy last weekend to get a prescription. There was no line, so the trip was mercifully short and efficient. But as usual, I couldn’t help but shake my head at the primitive, labor-intensive nature of the operation: Human beings work behind the counter, tediously putting pills into little orange bottles by hand. The pharmacist gets paid $121,500/yr to “supervise” this pill-pouring and to make sure patients aren’t given combinations of pills that can dangerously interact inside their bodies, even though computer programs that automatically detect such contraindications have existed for many years.
We have self-driving cars and stealth fighters, computing devices improve exponentially in many ways each year, and a billion people have been lifted out of poverty in the last 20 years. Pharmacies, on the other hand, don’t seem to have progressed since the 1980s.
For the life of me, I can’t understand the stagnation. Pharmacies seem ideally suited for automation, and I don’t see why they can’t be replaced with large gumball machines and other off-the-shelf technologies. Just envision the back wall of the pharmacy being covered in a grid of gumball machines, each containing a unique type of pill. Whenever the pharmacy received an order for a prescription, the gumball machine containing the right pills would automatically dispense them down a chute and into an empty prescription bottle. The number and type of pills in the bottle would be confirmed using a camera (the FDA requires all pills to have unique shapes, colors, and imprinted markings), a small scale, and some simple AI visual pattern recognition software to crunch the data. This whole process would be automated. Empty pill bottles would be stored in a detachable rotary clip or something (take inspiration from whatever machines Bayer uses to fill thousands of bottles of aspirin per day). Sticky paper labels would be printed as needed and mechanically attached to the pill bottles.
Every morning, a minimum-wage pharmacy technician would receive boxes of fresh pills from the UPS delivery man and then pour the right pills into the matching gumball machines. Everything would be clearly labeled, but to lower the odds of mistakes even further, the gumball machine globes would have internal cameras and weight scales to scan the pills that were inside of them and to verify the human tech hadn’t mixed things up. (And since the gumball machines would continuously monitor their contents, they’d be able to preemptively order new pills before the old ones ran out.) The pharmacy tech would spend the rest of the day handing pill bottles to customers, verifying customer identities by looking at their photo IDs (especially important for sales of narcotics), and swapping out rotary clips of empty pill bottles. If a customer were unwittingly buying a combination of medications that could harmfully interact inside their body, then the pharmacy computer system would flag the purchase and tell the pharmacy technician to deny them one or the other type of pill.
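The camera-plus-scale verification and the preemptive reordering could be sketched as something like the following. To be clear, this is a toy illustration: the drug names, per-pill weights, stock levels, and the 2% weight tolerance are all invented, and a real system would fuse the scale reading with the camera’s pill-recognition output.

```python
# Hypothetical sketch of a weight-based fill check and a reorder trigger.
# All names and numbers below are invented for illustration.

DISPENSERS = {
    "atorvastatin_20mg": {"unit_weight_mg": 250.0, "stock": 500, "reorder_at": 100},
    "lisinopril_10mg":   {"unit_weight_mg": 120.0, "stock": 80,  "reorder_at": 100},
}

def verify_fill(drug, pills_requested, measured_weight_mg, tolerance=0.02):
    """Confirm the bottle's measured weight matches the expected pill count."""
    unit = DISPENSERS[drug]["unit_weight_mg"]
    expected = pills_requested * unit
    return abs(measured_weight_mg - expected) <= tolerance * expected

def needs_reorder(drug):
    """Preemptively reorder once stock falls to the threshold."""
    d = DISPENSERS[drug]
    return d["stock"] <= d["reorder_at"]
```

A 30-pill fill of the hypothetical 250 mg pills should weigh about 7,500 mg; a bottle 500 mg off would be flagged before it ever reached the counter.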
I’m kind of content to stop there, as automating just those tasks would be a huge improvement over the current way of doing business (one human could do the work currently done by three), but here are some more ideas:
Confirming a customer’s identity before giving them their pills could also be automated by installing a machine at the front counter that would have a front-facing camera and a slot for inserting a photo ID card. The machine would work like the U.S. Customs’ Automated Passport Control machines, and it would use facial recognition algorithms to compare the customer’s face with the face shot on their photo ID. I’ve used the APC machines during overseas trips and never had a problem.
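The comparison step at the heart of such a machine could be sketched as a similarity check between two embedding vectors, assuming some upstream face-recognition model has already reduced the live camera shot and the ID photo to fixed-length vectors. The vectors and the 0.9 threshold below are invented placeholders, not values from any real system.

```python
# Sketch of the face-match step, assuming embeddings are already computed.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def faces_match(live_embedding, id_embedding, threshold=0.9):
    """Accept the customer if the two face embeddings are similar enough."""
    return cosine_similarity(live_embedding, id_embedding) >= threshold
```

The threshold trades false accepts against false rejects; a pharmacy would presumably tune it more conservatively for narcotics pickups.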
The act of physically handing prescription bottles to customers could also be automated with glorified vending machine technology, or a conveyor belt, or a robot grabber arm.
Eighty percent of pharmacy customers are repeat buyers who are already in the computer system and are just picking up a fresh bottle of pills because the old bottle was exhausted. There’s no need for small talk, questions, or verbal information from the pharmacist about this prescription they’ve been taking for months or years. That being true, the level of automation I’ve described would leave pharmacists with a lot of time to twiddle their thumbs during the intervals between the other 20% of customers who need special help (e.g. – first-time customer and not in the patient database, have questions about medications or side effects). Having a pharmacist inside every pharmacy would no longer be financially justified, and instead each pharmacy could install telepresence kiosks (i.e. – a station with a TV, sound speakers, a front-facing camera, and a microphone) through which customers could talk to pharmacists at remote locations. With this technology, one pharmacist could manage multiple pharmacies and keep themselves busy.
An Automated Passport Control machine in use
As far as I can tell, the only recent advances in the pharmacy/pill selling business model have been 1) the sale of prescriptions through the mail and 2) the ability to order refills via phone or Internet. If you choose to physically go into a pharmacy, the experience is the same as it was when I was a kid.
Is there a good reason it has to be the way it is now? I suspect the current business model persists thanks to:
Political lobbying from pharmacists who want to protect their own jobs and salaries from automation (see “The Logic of Collective Action”).
Unfounded fears among laypeople and politicians that automated pharmacies would make mistakes and kill granny by giving her the wrong pills. The best counterargument is to point out that pharmacies staffed by humans also routinely make those same errors. Pharmacists will also probably chime in here to make some vague claim that it’s safer for them to interact with customers than to just have a pharmacy tech or robot arm hand them the pills at the counter.
Fears that automated pharmacies will provide worse customer service. Again, 80% of the time, there’s no need for human interaction since the customer is just refilling a prescription they’ve been using for a long time, so “customer service” doesn’t enter into the equation. It’s entirely plausible that a pharmacist could satisfy the remaining 20% of customer needs through telepresence just as well as he or she would on-site.
High up-front costs of pharmacy machines. OK, I have no experience building pharmacy robots, but my own observations about the state of technology (including simple tech like gumball machines) convince me that there’s no reason these machines should be more expensive than paying for human labor. Even if we assume that each gumball machine costs an exorbitant $1,000, you could still buy 121 of them for the same amount a typical pharmacist would make in a year, and each gumball machine would last for years before breaking. It’s possible that pharmacy machines are unaffordable right now thanks to patents, which is a problem time will soon solve.
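To make the back-of-envelope arithmetic explicit: the salary figure comes from earlier in the post, the $1,000 machine price is the deliberately exorbitant assumption, and the five-year lifetime is my own guess.

```python
# Back-of-envelope cost comparison; the machine price and lifetime are assumptions.
pharmacist_salary = 121_500   # USD per year (figure cited earlier in the post)
machine_cost = 1_000          # USD per gumball-style dispenser (assumed, deliberately high)
machine_lifetime_years = 5    # assumed service life before replacement

machines_per_salary_year = pharmacist_salary // machine_cost   # 121 machines
annualized_machine_cost = machine_cost / machine_lifetime_years  # 200 USD/yr each
```

Even at these pessimistic prices, one year of a pharmacist’s salary buys 121 machines, and the annualized cost of a machine is a rounding error next to human labor.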
Well, at the end of my pharmacy visit, I decided to just ask the pharmacist about this. She said she knew of pharmacy robots, and thought they were put to good use in hospital pharmacies and military bases, but they weren’t suited to small retail pharmacies like hers because they were too expensive and took up too much physical space. I would have talked to her longer, but there was a long line of impatient people behind me waiting to be handed their pill bottles.
My idea: Solar/battery powered, self-adjusting Venetian blinds
The slats would have paper-thin, light-colored, flexible solar panels on the sides facing the outside of the house. They wouldn’t need to be efficient at converting sunlight to electricity. The sides facing the inside of the house would be white.
The headrail would contain a tiny electric motor that could slowly open or close the blinds; a replaceable battery; a simple photosensor; a thermometer; and a small computer with WiFi.
The solar panels on the outward-facing sides of the slats would harvest direct and ambient sunlight to recharge the battery.
The computer would be networked with other sensors in the house, and would know 1) when humans were inside the house, 2) when the heating and cooling systems were active, and 3) what the temperature was outside the house (this could be determined by checking internet weather sites).
Based on all of those data, the Venetian blinds would automatically open or close themselves to attenuate the amount of sunlight shining through the windows. Since sunlight heats up objects, controlling the sunlight would also control the internal house temperature.
During hot summer days, the blinds would completely close to block sunlight from entering the house, keeping it cooler inside. During cold winter days, the blinds would open.
If the blinds were trying to maximize the amount of sunlight entering a house, they could continuously adjust the angling of the slats over the course of a single day to match the Sun’s changing position in the sky.
The photosensors and thermometers in each “Smart Venetian Blind” could also help identify window leaks and windows that were accidentally left open.
The blinds could also be used for home security if programmed to completely close each night, preventing potential burglars from looking inside the house. The homeowner could use a smartphone app to control all the blinds and set this as a default preference. Sudden changes in temperature at a particular window during periods where no one was in the house could also be registered as possible break-ins.
Humans could, at any time, manually adjust the Venetian blinds by pulling on the cord connected to the headrail. The computer would look for patterns in this behavior to determine if any user preferences existed, and if so, the blinds would try to incorporate them into the standard open/close daily routine.
The Smart Venetian Blinds could function in a standalone manner, but ideally, they would be installed in houses that had other “Smart” features. All of the devices would share data and work together for maximum efficiency.
Every month, the homeowner would get a short, simple email that estimated how much money the blinds had saved them in heating and cooling costs. Data on the blinds’ lifetime ROI would also be provided.
Smart Venetian Blinds with vertical slats could be installed over large windows and glass doors.
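The core open/close logic described above might look something like this. The temperature thresholds, slat angles, and sensor inputs are invented for illustration; a real product would learn these from user behavior as described earlier.

```python
# Minimal sketch of the blinds' decision logic; all thresholds are assumed.

def slat_position(outdoor_temp_c, cooling_on, heating_on, occupied):
    """Return a slat angle: 0 = fully closed, 90 = fully open."""
    if not occupied:
        return 0      # closed for privacy/security when the house is empty
    if cooling_on or outdoor_temp_c > 27:
        return 0      # hot day: block sunlight to reduce solar heat gain
    if heating_on or outdoor_temp_c < 10:
        return 90     # cold day: admit sunlight for free solar heating
    return 45         # mild conditions: neutral daylighting
```

The same function is where learned user preferences would be folded in, overriding the defaults for particular windows or times of day.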
UPDATE (6/28/2018): A company called “SolarGaps” beat me to it! Looks like they’ve been in business since early 2017. https://youtu.be/whrroUUWCYo
‘[Although] I certainly believe that any member of our highly digital society should be familiar with how these [software] platforms work, universal code literacy won’t solve our employment crisis any more than the universal ability to read and write would result in a full-employment economy of book publishing.’
It’s a little unclear what “employment crisis” the author is talking about since the U.S. unemployment rate is a very healthy 4.4%, but it probably refers to three things scattered throughout the article:
Skills obsolescence among older workers. As people age, the skills they learned in college and early in their careers get less useful because technologies and processes change, but the people fail to adapt. Accordingly, their value as employees declines, along with their pay and job security. This phenomenon is nothing new: in Prehistoric times, the same “career arc” existed, with people becoming progressively less useful as hunters and parents upon reaching middle age. Older workers faced the same problems in more recent historical eras when work entailed farming and then factory labor. That being the case, does it make sense to describe today’s skills obsolescence as a “crisis”? “Just the way things are” is more fitting.
Stagnation of real median wages in the U.S. Adjusted for inflation, the median American household wage has barely increased since the 1970s. First, this isn’t in the strictest sense of the word an “employment crisis” since it relates to wages and not the availability of employment. “Pay crisis” might be a better term. Second, much of the stagnation in median pay evaporates once you consider that the average American household has steadily shrunk since the 1970s: Single-parent households have become more common, and in such families, there is only one breadwinner. Knowing whether someone is talking about median wages per worker or median wages per household is crucial. Third, this only counts as a crisis if you ignore the fact that many things have gotten cheaper and/or better since the 1970s (cars, personal electronics, many forms of entertainment, housing except in some cities), so the same salary can support a higher standard of living now. Most of that owes to technological improvement.
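A toy illustration of the household-size effect, with invented round numbers rather than real Census data: if household income is flat while households shrink, income per person still rises.

```python
# Hypothetical figures to illustrate the household-size effect; not real data.
income_1970s = 60_000   # median household income, constant dollars (invented)
income_now   = 61_000   # "stagnant" median household income today (invented)
size_1970s = 3.1        # mean persons per household, then (invented)
size_now   = 2.5        # mean persons per household, now (invented)

per_capita_then = income_1970s / size_1970s   # about 19,355 per person
per_capita_now  = income_now / size_now       # 24,400 per person
```

With these made-up numbers, household income rises under 2% while income per person rises about 26%, which is why per-worker vs. per-household framing matters so much.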
Note the data stop in 2012, when the U.S. economy was still recovering from the Great Recession
Automation of human jobs. Towards the end of the article, it becomes clear this is what the author is really thinking about. He cites research done by academics Erik Brynjolfsson and Andrew McAfee as proof that machines have been hollowing out the middle class and reducing incomes and the number of jobs. I didn’t look at the source material, but the article says they made those comments in 2013, which means their analysis was probably based on economic data that stopped in 2012, in the miserable hangover of the Great Recession when people were openly questioning whether the economy would ever get back on its feet. I remember it well, and specifically, I remember futurists citing Brynjolfsson and McAfee’s research as proof that the job automation inflection point had been reached during the Great Recession, explaining why the unemployment rate was staying stubbornly high and would never go down again. Well, they were wrong, as today’s healthy unemployment numbers and rising real wages demonstrate. So if the article’s author thinks that job automation is causing “our employment crisis,” then he has failed to present proof the latter exists at all.
For the record, I do believe that machines will someday put the vast majority of humans–perhaps 100% of us–out of gainful work. When they finally do that, we will have an “employment crisis.” However, I have yet to see proof that machines have started destroying jobs faster than new ones are created, so speaking of an automation-driven “employment crisis” should be done in the future tense (which the author doesn’t). Right now, “our employment crisis,” like so many other “crises” reported in the media, simply doesn’t exist.
Bloomberg New Energy Finance just released an analysis, “Electric Vehicle Outlook 2017,” that estimates all-electric cars will get as cheap as traditional gas-powered cars sometime between 2025 and 2030. Importantly, the estimate assumes that government subsidies for electric cars are discontinued by that time, so the future price figures are market rates.
Bloomberg thinks electric car prices will drop thanks to price-lowering economies of scale and to competition among carmakers. It doesn’t assume any technological breakthroughs like new batteries that can store twice as much energy. This is good: making predictions about the future that hinge on a technological breakthrough that may or may not actually happen is always a bad idea, and will get you thinking something like the Singularity is right around the corner.
The Bloomberg Executive Summary is here: https://data.bloomberglp.com/bnef/sites/14/2017/07/BNEF_EVO_2017_ExecutiveSummary.pdf
Interestingly, the analysis also concludes that better electric cars will make plug-in hybrids obsolete since the latter are more mechanically complex and hence more expensive. A shortage of at-home car charging stations will also limit the potential customer base for electric cars, and cause electric cars as a share of the total passenger vehicle fleet to stabilize at about 50% by 2040. I wish I had access to the full report, so I can only guess that the at-home car charging problem will be most acute for poorer people who can’t afford to install them or who live in rental properties that lack them.
Futurist, transhumanist, and singularitarian Ray Kurzweil just did a short interview regarding his views on the future of jobs and some other topics. You can see it here:
My thoughts:
0:12 – Kurzweil nods to the Boring Truth of our age: The clash of ideologies ended in the 20th century (except for the ongoing sideshow that is non-viable Islamism vs. everyone else), and there’s consensus among academics and leaders in the industrialized world that having a mixed economy and some social welfare programs is close to the optimal setup for a country. In the West, conservatives and liberals push and pull, but within narrow boundaries. Similarly, the new political faultlines pit “nationalists” against “globalists,” but no one in the former camp wants to completely forsake trade. There really is a lot less drama today than the news media makes you think.
1:50 – The reasons for the rise of welfare states in the early 20th century are more complex than that, but Kurzweil makes a good point that they wouldn’t have been sustainable had there not been the economic surpluses made possible by Industrialization. If you take the long view like Kurzweil does, and you assume that technology keeps improving, the concomitant economic surpluses keep growing, and social welfare programs grow in an intelligent manner, then a future where all humans are on the dole and few if any people work is indeed the logical endpoint.
3:06 – Uh-oh. Kurzweil makes predictions that will be true in “a decade.” So by 2027, 3D printers will be able to make “at low cost, all of the physical things we need,” including large Lego-like pieces of building materials that you will be able to “snap together” to make your own house. Vertical farms will also be making “very high-quality” food at “very low prices” by 2027. Yikes. I’m skeptical of the 3D printed house prediction because the construction industry and consumers have failed to even embrace modular buildings (there’s a great report on this here: http://www.mckinsey.com/industries/capital-projects-and-infrastructure/our-insights/reinventing-construction-through-a-productivity-revolution). The notion that something even more radical like 3D printed Lego houses will become common in just ten years bucks the trend way too much. Also, I don’t see how an average person in 2027 will be able to assemble his or her house from giant Legos considering: 1) the need to pour solid concrete foundations will still exist, 2) local governments are highly unlikely to relax building codes to allow unlicensed, inexperienced people to build houses, and 3) few people have the skills to even put such Lego pieces together, particularly with enough accuracy to ensure the surfaces are truly level and plumb. Maybe what Kurzweil is trying to say is that, in 2027, there will be some construction companies that will specialize in building cheap, prefabricated houses comprised partly of 3D printed components. Plausible, but only a tiny bit different from how things are today. As for vertical farms, they’ve proven to be much more expensive to run than normal “flat” farms and haven’t caught on thanks to basic economics. If Kurzweil knows of some way that they can make food at “very low prices” in just ten years, then he should quit his job at Google and pursue it full-time since it will be worth billions of dollars. 
And he should also ask himself whether it would be more efficient and profitable to use that secret method to improve “flat” farms. For example, if Kurzweil thinks vertical farm costs will drop thanks to cheap, 3D printed building techniques, then won’t the same techniques also make it possible to cheaply build greenhouses over standard cropfields? If farm robots will eliminate labor costs at vertical farms, won’t they do the same at flat farms? Why would the vertical farms benefit more?
4:09 – Kurzweil observes (as he has in the past) that most of the Earth’s surface is sparsely populated, meaning there is ample room for humans to spread out. While true, it’s important to remember the reasons why: Beachfront property in Florida is more aesthetically appealing and provides more opportunities for recreation than a plot of land in the middle of Nebraska. The climate in San Francisco is more conducive to human life than that of Minot, North Dakota. Humans are also social animals (particularly when young), meaning they like to live in places where there are other people. The high (and still rising) rates of suicide and substance abuse in rural America attest to the ill effects of isolation and lack of varied things to do. He doesn’t say it in this interview, but I know from his books that his response would be something like “future technologies will substitute for all that,” meaning virtual reality will be as real as The Matrix someday, so hanging out on virtual reality Miami Beach while you’re actually lying in a VR pod in your living room will feel as real as hanging out on the real Miami Beach with your actual body. Whether or not sufficiently advanced brain-computer interfaces can be made to do that is an open question, but for sure, I doubt the technology will exist by 2027, or even 2057.
5:00 – Kurzweil predicts that, by 2027, virtual reality and “virtual avatars” will be so good that many people won’t need to live in cities anymore, and he seems to suggest there will be a detectable change to the global urbanization trend. Thanks to virtual reality, people will be able to work and play from anywhere, so they’ll choose to live outside of cities to save money. I think this is a prime example of a prediction that Kurzweil can’t possibly get wrong, and that is also almost useless. As he admits around this part of the interview, many of his colleagues at Google already work remotely, and most of us know someone who works from home. It doesn’t take a futurist or economist to see that the practice is getting more popular, so it’s a simple assumption that it will be more common by, say, 2027. Technologies related to computing, videoconferencing, and virtual reality are all obviously improving, and it’s just common sense that they will make it easier for people to work remotely. And while the number of people living in cities is growing, so is the number of people living outside of them in the suburbs and exurbs. By 2027, the suburban/exurban population could be growing faster than the truly urban population, which Kurzweil could cite as proof his prediction was right. So on close analysis, Kurzweil’s prediction is nothing more than a simple synthesis of three long-running trends in America that most adults are already aware of through direct experience. It will be almost impossible for him to be wrong, but the prediction about the future is so general and so incrementally different from today that it has no real value.
6:35 – He says we will use 3D printers to make clothes, without giving a date for when the prediction will come to pass (by 2027?). Regardless of when or if it happens, this has always struck me as a useless application of 3D printers. Today, I can buy a pack of six new cotton undershirts from Wal-Mart for $15, and they will last for years before falling apart. I can go to a local thrift store and buy durable, surprisingly good-looking used clothes that are 75% discounted from their original prices, and which will also last me many years. I can go on Craigslist right now and find people in my area who are giving away clothes for free. There is no evidence at all that our existing textile technology is deficient at making clothes, or that our “standards of living” will meaningfully improve if we started making clothes with futuristic 3D printers. Even if we assume 3D printers are so superior at making clothes that they’re (almost) “free,” how much better is that than the present condition? Clothes are already free or trivially cheap. Lowering the price further might free up enough money for you to buy a slightly bigger morning coffee at Starbucks, but that’s it. The only real beneficiaries would be fashion-obsessed people who shudder at the thought of wearing the same outfit twice and want their 3D printer to spit out some zany new creation each morning. Yay for the coming empowerment of vain people.
8:10 – Kurzweil cites changes to the nature of jobs over the last 100 years (workforce transformed from hard labor on the farm and factory to doing computer stuff in office buildings) as proof that there will always be jobs for humans in the future. While humans have always managed to move up the skills ladder and create new, gainful work for themselves as machines took over the less skilled jobs, there’s no reason to think the trend will continue forever. His argument also gets muddled when he equates people in college with people who have jobs. Studying poetry or art in college isn’t the same thing as being gainfully employed. Moreover, it’s a common fate for such students to have problems finding employment after college, and for them to settle for jobs that are unsatisfactory because they pay little, or because they have nothing to do with what they studied (think of the waitress with the Literature B.A.). I think it’s much safer to predict that “Humans in the future will be able to find things to do with their days but they won’t necessarily get paid much money or any money at all for what they do, and automation will be good overall for humans since it will eliminate unpleasant drudge work.”