Roundup of interesting internet articles, August 2017 edition

  1. In a parallel universe, it’s still the 1990s and there’s a Moore’s Law for the number of random pipes and hoses strewn across the landscape.
    http://www.boredpanda.com/scifi-girl-robot-traveling-artbook-simon-stalenhag/
  2. China has rolled out (pun intended) a copy of the U.S. Stryker armored vehicle.
    http://www.janes.com/article/73292/norinco-rolls-out-vp10-8x8-vehicle-variants
  3. Here’s a needlessly long-winded piece about the long, long history of Christian theologians, cult leaders and even Popes wrongly predicting Biblical Doomsday.
    https://www.theguardian.com/news/2017/aug/25/yearning-for-the-end-of-the-world
  4. A survey of 150 people shows about 1/3 of them could hand-draw common corporate logos (for companies like Burger King and Starbucks) from memory, with disturbing accuracy. (I estimate it takes 100 – 200 repetitions of the same commercial before I become cognizant of whatever is being advertised).
    http://www.dailymail.co.uk/sciencetech/article-4813514/Only-16-people-correctly-recall-famous-logos.html 
  5. Britain is repainting its Challenger tanks with a camouflage scheme similar to what it had for its West Berlin-based tank unit in 1982. What’s old is new, and visual concealment methods don’t seem to have improved in 35 years.
    https://warisboring.com/new-urban-camo-wont-save-british-tanks/
  6. Medical micromachines were successfully used on lab rats to deliver antibiotic loads. I don’t understand why medical nanomachines get all the attention and hype when micromachines are more technically plausible and could do many of the same things.
    http://bigthink.com/design-for-good/for-the-first-time-tiny-robots-treat-infection-in-a-living-organism
  7. An analysis of the accuracy of Gartner Hype Cycles from 2000 to 2016 is an informative catalog of failures.
    https://www.linkedin.com/pulse/8-lessons-from-20-years-hype-cycles-michael-mullany
  8. Here’s an outstanding rebuttal to Kevin Kelly’s recent “Myth of a Superhuman AI” article. I’d never heard of the author before, and his blog only has three entries, but he’s written a very thorough and convincing treatise, all the more impressive since he didn’t write it in his native language (he’s Finnish).
    https://hypermagicalultraomnipotence.wordpress.com/2017/07/26/there-are-no-free-lunches-but-organic-lunches-are-super-expensive-why-the-tradeoffs-constraining-human-cognition-do-not-limit-artificial-superintelligences/
  9. Geneticists have uncovered the chemical steps through which magic mushrooms produce their hallucinogenic agent, psilocybin. Large scale industrial production of it could be possible, because the world badly needs more drugs.
    http://cen.acs.org/articles/95/web/2017/08/Magic-mushroomenzyme-mystery-solved.html
  10. A reality check for lab grown meat.
    http://gizmodo.com/behind-the-hype-of-lab-grown-meat-1797383294
  11. A video that clearly and simply describes the operation of Mazda’s new, high-efficiency gas engine, which operates like a diesel part of the time.
    https://youtu.be/9KhzMGbQXmY
  12. Scientists and inventors are dispensable, but great artists, writers and musicians are not. (A somewhat humbling thing to remember when debating the usefulness of a STEM vs. humanities education.)
    http://www.rationaloptimist.com/blog/tim-harford-review/#.WX-QCkibyHc.facebook

A Tale of Two Buying Guides


I got my hands on several years’ worth of Consumer Reports Buying Guides, and thought it would be useful to compare the 2006 and 2016 editions to broadly examine how consumer technologies have changed and stayed the same. Incidentally, the 2006 Guide is interesting in its own right since it provides a snapshot of a moment in time when a number of transitional (and now largely forgotten) consumer technologies were in use.

Comparing the Guides at a gross level, I first note that the total number of product types listed in the respective Tables of Contents declined from 46 to 34 from 2006 to 2016. Some of these deletions were clearly just the result of editorial decisions (ex – mattresses), but some deletions owed to entire technologies going obsolete (ex – PDAs). Here’s a roundup of those in the second category:

  • DVD players (Totally obsolete format, and the audiovisual quality difference among the Blu-Ray player models that succeeded them is so negligible that maybe Consumer Reports realized it wasn’t worth measuring the differences anymore)
  • MP3 players (Arguably, there’s still a niche role for small, cheap, clip-on MP3 players for people to wear while exercising, but that’s it. Smartphones have replaced MP3 players in all other roles. The classic iPod was discontinued in 2014, and the iPod Nano and Shuffle were discontinued last month.)
  • Cell phones (AKA  “dumb phones.” The price difference between a cheap smartphone and a 2006-era clamshell phone is so small and the capabilities difference so great that it makes no sense at all to buy the latter.)

    Consumer Reports recommended the Samsung MM-A700 in 2006
  • Cordless phones
  • PDAs (Made obsolete by smartphones and tablets)
  • Scanners (Standalone image and analog film scanners. These were made obsolete by printer-scanner-copier combo machines and by the death of 35mm film cameras.)

Here’s a list of new product types added to the Table of Contents between 2006 and 2016, thanks to advances in technology and not editorial choices:

  • Smartphones
  • Sound bars
  • Streaming Media Players (ex – Roku box)
  • Tablets

As an aside, here are my predictions for new product types that will appear in the 2026 Consumer Reports Buying Guide:

  • 4K Ultra HD players (Note: 8K players will also be commercially available, but they might not be popular enough to warrant a Consumer Reports review)
  • Virtual/Augmented Reality Glasses
  • All-in-one Personal Assistant AI systems (close to the technology shown in the movie Her)
  • Streaming Game Consoles (resurrection of the OnLive concept)–it’s also possible this capability could be standard on future Streaming Media Players
  • Single device (perhaps resembling a mirrorless camera) that merges camcorders, D-SLRs, and larger standalone digital cameras. This would fill the gap between smartphone cameras and professional-level cameras.

It’s also interesting to look at how technology has (not) changed within Consumer Reports product types conserved from 2006 to 2016:

  • Camcorders. Most of the 2006 models still used analog tapes or mini-DVDs, and transferring the recordings to computers or the internet was complicated and required intermediary steps and separate devices. The 2016 models all use flash memory sticks and can seamlessly transfer their footage to computers or internet platforms. The 2016 models appear to be significantly smaller as well.

    For a brief time, there were camcorders that recorded footage onto internal DVD discs.
  • Digital cameras. Standalone digital cameras have gotten vastly better, but also less common thanks to the rise of smartphones with built-in cameras. The only reason to buy a standalone digital camera today is to take high-quality artistic photos, which few people have a real need to do. Coincidentally, I bought my first digital camera in 2006–a mid-priced Canon slightly smaller than a can of soda. Its photos, which I still have on my PC hard drive, still look completely sharp and are no worse than photos from today’s best smartphone cameras. Digital cameras are a type of technology that hit the point of being “good enough for all practical purposes” long ago, and picture quality has experienced very little meaningful improvement since. Improvements have happened to peripheral qualities of cameras, such as weight, size, and photo capacity. At some point, meaningful improvements in those dimensions of performance will top out as well.
  • TV sets. Reading about the profusion of different picture formats and TV designs in 2006 hits home what a transitional time it was for the technology: 480p format, plasma, digital tuners, CRTs, DLPs. Ahhh…brings back memories. Consumers have spoken in the intervening years, however, and 1080p LCD TVs are the standard. Not mentioned in Consumer Reports is the not-entirely-predictable rejection of 3D TVs over the last decade, and a revealed consumer preference for the largest possible TV screen at the lowest possible cost. It turns out people like to keep things simple. I also recall even the best 2006-era digital TVs having problems with motion judder and narrow frontal viewing angles, and struggling to display pure whites and blacks (picking a TV model back then meant doing a ton of research and considering several complicated tradeoffs).

    DLP TVs were not as thick or as heavy as older CRT TVs, but that wasn’t saying much. They briefly competed with flatscreen TVs based on LCD and plasma technology before being vanquished.
  • Dishwashers. The Ratings seem to indicate that dishwashers got slightly more energy efficient from 2006-16 (which isn’t surprising considering the DOE raised the energy standards during that period), but that’s it, and the monetized energy savings might be cancelled out by an increase in mean dishwasher prices. The machines haven’t gotten better at cleaning dirty dishes, their cleaning cycles haven’t gotten shorter, and they’re not quieter on average while operating.
  • Clothes washers. Same deal as dishwashers: Slight improvement in energy efficiency, but that’s about it.
  • Clothes dryers. Something strange has happened here. “Drying performance” and “Noise” don’t appear to have improved at all in ten years, but average prices have increased by 30 – 50%. I suspect this cost inflation is driven by induced demand for non-value-added features like digital controls, complex permutations of drying cycles that no one ever uses, and the bizarre consumer fetish for stainless steel appliances. Considering dishwashers, clothes washers, and dryers together, we’re reminded of how slowly technology typically improves when it isn’t subject to Moore’s Law.
  • Autos. This section comprises almost half of the page count in both books, so I don’t have enough time to even attempt a comparison. That being said, I note that “Electric Cars/Plug-In Hybrids” are listed as a subcategory of vehicles in the 2016 Buying Guide, but not in the 2006 Buying Guide.

Links

  1. https://www.cnet.com/news/why-would-anyone-want-an-mp3-player-today/
  2. http://dishwashers.reviewed.com/features/why-obamas-dishwasher-efficiency-crusade-makes-sense
  3. https://www.cnet.com/news/rip-rear-projection-tv/

Why aren’t pharmacies automated?

I had to swing by the local pharmacy last weekend to get a prescription. There was no line, so the trip was mercifully short and efficient. But as usual, I couldn’t help but shake my head at the primitive, labor-intensive nature of the operation: Human beings work behind the counter, tediously putting pills into little orange bottles by hand. The pharmacist gets paid $121,500/yr to “supervise” this pill-pouring and to make sure patients aren’t given combinations of pills that can dangerously interact inside their bodies, even though computer programs that automatically detect such contraindications have existed for many years.

We have self-driving cars and stealth fighters, computing devices improve exponentially in many ways each year, and a billion people have been lifted out of poverty in the last 20 years. Pharmacies, on the other hand, don’t seem to have progressed since the 1980s.

For the life of me, I can’t understand the stagnation. Pharmacies seem ideally suited for automation, and I don’t see why they can’t be replaced with large gumball machines and other off-the-shelf technologies. Just envision the back wall of the pharmacy being covered in a grid of gumball machines, each containing a unique type of pill. Whenever the pharmacy received an order for a prescription, the gumball machine containing the right pills would automatically dispense them down a chute and into an empty prescription bottle. The number and type of pills in the bottle would be confirmed using a camera (the FDA requires all pills to have unique shapes, colors, and imprinted markings), a small scale, and some simple AI visual pattern recognition software to crunch the data. This whole process would be automated. Empty pill bottles would be stored in a detachable rotary clip or something (take inspiration from whatever machines Bayer uses to fill thousands of bottles of aspirin per day). Sticky paper labels would be printed as needed and mechanically attached to the pill bottles.

Every morning, a minimum-wage pharmacy technician would receive boxes of fresh pills from the UPS delivery man and then pour the right pills into the matching gumball machines. Everything would be clearly labeled, but to lower the odds of mistakes even further, the gumball machine globes would have internal cameras and weight scales to scan the pills that were inside of them and to verify the human tech hadn’t mixed things up. (And since the gumball machines would continuously monitor their contents, they’d be able to preemptively order new pills before the old ones ran out.) The pharmacy tech would spend the rest of the day handing pill bottles to customers, verifying customer identities by looking at their photo IDs (especially important for sales of narcotics), and swapping out rotary clips of empty pill bottles. If a customer were unwittingly buying a combination of medications that could harmfully interact inside their body, then the pharmacy computer system would flag the purchase and tell the pharmacy technician to deny them one or the other type of pill.
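
Just to show how simple the verification logic could be, here’s a minimal sketch of the dispense-and-check step described above. Everything in it is hypothetical (the function names, the tolerance value, the idea of cross-checking a camera’s pill count against a scale reading); it’s an illustration of the concept, not a description of any real pharmacy robot.

```python
# Hypothetical sketch of the dispense-and-verify step described above.
# The "camera count" and "scale reading" inputs are stand-ins; no real
# pharmacy hardware or software is being referenced.

WEIGHT_TOLERANCE = 0.05  # accept up to 5% scale error


def verify_fill(expected_count, pill_weight_grams, camera_count, bottle_weight_grams):
    """Cross-check the camera's pill count against the scale reading."""
    expected_weight = expected_count * pill_weight_grams
    weight_ok = abs(bottle_weight_grams - expected_weight) <= WEIGHT_TOLERANCE * expected_weight
    count_ok = camera_count == expected_count
    return weight_ok and count_ok


# Example: a prescription for 30 tablets of a drug weighing 0.5 g per pill.
if verify_fill(expected_count=30, pill_weight_grams=0.5,
               camera_count=30, bottle_weight_grams=15.02):
    print("Fill verified: label the bottle and release it")
else:
    print("Mismatch: flag the order for the pharmacy technician")
```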

I’m kind of content to stop there, as automating just those tasks would be a huge improvement over the current way of doing business (one human could do the work currently done by three), but here are some more ideas:

  • Confirming a customer’s identity before giving them their pills could also be automated by installing a machine at the front counter that would have a front-facing camera and a slot for inserting a photo ID card. The machine would work like the U.S. Customs’ Automated Passport Control machines, and it would use facial recognition algorithms to compare the customer’s face with the face shot on their photo ID. I’ve used the APC machines during overseas trips and never had a problem.
  • The act of physically handing prescription bottles to customers could also be automated with glorified vending machine technology, or a conveyor belt, or a robot grabber arm.
  • Eighty percent of pharmacy customers are repeat buyers who are already in the computer system and are just picking up a fresh bottle of pills because the old bottle was exhausted. There’s no need for small talk, questions, or verbal information from the pharmacist about this prescription they’ve been taking for months or years. That being true, the level of automation I’ve described would leave pharmacists with a lot of time to twiddle their thumbs during the intervals between the other 20% of customers who need special help (e.g. – first-time customers who aren’t yet in the patient database, or people with questions about medications or side effects). Having a pharmacist inside every pharmacy would no longer be financially justified, and instead each pharmacy could install telepresence kiosks (i.e. – a station with a TV, sound speakers, a front-facing camera, and a microphone) through which customers could talk to pharmacists at remote locations. With this technology, one pharmacist could manage multiple pharmacies and keep themselves busy.
An Automated Passport Control machine in use

As far as I can tell, the only recent advances in the pharmacy/pill selling business model have been 1) the sale of prescriptions through the mail and 2) the ability to order refills via phone or Internet. If you choose to physically go into a pharmacy, the experience is the same as it was when I was a kid.

Is there a good reason it has to be the way it is now? I suspect the current business model persists thanks to:

  1. Political lobbying from pharmacists who want to protect their own jobs and salaries from automation (see “The Logic of Collective Action”).
  2. Unfounded fears among laypeople and politicians that automated pharmacies would make mistakes and kill granny by giving her the wrong pills. The best counterargument is to point out that pharmacies staffed by humans also routinely make those same errors. Pharmacists will also probably chime in here to make some vague claim that it’s safer for them to interact with customers than to just have a pharmacy tech or robot arm hand them the pills at the counter.
  3. Fears that automated pharmacies will provide worse customer service. Again, 80% of the time, there’s no need for human interaction since the customer is just refilling a prescription they’ve been using for a long time, so “customer service” doesn’t enter into the equation. It’s entirely plausible that a pharmacist could satisfy the remaining 20% of customer needs through telepresence just as well as he or she would on-site.
  4. High up-front costs of pharmacy machines. OK, I have no experience building pharmacy robots, but my own observations about the state of technology (including simple tech like gumball machines) convince me that there’s no reason these machines should be more expensive than paying for human labor. Even if we assume that each gumball machine costs an exorbitant $1,000, you could still buy 121 of them for the same amount a typical pharmacist would make in a year, and each gumball machine would last for years before breaking. It’s possible that pharmacy machines are unaffordable right now thanks to patents, which is a problem time will soon solve.

Well, at the end of my pharmacy visit, I decided to just ask the pharmacist about this. She said she knew of pharmacy robots, and thought they were put to good use in hospital pharmacies and military bases, but they weren’t suited to small retail pharmacies like hers because they were too expensive and took up too much physical space. I would have talked to her longer, but there was a long line of impatient people behind me waiting to be handed their pill bottles.

Random idea: “Smart Venetian Blinds”

A typical Venetian blind

My idea: Solar/battery powered, self-adjusting Venetian blinds

  • The slats would have paper-thin, light-colored, flexible solar panels on the sides facing the outside of the house. They wouldn’t need to be efficient at converting sunlight to electricity. The sides facing the inside of the house would be white.
  • The headrail would contain a tiny electric motor that could slowly open or close the blinds; a replaceable battery; a simple photosensor; a thermometer; and a small computer with WiFi.
  • The solar panels on the outward-facing sides of the slats would harvest direct and ambient sunlight to recharge the battery.
  • The computer would be networked with other sensors in the house, and would know 1) when humans were inside the house, 2) when the heating and cooling systems were active, and 3) what the temperature was outside the house (this could be determined by checking internet weather sites).
  • Based on all of those data, the Venetian blinds would automatically open or close themselves to attenuate the amount of sunlight shining through the windows. Since sunlight heats up objects, controlling the sunlight would also control the internal house temperature. (A rough sketch of this decision logic appears after this list.)
  • During hot summer days, the blinds would completely close to block sunlight from entering the house, keeping it cooler inside. During cold winter days, the blinds would open.
  • If the blinds were trying to maximize the amount of sunlight entering a house, they could continuously adjust the angling of the slats over the course of a single day to match the Sun’s changing position in the sky.
  • The photosensors and thermometers in each “Smart Venetian Blind” could also help identify window leaks and windows that were accidentally left open.
  • The blinds could also be used for home security if programmed to completely close each night, preventing potential burglars from looking inside the house. The homeowner could use a smartphone app to control all the blinds and set this as a default preference. Sudden changes in temperature at a particular window during periods when no one was in the house could also be registered as possible break-ins.
  • Humans could, at any time, manually adjust the Venetian blinds by pulling on the cord connected to the headrail. The computer would look for patterns in this behavior to determine if any user preferences existed, and if so, the blinds would try to incorporate them into the standard open/close daily routine.
  • The Smart Venetian Blinds could function in a standalone manner, but ideally, they would be installed in houses that had other “Smart” features. All of the devices would share data and work together for maximum efficiency.
  • Every month, the homeowner would get a short, simple email that estimated how much money the blinds had saved them in heating and cooling costs. Data on the blinds’ lifetime ROI would also be provided.
Smart Venetian Blinds with vertical slats could be installed over large windows and glass doors.
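
Here’s a minimal sketch of what the open/close decision logic might look like, using the inputs listed above (occupancy, HVAC state, outdoor temperature, time of day). The function name, thresholds, and mode strings are all made up for illustration; the point is just that the core rule is simple enough to run on a cheap microcontroller-class computer in the headrail.

```python
# Hypothetical decision logic for the self-adjusting blinds described above.
# All inputs, thresholds, and mode strings are made up for illustration.

def blind_position(occupied, hvac_mode, outdoor_temp_f, indoor_temp_f,
                   is_daytime, night_privacy=True):
    """Return a slat position from 0 (fully closed) to 100 (fully open)."""
    if not is_daytime:
        return 0 if night_privacy else 50   # close at night for privacy/security

    if hvac_mode == "cooling" or outdoor_temp_f > 80:
        return 0     # hot day: block sunlight to keep the house cooler
    if hvac_mode == "heating" or outdoor_temp_f < 50:
        return 100   # cold day: let sunlight in for free solar heating

    # Mild weather: if nobody is home, pick whatever nudges the indoor
    # temperature toward a comfortable 70 F; otherwise favor natural light.
    if not occupied:
        return 100 if indoor_temp_f < 70 else 0
    return 50


# Example: a hot, occupied afternoon with the air conditioner running.
print(blind_position(occupied=True, hvac_mode="cooling",
                     outdoor_temp_f=92, indoor_temp_f=76, is_daytime=True))  # -> 0
```

In practice the blinds would also fold in the learned user preferences and networked sensor data described above, but the basic rule wouldn’t need to be much more complicated than this.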

UPDATE (6/28/2018): A company called “SolarGaps” beat me to it! Looks like they’ve been in business since early 2017.
https://youtu.be/whrroUUWCYo

Have machines created an “employment crisis”?

Dovetailing off of yesterday’s blog entry (“Teaching more people to code isn’t a good jobs strategy”), I’d like to examine an assumption implicit in the first passage I quoted:

‘[Although] I certainly believe that any member of our highly digital society should be familiar with how these [software] platforms work, universal code literacy won’t solve our employment crisis any more than the universal ability to read and write would result in a full-employment economy of book publishing.’

It’s a little unclear what “employment crisis” the author is talking about since the U.S. unemployment rate is a very healthy 4.4%, but it probably refers to three things scattered throughout the article:

  1. Skills obsolescence among older workers. As people age, the skills they learned in college and early in their careers get less useful because technologies and processes change, but the people fail to adapt. Accordingly, their value as employees declines, along with their pay and job security. This phenomenon is nothing new: in Prehistoric times, the same “career arc” existed, with people becoming progressively less useful as hunters and parents upon reaching middle age. Older workers faced the same problems in more recent historical eras when work entailed farming and then factory labor. That being the case, does it make sense to describe today’s skills obsolescence as a “crisis”? “Just the way things are” is more fitting.
  2. Stagnation of real median wages in the U.S. Adjusted for inflation, the median American household wage has barely increased since the 1970s. First, this isn’t in the strictest sense of the word an “employment crisis” since it relates to wages and not the availability of employment. “Pay crisis” might be a better term. Second, much of the stagnation in median pay evaporates once you consider that the average American household has steadily shrunk since the 1970s: Single-parent households have become more common, and in such families, there is only one breadwinner. Knowing whether someone is talking about median wages per worker or median wages per household is crucial. Third, this only counts as a crisis if you ignore the fact that many things have gotten cheaper and/or better since the 1970s (cars, personal electronics, many forms of entertainment, housing except in some cities), so the same salary can support a higher standard of living now. Most of that owes to technological improvement.

    Note the data stop in 2012, when the U.S. economy was still recovering from the Great Recession
  3. Automation of human jobs. Towards the end of the article, it becomes clear this is what the author is really thinking about. He cites research done by academics Erik Brynjolfsson and Andrew McAfee as proof that machines have been hollowing out the middle class and reducing incomes and the number of jobs. I didn’t look at the source material, but the article says they made those comments in 2013, which means their analysis was probably based on economic data that stopped in 2012, in the miserable hangover of the Great Recession when people were openly questioning whether the economy would ever get back on its feet. I remember it well, and specifically, I remember futurists citing Brynjolfsson and McAfee’s research as proof that the job automation inflection point had been reached during the Great Recession, explaining why the unemployment rate was staying stubbornly high and would never go down again. Well, they were wrong, as today’s healthy unemployment numbers and rising real wages demonstrate. So if the article’s author thinks that job automation is causing “our employment crisis,” then he has failed to present proof the latter exists at all.

For the record, I do believe that machines will someday put the vast majority of humans–perhaps 100% of us–out of gainful work. When they finally do that, we will have an “employment crisis.” However, I have yet to see proof that machines have started destroying jobs faster than new ones are created, so speaking of an automation-driven “employment crisis” should be done in the future tense (which the author doesn’t). Right now, “our employment crisis,” like so many other “crises” reported in the media, simply doesn’t exist.

Links

  1. https://www.fastcompany.com/3058251/why-learning-to-code-wont-save-your-job
  2. https://fivethirtyeight.com/features/the-american-middle-class-hasnt-gotten-a-raise-in-15-years/

Bloomberg: Electric cars could be as cheap as gas-powered cars by 2025

Bloomberg New Energy Finance just released an analysis, “Electric Vehicle Outlook 2017,” that estimates all-electric cars will get as cheap as traditional gas-powered cars sometime between 2025 and 2030. Importantly, the estimate assumes that government subsidies for electric cars are discontinued by that time, so the future price figures are market rates.

Bloomberg thinks electric car prices will drop thanks to price-lowering economies of scale and to competition among carmakers. It doesn’t assume any technological breakthroughs like new batteries that can store twice as much energy. This is good: making predictions about the future that hinge on a technological breakthrough that may or may not actually happen is always a bad idea, and will get you thinking something like the Singularity is right around the corner.

The Bloomberg Executive Summary is here: https://data.bloomberglp.com/bnef/sites/14/2017/07/BNEF_EVO_2017_ExecutiveSummary.pdf

Interestingly, the analysis also concludes that better electric cars will make plug-in hybrids obsolete since the latter are more mechanically complex and hence more expensive. A shortage of at-home car charging stations will also limit the potential customer base for electric cars, and cause electric cars as a share of the total passenger vehicle fleet to stabilize at about 50% by 2040. I wish I had access to the full report, so I can only guess that the at-home car charging problem will be most acute for poorer people who can’t afford to install them or who live in rental properties that lack them.

Links:

  1. https://venturebeat.com/2015/05/27/an-electric-car-future-is-coming-just-more-slowly-than-predicted/
  2. https://finance.yahoo.com/news/rise-electric-cars-kill-gas-150000844.html


Ray Kurzweil on the future of capitalism and the boring world of 2027

Futurist, transhumanist, and singularitarian Ray Kurzweil just did a short interview regarding his views on the future of jobs and some other topics. You can see it here:

My thoughts:

0:12 – Kurzweil nods to the Boring Truth of our age: The clash of ideologies ended in the 20th century (except for the ongoing sideshow that is non-viable Islamism vs. everyone else), and there’s consensus among academics and leaders in the industrialized world that having a mixed economy and some social welfare programs is close to the optimal setup for a country. In the West, conservatives and liberals push and pull, but within narrow boundaries. Similarly, the new political faultlines pit “nationalists” against “globalists,” but no one in the former camp wants to completely forsake trade. There really is a lot less drama today than the news media makes you think.

1:50 – The reasons for the rise of welfare states in the early 20th century are more complex than that, but Kurzweil makes a good point that they wouldn’t have been sustainable had there not been the economic surpluses made possible by Industrialization. If you take the long view like Kurzweil does, and you assume that technology keeps improving, the concomitant economic surpluses keep growing, and social welfare programs grow in an intelligent manner, then a future where all humans are on the dole and few if any people work is indeed the logical endpoint.

3:06 – Uh-oh. Kurzweil makes predictions that will be true in “a decade.” So by 2027, 3D printers will be able to make “at low cost, all of the physical things we need,” including large Lego-like pieces of building materials that you will be able to “snap together” to make your own house.  Vertical farms will also be making “very high-quality” food at “very low prices” by 2027. Yikes. I’m skeptical of the 3D printed house prediction because the construction industry and consumers have failed to even embrace modular buildings (there’s a great report on this here: http://www.mckinsey.com/industries/capital-projects-and-infrastructure/our-insights/reinventing-construction-through-a-productivity-revolution). The notion that something even more radical like 3D printed Lego houses will become common in just ten years bucks the trend way too much. Also, I don’t see how an average person in 2027 will be able to assemble his or her house from giant Legos considering: 1) the need to pour solid concrete foundations will still exist, 2) local governments are highly unlikely to relax building codes to allow unlicensed, inexperienced people to build houses, and 3) few people have the skills to even put such Lego pieces together, particularly with enough accuracy to ensure the surfaces are truly level and plumb. Maybe what Kurzweil is trying to say is that, in 2027, there will be some construction companies that will specialize in building cheap, prefabricated houses comprised partly of 3D printed components. Plausible, but only a tiny bit different from how things are today. As for vertical farms, they’ve proven to be much more expensive to run than normal “flat” farms and haven’t caught on thanks to basic economics. If Kurzweil knows of some way that they can make food at “very low prices” in just ten years, then he should quit his job at Google and pursue it full-time since it will be worth billions of dollars. And he should also ask himself whether it would be more efficient and profitable to use that secret method to improve “flat” farms. For example, if Kurzweil thinks vertical farm costs will drop thanks to cheap, 3D printed building techniques, then won’t the same techniques also make it possible to cheaply build greenhouses over standard cropfields? If farm robots will eliminate labor costs at vertical farms, won’t they do the same at flat farms? Why would the vertical farms benefit more?

4:09 – Kurzweil observes (as he has in the past) that most of the Earth’s surface is sparsely populated, meaning there is ample room for humans to spread out. While true, it’s important to remember the reasons why: Beachfront property in Florida is more aesthetically appealing and provides more opportunities for recreation than a plot of land in the middle of Nebraska. The climate in San Francisco is more conducive to human life than that of Minot, North Dakota. Humans are also social animals (particularly when young), meaning they like to live in places where there are other people. The high (and still rising) rates of suicide and substance abuse in rural America attest to the ill effects of isolation and lack of varied things to do. He doesn’t say it in this interview, but I know from his books that his response would be something like “future technologies will substitute for all that,” meaning virtual reality will be as real as The Matrix someday, so hanging out on virtual reality Miami Beach while you’re actually lying in a VR pod in your living room will feel as real as hanging out on the real Miami Beach with your actual body. Whether or not sufficiently advanced brain-computer interfaces can be made to do that is an open question, but for sure, I doubt the technology will exist by 2027, or even 2057.

5:00 – Kurzweil predicts that, by 2027, virtual reality and “virtual avatars” will be so good that many people won’t need to live in cities anymore, and he seems to suggest there will be a detectable change to the global urbanization trend. Thanks to virtual reality, people will be able to work and play from anywhere, so they’ll choose to live outside of cities to save money. I think this is a prime example of a prediction that Kurzweil can’t possibly get wrong, and that is also almost useless. As he admits around this part of the interview, many of his colleagues at Google already work remotely, and most of us know someone who works from home. It doesn’t take a futurist or economist to see that the practice is getting more popular, so it’s a simple assumption that it will be more common by, say, 2027. Technologies related to computing, videoconferencing, and virtual reality are all obviously improving, and it’s just common sense that they will make it easier for people to work remotely. And while the number of people living in cities is growing, so is the number of people living outside of them in the suburbs and exurbs. By 2027, the suburban/exurban population could be growing faster than the truly urban population, which Kurzweil could cite as proof his prediction was right. So on close analysis, Kurzweil’s prediction is nothing more than a simple synthesis of three long-running trends in America that most adults are already aware of through direct experience. It will be almost impossible for him to be wrong, but the prediction about the future is so general and so incrementally different from today that it has no real value.

6:35 – He says we will use 3D printers to make clothes, without giving a date for when the prediction will come to pass (by 2027?). Regardless of when or if it happens, this has always struck me as a useless application of 3D printers. Today, I can buy a pack of six new cotton undershirts from Wal-Mart for $15, and they will last for years before falling apart. I can go to a local thrift store and buy durable, surprisingly good-looking used clothes that are 75% discounted from their original prices, and which will also last me many years. I can go on Craigslist right now and find people in my area who are giving away clothes for free. There is no evidence at all that our existing textile technology is deficient at making clothes, or that our “standards of living” would meaningfully improve if we started making clothes with futuristic 3D printers. Even if we assume 3D printers are so superior at making clothes that they’re (almost) “free,” how much better is that than the present condition? Clothes are already free or trivially cheap. Lowering the price further might free up enough money for you to buy a slightly bigger morning coffee at Starbucks, but that’s it. The only real beneficiaries would be fashion-obsessed people who shudder at the thought of wearing the same outfit twice and want their 3D printer to spit out some zany new creation each morning. Yay for the coming empowerment of vain people.

8:10 – Kurzweil cites changes to the nature of jobs over the last 100 years (workforce transformed from hard labor on the farm and in the factory to doing computer stuff in office buildings) as proof that there will always be jobs for humans in the future. While humans have always managed to move up the skills ladder and create new, gainful work for themselves as machines took over the less skilled jobs, there’s no reason to think the trend will continue forever. His argument also gets muddled when he equates people in college with people who have jobs. Studying poetry or art in college isn’t the same thing as being gainfully employed. Moreover, it’s a common fate for such students to have problems finding employment after college, and for them to settle for jobs that are unsatisfactory because they pay little, or because they have nothing to do with what they studied (think of the waitress with the Literature B.A.). I think it’s much safer to predict that “Humans in the future will be able to find things to do with their days but they won’t necessarily get paid much money or any money at all for what they do, and automation will be good overall for humans since it will eliminate unpleasant drudge work.”

Battlestar Galactica nitpick: 2D radar screens to depict 3D space

I’ve been a huge science fiction fan since childhood, but one franchise that has oddly not attracted my interest until now is Battlestar Galactica. Specifically, I’m talking about the series that aired from 2003-09.

When that series was ongoing, I tried watching a few episodes but just couldn’t get into it. For some reason, the quasi-documentary nature of how the show was filmed put me off. I also didn’t watch the show from the first episode onward, so each time I sat down and gave it a try, I had no clue who the characters were or what the plot arc was about.

Last night I had nothing to do and discovered Battlestar Galactica was available to me through Hulu. I watched the very first episode, and this time around I was gripped.

For those of you who don’t know, Battlestar Galactica is about a space war between humans and their robots, called “Cylons” (SAI-lahns). The technology depicted in the show is much more advanced than our own (i.e. – there are giant spaceships, faster than light travel, and bipedal robots), and the events take place in a different part of our galaxy. The humans are descended from people who came from Earth, but for some reason, Earth’s location has been lost to history. Forty years before the events depicted in the series, the Cylons–who were servant robots–violently revolted against their human masters and flew away in space ships to found their own worlds. During the first episode of the series, the Cylons return for unexplained reasons and stage a massive sneak attack (meaning this is the Second Human-Cylon war) that destroys the human planets and the human space fleet. Of the handful of surviving human space ships, the most powerful is an aircraft carrier called the Battlestar Galactica, commanded by an older man named “Adama.” Disorganized, demoralized and grievously wounded, the remaining humans have to find a way to survive against overwhelming odds.

For reasons I’ll describe in much greater detail in future blog posts, I think the vision of a future where humans explore the galaxy in faster-than-light space ships and still do tasks like flying fighter planes and fixing plumbing leaks with wrenches will never come to pass. So at the most basic level, I think Battlestar Galactica is an inaccurate depiction of what our future might look like.  But one thing that really stuck out to me as silly was the use of old-fashioned radar screens on the bridge of the Battlestar Galactica. Here’s a screenshot of the bridge:

And a tight shot of one of the “radar screens” (and yes, before anyone complains, I’m sure they’re actually making use of more advanced sensor technology than solely radar):

There’s a basic problem here: 2-dimensional screen displays are terrible at depicting 3-dimensional space. 2D displays are fine when you’re dealing with 2D environments, such as in naval warfare, where your surface ship is in the middle of a basically flat, featureless plain and uses its radar to locate other surface ships also on the plain. But in space, the ships are free to move in any direction and to approach each other from any vector, making useless any conceptualization of space as being planar, or of there being an “up” or “down.” Trying to “square the circle” by thinking like that will just get you into trouble, particularly if you’re in a fast-paced space battle with a smart enemy that has figured out what your spatial-thinking biases and limitations are (“Battlestar Galactica returns fire fast when we attack it from the front, back, left, and right, but it returns fire slowly and misses a lot when we attack from the top or bottom.”). The time spent looking at a flattened visual depiction of the space around you and then mentally calculating what the elevations and depressions of other ships are and then trying to synthesize it all into some global picture of where everything is and how it’s all moving around will cost you dearly in an actual space battle.
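
To put a number on what gets lost in the flattening, here’s a toy calculation of my own (nothing from the show): a contact’s position relative to your ship takes three values to pin down (range, bearing, and elevation), and a top-down 2D plot simply discards the third.

```python
import math

# Toy example: a contact's position relative to your own ship, in km.
x, y, z = 40.0, 30.0, 120.0   # x/y lie in the ship's reference plane, z is "above" it

slant_range = math.sqrt(x**2 + y**2 + z**2)                     # true distance to the contact
bearing_deg = math.degrees(math.atan2(y, x))                    # what a top-down 2D scope shows
elevation_deg = math.degrees(math.atan2(z, math.hypot(x, y)))   # what the 2D scope throws away

print(f"range {slant_range:.0f} km, bearing {bearing_deg:.0f} deg, "
      f"elevation {elevation_deg:.0f} deg")
# A flat (x, y) plot shows this contact only 50 km out on the plane, hiding the
# fact that it is really 130 km away and mostly "above" you at ~67 degrees elevation.
```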

The best approach will instead be to show space ship commanders accurate, 3D representations of their surroundings. I’ve seen this depicted well in other sci-fi. For example, in Return of the Jedi, the Rebel command ship’s bridge had a hologram of the Death Star, which the commanders presumably used for real-time monitoring of that vessel and their own fighters that were attacking it. Using a more zoomed-out view, the Rebel commander could have used holograms to track the progress of the broader space battle and to see the locations of all ships, in 3D space.

Return of the Jedi bridge hologram

Babylon 5 and Ender’s Game also depicted another approach: Making the bridge’s interior one, giant, 360 degree wraparound screen that displayed live video footage from outside the ship.

Minbari ship bridge
Ender’s Game command center view

And Star Trek Deep Space Nine depicted the same visioning capabilities for ship commanders, but delivered via augmented reality glasses instead of wall screens (a smart use of a limited TV show budget).

DS9 eyepiece

All of these visioning technologies are hands-down better than using 2D radar screens to try and see what’s happening outside your space ship. And considering the overall level of technology present in the Battlestar Galactica (Faster than light engine? Enough said.), I don’t see why the ship couldn’t have also had one or all of these other devices on its bridge. Maybe someone on the show’s creative team just didn’t think things through enough, maybe they did but didn’t have the budget for anything but small computer screens, or maybe they were deliberately trying to make the ship look old-fashioned (but again, the result is nonsensical).

This silliness gets taken a step further when Adama announces that he’s taking the Battlestar Galactica to a remote outpost to regroup against the Cylons, and to chart a course there, he unrolls a paper star map on the big table in the middle of the bridge and starts drawing on it with rulers and crayons. Sigh. I realize Adama’s “thing” is that he’s a grizzled old guy who doesn’t like technology, but this is taking it too far. Typing the desired coordinates into the ship’s computer would instantly spit out a more efficient and accurate course than he could ever plot using old-time mariner’s tools.

I think that whenever we actually do have space ships of similar size and sophistication to the Battlestar Galactica, their bridges won’t look anything like they do on the TV show. Just for the sake of redundancy, I think there might be small, 2D sensor screens and even paper star maps shoved off to the side somewhere, but they’ll only be used in emergency situations where all the better technology has broken. The notion of a space battle being managed by an old human man who likes to look at screens and draw lines on paper will be laughable. In reality, the ship’s systems–including its weapons systems–would probably be entirely automated, and the best captains and fighter pilots would all be machines. The old guys would die fast.

The religious qualities of Singularitarianism

Aeon has a good article about the religious undertones to Singularitarianism. (FYI, “Singularitarianism” is the belief that the Technological Singularity will happen in the future. While Singularitarians can’t agree if it will be good or bad for humans, they do agree that we should do whatever we can until then to nudge it towards a positive outcome.) This passage sums up the article’s key points:

‘A god-like being of infinite knowing (the singularity); an escape of the flesh and this limited world (uploading our minds); a moment of transfiguration or ‘end of days’ (the singularity as a moment of rapture); prophets (even if they work for Google); demons and hell (even if it’s an eternal computer simulation of suffering), and evangelists who wear smart suits (just like the religious ones do). Consciously and unconsciously, religious ideas are at work in the narratives of those discussing, planning, and hoping for a future shaped by AI.’

Having spent years reading futurist books, interacting with futurists on social media, and even going to futurist conferences, I’ve come to view Singularitarians as a subcategory of futurists, who are defined by their belief in the coming Singularity and by the religious qualities of their beliefs. Not only do they indulge in fantastical ruminations about what the future will be like thanks to the Singularity, but they use rhetorical hand-waving–usually by invoking “exponential acceleration of technology” or something like that–to explain how we’ll get there from our present state. This sharply contrasts with other futurists who are rigidly scientific and make predictions by carefully identifying and extrapolating existing trends, which in turn almost always results in slower growth future scenarios.

A sizable minority of Singularitarians I’ve encountered also seem to be mentally ill and/or poor, and the thought of an upending of daily life and the existing socioeconomic order, and of an end to human suffering thanks to advanced technologies, appeals to them for obvious reasons. Their belief in the Singularity truly is like the psychological salve of religion, so challenge them at your own risk.

Singularitarians could also be thought of as a subcategory of Transhumanists, the latter being people who believe in using technology to upgrade human beings past their natural limitations (such as intelligence, lifespan, physical strength, etc.). If you believe that the Singularity will bring with it the ability for humans to upload their minds into computers and live forever, then you are by default a Transhumanist. And you’re a doubleplus Transhumanist if you go a step farther and make a value judgement that such an “upgrade” will be good for humans.

With those distinctions made clear, let me say that I am a futurist and a Transhumanist, but I am not a Singularitarian. I plan to explain my reasons in depth in a future blog post, but for now let me summarize by saying I don’t see evidence of exponential improvement in artificial intelligence or nanomachines, which are the two pillars upon which the Singularity hypothesis rests. And even if an artificial intelligence became smarter than humans and gained the ability to rapidly improve itself, something called the “complexity brake” would slow its progress enough for humans to have some control over it or to at least comprehend what it was doing. Many Singularitarians believe in scenarios where the Singularity unfolds over the course of literally a few days, with a machine exceeding human intelligence at the beginning, and all of planet Earth being transformed into a wonderland of carbon nanotube structures, robots, humans sleeping in Matrix pods, and perhaps some kind of weird spiritual transcendence by the end. The transformation is predicted to be so abrupt that humans will have no time to react or to even fully understand what’s happening around them.

Links

  1. https://aeon.co/essays/why-is-the-language-of-transhumanists-and-religion-so-similar
  2. https://en.wikipedia.org/wiki/Singularitarianism

The scary future of fake news: Perfect-quality CGI audio and video

The Economist has a rather disturbing article about how advances in “generative adversarial networks” will soon make it possible to create computer-generated audio and video footage that is indistinguishable from the real thing. The potential for spreading misinformation is obvious. The article offers some ways that such fakes could be spotted:

‘Yet even as technology drives new forms of artifice, it also offers new ways to combat it. One form of verification is to demand that recordings come with their metadata, which show when, where and how they were captured. Knowing such things makes it possible to eliminate a photograph as a fake on the basis, for example, of a mismatch with known local conditions at the time.

…Amnesty International is already grappling with some of these issues. Its Citizen Evidence Lab verifies videos and images of alleged human-rights abuses. It uses Google Earth to examine background landscapes and to test whether a video or image was captured when and where it claims. It uses Wolfram Alpha, a search engine, to cross-reference historical weather conditions against those claimed in the video. Amnesty’s work mostly catches old videos that are being labelled as a new atrocity, but it will have to watch out for generated video, too. Cryptography could also help to verify that content has come from a trusted organisation. Media could be signed with a unique key that only the signing organisation—or the originating device—possesses.’
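
As an aside, the signing idea in that last sentence is easy to sketch. Below is a minimal, hypothetical example that tags a video file with an HMAC computed using a secret key held by the publishing organization; a real scheme would more likely use public-key signatures so that anyone could verify a file without holding the secret, but the principle is the same.

```python
import hashlib
import hmac

# Hypothetical sketch of the "signed media" idea from the quote above: the
# publishing organization keeps a secret key and attaches a tag to each file.
SECRET_KEY = b"held-only-by-the-publishing-organization"


def sign_file(path):
    """Return an authentication tag for the file's exact contents."""
    with open(path, "rb") as f:
        return hmac.new(SECRET_KEY, f.read(), hashlib.sha256).hexdigest()


def verify_file(path, claimed_tag):
    """Check that the file hasn't been altered since it was signed."""
    return hmac.compare_digest(sign_file(path), claimed_tag)

# The organization would publish video.mp4 alongside sign_file("video.mp4");
# any later copy that fails verify_file() has been modified (or the key has leaked).
```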

However, it would be naive to think that these methods couldn’t be defeated with better CGI algorithms and through hacking file metadata and cryptographic keys.

And even if the “good guys” manage to forever stay one step ahead, we’re still rapidly approaching an era where the forgeries will be so good that unaided human eyesight and hearing won’t be sensitive enough to detect them, and humans will have to rely on machines to tell them what is real and what is fake (which is itself an interesting state of affairs from a philosophical standpoint, but that’s a talk for a different time). Something like a few fragments of aberrant computer code embedded in an otherwise perfect-looking fake video might be the only thing that reveals the lie. Considering the short attention span and low level of scientific and technological literacy in most countries, how could the computer forensic findings in such a case ever be explained to average people?

They couldn’t, which means belief or disbelief in accusations of forgery will twist in the winds of whatever preexisting biases each person has, which is how it is now. Americans will believe it when their government tells them a video originating in Russia is fake, and Russians who mistrust America will reflexively disagree and believe their own government’s claims it is genuine. The truth will of course be out in the open, but so abstruse that only a small minority will be able to see it clearly on their own.

Moreover, the ability to make perfect computer generated audio and video imitations of people could lead to disaster in crisis situations where the intended target lacks either the ability or the time to verify their authenticity using their own technology: Imagine a military battle where one side transmits false orders to the other, in the voice of the latter’s commander, or a situation where a hacker posing as a rich investor calls his stock broker and insistently tells him to trade some massive number of shares.

*Update (7/13/2017): Computer scientists at the University of Washington have developed a way to merge audio recordings of someone speaking with video footage of them, so their mouth appears to be moving in sync with the words, even though the audio and video are from two different sources. Here’s a sample of them manipulating a speech by Barack Obama:

Links

https://www.economist.com/news/science-and-technology/21724370-generating-convincing-audio-and-video-fake-events-fake-news-you-aint-seen