
Hey NASA: These Are The Droids You Should Be Looking For

By Keith Cowing
NASA Watch
November 16, 2017

Keith’s 16 Nov update: Just the other day I posted some new video (below) of the latest cool droid from Boston Dynamics. Now they have simply outdone themselves. Compare NASA’s tethered, hoisted R5 making stiff dance moves, and then watch Boston Dynamics’ untethered and nimble Atlas DOING A BACKFLIP. NASA really needs to put its own bot research on the shelf and see what the private sector can offer.
Keith’s 14 Nov note: NASA poured lots of money into its R5 robot that cannot walk unless it is on a hoist and controlled by a human. It is always broken. So they gave away these broken droids to several universities to see if the students could salvage something useful. Meanwhile, Boston Dynamics continues to make astonishing progress on autonomous robots.
Imagine if you had something like this on Mars as part of a sample return mission. This droid, equipped with other features that Boston Dynamics has mastered, would allow access to places that rovers cannot go and offer dexterity unmatched by anything NASA has ever built. And I am sure you could buy a bunch of them for vastly less than it would cost NASA to develop them.
Does NASA Have A Robot That Can Do This?, earlier post
The Droid That NASA Should Be Sending To Mars, earlier post
NASA Challenges People To Use Its Broken Robot To Fix Things on Mars, earlier post
Using a Last Place Robot for NASA’s Robotics Challenge, earlier post
NASA JSC Has Developed A Girl Robot in Secret (Revised With NASA Responses), earlier post

NASA Watch founder, Explorers Club Fellow, ex-NASA, Away Teams, Journalist, Space & Astrobiology, Lapsed climber.

60 responses to “Hey NASA: These Are The Droids You Should Be Looking For”

  1. Paul451 says:

    Boston Dynamics occupies that middle ground between uniquely awesome and will surely kill us all.

    • Search says:

They get plenty of DARPA funding. So while they are creepy/cute (you can’t help but feel sorry for it when their engineers purposely kick them over in other videos), you can bet phase 2 includes mounting a machine gun or carrying a few hundred pounds of C4 into some jihadi tunnel complex.

      • Daniel Woodard says:

        That does not sound like the market Google is looking for. Maybe that’s why they sold the company.

  2. Daniel Woodard says:

    The company has made remarkable progress and has demonstrated several original concepts. I am curious how it is funded; if it were government funding this would normally be prominent in its announcements.

    • ThomasLMatula says:

      It’s owned by Google. Funding is unlimited without government paperwork or bureaucratic micro-management.

      • Sam S says:

Was owned by Google. Now owned by SoftBank, which I assume is probably less interested in funding “blue sky” projects and more focused on returns. This SpotMini looks like a potential contender for the first truly helpful home robot, assuming when it’s fetching you a beer it isn’t actually just testing which bait works best on humans. It’s terrifying and beautiful all at once.

        • Daniel Woodard says:

          At the moment I suspect it’s too expensive for the consumer market, although that could change.

      • Johnhouboltsmyspiritanimal says:

        Actually Alphabet/Google sold the unit off to SoftBank.

      • Lynn Roth says:

        Alphabet sold them earlier this year: https://techcrunch.com/2017

      • space1999 says:

        Google only bought them recently… and then sold them a few years later. Marc Raibert was the founder… if I recall the company has been around since the 90s, for most of its existence relying heavily on DARPA funding. Raibert was/is a pioneer in the field of walking/hopping robots. Their robots are a result of decades of research with a singular vision.

  3. Richard Malcolm says:

    Nice idea. But it would need radiation hardening and high durability. There ain’t no robotics Mr. Goodwrench on Mars.

    • BlueMoon says:

True. But NASA knows how to rad-harden electronics to stay alive for a long time. Examples: the Voyagers, Cassini, many Mars orbiters and rovers, etc. And if we sent robots to Mars instead of humans, we could send a lot of them, to provide backups or even to do repairs on one another. We could send many more robots than humans, because robots will not need food, water, clothing, bunks, toilets and toilet supplies, etc. Only electricity, and that can be provided by RTGs.

      In an alternate universe, a NASA not driven by the Human Spaceflight Mafia has already done extensive exploration of the Moon’s South Pole, Mars, and several of Jupiter’s moons using anthropomorphic and other designs of robots. Samples from all of those bodies have already been returned to Earth and are being examined. That alternate-universe NASA is rapidly learning where the ice and water and other useful resources are located, the stuff needed for eventual self-sustaining human outposts.

      • fcrary says:

        It’s not that easy. The radiation hard (and otherwise space rated) electronics used on spacecraft like Cassini (or Juno or MAVEN, for more current examples, or Europa Clipper, for an example in current development) is extremely massive, power-hungry and expensive compared to the parts Boston Dynamics almost certainly uses. The flight rated parts are also lower performance. Just as an example, 16 Gb of solid state memory for a planetary spacecraft can easily cost millions. A 256 Gb memory stick costs $50 at Office Max or Best Buy.

        • Vladislaw says:

Is that the commercial off-the-shelf price or the NASA in-house price?

          • fcrary says:

            Definitely not off the shelf. I think the most recent conversation I heard on the subject was about what a major aerospace company would charge NASA for custom (or near-custom), flight-rated memory.

      • Daniel Woodard says:

        There are areas of rugged terrain on the Moon and Mars where legged robots would have advantages, but so far we have not even scratched the large percentage of the surface that wheeled rovers can reach, so the need does not seem urgent.

  4. Michael Spencer says:

    I have to say they are a little creepy, you know?

    Still I thought dropping a soda can into the trash- judging where it would fall- was impressive, as was moving below a table.

    I wonder why they chose backward-bending joints?

    Lastly: there are some denizens here never missing the opportunity to sing the praises of remote operations – what is the argument against using these critters on Mars? Aside from obvious lubrication and power requirements and other changes to account for the environment, I mean.

    • ThomasLMatula says:

I would think that the Moon would be better because the time delay is so short that you could take full advantage of the mobility, versus the slow progress they would be limited to on Mars.

      • mfwright says:

        And the Moon is really close, and has water (ice)! What are we waiting for?

        • Donald Barker says:

Moon has water? Ice? 100% proof? You might want to check the data showing ppm ranges as the only hard physical evidence.

      • fcrary says:

That depends on the mission. If it simply calls for moving from one place to another, I don’t think the light time would be a problem. Autonomously identifying and walking around rocks is a solvable software problem. (Unless you have a project manager or review boards who are strongly opposed to any risk of loss of mission due to a mistake.)

I’d be more concerned about programming it to notice unexpected things along the way. It’s much harder to program something to autonomously recognize something interesting and decide it’s worth stopping for a closer look. But if the goals don’t require that, it wouldn’t be an issue.

        • Michael Spencer says:

          Which made me wonder: what sort of findings could Cassini have missed? Alien artifacts amongst the rings, where albedo adjustments could be easily hidden, for instance?

          OK, silly example, but still: when the mission was in early times, when the instrument manifest was being developed, what was left out? Anything that comes to mind?

          Fun speculation.

          • fcrary says:

You do like broad questions… Actually, I’ve got a huge, long list of those things for Cassini (and I’m actually getting paid to write some of them down, as part of the project’s closeout activities.)

            I can’t say what we didn’t see because we never thought to look. I am not aware of anything that has not yet been discovered. But here are a couple of examples in two categories of near misses.

            Very late in the mission, someone was doing very careful analysis of icy satellite images, involving highly stretched colors to pull out faint features. Although he was doing it for completely different reasons, he discovered what we ended up calling the “red Tethys” lines. Very faintly red, linear features in concentric arcs. (He wanted to call it something related to the _Nightmare_on_Elm_Street_ movie, but that got vetoed.) They don’t correlate with any other geographic feature, and we have no idea what they are. But he found them just in time for the project to re-plan and insert some additional observations. Hopefully, someone in the next few years will be able to figure it out, given those last-minute (well, last-year) observations.

In terms of missing hardware, something from my own field comes to mind. We found out Titan’s ionosphere was vastly more complicated than we expected. The ion and neutral mass spectrometer was designed to measure positive ions with a mass less than 100 AMU. They expected a dozen or so species, and there was some debate about whether they needed to go up to 100 AMU, or if 50 AMU would be good enough. In reality, it’s more like hundreds of species (every single mass from 12 to 100 AMU is populated), with ions going up to thousands of AMU and including both positive and negative ions. And the implied ion chemistry now seems to be closely related to the formation of Titan’s haze layer. We discovered the heavy and negative ions because another instrument (two of the three CAPS plasma sensors) just happened, under the conditions of Titan’s ionosphere, to act as a fair mass spectrometer. An electron analyzer isn’t designed to measure negative ions, and it definitely wasn’t calibrated to do so, but it actually can, and it can get a rough idea of their mass under the right conditions. If anyone’s going back to Titan’s upper atmosphere, I’d definitely suggest bringing a real ion mass spectrometer going up to very high masses and able to measure both positive and negative species.

          • Daniel Woodard says:

            The complex atmospheric chemistry was clearly unexpected, yet the combination of robots and humans detected it and used the available instrumentation in novel ways to observe it, even though the humans were separated from the robotic spacecraft by a time lag of hours. It is difficult to see how a human crew on the spacecraft could have accomplished more.

          • fcrary says:

            If I understand the instrument correctly, swapping two wires (probably not even high voltage ones) would have made the mass spectrometer measure negative ions rather than positive ones. Of course, you can and probably will point out that building a robot capable of that (or, capable of making repairs and using that ability to modify instrument wiring for other reasons) is easier than sending people to Saturn. I don’t think I’ll argue with that.

          • Daniel Woodard says:

It’s an interesting point. We have never had robots designed for general maintenance capabilities. Maybe such a device could have overcome the unexpected friction in the Galileo main antenna deployment mechanism. However, management would have to be convinced to invest in mass that might not even be needed.

          • fcrary says:

            I’ve heard managers disapprove of adding telemetry channels to diagnose instrument anomalies. Why spend resources on making sure you can identify and fix a problem, instead of designing hardware which won’t break in the first place? (Because designing to avoid a problem means identifying the problem in advance, and most flight failures are due to previously unidentified problems.)

        • Daniel Woodard says:

          Programming a robot to identify unexpected observations is not impossible, in fact it is not that much different from training a human to do the same thing. Astronauts are always separated from their environment, whether by miles of space or gloves and visor. We all rely on remote sensing of one kind or another.

It’s only in recent years that basic computing power has approached the level needed for autonomous rovers, whether wheeled, walking, flying or swimming, so we still have a tendency to say that a human has something indefinable that a robot lacks. It’s hard work to prove the opposite, but Boston Dynamics is making remarkable progress. We need to stop thinking about robots as having to look like humans, because they don’t have to fit our accommodations. Form follows function.

          Even in the intense radiation around Jupiter, where human survival is impossible, we can use shielded electronics, redundant dispersed systems, and similar solutions. There is unfortunately no comparable strategy in view that will make it possible for humans to survive not just radiation but isolation, inaccessibility, and risk of physical death in the extraordinary environments we will encounter in the exploration of deep space.

          • fcrary says:

            I didn’t say a machine couldn’t be programmed to identify unexpected but interesting things. I just said that was more difficult than programming one to move a specified distance, in a specified direction, while avoiding obstacles.

            But for identifying unexpected things, I can think of three ways to program it in, and none of them are perfect.

            First, you can get people to define their intuition, what they mean by, “I know it when I see it.” And then code that up. I think past work on artificial intelligence shows this is a dead end.

Second, you can code up machine learning and teach the machine to recognize things. This works, but it’s limited by the teaching process. Anything that isn’t covered won’t be recognized or interpreted correctly. For example, Mr. Spencer’s example of an alien spaceship hiding in Saturn’s rings probably wouldn’t be something we’d think to teach an autonomous spacecraft to consider.

            Third, you can set up machine learning to be self-educating. But the results can be unpredictable. Just like a self-educated person, sometimes the results are great but sometimes they end up with odd ideas and fixations.

Of course, human abilities are based on the last two of those ideas: organized learning and personal experience. Pointing out that the machine versions aren’t perfect isn’t much of an objection, since human education isn’t perfect either. But I can’t see a NASA project manager betting his mission on a robot learning as it goes and figuring things out.

          • Daniel Woodard says:

We are not allowed to bet missions on human crews figuring things out along the way either. Yet that’s exactly what any capable planetary mission team does. We’ve heard repeatedly of minor and major software modifications during the mission. The astronaut is not alone, but neither is the robot. As to machine learning, it is difficult, but the ability of Google voice to understand conversational speech, which I would not have believed a few years ago, was based in large part on AI and machine learning. We are witnessing the evolution of what may become a new lifeform.

The difference to me, as a physician struggling with the failings of the human mind and body, is that AI doesn’t share our physical limitations. If it even approaches our capabilities, I can see no reason it will stop there. One day we will glimpse AI in the rear-view mirror; the next it may be ahead of us and disappearing over the horizon.

          • fcrary says:

            Looking at the way spacecraft autonomy is used on planetary (and Earth orbiting) science missions, I think part of this is psychological. If a scientist makes a mistake, or doesn’t do things the way someone else would, he can explain it.

One common example is dealing with large data rate observations where you only need those high rates a few percent of the time. The standard these days is to record everything on the spacecraft, send down a low-resolution or highly compressed version, have a scientist pick out the interesting parts, and then command the spacecraft to send those parts down at full resolution. If different scientists disagree about what’s interesting, they can talk about it at a team meeting or a conference. At the very least, the scientist who made the decision can explain the logic behind the selections.

If the same choices were made on the spacecraft, by an AI, you couldn’t do that (especially if it were a self-teaching one.) I guess a lot of mission and instrument PIs just aren’t comfortable handing over that much responsibility, or having to say, “ask the spacecraft,” if someone asks why things were done in a particular way.

          • Daniel Woodard says:

            Good point. The spacecraft control system, like the dogs used in police work and combat, the junior members of the science team, or human astronauts in training, will have to prove itself by interacting regularly with the rest of the group during training and mission preparation, answering questions and solving problems, and generally gaining the confidence of its human partners.

      • Donald Barker says:

Not designed for that yet. You don’t design and use a 1 g machine for a 1/6 g environment any more than you design an igloo and use it in a desert. Now, if they did a redesign specifically for the Moon, then we’ll see.

    • fcrary says:

      I think you’d have to get creative to make it work. A traditional approach to dealing with the space environment would drive the mass and power consumption of the electronics way up, and the performance way down. You might be able to do something with massively redundant parts (fly 256 Gb of commercial flash memory and expect the needed 16 Gb to survive to end of mission), or shielding (the physical size of commercial components is so small that wrapping them in an inch of lead might mass less than a radiation hard part with similar performance.)

      But review boards and aerospace managers tend to be very conservative. A solution that has been tried before will pass a review, even if it isn’t as good, more easily than a novel solution. You have to convince people that the novel solution will actually work, and it just takes one reviewer saying, “I’m not convinced,” to sink a proposed mission. That makes people reluctant to propose novel solutions.

      • Paul451 says:

“the physical size of commercial components is so small that wrapping them in an inch of lead”

        Lead? Not hydrogen-rich plastics? (My understanding was that the worst radiation for electronics was high energy, and the worst shielding against that was heavy elements.)

        • fcrary says:

          It depends on what you’re trying to stop (ions, electrons or photons), the particle’s energy and what sort of problems you are worried about (permanent damage, transient bit flips, noise in sensitive detectors.) Hydrogen-rich plastics are good for photons (x rays and gamma rays) but not as good as high Z material for stopping energetic electrons. On the other hand, the process of stopping energetic electrons generates x rays and gamma rays (bremsstrahlung) so some applications call for a layer of hydrogen-rich material inside a layer of high Z material.

          And, in any case, I was using a little artistic license when I wrote “lead.” Things like tantalum or beryllium-copper are better choices for a high Z material. But people associate lead with radiation shielding more clearly.

          • Daniel Woodard says:

            For human spaceflight the metal hull of the spacecraft is fairly thick and most of the exposure is to secondary x-rays, so the secondary shielding used around sleeping areas is usually low-Z plastic. I was under the impression that all photons generated by bremsstrahlung (I love that word) are considered X-rays, but I could be wrong.

          • fcrary says:

            Technically, you get bremsstrahlung (lit. stopping radiation) any time you decelerate a charged particle. The amount and wavelength depends on the change in the particle’s kinetic energy. Lower energy electrons can produce photons in the UV, but that doesn’t really cause any harm so no one talks about it. Producing gamma rays in quantities to cause concern would require very energetic electrons (off the top of my head, I want to say over five or ten MeV.) In most cases, there aren’t many electrons with that much energy. So you generally hear about x-rays when someone talks about bremsstrahlung.

        • Daniel Woodard says:

The shielding on Juno is titanium. Possibly the trapped particulate radiation in the Jovian magnetic field has a different composition than the solar protons that are the main concern on ISS.

      • Daniel Woodard says:

        That isn’t just in robotics. The unwillingness of NASA to test propulsive landing for the Dragon will hold back progress for years. In many cases the decision makers don’t have confidence in innovative solutions because they do not have recent practical experience with the hardware.

The Juno electronics vault is titanium and apparently only 1 cm thick. Given the mass, which is reported as 180 kg, this is pretty big, at least by the standards of commercial electronics. What is the vault volume?

        • fcrary says:

          The Juno vault is more or less a cube and just under a meter on a side. (If it were a cube, 1 cm thick and 180 kg would be 0.8 m on a side.)
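As a sanity check on that thin-shell arithmetic (using a handbook density for titanium of about 4.5 g/cm³, a value not stated in the thread):

```python
import math

# Thin-walled cube approximation for the Juno electronics vault:
# mass ~= 6 * side^2 * wall_thickness * density
mass_kg = 180.0      # vault mass as reported in the thread
thickness_m = 0.01   # 1 cm titanium walls, as reported
rho_ti = 4506.0      # titanium density, kg/m^3 (handbook value, an assumption here)

side_m = math.sqrt(mass_kg / (6 * thickness_m * rho_ti))
print(f"implied cube side: {side_m:.2f} m")  # ~0.82 m, matching the ~0.8 m figure
```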

          For reference, the electronics box for one of the instruments is 0.2 x 0.2 x 0.3 m. Juno uses a variety of processors, but one is the RAD750. That’s about 200 MHz and 5 W. So it isn’t even close to the processing power or low electrical power of modern consumer electronics. But it was more-or-less state of the art for a planetary mission launched in 2011.

          And I do know exactly what you’re implying. And I agree. But I have to convince people in the field that we now have enough CPU power to do more than Rice compression, and that 4:1 lossless shouldn’t be a problem. (And, since you mentioned Juno, the camera does quite a bit better than that but the _requirements_ for the camera assumed very limited data compression. Everything more is officially a bonus.)

          • Daniel Woodard says:

            Surface area of a cube is proportional to length^2 and volume to length^3. If the brain volume could be reduced by a factor of eight, the skull weight could be cut by a factor of four, or the protection factor quadrupled, maybe enough to go to commercial grade components. One wonders how much of the volume inside the vault is actual components vs packaging and wiring? Possibly more of the computing power for the instruments could be shared. Maybe management would approve a “test” of some “optional” CPU capabilities in a small but well shielded subcortex?
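The scaling claim above checks out numerically: since shell mass at fixed thickness tracks surface area (proportional to length squared), shrinking the enclosed volume eightfold halves the side length and quarters the mass, or equivalently quadruples the affordable wall thickness at constant mass. A minimal sketch:

```python
# Cube shell scaling: volume ~ L^3, surface area ~ L^2.
# Reduce the enclosed volume by 8x -> side halves -> area (and shell
# mass at fixed thickness) drops 4x; at fixed mass, thickness can quadruple.
L = 1.0
L_small = (L**3 / 8) ** (1 / 3)   # side length after 8x volume reduction
area_ratio = (L / L_small) ** 2   # mass saving factor at fixed thickness
print(L_small, area_ratio)        # 0.5 and 4.0
```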

          • fcrary says:

            There is quite a bit of packaging. One concern is radio frequency interference. It’s usually minor, but alternating currents in one card can induce currents in traces on adjacent cards. It’s a common practice to put each subsystem’s components inside its own thin-walled chassis to provide a Faraday cage (and a bit more mechanical structure at the same time.)

            Wiring and cabling were also issues, but I’m not sure how much that contributed to volume as opposed to headaches in the layout inside the vault.

            Another issue is the layout itself. Assembly and testing of spacecraft (especially planetary ones) is fairly carefully choreographed. That involves what gets connected when and what you’d have to do if a subsystem in the center of the vault has to be pulled after other subsystems have been connected. That can impact the volume required.

More and less heavily shielded regions inside a vault have been suggested (and, I think, are being incorporated for Europa Clipper.) The RAD750 processor is rated to 100 krad, while the environment inside the vault only requires 50 krad. But other cards are only good to 50 krad, so one vault isn’t ideal. But I don’t think anyone has gone as far as suggesting commercial parts (5 or 10 krad) and truly massive shielding.

            Finally, shared electronics has been used, and used successfully on Earth-orbiting missions. But it adds some risk, since the failure of a common data processing unit or low voltage power supply would affect multiple instruments. It would probably have to be treated as a critical flight system rather than science payload.

There is also schedule risk, since you might have three instruments already delivered while the payload DPU is behind schedule. If you can’t test (or possibly install) those instruments, you’re potentially losing weeks of time waiting. For planetary missions, that’s more of a concern; launch windows are farther apart than for terrestrial missions.

            Overall, the conventional approach is to make every subsystem as separate and independent as practical, and minimize the complexities caused by lots of complicated, interdependent interfaces.

          • Daniel Woodard says:

A complicated problem, but there have been advances on many fronts that would allow miniaturization. RF circuitry is a bit of a black art, but radiation from components can be minimized by shielding individual components and waveguides. Modularization and extremely fast digital data busses can simplify interconnection and subsystem changes. The Wikipedia page on the vault says it reduces exposure by a factor of 800. From 20 Mrad to 25 krad? Is there any simple way to calculate how much shielding would be needed to get down another factor of 5 or 10 to the commercial level? If the enclosed volume were ~zero, the same mass would allow a titanium sphere with a radius, i.e. thickness, of ~20 cm. At a small enough size, depleted uranium might have an advantage over titanium or lead because its higher density would minimize the average shield radius for a given enclosed volume.
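The near-zero-volume limit in that comment can be checked the same way (again assuming a handbook titanium density of roughly 4.5 g/cm³): a solid titanium sphere with the vault’s 180 kg mass works out to a radius of about 21 cm, close to the ~20 cm quoted.

```python
import math

mass_kg = 180.0   # same mass as the Juno vault
rho_ti = 4506.0   # titanium density, kg/m^3 (handbook value, an assumption here)

volume = mass_kg / rho_ti                        # solid-sphere limit: enclosed volume ~ 0
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)  # invert V = (4/3) * pi * r^3
print(f"solid-sphere radius: {radius * 100:.0f} cm")  # ~21 cm
```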

          • fcrary says:

20 Mrad sounds about right for Juno. But it’s also a bit misleading. A good fraction of that is from particles which are very easy to shield (e.g. electrons under 1 MeV.) Just being inside the usual 100 mils (0.254 cm) of aluminum of a standard case probably does that. That’s also why it’s hard to estimate how much additional shielding it would take to get the dose down to the ~5 krad commercial parts could probably take. The dose inside 1 cm of titanium is primarily from the higher energy particles, and another centimeter of titanium won’t give you another factor of 800. But the details depend on the energy spectrum of the particles, and that isn’t very well-known at Jupiter.

          • Daniel Woodard says:

I did a little googling and was impressed at the depth of modeling of the Earth’s magnetosphere within a few years of the detection of the Van Allen belts, in the pre-computer era (Wilmot Hess, 1962). A lot of modeling has apparently been done with respect to the Jupiter magnetosphere as well, so maybe some modeling of shielding strategies is possible. Of course GCR have even higher energies and effective shielding may not be possible, but the flux is much lower and redundancy might be sufficient. I wonder if it is easier to shield humans or computers with AI capability, even if the latter requires electronics with lower radiation tolerance.

  5. SpaceHoosier says:

    If they ever start putting fur on one of these, I may trade my dogs in. I bet it won’t chew on my shoes!

    • fcrary says:

I’m sure it could be programmed to chew on your shoes. If they ever want to market it as a pet, it might even be good marketing to program in familiar, pet-like behavior.

      Seriously, if these things are marketed as a pet (unlikely) or come into common enough use for average people to see them on a day-to-day basis (more likely), then I think NASA may have a real PR problem. People will start choking on press releases calling the 2020 rover “sophisticated.”

  6. rb1957 says:

    “And I am sure you could buy a bunch of them for vastly less than it would take NASA to develop them.” … I’m sure you’re right, although some work would need to go into making them (can you call robots “them”, or “sir” ?) reliable and suitable for off-world use.

  7. passinglurker says:

    Have they tested one in a vacuum chamber yet?

  8. MountainHighAstro says:

    Until these can survive a rocket launch, deep space cruise, EDL and then continue to operate remotely in a harsh environment for years, I fail to see the comparison. Note, I’m not a fan of R5. I just do not think it’s an apt commentary

  9. Odyssey2020 says:

Can you imagine being on the front line and all of a sudden 10-20 of these SpotMinis appear all around you? Instant white flag.

  10. Chris says:

    Too bad Musk or Bezos did not buy Boston Dynamics.

  11. Michael Spencer says:

BD (and Tesla, SpaceX, Blue Origin, among others) can do something that NASA cannot do: publicly display design iterations, mistakes, and errors. The SpaceX march from Falcon 1 to BFR, with attendant and large changes, is one example among others.

    One of the BD videos, for instance, shows Jumping Man falling on his chest rather than feet (could be a feature I guess).

This is directly related to the poorly educated funding agency: the US Congress.

    • dbooker says:

      Of course it’s a feature. Hello NFL. No longer have to deal with the NFLPA. And I bet the robots will stand for the National Anthem.

      • Vladislaw says:

I thought the military paid the NFL to have players stand as a show of patriotism, as a recruiting tool, when they ran ads during games?

    • fcrary says:

      NASA can’t “display design iterations, mistakes, and errors” because of a self-created myth of infallibility and a selection process that perpetuates that myth. (Sorry, I guess I’m in a bad mood…)