Regulating Digital Media

When examining digital media’s impact on society, the necessary starting point is to identify clearly what technological capabilities are actually new and different, and then to compare today’s digitally enabled communications systems to those of the past. The answer seems to be the unprecedented power of combining user data, artificial intelligence, and the Internet. The result has been iterative generations of continuously evolving digital media applications that far exceed in scope and efficiency anything that has gone before.

Technological progress begets societal challenge. History has shown this to be especially true of how all media technologies first found their audiences and then changed how those audiences thought about the world.

From the first writing instruments through to the printing press, the telephone, radio, television and beyond, it turns out our media tools were changing us while we had the hubris to believe we were controlling them. While we were busy authoring new outer worlds with our new superhero powers, the capabilities those tools gave us were very quietly, but on a grand scale, redefining our inner language for comprehending those same outer worlds we thought we were mastering. Marshall McLuhan got it right when he said, “We shape our tools, and then our tools shape us.” Equally prescient is Frank Pasquale of the University of Maryland School of Law, who observes that “there is a delicate balance between appropriating new technologies and being appropriated by them.”

Describing all the indignities and impoverishments digital media may have conspired to thrust upon us (real or imagined, and putting aside the benefits) is a herculean task. So please feel free to paint your own picture, whether it relates to the impact of media on the US elections or the Brexit vote, teenagers glued to their screens, the creepiness of devices that eerily seem to know what you are thinking, or the applications hellbent on telling you precisely what to do next (turn right, pedal harder). So many possibilities. Pick any of them as the object of your disaffection and read on.

If we accept that it is neither possible nor desirable to turn back time, then what principles can we apply to this brave new world (allusion chosen intentionally) to inoculate us against its most pernicious consequences? In this quest, we at least have generations of law and policy that can inform some helpful principles.

For most of us, most of the time, these principles can be translated into laws requiring:

  1. Full transparency from media platforms regarding what data of ours they have, how they got it, and what they want to do with it, coupled with a requirement of informed consent before they do any of it.
  2. That end-user license agreements and related contractual documents must originate as a standard form that is short and readable and that provides options. Professor Margaret Jane Radin’s brilliant book “Boilerplate: The Fine Print, Vanishing Rights, and the Rule of Law” is a crucial touchstone for charting a road forward on these fraught questions.
  3. That media platforms and applications be responsible for serious harm to us arising from the use of our information. The concept of “information fiduciaries” has been proposed by Professors Jack Balkin and Jonathan Zittrain, who would have the law impose duties of care, confidentiality, and loyalty upon the Facebooks of the world.
  4. Real-time source transparency, meaning that as a general rule we get to know the true human sources of the messages we receive, and that bots and other non-human inhabitants of the Internet be disclosed as such. Detection of the false and the fraudulent is improving rapidly, as evidenced by the extraordinary level of detail we have about the Russians involved in interfering in the 2016 US elections. If someone is trying to manipulate us, why should they be entitled to any privacy protections at all?

Technological tools are rarely simply good or bad. How those tools are used is the point, as it has been from Martin Luther to William Randolph Hearst to Mark Zuckerberg. As long as the law does not cower in the face of the sea of uncertainty and confusion before it, and holds fast to fundamentally human principles of fairness and justice, adapting as necessary, I like our chances.