
Mary Minow: Good morning. I understand that international treaty discussions concerning libraries, archives and copyright are scheduled in Geneva in November 2011. How did that come to be?

Winston Tabb: Really, where we began was at the International Federation of Library Associations and Institutions (IFLA) World Congress in Oslo in 2005. We didn’t start with the idea of a treaty at all, but with an interest in finding real-life, detailed examples from our colleagues in all parts of the world about the copyright issues they were facing in managing their libraries. So we planned a program session in which we organized people into discussion groups by region, both because of linguistic affinities and because regional differences often matter a great deal in the intellectual property challenges libraries face. Through this session we came up with a list of very specific problems that our library colleagues face in different parts of the world, and that became the basis of our thinking.

I should add that we were led to plan this session in the first place because a group of Latin American countries had strongly suggested at WIPO in 2004 that the Standing Committee on Copyright and Related Rights (SCCR) should focus on the need for limitations and exceptions, and we as a library community wanted to be prepared to say which limitations and exceptions (L&Es) were most critical to our mission.



The Stanford Copyright and Fair Use site is pleased to announce a new feature to aid readers in keeping up with and understanding copyright cases in a timely manner: copyright case summaries. To explain this new feature, Mary Minow talks to two editors of Justia, Cicely Wilson and Courtney Minick.

Mary Minow: Tell us about the copyright case summaries that the Stanford Fair Use site will be offering to readers.

Cicely Wilson and Courtney Minick: We will send a feed of summaries for cases that involve copyright issues to the Fair Use site. The summaries themselves are short blurbs that describe the key issues and holdings of a particular case. They are designed to give the reader a sense of whether they need or want to read the case in its entirety. The summaries link to the full text of the opinion on the Justia site, and they are also displayed on the same page as the opinion. This way someone browsing or searching for caselaw on our site gets the benefit of the overview as well.

As the number of opinion summaries in this feed grows, it will serve as a survey of sorts of copyright and fair use law, something we hope will provide a lot of value as a free tool.

Minow: Who is writing the summaries?

Wilson and Minick: We have hired a team of experienced writers, all of whom are licensed attorneys, to write the summaries. They summarize the cases in a concise manner and tag the cases with relevant areas of law.

Minow: You’re saying that a private company has hired a team of attorneys to write case law summaries, and then make those summaries available to the public for free? Why would you do that?

Wilson and Minick: Great question, Mary. At Justia we believe we all “do well by doing good.” To that end, one part of our core mission is to advance the availability of free legal resources on the web. The newsletter summaries fit into this by expanding access to the law and adding value to the free primary law on our portal.

Minow: Any last words?

Wilson and Minick: Thanks, Mary! We are very excited about this new product, and hope it will provide a lot of value to lawyers, law librarians, and others who need to stay on top of legal developments. We are also looking forward to the addition of editorial information to our database of free legal opinions, as a way to help organize and contextualize the material.

Minow: By the way, who are the pugs?

Wilson and Minick: The pugs are our co-workers, Sheba and Belle! You can see more of their Justia office adventures on their Facebook page.


Rich Stim is corporate counsel for Nolo. Rich is the author of several Nolo intellectual property books, including:

Patent, Copyright & Trademark: An Intellectual Property Desk Reference
Patent Pending in 24 Hours
Music Law: How to Run Your Band’s Business

Rich also writes two blogs for Nolo, What Price Justice and Nolo’s Patent, Copyright & Trademark Blog, and provides information about trade secrets and nondisclosure agreements at NDAs For Free. He lives in San Francisco and has been without cable TV since 2006.

Nolo has published a new edition of the volume Getting Permission, a comprehensive, up-to-the-minute book on securing the use of copyrighted images, text, music and more. Moreover, Nolo has granted permission to the Stanford Copyright & Fair Use site to provide free and open access to salient chapters dealing with copyright, fair use, and web-based content. Fair Use’s Executive Editor Mary Minow conducted a brief interview with Rich Stim about the new edition of the book and what’s new in fair use law.

Mary Minow: Thanks so much for sharing the rich Nolo content with the Fair Use site. What have been some of the recent changes worth pointing out?

Rich Stim: The mix of recent fair use cases hasn’t been too surprising. For example, we learned it’s not a fair use to create a Harry Potter lexicon or to create a postage stamp from a sculpture. And it’s not a fair use/parody to create a sequel to Catcher in the Rye. It is a fair use, however, to reproduce movie monster magazine covers in a book about the cover artist. No surprises with any of these decisions.

The most important fair use ruling may have been Lenz v. Universal Music Corp. In that case, Universal Music issued a takedown notice for a video of a child dancing to the song, ‘Let’s Go Crazy,’ by Prince. The owner of the video claimed that since Universal didn’t consider the issue of fair use, Universal could not have had a “good faith belief” that it was entitled to a takedown. Faced with this novel issue, a district court agreed that the failure to consider fair use when sending a DMCA notice could give rise to a claim of failing to act in good faith. That may have an effect on the trend towards automated mass DMCA notices. Let’s hope so.

Minow: What’s your assessment of these changes with regards to the big picture of copyright law, especially as it affects the higher education community?

Stim: I’m not sure much that has happened recently will affect the higher education community. It’s all been business as usual, although we’ll see what happens as a result of this recent ruling regarding the Google book archive. That may have a profound effect on the ability to access orphaned works.

There was a recent case that may, by analogy, affect the ability to claim fair use when copying electronic texts. In Capitol Records Inc. v. Alaujan, a defendant in a music file sharing case was prohibited from claiming fair use because he had failed to provide evidence that his copying of music files involved any transformative use. The court held that “In the end, fair use is not a referendum on fairness in the abstract …” In other words, making a copy of a digital file and using that file for the purpose for which it was intended (in the case of purloined MP3s, that means copying it to listen to) cannot be a fair use. To some people that may seem to chip away at the underpinnings of the Betamax case, in which time-shifting of television shows for the purpose of later viewing was permitted as a fair use.


The Council on Library and Information Resources (CLIR) and the Digital Library Federation (DLF) have launched a new publication series with the inviting name of “Ruminations.” It will feature short research papers and essays offering fresh perspectives on the digital environment for scholarship and teaching.

Kicking off the launch is a new rumination from John P. Wilkin, whom we interviewed not so long ago, about his work helping old titles “rise” into the public domain.

John writes us:

“I’d like to point readers to a piece I recently wrote about publication patterns and copyright status, which was just published on the CLIR website at http://www.clir.org/pubs/ruminations/01wilkin/wilkin.html. Based on the analysis of over 5 million books in HathiTrust and several years of copyright status analysis for US 1923-1963 works, I point out some important patterns in the dates and origin of the works. The date distributions, and the work Michigan has led on copyright determination, help make clear how few of these books (proportionately) are likely to be in the public domain. On a more speculative note, the numbers lead me to conclude that ‘orphans’ may represent a startlingly high percentage of published books. If nothing else, I hope what I show here stimulates more debate and even more work to help refine our sense of what’s in the public domain, what’s in copyright, what’s likely to be an orphan, and what the consequences of these numbers are.”


Conducted by Mary Minow and Eli Edwards, at ALA Midwinter Meeting in San Diego, California

Minow: Tell us about this major new step forward in the quest for open access.

Julia Blixrud: A part of the background for this effort was an author rights addendum that came out of work several years ago by SPARC, the Scholarly Publishing and Academic Resources Coalition. We worked with lawyers to develop a legal instrument that modifies the publisher’s agreement and allows authors to keep key rights to their articles.  How could authors amend their agreements to allow them to use their own work in the way they wanted to?

Ivy Anderson: That was for an individual author, which is different from content licensing.

Blixrud: At the time, we thought the best way to be able to get our authors’ content made freely accessible in libraries was for authors to say, “oh, wait I ought to retain some of my rights in order to be able to deposit and use my work in my environment.”

You see, a lot of authors get an agreement from a publisher and they just automatically sign it without reading it. The agreement basically says, we the publisher have all rights to do whatever we want with this article in perpetuity.

Which means that if you’re the author, and you want to reuse your own work, you may have to get permission.

Blixrud: Get permission, or pay some fees … and no one at your institution can do anything with your stuff either, unless they bought it and paid fees and so on.

The author addendum was the first attempt to get that content opened up and made available to the author herself as well as to the institution.




Mary Minow had a chance to talk with a colleague at Harvard Law School about Open Access.

Nearly two years ago, the Harvard University Faculty of Arts and Sciences voted unanimously to grant the university a non-exclusive, irrevocable, worldwide license to distribute faculty members’ scholarly articles, with an opt-out mechanism for cases such as an incompatible rights assignment to a publisher.

Today, Mary talked with Michelle Pearse, Research Librarian for Open Access Initiatives and Scholarly Communication, Harvard Law School Library.

Minow: Michelle, now that the Open Access Policy has been in place for two years, how has it been working out?

Pearse: It has been an interesting journey. We are still in the process of reaching out to and educating the faculty, trying to get them to understand the policy and get it into their personal workflows. As part of our reorganization in Summer 2009, we made publication support part of library services, so we have tried to implement and educate faculty about the policy in that context (i.e. the policy is one aspect of the publication process now). The policy is often referred to as a mandate, which is a bit of a misnomer because faculty are always free to seek a waiver. (See the Director of Harvard’s Office for Scholarly Communication posting about this issue on his Occasional Pamphlet blog.)

It can be challenging implementing such a policy. It is important that we make the process as simple and straightforward as possible. While the traditional mark of repository success seems to be the number of items deposited, I think the more important metric at this point is progress in educating the faculty and cultivating relationships with them so they see the library as a partner in their publishing experience, from initial research to disseminating the final product.

The open access policy itself applies only to scholarly journal articles, and our faculty actively publish books and other materials that do not even fall under the policy. We envision a “one-stop-shopping” system literally and figuratively. We are trying to develop workflows and technical systems that can truly realize that vision.

Minow: Since you have experience now with the journals, what has been the journal reaction to the policy?

Pearse: Overall, there is confusion about what these policies mean or are trying to do, so there is quite a bit of education to do with the publishers. The “teachable moment” often comes when an author uses the addendum that the university has provided for faculty to send along with publication agreements. Most of the larger publishers of peer-reviewed journals are already aware of the policy, and some have started asking their authors to show proof that they have submitted waivers. We have waiver language for faculty that states that the faculty member has granted Harvard a license with respect to his or her scholarly articles, and that a waiver is requested for a particular article.

In an odd way, it actually facilitates my outreach work with faculty as it brings the issue to the forefront.

There have been some instances where, even after a waiver has been submitted, the publisher agrees in the end to budge a little from its routine policy as a compromise.

Minow: In what way?

Pearse: For example, the publisher may authorize self-archiving of a later version than it normally permits. With some of the bigger publishers, it can be a challenge figuring out the appropriate person with whom to discuss these issues.

Minow: Law reviews are produced by the law schools, and edited by students. Do you get a different reception from law reviews than you do from other journal publishers?

Pearse: Yes. The law school law reviews are generally more supportive of the policy (particularly the ones that make their contents open, or “gratis open access”), but they are not always comfortable with, or do not always understand, the terms of the Harvard license. We are trying to compile a list of law journals that are expressly supportive of the policy to facilitate workflow and educate faculty when they are publishing. At some point, if more law schools adopt open access policies, it would be great to have that information incorporated into submission systems and journal web pages.

Minow: How has it been implementing it in a university environment that has different schools enacting open access (e.g. centralized vs. local practices)?

Pearse: We were only the second school, after the Faculty of Arts and Sciences (FAS), to adopt the open access policy, so it has been interesting to watch the Office for Scholarly Communication (OSC) evolve over time. We now have six schools at Harvard with OA policies. The growth in the number of schools has provided a fabulous opportunity to meet with colleagues working on similar issues and to share thoughts on workflow, experiences with implementing the policies, and so on, especially now that scholarship has become so interdisciplinary. Over time, the OSC has also developed rich external and internal sites where we can share tools to help with the administrative aspects of implementing the policy. It also has open access student “fellows” whom we have occasionally used to help populate the repository. We are also hoping that centralized discussions and negotiations with publishers will be helpful in communicating with publishers and facilitating the deposit of content.

Some of the “advantages” of centralization, however, can also create some of the biggest challenges. For example, we are fortunate to have a central office to run the repository on a technical level (it uses DSpace), but it also means we sometimes have to wait for certain developments to take place, or compromise if we have different ideas about the look and feel of the interface. In general, these issues tend to work themselves out. For example, delays in technical developments that are problematic for us often tend to be important to other schools as well, which can cause them to move up the priority list. The schools (and disciplines) have very different cultures, so it is interesting to see how these local cultural differences sometimes affect how we might approach certain aspects of implementing the policy, like outreach and workflow. It is also interesting to see how the language of the policies themselves is slightly different and has evolved with each new school adopting a policy. (At this point, each school has its own language and its own responsibilities in figuring out how it wants the policy to operate in its own school.) While we can share technical resources and information and harness the synergies that exist, I think we will have to think about ways to create overlays and develop underlying workflows that can be customized to accommodate our own needs.

Minow: Thank you so much for your update!

========================================================================

For part two of Open Access Scholarship, we will be discussing the Durham Statement and what has happened in the two years since its publication with Richard A. Danner, Rufty Research Professor of Law and Senior Associate Dean for Information Services at Duke Law School.

========================================================================

Mary Minow is the Executive Editor of the Stanford Copyright & Fair Use site.

Michelle Pearse is the Research Librarian for Open Access Initiatives and Scholarly Communication, Harvard Law School Library. You can follow her on Twitter at @aabibliographer.


Copyright, and controversies over its enforcement, are by no means limited to the United States. The world’s first copyright legislation was England’s Statute of Anne, enacted in 1710. The Berne Convention for the Protection of Literary and Artistic Works, the first international copyright agreement, was first adopted in 1886.

And while debates over copyright enforcement, length of protection and the extent of exemptions continue in the U.S., similar efforts and arguments are being made in Canada, the UK and Europe. Our video page has excerpts from the ongoing conversation. One highlight is a speech on copyright from Mathias Klang, a researcher and senior lecturer at the University of Göteborg in Sweden. Most of the latest videos are from a July 2010 conference called ORGCon, conducted by the Open Rights Group, a group devoted to advocating digital rights in the UK.

But for you hardcore Lawrence Lessig fans (and I am one, thank you very much), there’s also a new TED talk from him on copyright, fair use and remix culture mashed up with politics. Brief, but humorous and thought-provoking, as one would expect from Prof. Lessig.

   — Eli Edwards, Content Minion


The Center for Internet and Society presents

Judith Finell
Invasion of the Tune Snatchers – Does Copyright Law Inhibit or Enhance Musical Creativity Today?

Thursday, October 21, 2010
Room 280A, Stanford Law School
12:45pm-2:00pm
Lunch will be served.
http://cyberlaw.stanford.edu/node/6538

Music technology has radically changed the way in which music is composed, produced, performed, and obtained. Many artists openly utilize the works of others, often altering the core sonic characteristics of a sampled fragment. These developments pose new challenges to doctrines such as fair use, scenes a faire, and infringement criteria, such as access, transformative use, and prior art. Musicologist and expert witness Judith Finell will discuss these issues, and present musical examples from recent copyright cases.

Judith Finell is a musicologist who specializes in issues involving music as intellectual property. Her arena is the intersection of music, law, and technology. She formed her consulting firm Judith Finell Musicservices Inc. in New York over 20 years ago, to serve copyright and entertainment attorneys, and the music, entertainment, media, technology, and advertising industries. She has testified as an expert witness in many leading copyright cases throughout the country, and is a frequent guest speaker before attorney groups, law schools, and intellectual property organizations.

Her paper on this topic can be found at: http://www.law.stanford.edu/calendar/details/4548/CIS%20Speaker%20Series%20-%20Judith%20Finell%20/#related_media


Rising Into the Public Domain: The Copyright Review Management System (CRMS) at the University of Michigan

Interview with John Wilkin, Associate University Librarian for Library Information Technology and Executive Director, HathiTrust and Principal Investigator for CRMS


Mary Minow: Where does CRMS fit into the scheme of other copyright tools, such as the Determinator?

John Wilkin: The Determinator is a good point of comparison for us. It serves as a resource for helping someone make a determination, and what we wanted to do is actually make determinations. The focus is on materials in our Collections, across the HathiTrust partnership. We are not so concerned about where a book comes from, because we think of [the corpus] as a “collective collection” … materials from across the board.

I think we did have, early on, perhaps a naive sense that we might be able to make those determinations without the materials being in front of us, digitally or in print. We quickly concluded, though, that the only way to do the work was to have those works in hand. And we chose to have them in hand, digitally. And the digital flow of materials drives the prioritization process.

Minow: When you say digitally in hand, it sounds like researchers are allowed to look at the text, the preface, etc.

Wilkin: That’s right. We have a strong authentication and authorization system, and it’s tied into the Michigan CoSign system, but also it uses Shibboleth. So that gives us a lot of tools there. In this case, we use a two factor authentication for all reviewers. They have to authenticate [with a password], and they have to be, essentially, at their desk. They can’t take their identities home and start looking at materials that are still in copyright. So it’s very much justified by the work they’re doing.

Minow: Doesn’t Google make its own determinations of what’s in the Public Domain? Do they come up with different determinations? Is there duplicative work going on?

Wilkin: We’re doing the 1923-1963 work.

Minow: That is, a focus on books published between 1923 and 1963. Books published in the U.S. prior to 1923 are in the Public Domain. The Copyright Renewal Act of 1992 automatically extended the copyright terms of works published in 1964 and later.
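[Editor’s note: the rule of thumb Minow describes above can be sketched as a small decision function. This is an illustrative simplification reflecting the law as of this interview; the function name is hypothetical, and real determinations, as the rest of the interview makes clear, also weigh foreign publication, government-document status, and embedded rights.]

```python
def us_copyright_status(pub_year, renewed=None):
    """First-pass status for a US-published book, under the rules as of this interview.

    Illustrative sketch only, not a legal tool. `renewed` is whether a
    renewal record was found for a 1923-1963 work (None = not yet checked).
    """
    if pub_year < 1923:
        return "public domain"            # pre-1923 US publications
    if pub_year <= 1963:
        if renewed is None:
            return "undetermined"         # renewal records must be searched
        return "in copyright" if renewed else "public domain"
    return "in copyright"                 # 1964 and later: renewal was automatic

# For example: a 1950 book whose copyright was never renewed
# falls in the public domain, while a 1970 book does not.
```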

Wilkin: Right. So far as we know, Google is not doing the 23-63 work. Both Google and HathiTrust do a layer of very automatic determinations. Ours is entirely automatic, based on elements in the MARC record. They have reviewers look at materials to do some [consultation] because occasionally the bibliographic information is not reliable. That’s the point at which we’ll look most similar, with some exceptions.

There are important areas where we deviate. We are opening up U.S. Federal Docs, post 1922. Google is considering that now, but they have been slow to do that. They’re considering what classes of materials they’ll open up. HathiTrust will say that U.S. government docs are, by and large, in the Public Domain.

Then we diverge. For example, we’re going to treat U.S. pre-1923 materials as in the Public Domain, and we’re going to treat users outside the U.S. differently for materials that were published outside the U.S. Does that make sense?

Minow: Help me out here.

Wilkin: For the user in the U.S., or really for anybody in the world, we deem U.S. works pre-1923 as being in the Public Domain. And for the user in the U.S., we also deem non-U.S. works pre-1923 as in the Public Domain. For users outside the U.S., we are fairly conservative with non-U.S. works. I think the date we’re using now is about 1870. It’s a rolling wall, and essentially a best guess: it would be the date for a work published by a young author who then lived a long time. We use statistical probability, and we roll that wall forward every year.
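[Editor’s note: the viewing policy Wilkin describes can be sketched as a small rule table. This is a hypothetical sketch, not HathiTrust code; the function name is invented, and the 1870 rolling-wall constant is taken loosely from his description.]

```python
ROLLING_WALL = 1870  # approximate cutoff for non-US works viewed abroad; rolls forward yearly

def viewable_full_text(pub_year, published_in_us, user_in_us):
    """Whether a scanned volume is deemed public domain for a given viewer.

    Illustrative only: the real system works from per-volume rights
    determinations, not raw publication dates.
    """
    if published_in_us:
        return pub_year < 1923        # US pre-1923 works: open to everyone
    if user_in_us:
        return pub_year < 1923        # non-US works: open to US users pre-1923
    return pub_year < ROLLING_WALL    # conservative wall for users outside the US
```

The asymmetry in the last two branches is the point of Wilkin's answer: the same 1900 foreign imprint is viewable from the U.S. but not from abroad.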

Minow: How do you figure out if the work was published first outside the country?

Wilkin: We primarily use the bib record of the publication. If the place of publication is outside the U.S., we assume that it was [first published there]. Effectively we are conservative unless we get a good look at something and make an individual determination.

We ingested 700,000 volumes in one month, so that gives you a sense of the scale we’re working at. We’re never going to have the resources needed to do individual sorting of the “this one should go here and that one should go there” variety.

Minow: You mentioned that you’re using the Determinator, but that’s only available for Class A books. Are most of your materials Class A books?

Wilkin: They’re all Class A books. The reviewers use the Determinator and other tools; they look at the book and make an assessment. They look to see that there are no embedded rights problems in making those determinations.

Minow: Inserts – photos, stories, poems – you’d almost have to read every page.

Wilkin: Well, we look at acknowledgements, not the entire book. There are going to be some cases where the acknowledgements are not that adequate. We have an advertised takedown policy, and we’ve never been contacted about anything that is an insert.

Minow: It takes my breath away to look at that level.

Wilkin: The insert issue is of particular concern in Congressional materials, such as materials that are inserted into the record for hearings. We work with the assumption that these inserts are part of the public record and that they are provided or reproduced in that context.

Minow: In Section 108(h), the copyright law in effect gives 20 years back to libraries and archives, even on the web, for works not subject to normal commercial exploitation. Here’s a chart I made showing, for example, that libraries and archives may make and distribute copies of works published up through 1934 this year, instead of 1922. The catch is that the works cannot be subject to an undefined “normal commercial exploitation.”
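[Editor’s note: the arithmetic behind that chart can be sketched in a few lines. This assumes the 95-year term applicable to works published in 1923-1977, since Section 108(h) applies during the last 20 years of a published work’s term; the `section_108h_cutoff` helper is hypothetical, not part of any real tool.]

```python
TERM_YEARS = 95   # copyright term for works published 1923-1977
WINDOW = 20       # Section 108(h) covers the last 20 years of that term

def section_108h_cutoff(current_year):
    """Latest publication year whose works have entered their final
    20 years of term, i.e. are eligible for Section 108(h) uses."""
    # a work enters the window (TERM_YEARS - WINDOW) years after publication
    return current_year - (TERM_YEARS - WINDOW)
```

Plugging in 2009 yields 1934, matching the chart Minow describes; each later year moves the cutoff forward by one.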

Wilkin: We’re not taking advantage of that at this point.

Minow: Another thought I had, after reading Melissa Levine’s article, is that many authors of older works retain their digital rights, because when they signed publisher agreements, digital rights were not yet contemplated. Are you taking advantage of that? [Opening Up Content in HathiTrust: Using HathiTrust Permissions Agreements to Make Authors’ Work Available, Research Library Issues, no. 269 (April 2010): Special Issue on Strategies for Opening Up Content]

Wilkin: We’re not. We’re just testing the waters, taking baby steps. We’re only dealing with works where the rights have reverted to the author and when the author or publisher knows they own the rights. As it turns out, we’ve had some fairly large lump permissions. For example, in at least one case where a journal died, the journal publisher gave us permission to open up the full run of the journal. As it turns out, a few organizations have opened up a large number of publications.

Melissa’s article is an early step for us. We haven’t gone out to seek permissions from authors yet. But it’s most definitely something we want to do.

Minow: The University of Michigan is a player in the OCLC pilot project, the WorldCat Copyright Evidence Registry. Does that mean your determinations of copyright for the works you examine then feed into that Registry?

Wilkin: I think that effort is in limbo right now. We did set up a mechanism that we could share our determinations with them. The Registry was set up to allow institutions to identify records that need to be enhanced or annotated with information about URLs and rights, etc. In our distribution mechanism, there’s one record for every volume in the repository at this point.

We think of OCLC as a central switching point for bibliographic info, so it seemed like a natural for them to have a registry of copyright evidence. We were making data available to them, but in fact we have now 6 million volumes, each identified with our either automatic or manual copyright determination, so that’s more than what OCLC would have, I guess, aspired to do.

In the CRMS process, that’s only been tens of thousands of volumes, but someone could start with our 6 million volumes and look for changes.

Minow: But it wouldn’t be open in the sense that someone could put their own data in, right?

Wilkin: Exactly, and the Copyright Evidence Registry was intended to be that.

Minow: Is there anything you’d like to add?

Wilkin: Well, for us, the question is “what next?” The easiest “what next” is expanding to other partners. Anne has been busy: as we laid out in the grant, she is training staff in Indiana, Minnesota and Wisconsin (she just finished Wisconsin), the three pilots, along with the Michigan staff [Anne Karle-Zenith, Copyright Review Project Librarian]. This winter she’ll probably incorporate staff at a California partner.

And as we bring more hands in, it puts more pressure on the training and reliability piece as more people are making determinations.

Minow: Do you see members of the public as becoming able to add notes or comments in the future?

Wilkin: We have a tagging application for bib records. Probably not a day passes when someone doesn’t say, “I think this is in the Public Domain” or ask, “is this in the public domain?” That’s what stimulates someone to look at it. So it is user driven now. We won’t take someone’s assertion as fact, but it provides a good starting point to do investigation.

Minow: Do you have plans to add other materials, besides “Class A” books?

Wilkin: In HathiTrust, we have much more than “Class A,” but the only ones we’re pushing into the workflow right now are “Class A.” So that becomes the question, then: how would we go beyond “Class A”? How could we build a sustainable, cost-effective system? It’s probably going to be something piece by piece, right?

Minow: I’ve heard that the Copyright Office is working on a retrospective conversion of the copyright registration and renewal records for the rest of the material types, beyond “Class A” books. If they make the records available in bulk, as they did with “Class A,” then others can set up or build on databases like Stanford’s “Determinator.”

Wilkin: Did you know that we’ve found that between 55% and 60% of our materials are in the public domain?

Minow: Fantastic!

Wilkin: The numbers you see out there say, for instance, that only 15% are in copyright. Some assertions are pretty wild. There was some early work done by the Copyright Office, but the law was in flux at the time. It’s best to have something statistically sound. I’m guessing that between pre-CRMS and CRMS, we’ve gone through 100,000 titles, and those numbers have held. I think we have another 400,000 titles to deal with in that period. One question we have is: how many titles ARE there in the 23-63 period? There’s just so much indeterminacy because of variation in cataloguing practice and ways of reporting things, and so on.

Minow: Are the other 40% ones that you’ve determined are in copyright or you just can’t figure them out?

Wilkin: I think early on it was about 30% in copyright and 10% UND (undetermined or undeterminable). Anne found that as staff got more experienced, they were getting stuck on complicated problems, and we often found a lower yield of public domain determinations. So Anne encouraged staff to push things to UND rather than strain for finality. The number of UND determinations has gone up, but the numbers in the Public Domain have stayed constant. That’s really a workflow strategy kind of thing.

It’s exciting to get those works opened up. The surprise has come in the titles. Because of the required renewal process, it’s stunning to see what was not renewed. The first time I encountered this was with my 13-year-old daughter, who was doing a book report on code breakers. We found really modern materials by living mathematicians. I thought, “oh, we’re in trouble.” Then, looking further, these were ones where renewal did not take place. It’s interesting to learn the behavioral piece …

But the numbers, the numbers are really very interesting, the 60/40 sort of thing.

Minow: And yet, going forward, this is not going to be the case, because now no renewal is required. An anomaly, really, unless the law changes again in the other direction, which doesn’t seem likely.

Wilkin: That’s something for us to ponder as a society, as a culture, that these works are overwhelmingly not on the market. What’s happening is, without this effort, no one is able to take advantage of the information that’s there, or only in a limited way.

Another surprise is that the non-Michigan, non-Wisconsin institutions of the Committee on Institutional Cooperation (CIC) don’t get back their in-copyright materials, by contract with Google. I think what we ought to say is that they don’t get back those things that are putatively in copyright. With those numbers in mind, think about what we are not able to put online because it’s assumed to be in copyright, when we know that 60% or some large percentage is in the public domain.

Minow: You mean, those institutions are not getting access to the full text of their own books?

Wilkin: They stay at Google, they’re embargoed. That may change with an amended agreement, but for now, Google doesn’t provide them back.

Minow: I thought those were called “library copies.”

Wilkin: It is important to call them “embargoed copies.” Jack Bernard, our Assistant General Counsel, has asked us to use the term “rising into the public domain” instead of “falling into the public domain.”

Minow: That’s a good title for this interview. Thanks so much for talking with us today.