Publishing News: Data is proving to be the backbone of emerging publishing models

Data-driven publishing, ReDigi loses in court, and the Digital Public Library of America is ready to launch.

Data’s growing role in the digital publishing ecosystem

Data is becoming a driving force in the era of digital content. From subscription strategies to targeted marketing and advertising to content curation and methods of consumption, data increasingly forms the backbone of new publishing models.

Mashable’s Lauren Indvik took an in-depth look this week at the role data is playing in the success of the Financial Times’ digital-first campaign. Last year, the newspaper notably reported that its digital subscribers had surpassed its print subscribers, and today, Indvik reports, subscriptions account for more than 50% of the Financial Times’ revenue, compared to 39% from advertising.

Financial Times CEO John Ridding told Indvik the digital subscriber success stems from collecting reader data at the paywall to map reader behavior leading up to a subscription; requiring readers to register to access up to eight free articles per month lets the newspaper gather user-specific data to better target potential subscribers. The data-driven approach is also helping to target advertising and marketing, and to give advertisers highly detailed reports on a particular campaign’s performance. “We can prove in real-time quite effectively what advertising is working and put that data in front of advertisers,” Ridding told Indvik.
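The FT’s implementation isn’t public, but the metering mechanic Ridding describes can be sketched in a few lines: log every article view for a registered reader, serve the full article while the reader is under the monthly allowance, and show a subscription prompt after that. Here is a minimal Python sketch; the class and method names are hypothetical, and only the eight-free-articles threshold comes from the report.

```python
from collections import defaultdict

FREE_ARTICLES_PER_MONTH = 8  # the reported FT metered threshold


class MeteredPaywall:
    """Minimal sketch of a metered paywall: registered readers get a
    monthly allowance of free articles, and every view is logged so the
    publisher can model behavior leading up to a subscription."""

    def __init__(self, free_limit=FREE_ARTICLES_PER_MONTH):
        self.free_limit = free_limit
        self.views = defaultdict(list)  # user_id -> [(month, article_id), ...]

    def request_article(self, user_id, article_id, month, is_subscriber=False):
        # Log the view regardless of outcome; this event stream is the raw
        # material for the reader-behavior mapping described above.
        self.views[user_id].append((month, article_id))
        if is_subscriber:
            return "full_article"
        used = sum(1 for m, _ in self.views[user_id] if m == month)
        return "full_article" if used <= self.free_limit else "subscribe_prompt"


wall = MeteredPaywall()
for n in range(10):
    print(wall.request_article("reader-42", f"story-{n}", month="2013-04"))
# -> eight "full_article" responses, then "subscribe_prompt"
```

The point of registration, in this model, is the `user_id`: once views are tied to an identity, the same log that enforces the meter doubles as the data set for targeting likely subscribers.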

The newspaper also tracks user data to help inform future editorial products and coverage to keep readers engaged. You can read Indvik’s full report at Mashable.

In a guest post at PaidContent, Roger Wood, founder of the (Art+Data) Institute, and Evelyn Robbrecht, a content design fellow at the Institute, looked at how data could be used to determine what content readers will consume, and even to change that content and its display based on a reader’s behavior. Wood and Robbrecht argue that we’re “on the cusp of an era of incredible evolution: one where the design of information changes in real time in response to data about the readers consuming it.” What’s more, the content itself will become a data-gathering tool, what they dub “Intelligent Content.” They argue:

“In a way, books and magazines of the future will act as sort of human compilers, translating your reading desires into pure machine language that tells the publisher how to present the material for faster and more pleasurable absorption. … The content itself will be designed to gather information about the reader, mash it up with data about others interested in related subjects, authors, or publishers, then decide what content to present to you next. This is what we mean by Intelligent Content.”

Wood and Robbrecht also predict the big data gathered from readers will be used to help publishers produce successful content on demand, and they argue that algorithms will replace content editors and curators. You can read their full piece at PaidContent.
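Wood and Robbrecht don’t specify an algorithm, but the loop they describe (gather engagement signals from a reader, mash them up with data from similar readers, decide what to present next) is recognizable as a simple collaborative recommender. A minimal Python sketch, with hypothetical data, of how such a “decide what content to present to you next” step could work:

```python
import math


def cosine(a, b):
    """Cosine similarity between two sparse engagement vectors
    (dicts mapping article_id -> engagement score, e.g. minutes read)."""
    shared = set(a) & set(b)
    num = sum(a[k] * b[k] for k in shared)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0


def recommend_next(reader, others, top_k=3):
    """Score unread articles by how much similar readers engaged with them."""
    scores = {}
    for other in others:
        sim = cosine(reader, other)
        for article, engagement in other.items():
            if article not in reader:
                scores[article] = scores.get(article, 0.0) + sim * engagement
    return sorted(scores, key=scores.get, reverse=True)[:top_k]


# Hypothetical engagement logs: article_id -> minutes read
me = {"a1": 5.0, "a2": 3.0}
crowd = [{"a1": 4.0, "a3": 6.0}, {"a2": 2.0, "a4": 5.0}]
print(recommend_next(me, crowd))  # ['a3', 'a4']
```

An editor-replacing system of the kind they predict would wrap this loop with the adaptive display side, adjusting layout and presentation as signals arrive, but the core is the same: engagement in, next-content decision out.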

Court rules in favor of Capitol Records

A much-anticipated ruling (PDF) in the Capitol Records v. ReDigi case was handed down this week. Jonathan Stempel and Alistair Barr report at Reuters that U.S. District Judge Richard Sullivan in Manhattan ruled in favor of Capitol Records, holding that ReDigi’s business platform, which allows users to buy and sell “used” digital music tracks purchased from iTunes, infringes on Capitol Records’ music copyrights. Stempel and Barr note the gist of the judge’s decision:

“To sell music bought from iTunes on ReDigi, a user ‘must produce a new phonorecord on the ReDigi server,’ Sullivan wrote. ‘Because it is therefore impossible for the user to sell her ‘particular’ phonorecord on ReDigi, the first sale statute cannot provide a defense.'”

New York Law School professor James Grimmelmann reports at Publishers Weekly that Judge Sullivan used a Star Trek analogy to clarify: “When Kirk stands on the transporter on the Enterprise, is the person who materializes on the planet still Kirk?” ReDigi argued the answer is yes, but Judge Sullivan disagreed “because the new Kirk-copy is a different ‘material object’ than the old one, made up of different atoms, stored on a different hard drive.”

Grimmelmann says this is where we stray from real-world consequences on which copyright law should be based: “Whether Copy A is the ‘same’ as Copy B, and if so in what senses, is not a question that anyone ought to care about.” He suggests we might stay more on track if we address the questions of digital resale with fair use rather than “first sale.” You can read his full piece at Publishers Weekly.

Reporting on the case at Wired, David Kravets highlights the potential wide-reaching effects of the case’s outcome: “Judge Sullivan’s ruling, if it withstands appellate scrutiny, likely means used digital sales venues must first acquire the permission of rights holders.”

A national library as a platform prepares for launch

The Digital Public Library of America (DPLA) announced its launch this week. According to the press release, the library will launch April 18 with a goal “to make the holdings of America’s research libraries, archives, and museums available to all Americans — and eventually to everyone in the world — online and free of charge.”

Reporting on the announcement at The Verge, Tim Carmody notes that the DPLA’s project differs from Google Books’ efforts. “[T]he DPLA doesn’t hoover up institutions’ documents to be stored on its own servers,” he writes. “Its primary goal is to support coordinated scanning efforts by each of its partner institutions, and to act as a central search engine and metadata repository.” Carmody also highlights the openness of the project and its aim to be a “library as a platform”:

“The DPLA has been equipped with a rich API for developers, artists, and others to engage, adapt, and revisualize art and text. ‘The DPLA’s terms, if you look at them, are extremely permissive,’ [DPLA Executive Director Dan Cohen] adds. ‘We are really fighting for a maximally usable and transferrable knowledge base. Everything, where possible, will be CC-zero licensed. If you’re Google, you can come right in and take everything. It’s just like Wikipedia. You can grab this stuff and use it as you want.’ Text mining, mapping, art projects — it’s all open for business.”
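Cohen’s “come right in and take everything” is fairly literal: the DPLA’s item search is a plain HTTP endpoint that returns JSON. A minimal sketch, assuming the v2 items endpoint and an API key as documented at launch; verify the URL and field names against the current developer docs before relying on them:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_DPLA_API_KEY"  # placeholder; keys are issued by the DPLA


def search_dpla(query, page_size=5):
    """Search the DPLA items endpoint and return the parsed JSON response."""
    url = (f"http://api.dp.la/v2/items?q={urllib.parse.quote(query)}"
           f"&page_size={page_size}&api_key={API_KEY}")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


results = search_dpla("Walt Whitman")
for doc in results.get("docs", []):
    # sourceResource carries the descriptive metadata for each item
    print(doc.get("sourceResource", {}).get("title"))
```

Because the metadata is CC0-licensed, a text-mining, mapping, or art project needs little more than this handful of lines to get started.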

Carmody also explores the DPLA’s competitive goal to catch up with platforms like Europeana, and says the hard part — coming into existence — is done. “The only things the brand-new institution will have to navigate are getting money, finding talented developers, fostering public awareness, and balancing the interests of a wide range of stakeholders, both public and private, commercial and noncommercial, with the collected cultural heritage of a nation in the balance,” he writes. “But while those obstacles are formidable, they’re small compared to the inertial forces that could have kept the DPLA from ever getting off the ground.” You can read Carmody’s full piece at The Verge.

In related news, the New York Public Library announced the first release of its Digital Collections API this week. The announcement on the New York Public Library blog states that the API will allow “software developers both in and outside of the library to write programs that search [the library’s] digital collections, process the descriptions of each object, and find links to the relevant pages on the NYPL Digital Gallery.” You can read more about the API, along with thoughts on the project from Doug Reside, digital curator for the performing arts at the Library for the Performing Arts, on the library’s blog.
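Unlike the DPLA’s key-in-URL scheme, the NYPL API authenticates with a token in a request header. A minimal sketch along the same lines; the endpoint, auth scheme, and response structure here are assumptions to check against the library’s documentation:

```python
import json
import urllib.parse
import urllib.request

NYPL_TOKEN = "YOUR_NYPL_API_TOKEN"  # placeholder; tokens are issued on signup


def search_nypl_collections(query):
    """Search NYPL Digital Collections and return the parsed JSON response."""
    url = ("http://api.repo.nypl.org/api/v1/items/search?q="
           + urllib.parse.quote(query))
    req = urllib.request.Request(
        url, headers={"Authorization": f'Token token="{NYPL_TOKEN}"'})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


data = search_nypl_collections("Broadway musicals")
print(json.dumps(data, indent=2)[:500])  # inspect the response structure first
```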

Tip us off

News tips and suggestions are always welcome, so please send them along.
