Integrating Podcast Producer 2 and Your CMS
Volume Number: 26
Issue Number: 03
Column Tag: Podcast Producer
How to use Podcast Producer 2
and several popular CMS packages
by Michele (Mike) Hjörleifsson
Syndicate, Syndicate, Syndicate
In the previous installment, we looked at and dissected a workflow, its contents, the plist files and how to modify them to your requirements. After reading the article, one may think that Podcast Producer 2 customization and content distribution is intimidating for any purpose other than the standard fare provided in Podcast Composer.
Well, this is far from the truth, and I wanted to set the record straight with some explicit examples of how to integrate Podcast Producer 2 content into three of the arguably most popular open source content management and blogging packages. Next, I'll demonstrate how you can, with a simple Python script, pull content from the Podcast Producer Library and hand it off to whatever process you'd like to run on that content. And last, but certainly not least, I wanted to point out a neat tool, released after the last article, that you can use to add YouTube publishing to your workflow files. Yes, that's right: you can publish directly from Podcast Producer 2 to YouTube, thanks to a gentleman named Marcel Borsten.
To get started, let's explore the Podcast Library provided by Podcast Producer 2. This feature, new in Podcast Producer v2, is a required output in Podcast Composer, and it creates and maintains several RSS and Atom feeds. These feeds are categorized and contain information on all of your published podcasts. As seen in the following screenshots, you can retrieve feeds based on user, workflow, date (historical), and keyword, as well as custom feeds.
Safari presents the feed pages with some user-friendly options, such as expanding or contracting the information shown for each "article" (which in our case is a podcast), sorting and filtering options, and the ability to update the feed, mail a link to it, subscribe to it in Mail, or bookmark it in Safari.
So what is so great about Podcast Producer creating these unexciting feed pages? That's simple: RSS and Atom have become widely used standards for syndicating content into all sorts of applications and websites. Simply pointing a feed reader to the feed will provide a dynamic display of your podcasts in another website, a portal, a blog or any RSS and Atom enabled application, whether web based or desktop based. Let me demonstrate with examples in WordPress, Drupal and Joomla.
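To make the syndication idea concrete, here is a minimal sketch of what any RSS/Atom-enabled application does with one of these feeds: parse the XML and pull out each entry's title and enclosure link. The feed fragment below is a hypothetical Atom snippet for illustration, not actual Podcast Producer output.

```python
import xml.etree.ElementTree as ET

ATOM_NS = '{http://www.w3.org/2005/Atom}'

# A hypothetical Atom feed like the ones a feed reader would fetch
sample_feed = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Weekly Lectures</title>
  <entry>
    <title>Lecture 1 (AppleTV)</title>
    <link rel="enclosure"
          href="http://server.example.com/podcasts/lecture1_appletv.m4v"/>
  </entry>
</feed>"""

def entries(feed_xml):
    # Collect (title, enclosure link) pairs from each entry
    root = ET.fromstring(feed_xml)
    results = []
    for entry in root.findall(ATOM_NS + 'entry'):
        title = entry.find(ATOM_NS + 'title').text
        link = entry.find(ATOM_NS + 'link').get('href')
        results.append((title, link))
    return results

for title, href in entries(sample_feed):
    print(title + ': ' + href)
```

A feed widget in a CMS does essentially this on a schedule, then renders the results as links.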
Using WordPress, logged in as an administrator, you can simply click on the Appearance category, select Widgets, and then drag the RSS widget to the sidebar. This presents a popup where you enter the feed URL, a title, and a few display options. Just copy the link for the Podcast Producer feed you want to display and paste it into the feed URL field. Next, change the URI scheme from feed:// to http://, change atom_feeds to rss_feeds in the URL itself, give the widget a title, and click Close. Open your WordPress site in a separate window and you will see the feed in your sidebar. Voila! It's that easy.
If you have podcasts in your feed they will almost instantly populate the widget and provide a link for anyone who would like to view or download them. Another neat feature is that by default, publishing puts the format of the podcast into parentheses after the entry. This way, your users can choose which format they would like to view or download.
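That URL edit is easy to fumble by hand, so here is a sketch of it as a helper function. Only the feed://-to-http:// and atom_feeds-to-rss_feeds substitutions come from the steps above; the server name and path in the example are hypothetical.

```python
# Hypothetical helper: turn the link copied from the Podcast Producer
# feed page into the form the WordPress RSS widget expects.
def wordpress_feed_url(safari_link):
    # Safari copies a feed:// URI; WordPress wants plain http://
    url = safari_link.replace('feed://', 'http://', 1)
    # and the RSS variant of the feed rather than the Atom one
    return url.replace('atom_feeds', 'rss_feeds', 1)

print(wordpress_feed_url(
    'feed://pcast.example.com/podcastproducer/atom_feeds/abc123'))
```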
Drupal, a popular open source blog and community building platform, uses a feature called blocks to add functionality. Using the Feed Aggregator "block" in Drupal accomplishes the same task with ease. Configuration was again quick and easy:
1. Log on as a site administrator
2. Follow the Administer->Site Building->Blocks->Modules links
3. Select the Aggregator module check box
4. Save your settings.
This enables the RSS/Atom reader functionality in the Drupal software.
5. Click the "administration by module page" link in the middle of the page.
6. Click the Feed Aggregator in the Aggregator section, and then click the Add Feed link at the top of the center section.
7. Enter a title for your podcast section and then paste the feed URL into the URL box. Remember to change atom_feed to rss_feed in the URL, and then
8. Click the Save button.
Now that your feed is configured you will need to let Drupal know to put the feed on your public or private page(s) and where to present it.
9. Click on the Administer link in the left sidebar
10. Click on the Configure Permissions link for the Aggregator section. A permissions table will be presented for various modules.
11. Locate the aggregator selection and choose the anonymous or authenticated user option.
12. Click the Save Permissions button at the bottom of the page.
Last but not least, we need to place the feed somewhere in the site's presentation structure.
13. Click on the Site Building link in the left sidebar
14. Click on the Blocks link which presents the Blocks configuration page.
15. Find the title of your podcast feed and drag it from Disabled to the display area on the site which you would like to see the feed. For my example I chose right-sidebar.
16. Click the Save Blocks button and you are done.
This may seem to involve a lot of steps, but Drupal administrators should know this path well.
Joomla is an open source content management system used for corporate, community and blog websites. Joomla uses modules to enable functionality and provides a module called mod_feed that allows you to integrate feeds from external sites into its content management system. Configuration was again quick and easy:
1. Log into the administrative portion of the site.
2. Click Extensions, and then select Modules.
3. Click on the Feed Reader and then click Edit.
4. Add the feed URL the same way as in WordPress and Drupal, and then select the desired options.
To position the feed properly on your site, Joomla uses a methodology called location tags.
5. Select the appropriate one for your site (shown on the right in the example below).
Each CMS product comes with its own installation instructions, so check the documentation for the CMS you're working with.
In general, I installed Joomla, Drupal and WordPress using the latest tar files available from their sites, extracted each into a separate directory in /Library/WebServer/Documents, and opened a MySQL session from the terminal to create a separate database for each. Finally, I opened a browser and ran the default web-based install for each package by simply pointing the browser at its directory. The entire process, from start to syndication, took less than an hour for the whole group of sites, and I had never used WordPress or Drupal before. That should give you an indication of how easy it is to integrate Podcast Producer into your own blog or website.
Other Publishing Engines
There are other CMS and publishing engines that also can use an RSS feed to aggregate information.
Microsoft SharePoint Server also provides a "Web Part" for RSS feeds that lets you accomplish the same thing shown with the open source CMSs, but within a SharePoint site. The process is similar to the three previous installations: log on as an administrator, add the Web Part, configure it with the feed URL, publish it to the desired location on your SharePoint site(s), and bingo, it's available.
Last but certainly not least, Apple's Wiki/Blog can be added as a target output directly in Podcast Composer to accept Podcast posts automatically so there is no need to integrate Wiki/Blog with the RSS/Atom feeds. You just need to create the Wiki/Blog site and enable podcast publishing.
As you've seen, the feeds provided by Podcast Producer 2 provide for easy, fast integration with any web-based application that supports RSS or Atom feeds. This is a great advance for Podcast Producer administrators who want to leverage the podcasting infrastructure into existing web applications already deployed in their organization. If you have multiple feeds from multiple workflows or even multiple Podcast Producer sites, you can collect them all into single or multiple blogs or websites with this simple technique.
Now that you have a firm handle on the feeds, I will let you in on a little secret. Modifying workflows with the XML technique to add custom functionality can be a bear, though it has the advantage of leveraging Xgrid to distribute the processing load. But if you either know how to submit your own scripts to Xgrid or, the more likely case, have a single process to run on the videos posted to your Podcast Producer server that won't benefit from Xgrid, there is an easier way.
I have written a basic Python script example that will read a configuration file containing your feed(s) and then execute whatever command line application you'd like on the resultant video file(s). The benefit of using an external program that reads the feeds and processes the finalized videos may not be obvious at first so let me explain a use case.
Let's say you have content creators who want to be able to change their bumper videos (the videos prepended and appended to the original content), Quartz Composer introduction, watermarks, etc. If they open a workflow whose XML was modified for custom processing, it will either fail to open or require you to reapply the custom XML bits to the new XML files created by Podcast Capture. Also, if Apple changes the functionality of, or more likely adds features to, Podcast Composer, you end up in the same predicament. Using an external script to process the videos after production lets the content creators play with the workflow's content processing as much as they like, and your process will still execute on the output of their workflow.
This process requires three to five files, depending on whether you'd like the process to be automated by launchd or not. The three common files are the Python script, the pdata.lst file, and the workflow_html file. The Python script (code listed below) does the processing. The pdata.lst file stores a list of identifiers for podcasts that have been processed, to avoid repeated processing of the same content. Finally, the workflow_html file contains a simple .conf style list of the feed titles, the URL for each feed, and the location to store the resultant file(s). The two optional files are a plist file for /Library/LaunchDaemons specifying when to execute the script, and a bash script to properly launch the Python script (the latter is probably unnecessary in most cases). The script assumes you have two video output files, but you can see clearly where to modify or add more in the custom definitions section. You must run sudo easy_install feedparser to get the feedparser library onto your system.
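For the launchd piece, a job definition in /Library/LaunchDaemons can be as simple as the following sketch. The label, script path, and 900-second interval here are placeholders to adapt to your own installation.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.podcastfetch</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/python</string>
        <string>/Library/Application Support/MyStuff/podcastfetch.py</string>
    </array>
    <key>StartInterval</key>
    <integer>900</integer>
</dict>
</plist>
```

Load it with launchctl load, and launchd will run the script at the interval you choose.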
To start, create a directory in /Library/Application Support called something you will remember and place an empty pdata.lst file in there. Next, create a file called workflow_html in the same location using the following format:
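As a hedged illustration, a workflow_html entry might look like this; the feed name, URL, and path below are placeholders for your own values.

```ini
[Weekly Lectures]
url = http://pcast.example.com/podcastproducer/rss_feeds/abc123
location = /Library/WebServer/Documents/podcasts/
```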
The information between the brackets is the name of the feed and is just a placeholder; the URL is the feed URL, and the location is the directory where the resultant files will be placed. Note that the trailing slash in the location entry is critical. In the script itself, notice the custom definitions area, which will be specific to your installation, and the last two lines, which simply execute a shell command, passing the filename. You can pass any of the podcast variables collected in the script to your shell command on those lines by appending them as parameters. The Python script is listed below:
import urllib
import feedparser
from subprocess import call
from ConfigParser import ConfigParser

# CUSTOM DEFINITIONS GO HERE
podcastdatabase = '/Library/Application Support/MyStuff/pdata.lst'
listtoprocess = '/Library/Application Support/MyStuff/workflow_html'
orderAppleTV = 1
orderiDevice = 0

# Checks to see if a podcast was already processed
def checkpodcast(pcastid, database):
    pdatabase = open(database, 'r')
    podlist = pdatabase.read()
    pdatabase.close()
    return podlist.find(pcastid) >= 0

# Adds a podcast urn to pdata.lst so we don't process it twice
def addtopdata(pcastid, database):
    pdatabase = open(database, 'a')
    pdatabase.write(pcastid + '\n')
    pdatabase.close()

##### ACTUAL CODE ######
## Open the listtoprocess file to get the list of workflows
config = ConfigParser()
config.read(listtoprocess)
sections = config.sections()

# process the podcast feeds
for item in sections:
    # Parse the workflow entry for feed and directory info
    workflowfeed = config.get(item, 'url')
    # grab the destination location for the output
    podcastDestination = config.get(item, 'location')
    theFeed = feedparser.parse(workflowfeed)
    # If there are no feed postings, move on to the next workflow
    if not theFeed.entries:
        continue
    latestentry = theFeed.entries[0]
    # check to see if we already processed this one; if so, skip it
    if checkpodcast(latestentry.id, podcastdatabase):
        continue
    # Store the id for adding to the pdata list
    entryid = latestentry.id
    # Get the header data for the entry itself
    podcastTitle = latestentry.title
    podcastAuthor = latestentry.author
    podcastDescription = latestentry.description
    # The feed puts the date and time in a single string;
    # we only need the date, so split at the T in the date
    # string and just grab the first piece
    podcastDate = latestentry.date.split('T')[0]
    # initialize the list of filenames
    podcastFname = []
    # Gather the enclosure data from the entry into the list.
    # The last link is a thumbnail, so use length - 1.
    # The iPod/iPhone version should come first and the AppleTV
    # version second when the workflow was created with Podcast
    # Composer; if not, change the orderiDevice and orderAppleTV
    # settings in the custom definitions
    for i in range(len(latestentry.links) - 1):
        podcastFname.append(latestentry.links[i].href.split('/')[-1])
    aTVhref = latestentry.links[orderAppleTV].href
    iPhonehref = latestentry.links[orderiDevice].href
    # download the iPhone version and write the file
    mysock = urllib.urlopen(iPhonehref)
    fileToSave = mysock.read()
    oFileName = podcastDestination + podcastFname[orderiDevice]
    oFile = open(oFileName, 'wb')
    oFile.write(fileToSave)
    oFile.close()
    # download the AppleTV version and write the file
    mysock2 = urllib.urlopen(aTVhref)
    fileToSave2 = mysock2.read()
    oFileName2 = podcastDestination + podcastFname[orderAppleTV]
    oFile2 = open(oFileName2, 'wb')
    oFile2.write(fileToSave2)
    oFile2.close()
    # Update the podcast database
    addtopdata(entryid, podcastdatabase)
    # Put your shell commands (cmd) and arguments (arg) here,
    # passing the downloaded filenames as parameters
    call(['cmd', 'arg1', oFileName])
    call(['cmd', 'arg1', oFileName2])
Granted, this script is a little rough. I should have iterated over the enclosures rather than fixing it at two files in and two downloads out, but you get the general idea of what I was trying to accomplish in under a hundred lines of actual code.
I can't express enough how far Podcast Producer 2 has come compared to the first version, or to a manual process. It is a great tool. For continually updated how-to's and information about the Podcast Producer community, there is a great non-commercial resource at http://podcastproducer.org. Browse the how-to section and you will find a script and detailed instructions for adding YouTube output to your workflow XML files as an additional publishing destination. I have tested this and it works great; thanks to Marcel for posting it. And if you come up with a neat tip or tool, please participate and share it with the community. Well, that is all on Podcast Producer for now. If you'd like to see more, please email the editor, and I'll be happy to oblige. Till then... Happy tech'ng.
Michele (Mike) Hjörleifsson has been programming Apple computers since the Apple II+, and implementing network and remote access security technologies since the early '90s. He has worked with the nation's largest corporations and government institutions. Mike is currently a certified Apple trainer and independent consultant. Feel free to contact him at email@example.com