---
title: podcast setup for broke boys whose trash phone cant hack modern apps
date: 2023-01-24
tags:
- python
draft: false
---
I have an old, sad android phone with 2GB of ram which nowadays struggles with anything but the most lightweight apps. As a result I have been 'podcast-player-hopping' without success for the last couple of months, trying to find something which doesn't nuke my phone whenever I use it. In a moment of desperation it occurred to me that a creative solution might be required. The gameplan was this:
- write python script to download podcasts
- set up cron job on my server to run script every couple of hours
- sync podcasts across my devices using the lovely [syncthing](https://syncthing.net/)
- listen to podcasts using vlc which my phone loves
For the python script I used the lovely [feedparser](https://feedparser.readthedocs.io/en/latest/introduction.html) module, which makes talking to rss feeds easy.
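The rest of the script also leans on a few imports and two settings: `pod_dir`, the directory all the podcasts get downloaded into, and `trim_age`, how many days an episode gets to live before it is deleted. Something roughly like this at the top of the script does the job (the paths and values here are just placeholders for my own; note the trailing slash on `pod_dir`, since the script glues paths together with plain string concatenation):
```python
import os
import time

import feedparser
import requests

# where the podcasts live -- needs the trailing slash because paths
# below are built with simple string concatenation
pod_dir = '/home/me/podcasts/'

# delete episodes older than this many days
trim_age = 7
```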
### WHERE THE PODCASTS GO
The first thing I want my script to do is create a subdirectory of my main podcast directory for each individual podcast. After plopping all my feeds in a list like this:
```python
rss_urls = [
    'https://anchor.fm/s/1311c8b8/podcast/rss',
    'https://feeds.acast.com/public/shows/5e7b777ba085cbe7192b0607',
]
```
I wrote a little function that parses each of these feeds, gets its name, and makes a directory for it if one does not already exist.
```python
def create_dirs():
    for url in rss_urls:
        f = feedparser.parse(url)
        feed_name = f['feed']['title']
        current_feeds = os.listdir(pod_dir)
        if feed_name not in current_feeds:
            os.makedirs(pod_dir + feed_name)
```
### DOWNLOADING
With this sorted, I turned to the actual downloading of podcasts. The next function parses each rss feed, filters it for entries from the last week, then grabs a title and a url for the audio file. These are stuck together into a list of lists, with each inner list representing a separate entry.
```python
def get_pods():
    feed_info = []
    for url in rss_urls:
        f = feedparser.parse(url)
        for pod in f.entries:
            # only keep entries published within the last week
            if time.time() - time.mktime(pod.published_parsed) < (86400*7):
                feed_name = f.feed.title
                pod_name = pod.title
                pod_dl = pod.enclosures[0].href
                pod_info = [
                    feed_name,
                    pod_name,
                    pod_dl
                ]
                feed_info.append(pod_info)
    return feed_info
```
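So each entry in `feed_info` ends up as something like `['Some Podcast', 'Episode 12', 'https://example.com/ep12.mp3']` (made-up values, but that's the shape of it).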
This next function looks at all the podcast subdirectories and returns a list of all the podcasts I have already downloaded, which the download step can use to only grab new episodes.
```python
def get_downloads():
    downloads = []
    pods = os.listdir(pod_dir)
    for dir in pods:
        if os.path.isdir(pod_dir + dir):
            for file in os.listdir(pod_dir + dir):
                downloads.append(file)
    return downloads
```
Now for the actual getting of the audio files. Here we use requests to fetch each audio file url and write the content into the relevant directory, skipping anything that's already been downloaded. I also append a .mp3 to the filenames so they play nice with media players.
```python
def download():
    a = get_pods()
    for pod in a:
        b = get_downloads()
        # skip anything we already have
        if pod[1]+'.mp3' not in b:
            try:
                dl = requests.get(pod[2])
            except requests.RequestException:
                print('Download Error')
                continue
            with open(pod_dir + pod[0] + '/' + pod[1] + '.mp3', 'wb') as file:
                file.write(dl.content)
```
### PRUNING
As it stands, the script does downloading great. The only thing missing is some kind of automatic deletion so my phone doesn't get clogged up with old podcasts. This function checks for files which were last modified more than `trim_age` days ago and deletes the offenders.
```python
def trim():
    for dir in os.listdir(pod_dir):
        if os.path.isdir(pod_dir + dir):
            pods = os.listdir(pod_dir + dir)
            for pod in pods:
                st = os.stat(pod_dir + dir + '/' + pod)
                mtime = st.st_mtime
                # delete anything older than trim_age days
                if time.time() - mtime > (86400*trim_age):
                    os.remove(pod_dir + dir + '/' + pod)
```
The last thing is to call the functions:
```python
create_dirs()
download()
trim()
```
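On the server side, a crontab line along the lines of `0 */2 * * * python3 /path/to/pods.py` (script name and path are made up, point it at wherever yours lives) runs the whole thing every two hours.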
Of course this slightly ramshackle approach is certainly not for everyone lol but as it stands it's working quite nicely for me. Lots of love and happy listening :)