Mirror of https://github.com/samuelclay/NewsBlur.git, synced 2025-11-01 09:09:51 +00:00
Merge branch 'master' into sictiru

* master: (410 commits)
  Don't delete redis keys because they take time to rebuild and subs can be counted incorrectly during that time.
  Adding make collectstatic to make nb because even though it's slower, it fixes the #1 issue people have when deploying on their own machines. Fixes #1717.
  Time to take Nicholas and Mark off the about page.
  Updating David's photo on the About page. Fixes #1716.
  Fixing issue of Unread only not seeing every story. Due to the removal of UserSubscription.get_stories in 30127d7c72, we were no longer caching story hashes for single feeds.
  User count should include beyond yearly.
  Deferring execution of expensive statistics.
  Caching newsblur_users.py statistics.
  Adding cutoff from trimming stories for memory profiling.
  Switching timestamp to better calculator that doesn't spit out -1's. Matches c469608 and fixes #1671.
  Merging SDIFFSTORE and ZINTERSTORE into a single ZDIFFSTORE, thanks to redis 6.2.0. Requires new docker image.
  Bumping highlights max from 1024 characters to 16384 characters.
  Maxing out mark read dates.
  Adding mark read dates to dialog for archive users that stretches into the past year.
  Revert "Merging SDIFFSTORE and ZINTERSTORE into a single ZDIFFSTORE, thanks to redis 6.2.0. Requires new docker image."
  Merging SDIFFSTORE and ZINTERSTORE into a single ZDIFFSTORE, thanks to redis 6.2.0. Requires new docker image.
  Improving redis performance by reading the config.
  Don't remove unread stories for U: unread stories list because users are actively paging through.
  Re-exposing forgot password link.
  Trying 1 hour for story unread list.
  ...
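One of the changes listed above replaces an SDIFFSTORE plus ZINTERSTORE pair with a single ZDIFFSTORE, a command added in redis 6.2.0. The snippet below is only an illustrative sketch of that pattern using redis-py; the key names are hypothetical and do not come from NewsBlur's code.

    import redis

    r = redis.Redis()  # assumes redis-server >= 6.2 and redis-py >= 4.0

    # Hypothetical keys, for illustration only:
    #   zF:42   - sorted set of story hashes in feed 42, scored by date
    #   F:42    - plain set of the same story hashes
    #   RS:1:42 - plain set of story hashes user 1 has read in feed 42
    #   zU:1:42 - destination: unread story hashes, still scored by date

    # Old shape: set difference first, then re-score against the sorted set.
    r.sdiffstore("tmp:1:42", ["F:42", "RS:1:42"])
    r.zinterstore("zU:1:42", {"tmp:1:42": 0, "zF:42": 1})  # keep only the date scores
    r.delete("tmp:1:42")

    # New shape on redis >= 6.2: one command, no temporary key
    # (these STORE commands treat plain sets as sorted sets with score 1).
    r.zdiffstore("zU:1:42", ["zF:42", "RS:1:42"])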
Commit afea0a0a49
580 changed files with 27122 additions and 21477 deletions
.gitignore (1 change, vendored)

@@ -41,7 +41,6 @@ media/css/circular
config/settings
config/secrets
templates/maintenance_on.html
vendor/mms-agent/settings.py
apps/social/spam.py
venv*
backup
.vscode/settings.json (4 changes, vendored)

@@ -7,6 +7,8 @@
        "--ignore=E501,W293,W503,W504,E302,E722,E226,E221,E402,E401"
    ],
    "python.pythonPath": "~/.virtualenvs/newsblur3/bin/python",
    "editor.bracketPairColorization.enabled": true,
    "editor.guides.bracketPairs":"active",
    "git.ignoreLimitWarning": true,
    "search.exclude": {
        "clients": true,
@@ -33,4 +35,6 @@
    "files.associations": {
        "*.yml": "ansible"
    },
    "nrf-connect.toolchain.path": "${nrf-connect.toolchain:1.9.1}",
    "C_Cpp.default.configurationProvider": "nrf-connect",
}
@@ -125,4 +125,18 @@ You got the downtime message either through email or SMS. This is the order of o
crack are automatically fixed after 24 hours, but if many feeds fall through due to a bad
deploy or electrical failure, you'll want to accelerate that check by just draining the
tasked feeds pool, adding those feeds back into the queue. This command is idempotent.

## Python 3

### Switching to a new redis server

When the new redis server is connected to the primary redis server:

    # db-redis-story2 = moving to new server
    # db-redis-story = old server about to be shut down
    make celery_stop
    make maintenance_on
    apd -l db-redis-story2 -t replicaofnoone
    aps -l db-redis-story,db-redis-story2 -t consul
    make maintenance_off
    make task
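Before cutting over, it is worth confirming that the new server has caught up with the old one. The following is a minimal sketch using redis-py; the hostnames and port are assumptions, so substitute whatever db-redis-story and db-redis-story2 actually resolve to in your environment:

    import redis

    # Hypothetical connection details; adjust to your inventory.
    old_primary = redis.Redis(host="db-redis-story", port=6379)
    new_replica = redis.Redis(host="db-redis-story2", port=6379)

    old_info = old_primary.info("replication")
    new_info = new_replica.info("replication")

    # The replica should report a healthy link and an offset at (or very near)
    # the primary's before it is promoted with REPLICAOF NO ONE
    # (the replicaofnoone tag used above).
    print("link status:", new_info.get("master_link_status"))
    print("offset lag:", old_info["master_repl_offset"] - new_info.get("slave_repl_offset", 0))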
Makefile (67 changes)

@@ -5,44 +5,44 @@ newsblur := $(shell docker ps -qf "name=newsblur_web")
.PHONY: node

#creates newsblur, but does not rebuild images or create keys
start:
	- RUNWITHMAKEBUILD=True CURRENT_UID=${CURRENT_UID} CURRENT_GID=${CURRENT_GID} docker compose up -d
nb: pull bounce migrate bootstrap collectstatic

metrics:
	- RUNWITHMAKEBUILD=True CURRENT_UID=${CURRENT_UID} CURRENT_GID=${CURRENT_GID} docker compose -f docker-compose.yml -f docker-compose.metrics.yml up -d

metrics-ps:
	- RUNWITHMAKEBUILD=True docker compose -f docker-compose.yml -f docker-compose.metrics.yml ps

rebuild:
	- RUNWITHMAKEBUILD=True CURRENT_UID=${CURRENT_UID} CURRENT_GID=${CURRENT_GID} docker compose down
	- RUNWITHMAKEBUILD=True CURRENT_UID=${CURRENT_UID} CURRENT_GID=${CURRENT_GID} docker compose up -d

collectstatic:
	- rm -fr static
	- docker pull newsblur/newsblur_deploy
	- docker run --rm -v $(shell pwd):/srv/newsblur newsblur/newsblur_deploy

#creates newsblur, builds new images, and creates/refreshes SSL keys
nb: pull
bounce:
	- RUNWITHMAKEBUILD=True CURRENT_UID=${CURRENT_UID} CURRENT_GID=${CURRENT_GID} docker compose down
	- [[ -d config/certificates ]] && echo "keys exist" || make keys
	- RUNWITHMAKEBUILD=True CURRENT_UID=${CURRENT_UID} CURRENT_GID=${CURRENT_GID} docker compose up -d --build --remove-orphans
	- docker exec newsblur_web ./manage.py migrate

bootstrap:
	- docker exec newsblur_web ./manage.py loaddata config/fixtures/bootstrap.json

nbup:
	- RUNWITHMAKEBUILD=True CURRENT_UID=${CURRENT_UID} CURRENT_GID=${CURRENT_GID} docker compose up -d --build --remove-orphans
coffee:
	- coffee -c -w **/*.coffee

migrations:
	- docker exec -it newsblur_web ./manage.py makemigrations
makemigration: migrations
datamigration:
	- docker exec -it newsblur_web ./manage.py makemigrations --empty $(app)
migration: migrations
migrate:
	- docker exec -it newsblur_web ./manage.py migrate
shell:
	- RUNWITHMAKEBUILD=True CURRENT_UID=${CURRENT_UID} CURRENT_GID=${CURRENT_GID} docker-compose exec newsblur_web ./manage.py shell_plus
	- docker exec -it newsblur_web ./manage.py shell_plus
bash:
	- RUNWITHMAKEBUILD=True CURRENT_UID=${CURRENT_UID} CURRENT_GID=${CURRENT_GID} docker-compose exec newsblur_web bash
	- docker exec -it newsblur_web bash
# allows user to exec into newsblur_web and use pdb.
debug:
	- RUNWITHMAKEBUILD=True CURRENT_UID=${CURRENT_UID} CURRENT_GID=${CURRENT_GID} docker attach ${newsblur}
	- docker attach ${newsblur}
log:
	- RUNWITHMAKEBUILD=True docker compose logs -f --tail 20 newsblur_web newsblur_node
logweb: log

@@ -54,7 +54,14 @@ logmongo:
alllogs:
	- RUNWITHMAKEBUILD=True docker compose logs -f --tail 20
logall: alllogs
# brings down containers
mongo:
	- docker exec -it db_mongo mongo --port 29019
redis:
	- docker exec -it db_redis redis-cli -p 6579
postgres:
	- docker exec -it db_postgres psql -U newsblur
stripe:
	- stripe listen --forward-to localhost/zebra/webhooks/v2/
down:
	- RUNWITHMAKEBUILD=True docker compose -f docker-compose.yml -f docker-compose.metrics.yml down
nbdown: down

@@ -73,10 +80,20 @@ keys:
	- openssl dhparam -out config/certificates/dhparam-2048.pem 2048
	- openssl req -x509 -nodes -new -sha256 -days 1024 -newkey rsa:2048 -keyout config/certificates/RootCA.key -out config/certificates/RootCA.pem -subj "/C=US/CN=Example-Root-CA"
	- openssl x509 -outform pem -in config/certificates/RootCA.pem -out config/certificates/RootCA.crt
	- openssl req -new -nodes -newkey rsa:2048 -keyout config/certificates/localhost.key -out config/certificates/localhost.csr -subj "/C=US/ST=YourState/L=YourCity/O=Example-Certificates/CN=localhost.local"
	- openssl req -new -nodes -newkey rsa:2048 -keyout config/certificates/localhost.key -out config/certificates/localhost.csr -subj "/C=US/ST=YourState/L=YourCity/O=Example-Certificates/CN=localhost"
	- openssl x509 -req -sha256 -days 1024 -in config/certificates/localhost.csr -CA config/certificates/RootCA.pem -CAkey config/certificates/RootCA.key -CAcreateserial -out config/certificates/localhost.crt
	- cat config/certificates/localhost.crt config/certificates/localhost.key > config/certificates/localhost.pem
	- /usr/bin/security add-trusted-cert -d -r trustAsRoot -k /Library/Keychains/System.keychain ./config/certificates/RootCA.crt
	- sudo /usr/bin/security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ./config/certificates/RootCA.crt

# Doesn't work yet
mkcert:
	- mkdir config/mkcert
	- docker run -v $(shell pwd)/config/mkcert:/root/.local/share/mkcert brunopadz/mkcert-docker:latest \
		/bin/sh -c "mkcert -install && \
		mkcert -cert-file /root/.local/share/mkcert/mkcert.pem \
		-key-file /root/.local/share/mkcert/mkcert.key localhost"
	- cat config/mkcert/rootCA.pem config/mkcert/rootCA-key.pem > config/certificates/localhost.pem
	- sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ./config/mkcert/rootCA.pem

# Digital Ocean / Terraform
list:

@@ -143,6 +160,7 @@ node: deploy_node
deploy_task:
	- ansible-playbook ansible/deploy.yml -l task
task: deploy_task
celery: deploy_task
deploy_www:
	- ansible-playbook ansible/deploy.yml -l haproxy
www: deploy_www

@@ -157,6 +175,8 @@ deploy_staging:
staging: deploy_staging
celery_stop:
	- ansible-playbook ansible/deploy.yml -l task --tags stop
sentry:
	- ansible-playbook ansible/setup.yml -l sentry -t sentry
maintenance_on:
	- ansible-playbook ansible/deploy.yml -l web --tags maintenance_on
maintenance_off:

@@ -169,7 +189,14 @@ oldfirewall:
	- ANSIBLE_CONFIG=/srv/newsblur/ansible.old.cfg ansible-playbook ansible/all.yml -l db --tags firewall
repairmongo:
	- sudo docker run -v "/srv/newsblur/docker/volumes/db_mongo:/data/db" mongo:4.0 mongod --repair --dbpath /data/db

mongodump:
	- docker exec -it db_mongo mongodump --port 29019 -d newsblur -o /data/mongodump
	- cp -fr docker/volumes/db_mongo/mongodump docker/volumes/mongodump
	# - docker exec -it db_mongo cp -fr /data/db/mongodump /data/mongodump
	# - docker exec -it db_mongo rm -fr /data/db/
mongorestore:
	- cp -fr docker/volumes/mongodump docker/volumes/db_mongo/
	- docker exec -it db_mongo mongorestore --port 29019 -d newsblur /data/db/mongodump/newsblur

# performance tests
perf-cli:
@@ -5,14 +5,10 @@
  when: "'haproxy' in group_names"
- import_playbook: playbooks/deploy_node.yml
  when: "'node' in group_names"
- import_playbook: playbooks/deploy_monitor.yml
  when: "'postgres' in group_names"
- import_playbook: playbooks/deploy_monitor.yml
  when: "'mongo' in group_names"
- import_playbook: playbooks/deploy_monitor.yml
- import_playbook: playbooks/deploy_redis.yml
  when: "'redis' in group_names"
- import_playbook: playbooks/deploy_monitor.yml
  when: "'elasticsearch' in group_names"
  when: '"postgres" in group_names or "mongo" in group_names or "redis" in group_names or "elasticsearch" in group_names'
- import_playbook: playbooks/deploy_task.yml
  when: "'task' in group_names"
- import_playbook: playbooks/deploy_staging.yml
@@ -29,6 +29,7 @@ groups:
  search: inventory_hostname.startswith('db-elasticsearch')
  elasticsearch: inventory_hostname.startswith('db-elasticsearch')
  redis: inventory_hostname.startswith('db-redis')
  redis_story: inventory_hostname.startswith('db-redis-story')
  postgres: inventory_hostname.startswith('db-postgres')
  mongo: inventory_hostname.startswith('db-mongo') and not inventory_hostname.startswith('db-mongo-analytics')
  mongo_analytics: inventory_hostname.startswith('db-mongo-analytics')
@@ -119,7 +119,7 @@
- name: Reload gunicorn due to no git upstream changes
  become: yes
  block:
    - name: Find gunicorn master process
    - name: Find gunicorn process
      shell: "ps -C gunicorn fch -o pid | head -n 1"
      register: psaux
    - name: Reload gunicorn
ansible/playbooks/deploy_redis.yml (21 lines, new file)

@@ -0,0 +1,21 @@
---
- name: DEPLOY -> redis
  hosts: redis
  gather_facts: false
  vars_files:
    - ../env_vars/base.yml

  tasks:
    - name: Turning off secondary for redis by deleting redis_replica.conf
      copy:
        dest: /srv/newsblur/docker/redis/redis_replica.conf
        content: ""
      tags:
        - never
        - replicaofnoone

    - name: Setting Redis REPLICAOF NO ONE
      shell: docker exec redis redis-cli REPLICAOF NO ONE
      tags:
        - never
        - replicaofnoone
@@ -6,7 +6,7 @@
    - ../env_vars/base.yml
    - roles/letsencrypt/defaults/main.yml
  handlers:
    - include: roles/haproxy/handlers/main.yml
    - import_tasks: roles/haproxy/handlers/main.yml

  tasks:
    - name: Template haproxy.cfg file
@@ -1,7 +1,7 @@
---
- name: SETUP -> app containers
  hosts: web
  serial: "3"
  # serial: "3"
  vars_files:
    - ../env_vars/base.yml
  vars:
@@ -20,4 +20,4 @@
    - {role: 'mongo-exporter', tags: ['mongo-exporter', 'metrics']}
    - {role: 'monitor', tags: 'monitor'}
    - {role: 'flask_metrics', tags: ['flask-metrics', 'metrics']}
    - {role: 'benchmark', tags: 'benchmark'}
    # - {role: 'benchmark', tags: 'benchmark'}
@@ -9,7 +9,6 @@

  roles:
    - {role: 'base', tags: 'base'}
    - {role: 'ufw', tags: 'ufw'}
    - {role: 'docker', tags: 'docker'}
    - {role: 'repo', tags: ['repo', 'pull']}
    - {role: 'dnsmasq', tags: 'dnsmasq'}
@@ -17,5 +16,6 @@
    - {role: 'consul-client', tags: 'consul'}
    - {role: 'node-exporter', tags: ['node-exporter', 'metrics']}
    - {role: 'postgres', tags: 'postgres'}
    - {role: 'ufw', tags: 'ufw'}
    - {role: 'monitor', tags: 'monitor'}
    - {role: 'backups', tags: 'backups'}
@@ -18,3 +18,4 @@
    - {role: 'redis', tags: 'redis'}
    - {role: 'flask_metrics', tags: ['flask-metrics', 'metrics', 'flask_metrics']}
    - {role: 'monitor', tags: 'monitor'}
    - {role: 'benchmark', tags: 'benchmark'}
|
|
@ -1,8 +1,12 @@
|
|||
---
|
||||
- name: Ensure backups directory
|
||||
become: yes
|
||||
file:
|
||||
path: /srv/newsblur/backups
|
||||
path: /srv/newsblur/docker/volumes/postgres/backups/
|
||||
state: directory
|
||||
owner: "{{ ansible_effective_user_id|int }}"
|
||||
group: "{{ ansible_effective_group_id|int }}"
|
||||
tags: restore_postgres
|
||||
|
||||
- name: Ensure pip installed
|
||||
become: yes
|
||||
|
|
@ -17,7 +21,7 @@
|
|||
- name: Set backup vars
|
||||
set_fact:
|
||||
redis_story_filename: backup_redis_story_2021-04-13-04-00.rdb.gz
|
||||
postgres_filename: backup_postgresql_2022-02-03-04-00.sql.gz
|
||||
postgres_filename: backup_postgresql_2022-05-03-04-00.sql.sql
|
||||
mongo_filename: backup_mongo_2021-03-15-04-00.tgz
|
||||
redis_filename: backup_redis_2021-03-15-04-00.rdb.gz
|
||||
tags: never, restore_postgres, restore_mongo, restore_redis, restore_redis_story
|
||||
|
|
@ -25,30 +29,34 @@
|
|||
- name: Download archives
|
||||
amazon.aws.aws_s3:
|
||||
bucket: "newsblur-backups"
|
||||
object: "{{ item.dir }}{{ item.file }}"
|
||||
dest: "/srv/newsblur/backups/{{ item.file }}"
|
||||
object: "{{ item.s3_dir }}{{ item.file }}"
|
||||
dest: "{{ item.backup_dir }}{{ item.file }}"
|
||||
mode: get
|
||||
overwrite: different
|
||||
aws_access_key: "{{ lookup('ini', 'aws_access_key_id section=default file=/srv/secrets-newsblur/keys/aws.s3.token') }}"
|
||||
aws_secret_key: "{{ lookup('ini', 'aws_secret_access_key section=default file=/srv/secrets-newsblur/keys/aws.s3.token') }}"
|
||||
with_items:
|
||||
# - dir: /redis_story/
|
||||
# - s3_dir: /redis_story/
|
||||
# backup_dir: /srv/newsblur/backups
|
||||
# file: "{{ redis_story_filename }}"
|
||||
- dir: /postgres/
|
||||
- s3_dir: /backup_db_postgres2/
|
||||
backup_dir: /srv/newsblur/docker/volumes/postgres/backups/
|
||||
file: "{{ postgres_filename }}"
|
||||
# - dir: /mongo/
|
||||
# - s3_dir: /mongo/
|
||||
# backup_dir: /srv/newsblur/backups
|
||||
# file: "{{ mongo_filename }}"
|
||||
# - dir: /backup_redis/
|
||||
# - s3_dir: /backup_redis/
|
||||
# backup_dir: /srv/newsblur/backups
|
||||
# file: "{{ redis_filename }}"
|
||||
tags: never, restore_postgres, restore_mongo, restore_redis, restore_redis_story
|
||||
|
||||
|
||||
- name: Restore postgres
|
||||
block:
|
||||
- name: pg_restore
|
||||
become: yes
|
||||
command: |
|
||||
docker exec -i postgres bash -c
|
||||
"pg_restore -U newsblur --role=newsblur --dbname=newsblur /var/lib/postgresql/backup/{{ postgres_filename }}"
|
||||
"pg_restore -U newsblur --role=newsblur --dbname=newsblur /var/lib/postgresql/backups/{{ postgres_filename }}"
|
||||
tags: never, restore_postgres
|
||||
|
||||
- name: Restore mongo
|
||||
|
|
@ -76,3 +84,43 @@
|
|||
command: "mv -f /srv/newsblur/backups/{{ redis_story_filename }} /srv/newsblur/docker/volumes/redis/dump.rdb"
|
||||
ignore_errors: yes
|
||||
tags: never, restore_redis_story
|
||||
|
||||
- name: Start postgres basebackup on secondary
|
||||
block:
|
||||
- name: Stop existing postgres
|
||||
become: yes
|
||||
command:
|
||||
docker stop postgres
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Move old data dir
|
||||
become: yes
|
||||
command:
|
||||
mv -f /srv/newsblur/docker/volumes/postgres/data /srv/newsblur/docker/volumes/postgres/data.prebasebackup
|
||||
ignore_errors: yes
|
||||
|
||||
- name: pg_basebackup
|
||||
become: yes
|
||||
command:
|
||||
docker run --rm --name=pg_basebackup --network=host -e POSTGRES_PASSWORD=newsblur -v /srv/newsblur/docker/volumes/postgres/data:/var/lib/postgresql/data postgres:13 pg_basebackup -h db-postgres.service.nyc1.consul -p 5432 -U newsblur -D /var/lib/postgresql/data -Fp -R -Xs -P -c fast
|
||||
|
||||
- name: start postgresql
|
||||
become: yes
|
||||
command:
|
||||
docker start postgres
|
||||
# when: (inventory_hostname | regex_replace('[0-9]+', '')) in ['db-postgres-secondary']
|
||||
tags:
|
||||
- never
|
||||
- pg_basebackup
|
||||
|
||||
- name: Promote secondary postgres to primary
|
||||
block:
|
||||
- name: pg_ctl promote
|
||||
become: yes
|
||||
command:
|
||||
docker exec -it postgres su - postgres -c "/usr/lib/postgresql/13/bin/pg_ctl -D /var/lib/postgresql/data promote"
|
||||
# when: (inventory_hostname | regex_replace('[0-9]+', '')) in ['db-postgres-secondary']
|
||||
tags:
|
||||
- never
|
||||
- pg_promote
|
||||
|
||||
|
|
|
|||
|
|
@ -48,14 +48,6 @@
|
|||
group: "{{ ansible_effective_group_id|int }}"
|
||||
recurse: yes
|
||||
|
||||
- name: Copy /etc/hosts from old installation (remove when upgraded)
|
||||
become: yes
|
||||
copy:
|
||||
src: /srv/secrets-newsblur/configs/hosts
|
||||
dest: /etc/hosts
|
||||
tags: hosts
|
||||
notify: reload dnsmasq
|
||||
|
||||
- name: "Add inventory_hostname to /etc/hosts"
|
||||
become: yes
|
||||
lineinfile:
|
||||
|
|
|
|||
|
|
@ -20,33 +20,70 @@
|
|||
package: sysbench
|
||||
state: latest
|
||||
|
||||
- name: Run sysbench CPU
|
||||
shell:
|
||||
# chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench cpu --cpu-max-prime=20000 run
|
||||
register: cpu
|
||||
- name: Run sysbench on native fs
|
||||
block:
|
||||
- name: Run sysbench CPU
|
||||
shell:
|
||||
# chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench cpu --cpu-max-prime=20000 run
|
||||
register: cpu
|
||||
|
||||
- name: Benchmark cpu results
|
||||
debug: msg="{{ cpu.stdout.split('\n') }}"
|
||||
- name: Benchmark cpu results
|
||||
debug: msg="{{ cpu.stdout }}"
|
||||
|
||||
- name: Prepare sysbench disk i/o
|
||||
become: yes
|
||||
shell:
|
||||
# chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench fileio --file-total-size=50G prepare
|
||||
|
||||
- name: Run sysbench disk i/o
|
||||
become: yes
|
||||
shell:
|
||||
# chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench fileio --file-total-size=50G --file-test-mode=rndrw --time=300 --max-requests=0 run
|
||||
register: io
|
||||
|
||||
- name: Cleanup sysbench
|
||||
become: yes
|
||||
shell:
|
||||
# chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench fileio --file-total-size=150G cleanup
|
||||
- name: Prepare sysbench disk i/o
|
||||
become: yes
|
||||
shell:
|
||||
# chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench fileio --file-total-size=5G prepare
|
||||
|
||||
- name: Run sysbench disk i/o
|
||||
become: yes
|
||||
shell:
|
||||
# chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench fileio --file-total-size=5G --file-test-mode=rndrw --time=30 --max-requests=0 run
|
||||
register: io
|
||||
|
||||
- name: Cleanup sysbench
|
||||
become: yes
|
||||
shell:
|
||||
# chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench fileio --file-total-size=5G cleanup
|
||||
|
||||
- name: Benchmark io results
|
||||
debug: msg="{{ io.stdout }}"
|
||||
when: (inventory_hostname | regex_replace('[0-9]+', '')) not in ['db-mongo-secondary', 'db-mongo-analytics']
|
||||
|
||||
- name: Benchmark io results
|
||||
debug: msg="{{ io.stdout.split('\n') }}"
|
||||
- name: Run sysbench on mounted fs
|
||||
block:
|
||||
- name: Run sysbench CPU (on mounted volume)
|
||||
shell:
|
||||
chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench cpu --cpu-max-prime=20000 run
|
||||
register: cpu
|
||||
|
||||
- name: Benchmark cpu results
|
||||
debug: msg="{{ cpu.stdout }}"
|
||||
|
||||
- name: Prepare sysbench disk i/o (on mounted volume)
|
||||
become: yes
|
||||
shell:
|
||||
chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench fileio --file-total-size=50G prepare
|
||||
|
||||
- name: Run sysbench disk i/o (on mounted volume)
|
||||
become: yes
|
||||
shell:
|
||||
chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench fileio --file-total-size=50G --file-test-mode=rndrw --time=300 --max-requests=0 run
|
||||
register: io
|
||||
|
||||
- name: Cleanup sysbench (on mounted volume)
|
||||
become: yes
|
||||
shell:
|
||||
chdir: "/mnt/{{ inventory_hostname|regex_replace('db-', '')|regex_replace('-', '') }}"
|
||||
cmd: sysbench fileio --file-total-size=150G cleanup
|
||||
|
||||
- name: Benchmark io results (on mounted volume)
|
||||
debug: msg="{{ io.stdout }}"
|
||||
when: (inventory_hostname | regex_replace('[0-9]+', '')) in ['db-mongo-secondary', 'db-mongo-analytics']
|
||||
|
|
|
|||
|
|
@ -109,6 +109,17 @@
|
|||
state: touch
|
||||
mode: 0666
|
||||
|
||||
- name: Add spam.py for task-work
|
||||
become: yes
|
||||
copy:
|
||||
owner: "{{ ansible_effective_user_id|int }}"
|
||||
group: "{{ ansible_effective_group_id|int }}"
|
||||
src: /srv/secrets-newsblur/spam/spam.py
|
||||
dest: /srv/newsblur/apps/social/spam.py
|
||||
when: "'task-work' in inventory_hostname"
|
||||
tags:
|
||||
- spam
|
||||
|
||||
- name: Add sanity checkers cronjob for feeds fetched
|
||||
become: yes
|
||||
copy:
|
||||
|
|
|
|||
|
|
@ -2,11 +2,11 @@
|
|||
# tasks file for docker-ce-ansible-role
|
||||
|
||||
- name: Install docker-ce (RedHat)
|
||||
include: install-EL.yml
|
||||
include_tasks: install-EL.yml
|
||||
when: ansible_os_family == 'RedHat'
|
||||
|
||||
- name: Install docker-ce (Ubuntu)
|
||||
include: install-Ubuntu.yml
|
||||
include_tasks: install-Ubuntu.yml
|
||||
when: ansible_distribution == 'Ubuntu'
|
||||
|
||||
- name: Enable Docker CE service on startup
|
||||
|
|
|
|||
|
|
@ -19,7 +19,7 @@
|
|||
docker_container:
|
||||
pull: true
|
||||
name: grafana
|
||||
image: grafana/grafana:8.2.6
|
||||
image: grafana/grafana:9.0.2
|
||||
restart_policy: unless-stopped
|
||||
hostname: "{{ inventory_hostname }}"
|
||||
user: root
|
||||
|
|
|
|||
|
|
@ -1,3 +1,3 @@
|
|||
---
|
||||
- include: certbot.yml
|
||||
- include: certbot-dns.yml
|
||||
- include_tasks: certbot.yml
|
||||
- include_tasks: certbot-dns.yml
|
||||
|
|
|
|||
|
|
@ -3,7 +3,9 @@
|
|||
become: yes
|
||||
file:
|
||||
state: directory
|
||||
mode: 0777
|
||||
mode: 0755
|
||||
owner: "{{ ansible_effective_user_id|int }}"
|
||||
group: "{{ ansible_effective_group_id|int }}"
|
||||
path: /var/log/mongodb
|
||||
|
||||
- name: Block for mongo volume
|
||||
|
|
@ -23,6 +25,8 @@
|
|||
file:
|
||||
path: "/mnt/{{ inventory_hostname | regex_replace('db-|-', '') }}"
|
||||
state: directory
|
||||
owner: "{{ ansible_effective_user_id|int }}"
|
||||
group: "{{ ansible_effective_group_id|int }}"
|
||||
|
||||
- name: Mount volume read-write
|
||||
become: yes
|
||||
|
|
@ -32,15 +36,6 @@
|
|||
fstype: xfs
|
||||
opts: defaults,discard
|
||||
state: mounted
|
||||
|
||||
- name: Set permissions on volume
|
||||
become: yes
|
||||
file:
|
||||
path: "/mnt/{{ inventory_hostname | regex_replace('db-|-', '') }}"
|
||||
state: directory
|
||||
owner: 999
|
||||
group: 999
|
||||
recurse: yes
|
||||
|
||||
when: (inventory_hostname | regex_replace('[0-9]+', '')) in ['db-mongo-secondary', 'db-mongo-analytics']
|
||||
|
||||
|
|
@ -49,31 +44,44 @@
|
|||
copy:
|
||||
content: "{{ mongodb_keyfile }}"
|
||||
dest: /srv/newsblur/config/mongodb_keyfile.key
|
||||
owner: 999
|
||||
group: 999
|
||||
owner: "{{ ansible_effective_user_id|int }}"
|
||||
group: "{{ ansible_effective_group_id|int }}"
|
||||
mode: 0400
|
||||
tags:
|
||||
- keyfile
|
||||
|
||||
- name: Set permissions on mongo volume
|
||||
become: yes
|
||||
file:
|
||||
path: "/mnt/{{ inventory_hostname | regex_replace('db-|-', '') }}"
|
||||
state: directory
|
||||
owner: "{{ ansible_effective_user_id|int }}"
|
||||
group: "{{ ansible_effective_group_id|int }}"
|
||||
recurse: yes
|
||||
|
||||
- name: Make backup directory
|
||||
become: yes
|
||||
file:
|
||||
path: "/mnt/{{ inventory_hostname | regex_replace('db-|-', '') }}/backup/"
|
||||
state: directory
|
||||
mode: 0777
|
||||
owner: "{{ ansible_effective_user_id|int }}"
|
||||
group: "{{ ansible_effective_group_id|int }}"
|
||||
mode: 0755
|
||||
|
||||
- name: Create symlink to mounted volume for backups to live
|
||||
file:
|
||||
state: link
|
||||
src: "/mnt/{{ inventory_hostname | regex_replace('db-|-', '') }}/backup"
|
||||
path: /srv/newsblur/backup
|
||||
owner: "{{ ansible_effective_user_id|int }}"
|
||||
group: "{{ ansible_effective_group_id|int }}"
|
||||
force: yes
|
||||
|
||||
- name: Start db-mongo docker container
|
||||
become: yes
|
||||
docker_container:
|
||||
name: mongo
|
||||
image: mongo:3.6
|
||||
image: mongo:4.0
|
||||
state: started
|
||||
container_default_behavior: no_defaults
|
||||
hostname: "{{ inventory_hostname }}"
|
||||
|
|
@ -88,6 +96,7 @@
|
|||
# ports:
|
||||
# - "27017:27017"
|
||||
command: --config /etc/mongod.conf
|
||||
user: 1000:1001
|
||||
volumes:
|
||||
- /mnt/{{ inventory_hostname | regex_replace('db-|-', '') }}:/data/db
|
||||
- /srv/newsblur/ansible/roles/mongo/templates/mongo.conf:/etc/mongod.conf
|
||||
|
|
@ -100,7 +109,7 @@
|
|||
become: yes
|
||||
docker_container:
|
||||
name: mongo
|
||||
image: mongo:3.6
|
||||
image: mongo:4.0
|
||||
state: started
|
||||
container_default_behavior: no_defaults
|
||||
hostname: "{{ inventory_hostname }}"
|
||||
|
|
@ -115,7 +124,7 @@
|
|||
ports:
|
||||
- "27017:27017"
|
||||
command: --config /etc/mongod.conf
|
||||
user: 999:999
|
||||
user: 1000:1001
|
||||
volumes:
|
||||
- /mnt/{{ inventory_hostname | regex_replace('db-|-', '') }}:/data/db
|
||||
- /srv/newsblur/ansible/roles/mongo/templates/mongo.analytics.conf:/etc/mongod.conf
|
||||
|
|
@ -204,15 +213,26 @@
|
|||
dest: /srv/newsblur/newsblur_web/local_settings.py
|
||||
register: app_changed
|
||||
|
||||
- name: Add mongo backup log
|
||||
become: yes
|
||||
file:
|
||||
path: /var/log/mongo_backup.log
|
||||
state: touch
|
||||
mode: 0755
|
||||
owner: "{{ ansible_effective_user_id|int }}"
|
||||
group: "{{ ansible_effective_group_id|int }}"
|
||||
when: '"db-mongo-secondary1" in inventory_hostname'
|
||||
|
||||
- name: Add mongo backup
|
||||
cron:
|
||||
name: mongo backup
|
||||
minute: "0"
|
||||
hour: "4"
|
||||
job: /srv/newsblur/docker/mongo/backup_mongo.sh
|
||||
job: /srv/newsblur/docker/mongo/backup_mongo.sh >> /var/log/mongo_backup.log 2>&1
|
||||
when: '"db-mongo-secondary1" in inventory_hostname'
|
||||
tags:
|
||||
- mongo-backup
|
||||
- cron
|
||||
|
||||
# - name: Add mongo starred_stories+stories backup
|
||||
# cron:
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
{
|
||||
"service": {
|
||||
"name": "db-mongo-staging",
|
||||
"name": "db-mongo",
|
||||
"id": "{{ inventory_hostname }}",
|
||||
"tags": [
|
||||
"db"
|
||||
|
|
|
|||
|
|
@ -11,4 +11,5 @@
|
|||
- name: restart node
|
||||
become: yes
|
||||
command: docker restart node
|
||||
ignore_errors: yes
|
||||
listen: restart node
|
||||
|
|
|
|||
|
|
@ -20,6 +20,17 @@
|
|||
mode: 0600
|
||||
line: 'SERVER_NAME = "{{ inventory_hostname }}"'
|
||||
|
||||
- name: Copy imageproxy secrets
|
||||
copy:
|
||||
src: /srv/secrets-newsblur/settings/imageproxy.key
|
||||
dest: /srv/imageproxy.key
|
||||
register: app_changed
|
||||
notify: restart node
|
||||
with_items:
|
||||
- node-images
|
||||
- staging
|
||||
when: item in inventory_hostname
|
||||
|
||||
- name: Get the volume name
|
||||
shell: ls /dev/disk/by-id/ | grep -v part
|
||||
register: volume_name_raw
|
||||
|
|
@ -105,9 +116,13 @@
|
|||
- "{{ item.ports }}"
|
||||
env:
|
||||
NODE_ENV: "production"
|
||||
IMAGEPROXY_CACHE: "memory:200:4h"
|
||||
IMAGEPROXY_SIGNATUREKEY: "@/srv/imageproxy.key"
|
||||
IMAGEPROXY_VERBOSE: "1"
|
||||
restart_policy: unless-stopped
|
||||
volumes:
|
||||
- /srv/newsblur/node:/srv/node
|
||||
- /srv/imageproxy.key:/srv/imageproxy.key
|
||||
with_items:
|
||||
- container_name: imageproxy
|
||||
image: ghcr.io/willnorris/imageproxy
|
||||
|
|
|
|||
|
|
@ -13,7 +13,7 @@
|
|||
"checks": [{
|
||||
"id": "{{inventory_hostname}}-ping",
|
||||
{% if item.target_host == "node-images" %}
|
||||
"http": "http://{{ ansible_ssh_host }}:{{ item.port }}/sc,sN1megONJiGNy-CCvqzVPTv-TWRhgSKhFlf61XAYESl4=/http:/samuelclay.com/static/images/2019%20-%20Cuba.jpg",
|
||||
"http": "http://{{ ansible_ssh_host }}:{{ item.port }}/sc,seLJDaKBog3LLEMDe8cjBefMhnVSibO4RA5boZhWcVZ0=/https://samuelclay.com/static/images/2019%20-%20Cuba.jpg",
|
||||
{% elif item.target_host == "node-favicons" %}
|
||||
"http": "http://{{ ansible_ssh_host }}:{{ item.port }}/rss_feeds/icon/1",
|
||||
{% elif item.target_host == "node-text" %}
|
||||
|
|
|
|||
|
|
@ -10,5 +10,5 @@
|
|||
|
||||
- name: reload postgres config
|
||||
become: yes
|
||||
command: docker exec postgres pg_ctl reload
|
||||
command: docker exec -u postgres postgres pg_ctl reload
|
||||
listen: reload postgres
|
||||
|
|
|
|||
|
|
@ -7,22 +7,20 @@
|
|||
notify: reload postgres
|
||||
register: updated_config
|
||||
|
||||
- name: Ensure postgres archive directory
|
||||
- name: Create Postgres docker volumes with correct permissions
|
||||
become: yes
|
||||
file:
|
||||
path: /srv/newsblur/docker/volumes/postgres/archive
|
||||
path: "{{ item }}"
|
||||
state: directory
|
||||
mode: 0777
|
||||
|
||||
- name: Ensure postgres backup directory
|
||||
become: yes
|
||||
file:
|
||||
path: /srv/newsblur/backups
|
||||
state: directory
|
||||
mode: 0777
|
||||
|
||||
recurse: yes
|
||||
owner: "{{ ansible_effective_user_id|int }}"
|
||||
group: "{{ ansible_effective_group_id|int }}"
|
||||
with_items:
|
||||
- /srv/newsblur/docker/volumes/postgres/archive
|
||||
- /srv/newsblur/docker/volumes/postgres/backups
|
||||
- /srv/newsblur/docker/volumes/postgres/data
|
||||
|
||||
- name: Start postgres docker containers
|
||||
become: yes
|
||||
docker_container:
|
||||
name: postgres
|
||||
image: postgres:13
|
||||
|
|
@ -34,7 +32,6 @@
|
|||
POSTGRES_PASSWORD: "{{ postgres_password }}"
|
||||
hostname: "{{ inventory_hostname }}"
|
||||
networks_cli_compatible: yes
|
||||
# network_mode: host
|
||||
network_mode: default
|
||||
networks:
|
||||
- name: newsblurnet
|
||||
|
|
@ -42,16 +39,27 @@
|
|||
- postgres
|
||||
ports:
|
||||
- 5432:5432
|
||||
user: 1000:1001
|
||||
volumes:
|
||||
- /srv/newsblur/docker/volumes/postgres:/var/lib/postgresql
|
||||
- /srv/newsblur/docker/volumes/postgres/data:/var/lib/postgresql/data
|
||||
- /srv/newsblur/docker/volumes/postgres/archive:/var/lib/postgresql/archive
|
||||
- /srv/newsblur/docker/volumes/postgres/backups:/var/lib/postgresql/backups
|
||||
- /srv/newsblur/docker/postgres/postgres.conf:/etc/postgresql/postgresql.conf
|
||||
- /srv/newsblur/docker/postgres/postgres_hba-13.conf:/etc/postgresql/pg_hba.conf
|
||||
- /srv/newsblur/backups/:/var/lib/postgresql/backup/
|
||||
- /srv/newsblur/docker/postgres/postgres_ident-13.conf:/etc/postgresql/pg_ident.conf
|
||||
restart_policy: unless-stopped
|
||||
when: (inventory_hostname | regex_replace('[0-9]+', '')) in ['db-postgres-primary', 'db-postgres']
|
||||
|
||||
- name: Change ownership in postgres docker container
|
||||
become: yes
|
||||
command: >
|
||||
docker exec postgres chown -fR postgres.postgres /var/lib/postgresql
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Ensure newsblur role in postgres
|
||||
become: yes
|
||||
shell: >
|
||||
sleep 5;
|
||||
sleep 15;
|
||||
docker exec postgres createuser -s newsblur -U postgres;
|
||||
docker exec postgres createdb newsblur -U newsblur;
|
||||
register: ensure_role
|
||||
|
|
@ -77,19 +85,20 @@
|
|||
register: app_changed
|
||||
|
||||
- name: Add sanity checkers cronjob for disk usage
|
||||
become: yes
|
||||
cron:
|
||||
name: disk_usage_sanity_checker
|
||||
user: root
|
||||
cron_file: /etc/cron.hourly/disk_usage_sanity_checker
|
||||
minute: "0"
|
||||
job: >-
|
||||
docker pull newsblur/newsblur_python3:latest;
|
||||
docker run --rm -it
|
||||
OUTPUT=$(eval sudo df / | head -n 2 | tail -1);
|
||||
-v /srv/newsblur:/srv/newsblur
|
||||
--network=newsblurnet
|
||||
--hostname {{ ansible_hostname }}
|
||||
newsblur/newsblur_python3 /srv/newsblur/utils/monitor_disk_usage.py $OUTPUT
|
||||
OUTPUT=$(df / | head -n 2 | tail -1) docker run --rm -it -v /srv/newsblur:/srv/newsblur --network=newsblurnet --hostname {{ ansible_hostname }} newsblur/newsblur_python3 /srv/newsblur/utils/monitor_disk_usage.py $OUTPUT
|
||||
tags: cron
|
||||
|
||||
- name: Add postgresql archive cleaner cronjob
|
||||
cron:
|
||||
name: postgres_archive_cleaner
|
||||
minute: "0"
|
||||
job: >-
|
||||
sudo find /srv/newsblur/docker/volumes/postgres/archive -type f -mmin +180 -delete
|
||||
tags: cron
|
||||
|
||||
- name: Add postgres backup log
|
||||
become: yes
|
||||
|
|
@ -105,5 +114,6 @@
|
|||
name: postgres backup
|
||||
minute: "0"
|
||||
hour: "4"
|
||||
job: /srv/newsblur/docker/postgres/backup_postgres.sh 1> /var/log/postgres_backup.log 2>&1
|
||||
job: /srv/newsblur/docker/postgres/backup_postgres.sh >> /var/log/postgres_backup.log 2>&1
|
||||
tags: cron
|
||||
|
||||
|
|
|
|||
|
|
@ -1,6 +1,10 @@
|
|||
{
|
||||
"service": {
|
||||
"name": "db-postgres-staging",
|
||||
{% if inventory_hostname.startswith('db-postgres3') %}
|
||||
"name": "db-postgres",
|
||||
{% else %}
|
||||
"name": "db-postgres-secondary",
|
||||
{% endif %}
|
||||
"tags": [
|
||||
"db"
|
||||
],
|
||||
|
|
|
|||
|
|
@ -2,7 +2,7 @@
|
|||
become: yes
|
||||
docker_container:
|
||||
name: "{{item.redis_target}}-exporter"
|
||||
image: oliver006/redis_exporter
|
||||
image: oliver006/redis_exporter:latest
|
||||
restart_policy: unless-stopped
|
||||
container_default_behavior: no_defaults
|
||||
env:
|
||||
|
|
|
|||
|
|
@ -1,4 +1,22 @@
|
|||
---
|
||||
- name: Install sysfsutils for disabling transparent huge pages
|
||||
become: yes
|
||||
package:
|
||||
name: sysfsutils
|
||||
state: latest
|
||||
|
||||
- name: Disable transparent huge pages for redis performance - persistent change
|
||||
become: yes
|
||||
lineinfile:
|
||||
path: /etc/sysfs.conf
|
||||
create: true
|
||||
regexp: '^kernel\/mm\/transparent\_hugepage\/enabled'
|
||||
line: "kernel/mm/transparent_hugepage/enabled = never"
|
||||
|
||||
- name: Disable transparent huge pages for redis performance - live change
|
||||
become: yes
|
||||
shell: echo never {{ ">" }} /sys/kernel/mm/transparent_hugepage/enabled
|
||||
|
||||
- name: Add a vm.overcommit_memory setting at the end of the sysctl.conf
|
||||
become: yes
|
||||
sysctl: name=vm.overcommit_memory value=1 state=present reload=yes
|
||||
|
|
@ -10,25 +28,19 @@
|
|||
notify: restart redis
|
||||
register: updated_config
|
||||
|
||||
- name: Turning off secondary for redis by deleting redis_replica.conf
|
||||
copy:
|
||||
dest: /srv/newsblur/docker/redis/redis_replica.conf
|
||||
content: ""
|
||||
tags:
|
||||
# - never
|
||||
- replicaofnoone
|
||||
- name: Create Redis docker volume with correct permissions
|
||||
file:
|
||||
path: /srv/newsblur/docker/volumes/redis
|
||||
state: directory
|
||||
recurse: yes
|
||||
owner: "{{ ansible_effective_user_id|int }}"
|
||||
group: "{{ ansible_effective_group_id|int }}"
|
||||
|
||||
- name: Setting Redis REPLICAOF NO ONE
|
||||
shell: docker exec redis redis-cli REPLICAOF NO ONE
|
||||
tags:
|
||||
# - never
|
||||
- replicaofnoone
|
||||
|
||||
- name: Start redis docker containers
|
||||
become: yes
|
||||
docker_container:
|
||||
name: redis
|
||||
image: redis:6.2.6
|
||||
image: redis:6.2.7
|
||||
state: started
|
||||
command: /usr/local/etc/redis/redis_server.conf
|
||||
container_default_behavior: no_defaults
|
||||
|
|
|
|||
|
|
@ -1,6 +1,10 @@
|
|||
{
|
||||
"service": {
|
||||
"name": "{{ inventory_hostname|regex_replace('\d+', '') }}",
|
||||
{% if inventory_hostname in ["db-redis-user", "db-redis-story1", "db-redis-session", "db-redis-pubsub"] %}
|
||||
"name": "{{ inventory_hostname|regex_replace('\d+', '') }}",
|
||||
{% else %}
|
||||
"name": "{{ inventory_hostname|regex_replace('\d+', '') }}-staging",
|
||||
{% endif %}
|
||||
"id": "{{ inventory_hostname }}",
|
||||
"tags": [
|
||||
"redis"
|
||||
|
|
@ -8,13 +12,13 @@
|
|||
"port": 6379,
|
||||
"checks": [{
|
||||
"id": "{{inventory_hostname}}-ping",
|
||||
{% if inventory_hostname == 'db-redis-story' %}
|
||||
{% if inventory_hostname.startswith('db-redis-story') %}
|
||||
"http": "http://{{ ansible_ssh_host }}:5579/db_check/redis_story?consul=1",
|
||||
{% elif inventory_hostname == 'db-redis-user' %}
|
||||
{% elif inventory_hostname.startswith('db-redis-user') %}
|
||||
"http": "http://{{ ansible_ssh_host }}:5579/db_check/redis_user?consul=1",
|
||||
{% elif inventory_hostname == 'db-redis-pubsub' %}
|
||||
{% elif inventory_hostname.startswith('db-redis-pubsub') %}
|
||||
"http": "http://{{ ansible_ssh_host }}:5579/db_check/redis_pubsub?consul=1",
|
||||
{% elif inventory_hostname == 'db-redis-sessions' %}
|
||||
{% elif inventory_hostname.startswith('db-redis-sessions') %}
|
||||
"http": "http://{{ ansible_ssh_host }}:5579/db_check/redis_sessions?consul=1",
|
||||
{% else %}
|
||||
"http": "http://{{ ansible_ssh_host }}:5000/db_check/redis?consul=1",
|
||||
|
|
|
|||
|
|
@ -1,21 +1,12 @@
|
|||
---
|
||||
- name: Ensure /srv directory exists
|
||||
become: yes
|
||||
file:
|
||||
path: /srv
|
||||
state: directory
|
||||
mode: 0755
|
||||
owner: nb
|
||||
group: nb
|
||||
|
||||
- name: Ensure nb /srv/newsblur owner
|
||||
become: yes
|
||||
file:
|
||||
path: /srv/newsblur
|
||||
state: directory
|
||||
owner: nb
|
||||
group: nb
|
||||
recurse: yes
|
||||
# - name: Ensure nb /srv/newsblur owner
|
||||
# become: yes
|
||||
# file:
|
||||
# path: /srv/newsblur
|
||||
# state: directory
|
||||
# owner: nb
|
||||
# group: nb
|
||||
# recurse: yes
|
||||
|
||||
- name: Pull newsblur_web github
|
||||
git:
|
||||
|
|
|
|||
|
|
@ -1,7 +1 @@
|
|||
---
|
||||
- name: reload sentry
|
||||
become: yes
|
||||
command:
|
||||
chdir: /srv/sentry/
|
||||
cmd: ./install.sh
|
||||
listen: reload sentry
|
||||
|
|
|
|||
|
|
@@ -4,7 +4,16 @@
    repo: https://github.com/getsentry/self-hosted.git
    dest: /srv/sentry/
    version: master
  notify: reload sentry

- name: Updating Sentry
  command:
    chdir: /srv/sentry/
    cmd: ./install.sh

- name: docker-compose up -d
  command:
    chdir: /srv/sentry/
    cmd: docker-compose up -d

- name: Register sentry in consul
  tags: consul
|
|
|||
|
|
@ -86,7 +86,6 @@
|
|||
user: 1000:1001
|
||||
volumes:
|
||||
- /srv/newsblur:/srv/newsblur
|
||||
- /etc/hosts:/etc/hosts
|
||||
|
||||
- name: Register web app in consul
|
||||
tags: consul
|
||||
|
|
@ -104,6 +103,15 @@
|
|||
tags:
|
||||
- logrotate
|
||||
|
||||
- name: Force reload gunicorn
|
||||
debug:
|
||||
msg: Forcing reload...
|
||||
register: app_changed
|
||||
changed_when: true
|
||||
tags:
|
||||
- never
|
||||
- force
|
||||
|
||||
- name: Reload gunicorn
|
||||
debug:
|
||||
msg: Reloading gunicorn
|
||||
|
|
|
|||
|
|
@ -75,7 +75,8 @@ def save_classifier(request):
|
|||
'social_user_id': social_user_id or 0,
|
||||
}
|
||||
if content_type in ('author', 'tag', 'title'):
|
||||
classifier_dict.update({content_type: post_content})
|
||||
max_length = ClassifierCls._fields[content_type].max_length
|
||||
classifier_dict.update({content_type: post_content[:max_length]})
|
||||
if content_type == 'feed':
|
||||
if not post_content.startswith('social:'):
|
||||
classifier_dict['feed_id'] = post_content
|
||||
|
|
|
|||
|
|
@ -2,7 +2,7 @@ from django.conf.urls import url
|
|||
from apps.monitor.views import ( AppServers, AppTimes,
|
||||
Classifiers, DbTimes, Errors, FeedCounts, Feeds, LoadTimes,
|
||||
Stories, TasksCodes, TasksPipeline, TasksServers, TasksTimes,
|
||||
Updates, Users
|
||||
Updates, Users, FeedSizes
|
||||
)
|
||||
urlpatterns = [
|
||||
url(r'^app-servers?$', AppServers.as_view(), name="app_servers"),
|
||||
|
|
@ -11,6 +11,7 @@ urlpatterns = [
|
|||
url(r'^db-times?$', DbTimes.as_view(), name="db_times"),
|
||||
url(r'^errors?$', Errors.as_view(), name="errors"),
|
||||
url(r'^feed-counts?$', FeedCounts.as_view(), name="feed_counts"),
|
||||
url(r'^feed-sizes?$', FeedSizes.as_view(), name="feed_sizes"),
|
||||
url(r'^feeds?$', Feeds.as_view(), name="feeds"),
|
||||
url(r'^load-times?$', LoadTimes.as_view(), name="load_times"),
|
||||
url(r'^stories?$', Stories.as_view(), name="stories"),
|
||||
|
|
|
|||
|
|
@ -4,6 +4,7 @@ from apps.monitor.views.newsblur_classifiers import Classifiers
|
|||
from apps.monitor.views.newsblur_dbtimes import DbTimes
|
||||
from apps.monitor.views.newsblur_errors import Errors
|
||||
from apps.monitor.views.newsblur_feed_counts import FeedCounts
|
||||
from apps.monitor.views.newsblur_feed_sizes import FeedSizes
|
||||
from apps.monitor.views.newsblur_feeds import Feeds
|
||||
from apps.monitor.views.newsblur_loadtimes import LoadTimes
|
||||
from apps.monitor.views.newsblur_stories import Stories
|
||||
|
|
|
|||
apps/monitor/views/newsblur_feed_sizes.py (42 lines, new file)

@@ -0,0 +1,42 @@
from django.conf import settings
from django.shortcuts import render
from django.views import View
from django.db.models import Sum
import redis
from apps.rss_feeds.models import Feed, DuplicateFeed
from apps.push.models import PushSubscription
from apps.statistics.models import MStatistics

class FeedSizes(View):

    def get(self, request):

        fs_size_bytes = MStatistics.get('munin:fs_size_bytes')
        if not fs_size_bytes:
            fs_size_bytes = Feed.objects.aggregate(Sum('fs_size_bytes'))['fs_size_bytes__sum']
            MStatistics.set('munin:fs_size_bytes', fs_size_bytes, 60*60*12)

        archive_users_size_bytes = MStatistics.get('munin:archive_users_size_bytes')
        if not archive_users_size_bytes:
            archive_users_size_bytes = Feed.objects.filter(archive_subscribers__gte=1).aggregate(Sum('fs_size_bytes'))['fs_size_bytes__sum']
            MStatistics.set('munin:archive_users_size_bytes', archive_users_size_bytes, 60*60*12)

        data = {
            'fs_size_bytes': fs_size_bytes,
            'archive_users_size_bytes': archive_users_size_bytes,
        }
        chart_name = "feed_sizes"
        chart_type = "counter"

        formatted_data = {}
        for k, v in data.items():
            formatted_data[k] = f'{chart_name}{{category="{k}"}} {v}'

        context = {
            "data": formatted_data,
            "chart_name": chart_name,
            "chart_type": chart_type,
        }
        return render(request, 'monitor/prometheus_data.html', context, content_type="text/plain")
|
@ -5,19 +5,41 @@ from django.shortcuts import render
|
|||
from django.views import View
|
||||
|
||||
from apps.profile.models import Profile, RNewUserQueue
|
||||
from apps.statistics.models import MStatistics
|
||||
|
||||
class Users(View):
|
||||
|
||||
def get(self, request):
|
||||
last_year = datetime.datetime.utcnow() - datetime.timedelta(days=365)
|
||||
last_month = datetime.datetime.utcnow() - datetime.timedelta(days=30)
|
||||
last_day = datetime.datetime.utcnow() - datetime.timedelta(minutes=60*24)
|
||||
|
||||
expiration_sec = 60*60 # 1 hour
|
||||
|
||||
data = {
|
||||
'all': User.objects.count(),
|
||||
'monthly': Profile.objects.filter(last_seen_on__gte=last_month).count(),
|
||||
'daily': Profile.objects.filter(last_seen_on__gte=last_day).count(),
|
||||
'premium': Profile.objects.filter(is_premium=True).count(),
|
||||
'queued': RNewUserQueue.user_count(),
|
||||
'all': MStatistics.get('munin:users_count',
|
||||
lambda: User.objects.count(),
|
||||
set_default=True, expiration_sec=expiration_sec),
|
||||
'yearly': MStatistics.get('munin:users_yearly',
|
||||
lambda: Profile.objects.filter(last_seen_on__gte=last_year).count(),
|
||||
set_default=True, expiration_sec=expiration_sec),
|
||||
'monthly': MStatistics.get('munin:users_monthly',
|
||||
lambda: Profile.objects.filter(last_seen_on__gte=last_month).count(),
|
||||
set_default=True, expiration_sec=expiration_sec),
|
||||
'daily': MStatistics.get('munin:users_daily',
|
||||
lambda: Profile.objects.filter(last_seen_on__gte=last_day).count(),
|
||||
set_default=True, expiration_sec=expiration_sec),
|
||||
'premium': MStatistics.get('munin:users_premium',
|
||||
lambda: Profile.objects.filter(is_premium=True).count(),
|
||||
set_default=True, expiration_sec=expiration_sec),
|
||||
'archive': MStatistics.get('munin:users_archive',
|
||||
lambda: Profile.objects.filter(is_archive=True).count(),
|
||||
set_default=True, expiration_sec=expiration_sec),
|
||||
'pro': MStatistics.get('munin:users_pro',
|
||||
lambda: Profile.objects.filter(is_pro=True).count(),
|
||||
set_default=True, expiration_sec=expiration_sec),
|
||||
'queued': MStatistics.get('munin:users_queued',
|
||||
lambda: RNewUserQueue.user_count(),
|
||||
set_default=True, expiration_sec=expiration_sec),
|
||||
}
|
||||
chart_name = "users"
|
||||
chart_type = "counter"
|
||||
|
|
|
|||
|
|
@ -67,9 +67,7 @@ class RedisGrafanaMetric(View):
|
|||
return render(request, 'monitor/prometheus_data.html', context, content_type="text/plain")
|
||||
|
||||
class RedisActiveConnection(RedisGrafanaMetric):
|
||||
|
||||
def get_context(self):
|
||||
|
||||
|
||||
def get_fields(self):
|
||||
return (
|
||||
('connected_clients', dict(
|
||||
|
|
|
|||
|
|
@ -148,7 +148,7 @@ class EmailNewsletter:
|
|||
from_email='NewsBlur <%s>' % settings.HELLO_EMAIL,
|
||||
to=['%s <%s>' % (user, user.email)])
|
||||
msg.attach_alternative(html, "text/html")
|
||||
msg.send(fail_silently=True)
|
||||
msg.send()
|
||||
|
||||
logging.user(user, "~BB~FM~SBSending first newsletter email to: %s" % user.email)
|
||||
|
||||
|
|
|
|||
|
|
@ -327,8 +327,7 @@ def api_unread_story(request, trigger_slug=None):
|
|||
found_feed_ids = [feed_id]
|
||||
found_trained_feed_ids = [feed_id] if usersub.is_trained else []
|
||||
stories = usersub.get_stories(order="newest", read_filter="unread",
|
||||
offset=0, limit=limit,
|
||||
default_cutoff_date=user.profile.unread_cutoff)
|
||||
offset=0, limit=limit)
|
||||
else:
|
||||
folder_title = feed_or_folder
|
||||
if folder_title == "Top Level":
|
||||
|
|
|
|||
|
|
@ -10,6 +10,8 @@ from apps.social.models import MSocialProfile
|
|||
|
||||
PLANS = [
|
||||
("newsblur-premium-36", mark_safe("$36 / year <span class='NB-small'>($3/month)</span>")),
|
||||
("newsblur-premium-archive", mark_safe("$99 / year <span class='NB-small'>(~$8/month)</span>")),
|
||||
("newsblur-premium-pro", mark_safe("$299 / year <span class='NB-small'>(~$25/month)</span>")),
|
||||
]
|
||||
|
||||
class HorizRadioRenderer(forms.RadioSelect):
|
||||
|
|
|
|||
|
|
@ -12,7 +12,7 @@ class Command(BaseCommand):
|
|||
try:
|
||||
c = db_conn.cursor()
|
||||
connected = True
|
||||
print("Connected to postgres")
|
||||
# print("Connected to postgres")
|
||||
except OperationalError as e:
|
||||
print(f"Waiting for db_postgres: {e}")
|
||||
print(f" ---> Waiting for db_postgres: {e}")
|
||||
time.sleep(5)
|
||||
|
|
|
|||
|
|
@ -12,6 +12,7 @@ class Command(BaseCommand):
|
|||
def add_arguments(self, parser):
|
||||
parser.add_argument("-d", "--days", dest="days", nargs=1, type=int, default=365, help="Number of days to go back")
|
||||
parser.add_argument("-o", "--offset", dest="offset", nargs=1, type=int, default=0, help="Offset customer (in date DESC)")
|
||||
parser.add_argument("-f", "--force", dest="force", nargs=1, type=bool, default=False, help="Force reimport for every user")
|
||||
|
||||
def handle(self, *args, **options):
|
||||
stripe.api_key = settings.STRIPE_SECRET
|
||||
|
|
@ -20,8 +21,6 @@ class Command(BaseCommand):
|
|||
limit = 100
|
||||
offset = options.get('offset')
|
||||
|
||||
ignore_user_ids = [18759, 30189, 64184, 254899, 37485, 260097, 244361, 2386, 133148, 102747, 113990, 67222, 5665, 213213, 274, 10462, 240747, 27473, 37748, 85501, 38646, 242379, 53887, 144792, 249582, 126886, 6337, 258479, 43075, 273339, 24347, 178338, 142873, 82601, 18776, 22356, 37524, 124160, 27551, 34427, 35953, 136492, 45476, 14922, 106089, 15848, 33187, 21913, 19860, 43097, 7257, 101133, 147496, 13500, 26762, 44189, 179498, 90799, 44003, 43825, 43861, 43847, 276609, 43007, 43041, 273707, 29652, 171964, 42045, 173859, 109149, 221251, 42344, 29359, 26284, 29251, 10387, 42502, 42043, 42036, 263720, 77766, 41870, 6589, 25411, 262875, 261455, 24292, 41529, 33303, 41343, 40422, 41146, 5561, 71937, 249531, 260228, 258502, 40883, 40859, 40832, 40608, 259295, 218791, 127438, 27354, 27009, 257426, 257289, 7450, 173558, 25773, 4136, 3404, 2251, 3492, 3397, 24927, 39968, 540, 24281, 24095, 24427, 39899, 39887, 17804, 23613, 116173, 3242, 23388, 2760, 22868, 22640, 39465, 39222, 39424, 39268, 238280, 143982, 21964, 246042, 252087, 202824, 38937, 19715, 38704, 139267, 249644, 38549, 249424, 224057, 248477, 236813, 36822, 189335, 139732, 242454, 18817, 37420, 37435, 178748, 206385, 200703, 233798, 177033, 19706, 244002, 167606, 73054, 50543, 19431, 211439, 239137, 36433, 60146, 167373, 19730, 253812]
|
||||
|
||||
while True:
|
||||
logging.debug(" ---> At %s" % offset)
|
||||
user_ids = PaymentHistory.objects.filter(payment_provider='paypal',
|
||||
|
|
@ -32,9 +31,6 @@ class Command(BaseCommand):
|
|||
break
|
||||
offset += limit
|
||||
for user_id in user_ids:
|
||||
if user_id in ignore_user_ids:
|
||||
# ignore_user_ids can be removed after 2016-05-17
|
||||
continue
|
||||
try:
|
||||
user = User.objects.get(pk=user_id)
|
||||
except User.DoesNotExist:
|
||||
|
|
@ -49,6 +45,8 @@ class Command(BaseCommand):
|
|||
user.profile.setup_premium_history()
|
||||
elif user.profile.premium_expire > datetime.datetime.now() + datetime.timedelta(days=365):
|
||||
user.profile.setup_premium_history()
|
||||
elif options.get('force'):
|
||||
user.profile.setup_premium_history()
|
||||
else:
|
||||
logging.debug(" ---> %s is fine" % user.username)
|
||||
|
||||
|
|
|
|||
|
|
@ -77,7 +77,7 @@ class DBProfilerMiddleware:
|
|||
|
||||
def process_celery(self):
|
||||
setattr(self, 'activated_segments', [])
|
||||
if random.random() < 0.01:
|
||||
if random.random() < 0.01 or settings.DEBUG_QUERIES:
|
||||
self.activated_segments.append('db_profiler')
|
||||
connection.use_debug_cursor = True
|
||||
setattr(settings, 'ORIGINAL_DEBUG', settings.DEBUG)
|
||||
|
|
@ -151,14 +151,16 @@ class SQLLogToConsoleMiddleware:
|
|||
if not self.activated(request):
|
||||
return response
|
||||
if connection.queries:
|
||||
time_elapsed = sum([float(q['time']) for q in connection.queries])
|
||||
queries = connection.queries
|
||||
if getattr(connection, 'queriesx', False):
|
||||
queries.extend(connection.queriesx)
|
||||
connection.queriesx = []
|
||||
time_elapsed = sum([float(q['time']) for q in connection.queries])
|
||||
for query in queries:
|
||||
sql_time = float(query['time'])
|
||||
query['color'] = '~FC' if sql_time < 0.015 else '~FK~SB' if sql_time < 0.05 else '~FR~SB'
|
||||
if query.get('mongo'):
|
||||
query['sql'] = "~FM%s: %s" % (query['mongo']['collection'], query['mongo']['query'])
|
||||
query['sql'] = "~FM%s %s: %s" % (query['mongo']['op'], query['mongo']['collection'], query['mongo']['query'])
|
||||
elif query.get('redis_user'):
|
||||
query['sql'] = "~FC%s" % (query['redis_user']['query'])
|
||||
elif query.get('redis_story'):
|
||||
|
|
@ -177,13 +179,13 @@ class SQLLogToConsoleMiddleware:
|
|||
query['sql'] = re.sub(r'INSERT', '~FGINSERT', query['sql'])
|
||||
query['sql'] = re.sub(r'UPDATE', '~FY~SBUPDATE', query['sql'])
|
||||
query['sql'] = re.sub(r'DELETE', '~FR~SBDELETE', query['sql'])
|
||||
|
||||
if (
|
||||
settings.DEBUG
|
||||
and settings.DEBUG_QUERIES
|
||||
settings.DEBUG_QUERIES
|
||||
and not getattr(settings, 'DEBUG_QUERIES_SUMMARY_ONLY', False)
|
||||
):
|
||||
t = Template(
|
||||
"{% for sql in sqllog %}{% if not forloop.first %} {% endif %}[{{forloop.counter}}] ~FC{{sql.time}}s~FW: {{sql.sql|safe}}{% if not forloop.last %}\n{% endif %}{% endfor %}"
|
||||
"{% for sql in sqllog %}{% if not forloop.first %} {% endif %}[{{forloop.counter}}] {{sql.color}}{{sql.time}}~SN~FW: {{sql.sql|safe}}{% if not forloop.last %}\n{% endif %}{% endfor %}"
|
||||
)
|
||||
logging.debug(
|
||||
t.render(
|
||||
|
|
|
|||
apps/profile/migrations/0004_auto_20220110_2106.py (44 lines, new file): diff suppressed because one or more lines are too long
18
apps/profile/migrations/0005_profile_is_archive.py
Normal file
18
apps/profile/migrations/0005_profile_is_archive.py
Normal file
|
|
@ -0,0 +1,18 @@
|
|||
# Generated by Django 3.1.10 on 2022-01-11 15:55
|
||||
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('profile', '0004_auto_20220110_2106'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AddField(
|
||||
model_name='profile',
|
||||
name='is_archive',
|
||||
field=models.BooleanField(blank=True, default=False, null=True),
|
||||
),
|
||||
]
|
||||
18
apps/profile/migrations/0006_profile_days_of_unread.py
Normal file
18
apps/profile/migrations/0006_profile_days_of_unread.py
Normal file
|
|
@ -0,0 +1,18 @@
|
|||
# Generated by Django 3.1.10 on 2022-01-13 21:08
|
||||
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('profile', '0005_profile_is_archive'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AddField(
|
||||
model_name='profile',
|
||||
name='days_of_unread',
|
||||
field=models.IntegerField(default=30, blank=True, null=True),
|
||||
),
|
||||
]
|
||||
24
apps/profile/migrations/0007_auto_20220125_2108.py
Normal file
24
apps/profile/migrations/0007_auto_20220125_2108.py
Normal file
File diff suppressed because one or more lines are too long
18  apps/profile/migrations/0008_profile_paypal_sub_id.py  Normal file
@@ -0,0 +1,18 @@
# Generated by Django 3.1.10 on 2022-02-07 19:25

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('profile', '0007_auto_20220125_2108'),
    ]

    operations = [
        migrations.AddField(
            model_name='profile',
            name='paypal_sub_id',
            field=models.CharField(blank=True, max_length=24, null=True),
        ),
    ]
24  apps/profile/migrations/0009_paypalids.py  Normal file
@@ -0,0 +1,24 @@
# Generated by Django 3.1.10 on 2022-02-08 23:15

from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('profile', '0008_profile_paypal_sub_id'),
    ]

    operations = [
        migrations.CreateModel(
            name='PaypalIds',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('paypal_sub_id', models.CharField(blank=True, max_length=24, null=True)),
                ('user', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='paypal_ids', to=settings.AUTH_USER_MODEL)),
            ],
        ),
    ]
18  apps/profile/migrations/0010_profile_active_provider.py  Normal file
@@ -0,0 +1,18 @@
# Generated by Django 3.1.10 on 2022-02-14 20:01

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('profile', '0009_paypalids'),
    ]

    operations = [
        migrations.AddField(
            model_name='profile',
            name='active_provider',
            field=models.CharField(blank=True, max_length=24, null=True),
        ),
    ]
29  apps/profile/migrations/0011_auto_20220408_1908.py  Normal file
File diff suppressed because one or more lines are too long
19  apps/profile/migrations/0012_auto_20220511_1710.py  Normal file
File diff suppressed because one or more lines are too long
File diff suppressed because it is too large
@@ -15,6 +15,33 @@ def EmailNewPremium(user_id):
    user_profile = Profile.objects.get(user__pk=user_id)
    user_profile.send_new_premium_email()

@app.task()
def FetchArchiveFeedsForUser(user_id):
    # subs = UserSubscription.objects.filter(user=user_id)
    # user_profile = Profile.objects.get(user__pk=user_id)
    # logging.user(user_profile.user, f"~FCBeginning archive feed fetches for ~SB~FG{subs.count()} feeds~SN...")

    UserSubscription.fetch_archive_feeds_for_user(user_id)

@app.task()
def FetchArchiveFeedsChunk(user_id, feed_ids):
    # logging.debug(" ---> Fetching archive stories: %s for %s" % (feed_ids, user_id))
    UserSubscription.fetch_archive_feeds_chunk(user_id, feed_ids)

@app.task()
def FinishFetchArchiveFeeds(results, user_id, start_time, starting_story_count):
    # logging.debug(" ---> Fetching archive stories finished for %s" % (user_id))

    ending_story_count, pre_archive_count = UserSubscription.finish_fetch_archive_feeds(user_id, start_time, starting_story_count)

    user_profile = Profile.objects.get(user__pk=user_id)
    user_profile.send_new_premium_archive_email(ending_story_count, pre_archive_count)

@app.task(name="email-new-premium-pro")
def EmailNewPremiumPro(user_id):
    user_profile = Profile.objects.get(user__pk=user_id)
    user_profile.send_new_premium_pro_email()

@app.task(name="premium-expire")
def PremiumExpire(**kwargs):
    # Get expired but grace period users
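The fan-out behind FetchArchiveFeedsForUser is wired up later in this diff (UserSubscription.fetch_archive_feeds_for_user): one FetchArchiveFeedsChunk signature per chunk of feed IDs, with FinishFetchArchiveFeeds running as a chord callback once every chunk has completed. A minimal, hedged sketch of that chunk-and-chord pattern follows; the app name, broker/backend URLs, and the `chunks` helper are illustrative assumptions, not NewsBlur's actual configuration.

    # Illustrative sketch of the chunk-and-chord pattern; not NewsBlur's real tasks.
    import celery

    # Chords need a result backend, so one is assumed here alongside the broker.
    app = celery.Celery("archive_fetch_sketch",
                        broker="redis://localhost:6379/0",
                        backend="redis://localhost:6379/1")

    def chunks(items, size):
        # Assumed helper: yield successive `size`-length slices of `items`.
        for i in range(0, len(items), size):
            yield items[i:i + size]

    @app.task
    def fetch_archive_feeds_chunk(feed_ids, user_id):
        # Stand-in for UserSubscription.fetch_archive_feeds_chunk(user_id, feed_ids)
        return len(feed_ids)

    @app.task
    def finish_fetch_archive_feeds(results, user_id):
        # Chord callback: receives the list of results from every chunk task.
        return sum(results)

    def fetch_archive_feeds_for_user(user_id, feed_ids):
        # One signature per chunk; the chord fires the callback after all chunks finish.
        chunk_signatures = [
            fetch_archive_feeds_chunk.s(feed_ids=list(chunk), user_id=user_id)
            for chunk in chunks(feed_ids, 1)
        ]
        callback = finish_fetch_archive_feeds.s(user_id=user_id)
        return celery.chord(chunk_signatures)(callback)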
@@ -11,10 +11,17 @@ urlpatterns = [
    url(r'^set_collapsed_folders/?', views.set_collapsed_folders),
    url(r'^paypal_form/?', views.paypal_form),
    url(r'^paypal_return/?', views.paypal_return, name='paypal-return'),
    url(r'^paypal_archive_return/?', views.paypal_archive_return, name='paypal-archive-return'),
    url(r'^stripe_return/?', views.paypal_return, name='stripe-return'),
    url(r'^switch_stripe_subscription/?', views.switch_stripe_subscription, name='switch-stripe-subscription'),
    url(r'^switch_paypal_subscription/?', views.switch_paypal_subscription, name='switch-paypal-subscription'),
    url(r'^is_premium/?', views.profile_is_premium, name='profile-is-premium'),
    url(r'^paypal_webhooks/?', include('paypal.standard.ipn.urls'), name='paypal-webhooks'),
    url(r'^paypal_ipn/?', include('paypal.standard.ipn.urls'), name='paypal-ipn'),
    url(r'^is_premium_archive/?', views.profile_is_premium_archive, name='profile-is-premium-archive'),
    # url(r'^paypal_ipn/?', include('paypal.standard.ipn.urls'), name='paypal-ipn'),
    url(r'^paypal_ipn/?', views.paypal_ipn, name='paypal-ipn'),
    url(r'^paypal_webhooks/?', views.paypal_webhooks, name='paypal-webhooks'),
    url(r'^stripe_form/?', views.stripe_form, name='stripe-form'),
    url(r'^stripe_checkout/?', views.stripe_checkout, name='stripe-checkout'),
    url(r'^activities/?', views.load_activities, name='profile-activities'),
    url(r'^payment_history/?', views.payment_history, name='profile-payment-history'),
    url(r'^cancel_premium/?', views.cancel_premium, name='profile-cancel-premium'),
@@ -1,6 +1,8 @@
import re
import stripe
import requests
import datetime
import dateutil
from django.contrib.auth.decorators import login_required
from django.views.decorators.http import require_POST
from django.views.decorators.csrf import csrf_protect, csrf_exempt
@@ -15,7 +17,7 @@ from django.urls import reverse
from django.shortcuts import render
from django.core.mail import mail_admins
from django.conf import settings
from apps.profile.models import Profile, PaymentHistory, RNewUserQueue, MRedeemedCode, MGiftCode
from apps.profile.models import Profile, PaymentHistory, RNewUserQueue, MRedeemedCode, MGiftCode, PaypalIds
from apps.reader.models import UserSubscription, UserSubscriptionFolders, RUserStory
from apps.profile.forms import StripePlusPaymentForm, PLANS, DeleteAccountForm
from apps.profile.forms import ForgotPasswordForm, ForgotPasswordReturnForm, AccountSettingsForm
@@ -25,14 +27,17 @@ from apps.rss_feeds.models import MStarredStory, MStarredStoryCounts
from apps.social.models import MSocialServices, MActivity, MSocialProfile
from apps.analyzer.models import MClassifierTitle, MClassifierAuthor, MClassifierFeed, MClassifierTag
from utils import json_functions as json
import json as python_json
from utils.user_functions import ajax_login_required
from utils.view_functions import render_to, is_true
from utils.user_functions import get_user
from utils import log as logging
from vendor.paypalapi.exceptions import PayPalAPIResponseError
from paypal.standard.forms import PayPalPaymentsForm
from paypal.standard.ipn.views import ipn as paypal_standard_ipn

SINGLE_FIELD_PREFS = ('timezone','feed_pane_size','hide_mobile','send_emails',
INTEGER_FIELD_PREFS = ('feed_pane_size', 'days_of_unread')
SINGLE_FIELD_PREFS = ('timezone','hide_mobile','send_emails',
                      'hide_getting_started', 'has_setup_feeds', 'has_found_friends',
                      'has_trained_intelligence')
SPECIAL_PREFERENCES = ('old_password', 'new_password', 'autofollow_friends', 'dashboard_date',)
@@ -50,6 +55,12 @@ def set_preference(request):
        if preference_value in ['true','false']: preference_value = True if preference_value == 'true' else False
        if preference_name in SINGLE_FIELD_PREFS:
            setattr(request.user.profile, preference_name, preference_value)
        elif preference_name in INTEGER_FIELD_PREFS:
            if preference_name == "days_of_unread" and int(preference_value) != request.user.profile.days_of_unread:
                UserSubscription.all_subs_needs_unread_recalc(request.user.pk)
            setattr(request.user.profile, preference_name, int(preference_value))
            if preference_name in preferences:
                del preferences[preference_name]
        elif preference_name in SPECIAL_PREFERENCES:
            if preference_name == 'autofollow_friends':
                social_services = MSocialServices.get_user(request.user.pk)
@ -185,6 +196,7 @@ def set_view_setting(request):
|
|||
feed_order_setting = request.POST.get('feed_order_setting')
|
||||
feed_read_filter_setting = request.POST.get('feed_read_filter_setting')
|
||||
feed_layout_setting = request.POST.get('feed_layout_setting')
|
||||
feed_dashboard_count_setting = request.POST.get('feed_dashboard_count_setting')
|
||||
view_settings = json.decode(request.user.profile.view_settings)
|
||||
|
||||
setting = view_settings.get(feed_id, {})
|
||||
|
|
@ -192,6 +204,7 @@ def set_view_setting(request):
|
|||
if feed_view_setting: setting['v'] = feed_view_setting
|
||||
if feed_order_setting: setting['o'] = feed_order_setting
|
||||
if feed_read_filter_setting: setting['r'] = feed_read_filter_setting
|
||||
if feed_dashboard_count_setting: setting['d'] = feed_dashboard_count_setting
|
||||
if feed_layout_setting: setting['l'] = feed_layout_setting
|
||||
|
||||
view_settings[feed_id] = setting
|
||||
@@ -259,7 +272,58 @@ def set_collapsed_folders(request):
    response = dict(code=code)
    return response

@ajax_login_required
def paypal_ipn(request):
    try:
        return paypal_standard_ipn(request)
    except AssertionError:
        # Paypal may have sent webhooks to ipn, so redirect
        logging.user(request, f" ---> Paypal IPN to webhooks redirect: {request.body}")
        return paypal_webhooks(request)

def paypal_webhooks(request):
    try:
        data = json.decode(request.body)
    except python_json.decoder.JSONDecodeError:
        # Kick it over to paypal ipn
        return paypal_standard_ipn(request)

    logging.user(request, f" ---> Paypal webhooks {data.get('event_type', '<no event_type>')} data: {data}")

    if data['event_type'] == "BILLING.SUBSCRIPTION.CREATED":
        # Don't start a subscription but save it in case the payment comes before the subscription activation
        user = User.objects.get(pk=int(data['resource']['custom_id']))
        user.profile.store_paypal_sub_id(data['resource']['id'], skip_save_primary=True)
    elif data['event_type'] in ["BILLING.SUBSCRIPTION.ACTIVATED", "BILLING.SUBSCRIPTION.UPDATED"]:
        user = User.objects.get(pk=int(data['resource']['custom_id']))
        user.profile.store_paypal_sub_id(data['resource']['id'])
        # plan_id = data['resource']['plan_id']
        # if plan_id == Profile.plan_to_paypal_plan_id('premium'):
        #     user.profile.activate_premium()
        # elif plan_id == Profile.plan_to_paypal_plan_id('archive'):
        #     user.profile.activate_archive()
        # elif plan_id == Profile.plan_to_paypal_plan_id('pro'):
        #     user.profile.activate_pro()
        user.profile.cancel_premium_stripe()
        user.profile.setup_premium_history()
        if data['event_type'] == "BILLING.SUBSCRIPTION.ACTIVATED":
            user.profile.cancel_and_prorate_existing_paypal_subscriptions(data)
    elif data['event_type'] == "PAYMENT.SALE.COMPLETED":
        user = User.objects.get(pk=int(data['resource']['custom']))
        user.profile.setup_premium_history()
    elif data['event_type'] == "PAYMENT.CAPTURE.REFUNDED":
        user = User.objects.get(pk=int(data['resource']['custom_id']))
        user.profile.setup_premium_history()
    elif data['event_type'] in ["BILLING.SUBSCRIPTION.CANCELLED", "BILLING.SUBSCRIPTION.SUSPENDED"]:
        custom_id = data['resource'].get('custom_id', None)
        if custom_id:
            user = User.objects.get(pk=int(custom_id))
        else:
            paypal_id = PaypalIds.objects.get(paypal_sub_id=data['resource']['id'])
            user = paypal_id.user
        user.profile.setup_premium_history()

    return HttpResponse("OK")

def paypal_form(request):
    domain = Site.objects.get_current().domain
    if settings.DEBUG:
@ -289,11 +353,20 @@ def paypal_form(request):
|
|||
# Output the button.
|
||||
return HttpResponse(form.render(), content_type='text/html')
|
||||
|
||||
@login_required
|
||||
def paypal_return(request):
|
||||
|
||||
return render(request, 'reader/paypal_return.xhtml', {
|
||||
'user_profile': request.user.profile,
|
||||
})
|
||||
|
||||
|
||||
@login_required
|
||||
def paypal_archive_return(request):
|
||||
|
||||
return render(request, 'reader/paypal_archive_return.xhtml', {
|
||||
'user_profile': request.user.profile,
|
||||
})
|
||||
|
||||
@login_required
|
||||
def activate_premium(request):
|
||||
return HttpResponseRedirect(reverse('index'))
|
||||
|
|
@ -304,7 +377,6 @@ def profile_is_premium(request):
|
|||
# Check tries
|
||||
code = 0
|
||||
retries = int(request.GET['retries'])
|
||||
profile = Profile.objects.get(user=request.user)
|
||||
|
||||
subs = UserSubscription.objects.filter(user=request.user)
|
||||
total_subs = subs.count()
|
||||
|
|
@ -315,12 +387,42 @@ def profile_is_premium(request):
|
|||
if not request.user.profile.is_premium:
|
||||
subject = "Premium activation failed: %s (%s/%s)" % (request.user, activated_subs, total_subs)
|
||||
message = """User: %s (%s) -- Email: %s""" % (request.user.username, request.user.pk, request.user.email)
|
||||
mail_admins(subject, message, fail_silently=True)
|
||||
request.user.profile.is_premium = True
|
||||
request.user.profile.save()
|
||||
mail_admins(subject, message)
|
||||
request.user.profile.activate_premium()
|
||||
|
||||
profile = Profile.objects.get(user=request.user)
|
||||
return {
|
||||
'is_premium': profile.is_premium,
|
||||
'is_premium_archive': profile.is_archive,
|
||||
'code': code,
|
||||
'activated_subs': activated_subs,
|
||||
'total_subs': total_subs,
|
||||
}
|
||||
|
||||
@ajax_login_required
|
||||
@json.json_view
|
||||
def profile_is_premium_archive(request):
|
||||
# Check tries
|
||||
code = 0
|
||||
retries = int(request.GET['retries'])
|
||||
|
||||
subs = UserSubscription.objects.filter(user=request.user)
|
||||
total_subs = subs.count()
|
||||
activated_subs = subs.filter(feed__archive_subscribers__gte=1).count()
|
||||
|
||||
if retries >= 30:
|
||||
code = -1
|
||||
if not request.user.profile.is_premium_archive:
|
||||
subject = "Premium archive activation failed: %s (%s/%s)" % (request.user, activated_subs, total_subs)
|
||||
message = """User: %s (%s) -- Email: %s""" % (request.user.username, request.user.pk, request.user.email)
|
||||
mail_admins(subject, message)
|
||||
request.user.profile.activate_archive()
|
||||
|
||||
profile = Profile.objects.get(user=request.user)
|
||||
|
||||
return {
|
||||
'is_premium': profile.is_premium,
|
||||
'is_premium_archive': profile.is_archive,
|
||||
'code': code,
|
||||
'activated_subs': activated_subs,
|
||||
'total_subs': total_subs,
|
||||
|
|
@ -340,7 +442,7 @@ def save_ios_receipt(request):
|
|||
logging.user(request, "~BM~FBSending iOS Receipt email: %s %s" % (product_identifier, transaction_identifier))
|
||||
subject = "iOS Premium: %s (%s)" % (request.user.profile, product_identifier)
|
||||
message = """User: %s (%s) -- Email: %s, product: %s, txn: %s, receipt: %s""" % (request.user.username, request.user.pk, request.user.email, product_identifier, transaction_identifier, receipt)
|
||||
mail_admins(subject, message, fail_silently=True)
|
||||
mail_admins(subject, message)
|
||||
else:
|
||||
logging.user(request, "~BM~FBNot sending iOS Receipt email, already paid: %s %s" % (product_identifier, transaction_identifier))
|
||||
|
||||
|
|
@ -360,7 +462,7 @@ def save_android_receipt(request):
|
|||
logging.user(request, "~BM~FBSending Android Receipt email: %s %s" % (product_id, order_id))
|
||||
subject = "Android Premium: %s (%s)" % (request.user.profile, product_id)
|
||||
message = """User: %s (%s) -- Email: %s, product: %s, order: %s""" % (request.user.username, request.user.pk, request.user.email, product_id, order_id)
|
||||
mail_admins(subject, message, fail_silently=True)
|
||||
mail_admins(subject, message)
|
||||
else:
|
||||
logging.user(request, "~BM~FBNot sending Android Receipt email, already paid: %s %s" % (product_id, order_id))
|
||||
|
||||
|
|
@ -473,6 +575,88 @@ def stripe_form(request):
|
|||
}
|
||||
)
|
||||
|
||||
@login_required
|
||||
def switch_stripe_subscription(request):
|
||||
plan = request.POST['plan']
|
||||
if plan == "change_stripe":
|
||||
return stripe_checkout(request)
|
||||
elif plan == "change_paypal":
|
||||
paypal_url = request.user.profile.paypal_change_billing_details_url()
|
||||
return HttpResponseRedirect(paypal_url)
|
||||
|
||||
switch_successful = request.user.profile.switch_stripe_subscription(plan)
|
||||
|
||||
logging.user(request, "~FCSwitching subscription to ~SB%s~SN~FC (%s)" %(
|
||||
plan,
|
||||
'~FGsucceeded~FC' if switch_successful else '~FRfailed~FC'
|
||||
))
|
||||
|
||||
if switch_successful:
|
||||
return HttpResponseRedirect(reverse('stripe-return'))
|
||||
|
||||
return stripe_checkout(request)
|
||||
|
||||
def switch_paypal_subscription(request):
|
||||
plan = request.POST['plan']
|
||||
if plan == "change_stripe":
|
||||
return stripe_checkout(request)
|
||||
elif plan == "change_paypal":
|
||||
paypal_url = request.user.profile.paypal_change_billing_details_url()
|
||||
return HttpResponseRedirect(paypal_url)
|
||||
|
||||
approve_url = request.user.profile.switch_paypal_subscription_approval_url(plan)
|
||||
|
||||
logging.user(request, "~FCSwitching subscription to ~SB%s~SN~FC (%s)" %(
|
||||
plan,
|
||||
'~FGsucceeded~FC' if approve_url else '~FRfailed~FC'
|
||||
))
|
||||
|
||||
if approve_url:
|
||||
return HttpResponseRedirect(approve_url)
|
||||
|
||||
paypal_return = reverse('paypal-return')
|
||||
if plan == "archive":
|
||||
paypal_return = reverse('paypal-archive-return')
|
||||
return HttpResponseRedirect(paypal_return)
|
||||
|
||||
@login_required
|
||||
def stripe_checkout(request):
|
||||
stripe.api_key = settings.STRIPE_SECRET
|
||||
domain = Site.objects.get_current().domain
|
||||
plan = request.POST['plan']
|
||||
|
||||
if plan == "change_stripe":
|
||||
checkout_session = stripe.billing_portal.Session.create(
|
||||
customer=request.user.profile.stripe_id,
|
||||
return_url="http://%s%s?next=payments" % (domain, reverse('index')),
|
||||
)
|
||||
return HttpResponseRedirect(checkout_session.url, status=303)
|
||||
|
||||
price = Profile.plan_to_stripe_price(plan)
|
||||
|
||||
session_dict = {
|
||||
"line_items": [
|
||||
{
|
||||
'price': price,
|
||||
'quantity': 1,
|
||||
},
|
||||
],
|
||||
"mode": 'subscription',
|
||||
"metadata": {"newsblur_user_id": request.user.pk},
|
||||
"success_url": "http://%s%s" % (domain, reverse('stripe-return')),
|
||||
"cancel_url": "http://%s%s" % (domain, reverse('index')),
|
||||
}
|
||||
if request.user.profile.stripe_id:
|
||||
session_dict['customer'] = request.user.profile.stripe_id
|
||||
else:
|
||||
session_dict["customer_email"] = request.user.email
|
||||
|
||||
checkout_session = stripe.checkout.Session.create(**session_dict)
|
||||
|
||||
logging.user(request, "~BM~FBLoading Stripe checkout")
|
||||
|
||||
return HttpResponseRedirect(checkout_session.url, status=303)
|
||||
|
||||
@render_to('reader/activities_module.xhtml')
|
||||
def load_activities(request):
|
||||
user = get_user(request)
|
||||
|
|
@ -519,11 +703,37 @@ def payment_history(request):
|
|||
}
|
||||
}
|
||||
|
||||
next_invoice = None
|
||||
stripe_customer = user.profile.stripe_customer()
|
||||
paypal_api = user.profile.paypal_api()
|
||||
if stripe_customer:
|
||||
try:
|
||||
invoice = stripe.Invoice.upcoming(customer=stripe_customer.id)
|
||||
for lines in invoice.lines.data:
|
||||
next_invoice = dict(payment_date=datetime.datetime.fromtimestamp(lines.period.start),
|
||||
payment_amount=invoice.amount_due/100.0,
|
||||
payment_provider="(scheduled)",
|
||||
scheduled=True)
|
||||
break
|
||||
except stripe.error.InvalidRequestError:
|
||||
pass
|
||||
|
||||
if paypal_api and not next_invoice and user.profile.premium_renewal and len(history):
|
||||
next_invoice = dict(payment_date=history[0].payment_date+dateutil.relativedelta.relativedelta(years=1),
|
||||
payment_amount=history[0].payment_amount,
|
||||
payment_provider="(scheduled)",
|
||||
scheduled=True)
|
||||
|
||||
return {
|
||||
'is_premium': user.profile.is_premium,
|
||||
'is_archive': user.profile.is_archive,
|
||||
'is_pro': user.profile.is_pro,
|
||||
'premium_expire': user.profile.premium_expire,
|
||||
'premium_renewal': user.profile.premium_renewal,
|
||||
'active_provider': user.profile.active_provider,
|
||||
'payments': history,
|
||||
'statistics': statistics,
|
||||
'next_invoice': next_invoice,
|
||||
}
|
||||
|
||||
@ajax_login_required
|
||||
|
|
@ -541,15 +751,16 @@ def cancel_premium(request):
|
|||
def refund_premium(request):
|
||||
user_id = request.POST.get('user_id')
|
||||
partial = request.POST.get('partial', False)
|
||||
provider = request.POST.get('provider', None)
|
||||
user = User.objects.get(pk=user_id)
|
||||
try:
|
||||
refunded = user.profile.refund_premium(partial=partial)
|
||||
refunded = user.profile.refund_premium(partial=partial, provider=provider)
|
||||
except stripe.error.InvalidRequestError as e:
|
||||
refunded = e
|
||||
except PayPalAPIResponseError as e:
|
||||
refunded = e
|
||||
|
||||
return {'code': 1 if refunded else -1, 'refunded': refunded}
|
||||
return {'code': 1 if type(refunded) == int else -1, 'refunded': refunded}
|
||||
|
||||
@staff_member_required
|
||||
@ajax_login_required
|
||||
@@ -17,7 +17,7 @@ def push_callback(request, push_id):
    if request.method == 'GET':
        mode = request.GET['hub.mode']
        topic = request.GET['hub.topic']
        challenge = request.GET['hub.challenge']
        challenge = request.GET.get('hub.challenge', '')
        lease_seconds = request.GET.get('hub.lease_seconds')
        verify_token = request.GET.get('hub.verify_token', '')

@@ -61,7 +61,7 @@ def push_callback(request, push_id):

        # Don't give fat ping, just fetch.
        # subscription.feed.queue_pushed_feed_xml(request.body)
        if subscription.feed.active_premium_subscribers >= 1:
        if subscription.feed.active_subscribers >= 1:
            subscription.feed.queue_pushed_feed_xml("Fetch me", latest_push_date_delta=latest_push_date_delta)
            MFetchHistory.add(feed_id=subscription.feed_id,
                              fetch_type='push')
@@ -3,6 +3,8 @@ import time
import re
import redis
import pymongo
import celery
import mongoengine as mongo
from operator import itemgetter
from pprint import pprint
from utils import log as logging
@ -99,41 +101,66 @@ class UserSubscription(models.Model):
|
|||
Q(unread_count_positive__gt=0))
|
||||
if not feed_ids:
|
||||
usersubs = usersubs.filter(user=user_id,
|
||||
active=True).only('feed', 'mark_read_date', 'is_trained')
|
||||
active=True).only('feed', 'mark_read_date', 'is_trained', 'needs_unread_recalc')
|
||||
else:
|
||||
usersubs = usersubs.filter(user=user_id,
|
||||
active=True,
|
||||
feed__in=feed_ids).only('feed', 'mark_read_date', 'is_trained')
|
||||
feed__in=feed_ids).only('feed', 'mark_read_date', 'is_trained', 'needs_unread_recalc')
|
||||
|
||||
return usersubs
|
||||
|
||||
@classmethod
|
||||
def story_hashes(cls, user_id, feed_ids=None, usersubs=None, read_filter="unread", order="newest",
|
||||
include_timestamps=False, group_by_feed=True, cutoff_date=None,
|
||||
across_all_feeds=True):
|
||||
include_timestamps=False, group_by_feed=False, cutoff_date=None,
|
||||
across_all_feeds=True, store_stories_key=None, offset=0, limit=500):
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
pipeline = r.pipeline()
|
||||
user = User.objects.get(pk=user_id)
|
||||
story_hashes = {} if group_by_feed else []
|
||||
is_archive = user.profile.is_archive
|
||||
|
||||
if not feed_ids and not across_all_feeds:
|
||||
return story_hashes
|
||||
|
||||
if not usersubs:
|
||||
usersubs = cls.subs_for_feeds(user_id, feed_ids=feed_ids, read_filter=read_filter)
|
||||
if not usersubs:
|
||||
usersubs = cls.subs_for_feeds(user_id, feed_ids=feed_ids, read_filter="all")
|
||||
feed_ids = [sub.feed_id for sub in usersubs]
|
||||
if not feed_ids:
|
||||
return story_hashes
|
||||
|
||||
|
||||
current_time = int(time.time() + 60*60*24)
|
||||
if not cutoff_date:
|
||||
cutoff_date = datetime.datetime.now() - datetime.timedelta(days=settings.DAYS_OF_STORY_HASHES)
|
||||
cutoff_date = user.profile.unread_cutoff
|
||||
feed_counter = 0
|
||||
|
||||
unread_ranked_stories_keys = []
|
||||
|
||||
read_dates = dict()
|
||||
needs_unread_recalc = dict()
|
||||
manual_unread_pipeline = r.pipeline()
|
||||
manual_unread_feed_oldest_date = dict()
|
||||
oldest_manual_unread = None
|
||||
# usersub_count = len(usersubs)
|
||||
for us in usersubs:
|
||||
read_dates[us.feed_id] = int(max(us.mark_read_date, cutoff_date).strftime('%s'))
|
||||
|
||||
for feed_id_group in chunks(feed_ids, 20):
|
||||
if read_filter == "unread":
|
||||
needs_unread_recalc[us.feed_id] = us.needs_unread_recalc # or usersub_count == 1
|
||||
user_manual_unread_stories_feed_key = f"uU:{user_id}:{us.feed_id}"
|
||||
manual_unread_pipeline.exists(user_manual_unread_stories_feed_key)
|
||||
user_unread_ranked_stories_key = f"zU:{user_id}:{us.feed_id}"
|
||||
manual_unread_pipeline.exists(user_unread_ranked_stories_key)
|
||||
if read_filter == "unread":
|
||||
results = manual_unread_pipeline.execute()
|
||||
for i, us in enumerate(usersubs):
|
||||
if results[i*2]: # user_manual_unread_stories_feed_key
|
||||
user_manual_unread_stories_feed_key = f"uU:{user_id}:{us.feed_id}"
|
||||
oldest_manual_unread = r.zrevrange(user_manual_unread_stories_feed_key, -1, -1, withscores=True)
|
||||
manual_unread_feed_oldest_date[us.feed_id] = int(oldest_manual_unread[0][1])
|
||||
if read_filter == "unread" and not results[i*2+1]: # user_unread_ranked_stories_key
|
||||
needs_unread_recalc[us.feed_id] = True
|
||||
|
||||
for feed_id_group in chunks(feed_ids, 500):
|
||||
pipeline = r.pipeline()
|
||||
for feed_id in feed_id_group:
|
||||
stories_key = 'F:%s' % feed_id
|
||||
|
|
@ -141,132 +168,116 @@ class UserSubscription(models.Model):
|
|||
read_stories_key = 'RS:%s:%s' % (user_id, feed_id)
|
||||
unread_stories_key = 'U:%s:%s' % (user_id, feed_id)
|
||||
unread_ranked_stories_key = 'zU:%s:%s' % (user_id, feed_id)
|
||||
expire_unread_stories_key = False
|
||||
user_manual_unread_stories_feed_key = f"uU:{user_id}:{feed_id}"
|
||||
|
||||
max_score = current_time
|
||||
if read_filter == 'unread':
|
||||
# +1 for the intersection b/w zF and F, which carries an implicit score of 1.
|
||||
min_score = read_dates[feed_id] + 1
|
||||
pipeline.sdiffstore(unread_stories_key, stories_key, read_stories_key)
|
||||
expire_unread_stories_key = True
|
||||
min_score = read_dates[feed_id]
|
||||
# if needs_unread_recalc[feed_id]:
|
||||
# pipeline.sdiffstore(unread_stories_key, stories_key, read_stories_key)
|
||||
# # pipeline.expire(unread_stories_key, unread_cutoff_diff.days*24*60*60)
|
||||
# pipeline.expire(unread_stories_key, 1*60*60) # 1 hour
|
||||
else:
|
||||
min_score = 0
|
||||
unread_stories_key = stories_key
|
||||
|
||||
if order == 'oldest':
|
||||
byscorefunc = pipeline.zrangebyscore
|
||||
else:
|
||||
byscorefunc = pipeline.zrevrangebyscore
|
||||
min_score, max_score = max_score, min_score
|
||||
|
||||
pipeline.zinterstore(unread_ranked_stories_key, [sorted_stories_key, unread_stories_key])
|
||||
byscorefunc(unread_ranked_stories_key, min_score, max_score, withscores=include_timestamps)
|
||||
pipeline.delete(unread_ranked_stories_key)
|
||||
if expire_unread_stories_key:
|
||||
pipeline.delete(unread_stories_key)
|
||||
|
||||
ranked_stories_key = unread_ranked_stories_key
|
||||
if read_filter == 'unread':
|
||||
if needs_unread_recalc[feed_id]:
|
||||
pipeline.zdiffstore(unread_ranked_stories_key, [sorted_stories_key, read_stories_key])
|
||||
# pipeline.expire(unread_ranked_stories_key, unread_cutoff_diff.days*24*60*60)
|
||||
pipeline.expire(unread_ranked_stories_key, 1*60*60) # 1 hours
|
||||
if order == 'oldest':
|
||||
pipeline.zremrangebyscore(ranked_stories_key, 0, min_score-1)
|
||||
pipeline.zremrangebyscore(ranked_stories_key, max_score+1, 2*max_score)
|
||||
else:
|
||||
pipeline.zremrangebyscore(ranked_stories_key, 0, max_score-1)
|
||||
pipeline.zremrangebyscore(ranked_stories_key, min_score+1, 2*min_score)
|
||||
else:
|
||||
ranked_stories_key = sorted_stories_key
|
||||
|
||||
# If archive premium user has manually marked an older story as unread
|
||||
if is_archive and feed_id in manual_unread_feed_oldest_date and read_filter == "unread":
|
||||
if order == 'oldest':
|
||||
min_score = manual_unread_feed_oldest_date[feed_id]
|
||||
else:
|
||||
max_score = manual_unread_feed_oldest_date[feed_id]
|
||||
|
||||
pipeline.zunionstore(unread_ranked_stories_key, [unread_ranked_stories_key, user_manual_unread_stories_feed_key], aggregate="MAX")
|
||||
|
||||
if settings.DEBUG and False:
|
||||
debug_stories = r.zrevrange(unread_ranked_stories_key, 0, -1, withscores=True)
|
||||
print((" ---> Story hashes (%s/%s - %s/%s) %s stories: %s" % (
|
||||
min_score, datetime.datetime.fromtimestamp(min_score).strftime('%Y-%m-%d %T'),
|
||||
max_score, datetime.datetime.fromtimestamp(max_score).strftime('%Y-%m-%d %T'),
|
||||
len(debug_stories),
|
||||
debug_stories)))
|
||||
|
||||
if not store_stories_key:
|
||||
byscorefunc(ranked_stories_key, min_score, max_score, withscores=include_timestamps, start=offset, num=limit)
|
||||
unread_ranked_stories_keys.append(ranked_stories_key)
|
||||
|
||||
results = pipeline.execute()
|
||||
|
||||
for hashes in results:
|
||||
if not isinstance(hashes, list): continue
|
||||
if group_by_feed:
|
||||
story_hashes[feed_ids[feed_counter]] = hashes
|
||||
feed_counter += 1
|
||||
else:
|
||||
story_hashes.extend(hashes)
|
||||
|
||||
return story_hashes
|
||||
|
||||
def get_stories(self, offset=0, limit=6, order='newest', read_filter='all', withscores=False,
|
||||
hashes_only=False, cutoff_date=None, default_cutoff_date=None):
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
renc = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL_ENCODED)
|
||||
rt = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_TEMP_POOL)
|
||||
ignore_user_stories = False
|
||||
|
||||
stories_key = 'F:%s' % (self.feed_id)
|
||||
read_stories_key = 'RS:%s:%s' % (self.user_id, self.feed_id)
|
||||
unread_stories_key = 'U:%s:%s' % (self.user_id, self.feed_id)
|
||||
|
||||
unread_ranked_stories_key = 'z%sU:%s:%s' % ('h' if hashes_only else '',
|
||||
self.user_id, self.feed_id)
|
||||
if withscores or not offset or not rt.exists(unread_ranked_stories_key):
|
||||
rt.delete(unread_ranked_stories_key)
|
||||
if not r.exists(stories_key):
|
||||
# print " ---> No stories on feed: %s" % self
|
||||
return []
|
||||
elif read_filter == 'all' or not r.exists(read_stories_key):
|
||||
ignore_user_stories = True
|
||||
unread_stories_key = stories_key
|
||||
if not store_stories_key:
|
||||
for hashes in results:
|
||||
if not isinstance(hashes, list): continue
|
||||
if group_by_feed:
|
||||
story_hashes[feed_ids[feed_counter]] = hashes
|
||||
feed_counter += 1
|
||||
else:
|
||||
story_hashes.extend(hashes)
|
||||
|
||||
if store_stories_key:
|
||||
chunk_count = 0
|
||||
chunk_size = 1000
|
||||
if len(unread_ranked_stories_keys) < chunk_size:
|
||||
r.zunionstore(store_stories_key, unread_ranked_stories_keys)
|
||||
else:
|
||||
r.sdiffstore(unread_stories_key, stories_key, read_stories_key)
|
||||
sorted_stories_key = 'zF:%s' % (self.feed_id)
|
||||
r.zinterstore(unread_ranked_stories_key, [sorted_stories_key, unread_stories_key])
|
||||
if not ignore_user_stories:
|
||||
r.delete(unread_stories_key)
|
||||
|
||||
dump = renc.dump(unread_ranked_stories_key)
|
||||
if dump:
|
||||
pipeline = rt.pipeline()
|
||||
pipeline.delete(unread_ranked_stories_key)
|
||||
pipeline.restore(unread_ranked_stories_key, 1*60*60*1000, dump)
|
||||
pipeline = r.pipeline()
|
||||
for unread_ranked_stories_keys_group in chunks(unread_ranked_stories_keys, chunk_size):
|
||||
pipeline.zunionstore(f"{store_stories_key}-chunk{chunk_count}", unread_ranked_stories_keys_group, aggregate="MAX")
|
||||
chunk_count += 1
|
||||
pipeline.execute()
|
||||
r.zunionstore(store_stories_key, [f"{store_stories_key}-chunk{i}" for i in range(chunk_count)], aggregate="MAX")
|
||||
pipeline = r.pipeline()
|
||||
for i in range(chunk_count):
|
||||
pipeline.delete(f"{store_stories_key}-chunk{i}")
|
||||
pipeline.execute()
|
||||
r.delete(unread_ranked_stories_key)
|
||||
|
||||
current_time = int(time.time() + 60*60*24)
|
||||
if not cutoff_date:
|
||||
cutoff_date = datetime.datetime.now() - datetime.timedelta(days=settings.DAYS_OF_UNREAD)
|
||||
if read_filter == "unread":
|
||||
cutoff_date = max(cutoff_date, self.mark_read_date)
|
||||
elif default_cutoff_date:
|
||||
cutoff_date = default_cutoff_date
|
||||
|
||||
if order == 'oldest':
|
||||
byscorefunc = rt.zrangebyscore
|
||||
if read_filter == 'unread':
|
||||
min_score = int(time.mktime(cutoff_date.timetuple())) + 1
|
||||
else:
|
||||
min_score = int(time.mktime(cutoff_date.timetuple())) - 1000
|
||||
max_score = current_time
|
||||
else:
|
||||
byscorefunc = rt.zrevrangebyscore
|
||||
min_score = current_time
|
||||
if read_filter == 'unread':
|
||||
# +1 for the intersection b/w zF and F, which carries an implicit score of 1.
|
||||
max_score = int(time.mktime(cutoff_date.timetuple())) + 1
|
||||
else:
|
||||
max_score = 0
|
||||
|
||||
if settings.DEBUG and False:
|
||||
debug_stories = rt.zrevrange(unread_ranked_stories_key, 0, -1, withscores=True)
|
||||
print((" ---> Unread all stories (%s - %s) %s stories: %s" % (
|
||||
min_score,
|
||||
max_score,
|
||||
len(debug_stories),
|
||||
debug_stories)))
|
||||
story_ids = byscorefunc(unread_ranked_stories_key, min_score,
|
||||
max_score, start=offset, num=500,
|
||||
withscores=withscores)[:limit]
|
||||
if not store_stories_key:
|
||||
return story_hashes
|
||||
|
||||
if withscores:
|
||||
story_ids = [(s[0], int(s[1])) for s in story_ids]
|
||||
def get_stories(self, offset=0, limit=6, order='newest', read_filter='all', cutoff_date=None):
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
unread_ranked_stories_key = 'zU:%s:%s' % (self.user_id, self.feed_id)
|
||||
|
||||
if withscores or hashes_only:
|
||||
return story_ids
|
||||
elif story_ids:
|
||||
story_date_order = "%sstory_date" % ('' if order == 'oldest' else '-')
|
||||
mstories = MStory.objects(story_hash__in=story_ids).order_by(story_date_order)
|
||||
stories = Feed.format_stories(mstories)
|
||||
return stories
|
||||
if offset and r.exists(unread_ranked_stories_key):
|
||||
byscorefunc = r.zrevrange
|
||||
if order == "oldest":
|
||||
byscorefunc = r.zrange
|
||||
story_hashes = byscorefunc(unread_ranked_stories_key, start=offset, end=offset+limit)[:limit]
|
||||
else:
|
||||
return []
|
||||
story_hashes = UserSubscription.story_hashes(self.user.pk, feed_ids=[self.feed.pk],
|
||||
order=order, read_filter=read_filter,
|
||||
offset=offset, limit=limit,
|
||||
cutoff_date=cutoff_date)
|
||||
|
||||
story_date_order = "%sstory_date" % ('' if order == 'oldest' else '-')
|
||||
mstories = MStory.objects(story_hash__in=story_hashes).order_by(story_date_order)
|
||||
stories = Feed.format_stories(mstories)
|
||||
return stories
|
||||
|
||||
@classmethod
|
||||
def feed_stories(cls, user_id, feed_ids=None, offset=0, limit=6,
|
||||
order='newest', read_filter='all', usersubs=None, cutoff_date=None,
|
||||
all_feed_ids=None, cache_prefix=""):
|
||||
rt = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_TEMP_POOL)
|
||||
rt = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
across_all_feeds = False
|
||||
|
||||
if order == 'oldest':
|
||||
|
|
@ -299,34 +310,24 @@ class UserSubscription(models.Model):
|
|||
rt.delete(ranked_stories_keys)
|
||||
rt.delete(unread_ranked_stories_keys)
|
||||
|
||||
story_hashes = cls.story_hashes(user_id, feed_ids=feed_ids,
|
||||
cls.story_hashes(user_id, feed_ids=feed_ids,
|
||||
read_filter=read_filter, order=order,
|
||||
include_timestamps=True,
|
||||
group_by_feed=False,
|
||||
include_timestamps=False,
|
||||
usersubs=usersubs,
|
||||
cutoff_date=cutoff_date,
|
||||
across_all_feeds=across_all_feeds)
|
||||
if not story_hashes:
|
||||
return [], []
|
||||
|
||||
pipeline = rt.pipeline()
|
||||
for story_hash_group in chunks(story_hashes, 100):
|
||||
pipeline.zadd(ranked_stories_keys, dict(story_hash_group))
|
||||
pipeline.execute()
|
||||
across_all_feeds=across_all_feeds,
|
||||
store_stories_key=ranked_stories_keys)
|
||||
story_hashes = range_func(ranked_stories_keys, offset, limit)
|
||||
|
||||
if read_filter == "unread":
|
||||
unread_feed_story_hashes = story_hashes
|
||||
rt.zunionstore(unread_ranked_stories_keys, [ranked_stories_keys])
|
||||
else:
|
||||
unread_story_hashes = cls.story_hashes(user_id, feed_ids=feed_ids,
|
||||
cls.story_hashes(user_id, feed_ids=feed_ids,
|
||||
read_filter="unread", order=order,
|
||||
include_timestamps=True,
|
||||
group_by_feed=False,
|
||||
cutoff_date=cutoff_date)
|
||||
if unread_story_hashes:
|
||||
for unread_story_hash_group in chunks(unread_story_hashes, 100):
|
||||
rt.zadd(unread_ranked_stories_keys, dict(unread_story_hash_group))
|
||||
cutoff_date=cutoff_date,
|
||||
store_stories_key=unread_ranked_stories_keys)
|
||||
unread_feed_story_hashes = range_func(unread_ranked_stories_keys, offset, limit)
|
||||
|
||||
rt.expire(ranked_stories_keys, 60*60)
|
||||
|
|
@ -334,6 +335,15 @@ class UserSubscription(models.Model):
|
|||
|
||||
return story_hashes, unread_feed_story_hashes
|
||||
|
||||
def oldest_manual_unread_story_date(self, r=None):
|
||||
if not r:
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
|
||||
user_manual_unread_stories_feed_key = f"uU:{self.user_id}:{self.feed_id}"
|
||||
oldest_manual_unread = r.zrevrange(user_manual_unread_stories_feed_key, -1, -1, withscores=True)
|
||||
|
||||
return oldest_manual_unread
|
||||
|
||||
@classmethod
|
||||
def truncate_river(cls, user_id, feed_ids, read_filter, cache_prefix=""):
|
||||
rt = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_TEMP_POOL)
|
||||
|
|
@ -501,7 +511,96 @@ class UserSubscription(models.Model):
|
|||
if stale_feeds:
|
||||
stale_feeds = list(set([f.feed_id for f in stale_feeds]))
|
||||
cls.queue_new_feeds(user, new_feeds=stale_feeds)
|
||||
|
||||
@classmethod
|
||||
def schedule_fetch_archive_feeds_for_user(cls, user_id):
|
||||
from apps.profile.tasks import FetchArchiveFeedsForUser
|
||||
FetchArchiveFeedsForUser.apply_async(kwargs=dict(user_id=user_id),
|
||||
queue='search_indexer',
|
||||
time_limit=settings.MAX_SECONDS_COMPLETE_ARCHIVE_FETCH)
|
||||
|
||||
# Should be run as a background task
|
||||
@classmethod
|
||||
def fetch_archive_feeds_for_user(cls, user_id):
|
||||
from apps.profile.tasks import FetchArchiveFeedsChunk, FinishFetchArchiveFeeds
|
||||
|
||||
start_time = time.time()
|
||||
user = User.objects.get(pk=user_id)
|
||||
r = redis.Redis(connection_pool=settings.REDIS_PUBSUB_POOL)
|
||||
r.publish(user.username, 'fetch_archive:start')
|
||||
|
||||
subscriptions = UserSubscription.objects.filter(user=user).only('feed')
|
||||
total = subscriptions.count()
|
||||
|
||||
|
||||
feed_ids = []
|
||||
starting_story_count = 0
|
||||
for sub in subscriptions:
|
||||
try:
|
||||
feed_ids.append(sub.feed.pk)
|
||||
except Feed.DoesNotExist:
|
||||
continue
|
||||
starting_story_count += MStory.objects(story_feed_id=sub.feed.pk).count()
|
||||
|
||||
feed_id_chunks = [c for c in chunks(feed_ids, 1)]
|
||||
logging.user(user, "~FCFetching archive stories from ~SB%s feeds~SN in %s chunks..." %
|
||||
(total, len(feed_id_chunks)))
|
||||
|
||||
search_chunks = [FetchArchiveFeedsChunk.s(feed_ids=feed_id_chunk,
|
||||
user_id=user_id
|
||||
).set(queue='search_indexer')
|
||||
.set(time_limit=settings.MAX_SECONDS_ARCHIVE_FETCH_SINGLE_FEED,
|
||||
soft_time_limit=settings.MAX_SECONDS_ARCHIVE_FETCH_SINGLE_FEED-30)
|
||||
for feed_id_chunk in feed_id_chunks]
|
||||
callback = FinishFetchArchiveFeeds.s(user_id=user_id,
|
||||
start_time=start_time,
|
||||
starting_story_count=starting_story_count).set(queue='search_indexer')
|
||||
celery.chord(search_chunks)(callback)
|
||||
|
||||
@classmethod
|
||||
def fetch_archive_feeds_chunk(cls, user_id, feed_ids):
|
||||
from apps.rss_feeds.models import Feed
|
||||
r = redis.Redis(connection_pool=settings.REDIS_PUBSUB_POOL)
|
||||
user = User.objects.get(pk=user_id)
|
||||
|
||||
logging.user(user, "~FCFetching archive stories from %s feeds..." % len(feed_ids))
|
||||
|
||||
for feed_id in feed_ids:
|
||||
feed = Feed.get_by_id(feed_id)
|
||||
if not feed: continue
|
||||
|
||||
feed.fill_out_archive_stories()
|
||||
|
||||
r.publish(user.username, 'fetch_archive:feeds:%s' %
|
||||
','.join([str(f) for f in feed_ids]))
|
||||
|
||||
@classmethod
|
||||
def finish_fetch_archive_feeds(cls, user_id, start_time, starting_story_count):
|
||||
r = redis.Redis(connection_pool=settings.REDIS_PUBSUB_POOL)
|
||||
user = User.objects.get(pk=user_id)
|
||||
subscriptions = UserSubscription.objects.filter(user=user).only('feed')
|
||||
total = subscriptions.count()
|
||||
duration = time.time() - start_time
|
||||
|
||||
ending_story_count = 0
|
||||
pre_archive_count = 0
|
||||
for sub in subscriptions:
|
||||
try:
|
||||
ending_story_count += MStory.objects(story_feed_id=sub.feed.pk).count()
|
||||
pre_archive_count += Feed.get_by_id(sub.feed.pk).number_of_stories_to_store(pre_archive=True)
|
||||
except Feed.DoesNotExist:
|
||||
continue
|
||||
|
||||
new_story_count = ending_story_count - starting_story_count
|
||||
logging.user(user, f"~FCFinished archive feed fetches for ~SB~FG{subscriptions.count()} feeds~FC~SN: ~FG~SB{new_story_count:,} new~SB~FC, ~FG{ending_story_count:,} total (pre-archive: {pre_archive_count:,} stories)")
|
||||
|
||||
logging.user(user, "~FCFetched archive stories from ~SB%s feeds~SN in ~FM~SB%s~FC~SN sec." %
|
||||
(total, round(duration, 2)))
|
||||
r.publish(user.username, 'fetch_archive:done')
|
||||
|
||||
return ending_story_count, min(pre_archive_count, starting_story_count)
|
||||
|
||||
|
||||
@classmethod
|
||||
def identify_deleted_feed_users(cls, old_feed_id):
|
||||
users = UserSubscriptionFolders.objects.filter(folders__contains=old_feed_id).only('user')
|
||||
|
|
@ -667,8 +766,9 @@ class UserSubscription(models.Model):
|
|||
return
|
||||
|
||||
cutoff_date = cutoff_date - datetime.timedelta(seconds=1)
|
||||
story_hashes = self.get_stories(limit=500, order="newest", cutoff_date=cutoff_date,
|
||||
read_filter="unread", hashes_only=True)
|
||||
story_hashes = UserSubscription.story_hashes(self.user.pk, feed_ids=[self.feed.pk],
|
||||
order="newest", read_filter="unread",
|
||||
cutoff_date=cutoff_date)
|
||||
data = self.mark_story_ids_as_read(story_hashes, aggregated=True)
|
||||
return data
|
||||
|
||||
|
|
@ -695,6 +795,9 @@ class UserSubscription(models.Model):
|
|||
RUserStory.mark_read(self.user_id, self.feed_id, story_hash, aggregated=aggregated)
|
||||
r.publish(self.user.username, 'story:read:%s' % story_hash)
|
||||
|
||||
if self.user.profile.is_archive:
|
||||
RUserUnreadStory.mark_read(self.user_id, story_hash)
|
||||
|
||||
r.publish(self.user.username, 'feed:%s' % self.feed_id)
|
||||
|
||||
self.last_read_date = datetime.datetime.now()
|
||||
|
|
@ -704,13 +807,26 @@ class UserSubscription(models.Model):
|
|||
|
||||
def invert_read_stories_after_unread_story(self, story, request=None):
|
||||
data = dict(code=1)
|
||||
if story.story_date > self.mark_read_date:
|
||||
unread_cutoff = self.user.profile.unread_cutoff
|
||||
if self.mark_read_date > unread_cutoff:
|
||||
unread_cutoff = self.mark_read_date
|
||||
if story.story_date > unread_cutoff:
|
||||
return data
|
||||
|
||||
|
||||
# Check if user is archive and story is outside unread cutoff
|
||||
if self.user.profile.is_archive and story.story_date < self.user.profile.unread_cutoff:
|
||||
RUserUnreadStory.mark_unread(
|
||||
user_id=self.user_id,
|
||||
story_hash=story.story_hash,
|
||||
story_date=story.story_date,
|
||||
)
|
||||
data['story_hashes'] = [story.story_hash]
|
||||
return data
|
||||
|
||||
# Story is outside the mark as read range, so invert all stories before.
|
||||
newer_stories = MStory.objects(story_feed_id=story.story_feed_id,
|
||||
story_date__gte=story.story_date,
|
||||
story_date__lte=self.mark_read_date
|
||||
story_date__lte=unread_cutoff
|
||||
).only('story_hash')
|
||||
newer_stories = [s.story_hash for s in newer_stories]
|
||||
self.mark_read_date = story.story_date - datetime.timedelta(minutes=1)
|
||||
|
|
@ -729,8 +845,8 @@ class UserSubscription(models.Model):
|
|||
oldest_unread_story_date = now
|
||||
|
||||
if self.user.profile.last_seen_on < self.user.profile.unread_cutoff and not force:
|
||||
# if not silent:
|
||||
# logging.info(' ---> [%s] SKIPPING Computing scores: %s (1 week+)' % (self.user, self.feed))
|
||||
if not silent and settings.DEBUG:
|
||||
logging.info(' ---> [%s] SKIPPING Computing scores: %s (1 week+)' % (self.user, self.feed))
|
||||
return self
|
||||
ong = self.unread_count_negative
|
||||
ont = self.unread_count_neutral
|
||||
|
|
@ -762,7 +878,7 @@ class UserSubscription(models.Model):
|
|||
|
||||
unread_story_hashes = self.story_hashes(user_id=self.user_id, feed_ids=[self.feed_id],
|
||||
usersubs=[self],
|
||||
read_filter='unread', group_by_feed=False,
|
||||
read_filter='unread',
|
||||
cutoff_date=self.user.profile.unread_cutoff)
|
||||
|
||||
if not stories:
|
||||
|
|
@ -778,8 +894,8 @@ class UserSubscription(models.Model):
|
|||
|
||||
unread_stories = []
|
||||
for story in stories:
|
||||
if story['story_date'] < date_delta:
|
||||
continue
|
||||
# if story['story_date'] < date_delta:
|
||||
# continue
|
||||
if story['story_hash'] in unread_story_hashes:
|
||||
unread_stories.append(story)
|
||||
if story['story_date'] < oldest_unread_story_date:
|
||||
|
|
@ -827,10 +943,9 @@ class UserSubscription(models.Model):
|
|||
else:
|
||||
feed_scores['neutral'] += 1
|
||||
else:
|
||||
# print " ---> Cutoff date: %s" % date_delta
|
||||
unread_story_hashes = self.story_hashes(user_id=self.user_id, feed_ids=[self.feed_id],
|
||||
usersubs=[self],
|
||||
read_filter='unread', group_by_feed=False,
|
||||
read_filter='unread',
|
||||
include_timestamps=True,
|
||||
cutoff_date=date_delta)
|
||||
|
||||
|
|
@ -895,6 +1010,8 @@ class UserSubscription(models.Model):
|
|||
# Switch read stories
|
||||
RUserStory.switch_feed(user_id=self.user_id, old_feed_id=old_feed.pk,
|
||||
new_feed_id=new_feed.pk)
|
||||
RUserUnreadStory.switch_feed(user_id=self.user_id, old_feed_id=old_feed.pk,
|
||||
new_feed_id=new_feed.pk)
|
||||
|
||||
def switch_feed_for_classifier(model):
|
||||
duplicates = model.objects(feed_id=old_feed.pk, user_id=self.user_id)
|
||||
|
|
@@ -962,7 +1079,20 @@ class UserSubscription(models.Model):
            folders.extend(list(orphan_ids))
            usf.folders = json.encode(folders)
            usf.save()

    @classmethod
    def all_subs_needs_unread_recalc(cls, user_id):
        subs = cls.objects.filter(user=user_id)
        total = len(subs)
        needed_recalc = 0
        for sub in subs:
            if not sub.needs_unread_recalc:
                sub.needs_unread_recalc = True
                sub.save()
                needed_recalc += 1

        logging.debug(f" ---> Recalculated {needed_recalc} of {total} subscriptions for user_id: {user_id}")

    @classmethod
    def verify_feeds_scheduled(cls, user_id):
        r = redis.Redis(connection_pool=settings.REDIS_FEED_UPDATE_POOL)
@@ -994,10 +1124,10 @@ class UserSubscription(models.Model):

        if not safety_net: return

        logging.user(user, "~FBFound ~FR%s unscheduled feeds~FB, scheduling..." % len(safety_net))
        logging.user(user, "~FBFound ~FR%s unscheduled feeds~FB, scheduling immediately..." % len(safety_net))
        for feed_id in safety_net:
            feed = Feed.get_by_id(feed_id)
            feed.set_next_scheduled_update()
            feed.schedule_feed_fetch_immediately()

    @classmethod
    def count_subscribers_to_other_subscriptions(cls, feed_id):
@ -1039,7 +1169,8 @@ class UserSubscription(models.Model):
|
|||
|
||||
return table
|
||||
# return cofeeds
|
||||
|
||||
|
||||
|
||||
class RUserStory:
|
||||
|
||||
@classmethod
|
||||
|
|
@ -1051,11 +1182,8 @@ class RUserStory:
|
|||
ps = redis.Redis(connection_pool=settings.REDIS_PUBSUB_POOL)
|
||||
if not username:
|
||||
username = User.objects.get(pk=user_id).username
|
||||
# if not r2:
|
||||
# r2 = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL2)
|
||||
|
||||
p = r.pipeline()
|
||||
# p2 = r2.pipeline()
|
||||
feed_ids = set()
|
||||
friend_ids = set()
|
||||
|
||||
|
|
@ -1079,7 +1207,6 @@ class RUserStory:
|
|||
cls.mark_read(user_id, feed_id, story_hash, social_user_ids=friends_with_shares, r=p, username=username, ps=ps)
|
||||
|
||||
p.execute()
|
||||
# p2.execute()
|
||||
|
||||
return list(feed_ids), list(friend_ids)
|
||||
|
||||
|
|
@ -1091,8 +1218,6 @@ class RUserStory:
|
|||
s = redis.Redis(connection_pool=settings.REDIS_POOL)
|
||||
if not ps:
|
||||
ps = redis.Redis(connection_pool=settings.REDIS_PUBSUB_POOL)
|
||||
# if not r2:
|
||||
# r2 = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL2)
|
||||
|
||||
friend_ids = set()
|
||||
feed_id, _ = MStory.split_story_hash(story_hash)
|
||||
|
|
@ -1118,6 +1243,8 @@ class RUserStory:
|
|||
feed_read_key = "fR:%s:%s" % (feed_id, week_of_year)
|
||||
|
||||
r.incr(feed_read_key)
|
||||
# This settings.DAYS_OF_STORY_HASHES doesn't need to consider potential pro subscribers
|
||||
# because the feed_read_key is really only used for statistics and not unreads
|
||||
r.expire(feed_read_key, 2*settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
|
||||
@classmethod
|
||||
|
|
@ -1125,8 +1252,6 @@ class RUserStory:
|
|||
aggregated=False, r=None, username=None, ps=None):
|
||||
if not r:
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
# if not r2:
|
||||
# r2 = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL2)
|
||||
|
||||
story_hash = MStory.ensure_story_hash(story_hash, story_feed_id=story_feed_id)
|
||||
|
||||
|
|
@ -1134,9 +1259,7 @@ class RUserStory:
|
|||
|
||||
def redis_commands(key):
|
||||
r.sadd(key, story_hash)
|
||||
# r2.sadd(key, story_hash)
|
||||
r.expire(key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
# r2.expire(key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
r.expire(key, Feed.days_of_story_hashes_for_feed(story_feed_id)*24*60*60)
|
||||
|
||||
all_read_stories_key = 'RS:%s' % (user_id)
|
||||
redis_commands(all_read_stories_key)
|
||||
|
|
@@ -1151,25 +1274,36 @@ class RUserStory:
        for social_user_id in social_user_ids:
            social_read_story_key = 'RS:%s:B:%s' % (user_id, social_user_id)
            redis_commands(social_read_story_key)

        feed_id, _ = MStory.split_story_hash(story_hash)

        # Don't remove unread stories from zU because users are actively paging through
        # unread_stories_key = f"U:{user_id}:{story_feed_id}"
        # unread_ranked_stories_key = f"zU:{user_id}:{story_feed_id}"
        # r.srem(unread_stories_key, story_hash)
        # r.zrem(unread_ranked_stories_key, story_hash)

        if not aggregated:
            key = 'lRS:%s' % user_id
            r.lpush(key, story_hash)
            r.ltrim(key, 0, 1000)
            r.expire(key, settings.DAYS_OF_STORY_HASHES*24*60*60)
            r.expire(key, Feed.days_of_story_hashes_for_feed(story_feed_id)*24*60*60)

    @staticmethod
    def story_can_be_marked_read_by_user(story, user):
    def story_can_be_marked_unread_by_user(story, user):
        message = None
        if story.story_date < user.profile.unread_cutoff:
        if story.story_date < user.profile.unread_cutoff and not user.profile.is_archive:
            # if user.profile.is_archive:
            #     message = "Story is more than %s days old, change your days of unreads under Preferences." % (
            #         user.profile.days_of_unread)
            if user.profile.is_premium:
                message = "Story is more than %s days old, cannot mark as unread." % (
                message = "Story is more than %s days old. Premium Archive accounts can mark any story as unread." % (
                    settings.DAYS_OF_UNREAD)
            elif story.story_date > user.profile.unread_cutoff_premium:
                message = "Story is more than %s days old. Premiums can mark unread up to 30 days." % (
                    settings.DAYS_OF_UNREAD_FREE)
                message = "Story is older than %s days. Premium has %s days, and Premium Archive can mark anything unread." % (
                    settings.DAYS_OF_UNREAD_FREE, settings.DAYS_OF_UNREAD)
            else:
                message = "Story is more than %s days old, cannot mark as unread." % (
                message = "Story is more than %s days old, only Premium Archive can mark older stories unread." % (
                    settings.DAYS_OF_UNREAD_FREE)
        return message
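The messages above encode a three-tier rule: free accounts can mark stories unread only within DAYS_OF_UNREAD_FREE, premium accounts within DAYS_OF_UNREAD, and Premium Archive accounts at any age. A hedged, standalone sketch of that decision follows; the 14/30-day values and the SimpleProfile/SimpleStory shapes are assumptions for illustration, not NewsBlur's actual settings or models.

    # Hedged sketch of the mark-unread eligibility tiers described above.
    import datetime
    from dataclasses import dataclass

    DAYS_OF_UNREAD_FREE = 14   # assumed value
    DAYS_OF_UNREAD = 30        # assumed value

    @dataclass
    class SimpleProfile:
        is_premium: bool = False
        is_archive: bool = False

    @dataclass
    class SimpleStory:
        story_date: datetime.datetime

    def can_mark_unread(story, profile, now=None):
        # Returns None when allowed, or an explanatory message when blocked.
        now = now or datetime.datetime.now()
        age_days = (now - story.story_date).days
        if profile.is_archive:
            return None
        if profile.is_premium:
            if age_days > DAYS_OF_UNREAD:
                return "Story is older than %s days. Premium Archive can mark anything unread." % DAYS_OF_UNREAD
            return None
        if age_days > DAYS_OF_UNREAD_FREE:
            return "Story is more than %s days old, only Premium Archive can mark older stories unread." % DAYS_OF_UNREAD_FREE
        return None

    # Example: a 20-day-old story is blocked for free users but allowed for premium and archive.
    story = SimpleStory(story_date=datetime.datetime.now() - datetime.timedelta(days=20))
    assert can_mark_unread(story, SimpleProfile()) is not None
    assert can_mark_unread(story, SimpleProfile(is_premium=True)) is None
    assert can_mark_unread(story, SimpleProfile(is_archive=True)) is None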
@ -1177,7 +1311,6 @@ class RUserStory:
|
|||
def mark_unread(user_id, story_feed_id, story_hash, social_user_ids=None, r=None, username=None, ps=None):
|
||||
if not r:
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
# r2 = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL2)
|
||||
|
||||
story_hash = MStory.ensure_story_hash(story_hash, story_feed_id=story_feed_id)
|
||||
|
||||
|
|
@ -1185,9 +1318,7 @@ class RUserStory:
|
|||
|
||||
def redis_commands(key):
|
||||
r.srem(key, story_hash)
|
||||
# r2.srem(key, story_hash)
|
||||
r.expire(key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
# r2.expire(key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
r.expire(key, Feed.days_of_story_hashes_for_feed(story_feed_id)*24*60*60)
|
||||
|
||||
all_read_stories_key = 'RS:%s' % (user_id)
|
||||
redis_commands(all_read_stories_key)
|
||||
|
|
@ -1231,28 +1362,23 @@ class RUserStory:
|
|||
@classmethod
|
||||
def switch_feed(cls, user_id, old_feed_id, new_feed_id):
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
# r2 = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL2)
|
||||
p = r.pipeline()
|
||||
# p2 = r2.pipeline()
|
||||
story_hashes = cls.get_stories(user_id, old_feed_id, r=r)
|
||||
|
||||
story_hashes = UserSubscription.story_hashes(user_id, feed_ids=[old_feed_id])
|
||||
# story_hashes = cls.get_stories(user_id, old_feed_id, r=r)
|
||||
|
||||
for story_hash in story_hashes:
|
||||
_, hash_story = MStory.split_story_hash(story_hash)
|
||||
new_story_hash = "%s:%s" % (new_feed_id, hash_story)
|
||||
read_feed_key = "RS:%s:%s" % (user_id, new_feed_id)
|
||||
p.sadd(read_feed_key, new_story_hash)
|
||||
# p2.sadd(read_feed_key, new_story_hash)
|
||||
p.expire(read_feed_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
# p2.expire(read_feed_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
p.expire(read_feed_key, Feed.days_of_story_hashes_for_feed(new_feed_id)*24*60*60)
|
||||
|
||||
read_user_key = "RS:%s" % (user_id)
|
||||
p.sadd(read_user_key, new_story_hash)
|
||||
# p2.sadd(read_user_key, new_story_hash)
|
||||
p.expire(read_user_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
# p2.expire(read_user_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
p.expire(read_user_key, Feed.days_of_story_hashes_for_feed(new_feed_id)*24*60*60)
|
||||
|
||||
p.execute()
|
||||
# p2.execute()
|
||||
|
||||
if len(story_hashes) > 0:
|
||||
logging.info(" ---> %s read stories" % len(story_hashes))
|
||||
|
|
@ -1260,9 +1386,7 @@ class RUserStory:
|
|||
@classmethod
|
||||
def switch_hash(cls, feed, old_hash, new_hash):
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
# r2 = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL2)
|
||||
p = r.pipeline()
|
||||
# p2 = r2.pipeline()
|
||||
|
||||
usersubs = UserSubscription.objects.filter(feed_id=feed.pk, last_read_date__gte=feed.unread_cutoff)
|
||||
logging.info(" ---> ~SB%s usersubs~SN to switch read story hashes..." % len(usersubs))
|
||||
|
|
@ -1271,18 +1395,13 @@ class RUserStory:
|
|||
read = r.sismember(rs_key, old_hash)
|
||||
if read:
|
||||
p.sadd(rs_key, new_hash)
|
||||
# p2.sadd(rs_key, new_hash)
|
||||
p.expire(rs_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
# p2.expire(rs_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
p.expire(rs_key, feed.days_of_story_hashes*24*60*60)
|
||||
|
||||
read_user_key = "RS:%s" % sub.user.pk
|
||||
p.sadd(read_user_key, new_hash)
|
||||
# p2.sadd(read_user_key, new_hash)
|
||||
p.expire(read_user_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
# p2.expire(read_user_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
p.expire(read_user_key, feed.days_of_story_hashes*24*60*60)
|
||||
|
||||
p.execute()
|
||||
# p2.execute()
|
||||
|
||||
@classmethod
|
||||
def read_story_count(cls, user_id):
|
||||
|
|
@ -1733,3 +1852,84 @@ class Feature(models.Model):
|
|||
|
||||
class Meta:
|
||||
ordering = ["-date"]
|
||||
|
||||
class RUserUnreadStory:
|
||||
"""Model to store manually unread stories that are older than a user's unread_cutoff
|
||||
(same as days_of_unread). This is built for Premium Archive purposes.
|
||||
|
||||
If a story is marked as unread but is within the unread_cutoff, no need to add a
|
||||
UserUnreadStory instance as it will be automatically marked as read according to
|
||||
the user's days_of_unread preference.
|
||||
"""
|
||||
|
||||
@classmethod
|
||||
def mark_unread(cls, user_id, story_hash, story_date, r=None):
|
||||
if not r:
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
if isinstance(story_date, float):
|
||||
story_date = int(story_date)
|
||||
if not isinstance(story_date, int):
|
||||
story_date = int(time.mktime(story_date.timetuple()))
|
||||
|
||||
feed_id, _ = MStory.split_story_hash(story_hash)
|
||||
user_manual_unread_stories_key = f"uU:{user_id}"
|
||||
user_manual_unread_stories_feed_key = f"uU:{user_id}:{feed_id}"
|
||||
|
||||
r.zadd(user_manual_unread_stories_key, {story_hash: story_date})
|
||||
r.zadd(user_manual_unread_stories_feed_key, {story_hash: story_date})
|
||||
|
||||
@classmethod
|
||||
def mark_read(cls, user_id, story_hashes, r=None):
|
||||
if not isinstance(story_hashes, list):
|
||||
story_hashes = [story_hashes]
|
||||
if not r:
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
|
||||
pipeline = r.pipeline()
|
||||
for story_hash in story_hashes:
|
||||
feed_id, _ = MStory.split_story_hash(story_hash)
|
||||
|
||||
user_manual_unread_stories_key = f"uU:{user_id}"
|
||||
user_manual_unread_stories_feed_key = f"uU:{user_id}:{feed_id}"
|
||||
|
||||
pipeline.zrem(user_manual_unread_stories_key, story_hash)
|
||||
pipeline.zrem(user_manual_unread_stories_feed_key, story_hash)
|
||||
pipeline.execute()
|
||||
|
||||
@classmethod
|
||||
def unreads(cls, user_id, story_hash):
|
||||
if not isinstance(story_hash, list):
|
||||
story_hash = [story_hash]
|
||||
|
||||
user_unread_stories = cls.objects.filter(user_id=user_id, story_hash__in=story_hash)
|
||||
|
||||
return user_unread_stories
|
||||
|
||||
@staticmethod
|
||||
def get_stories_and_dates(user_id, feed_id, r=None):
|
||||
if not r:
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
|
||||
user_manual_unread_stories_feed_key = f"uU:{user_id}:{feed_id}"
|
||||
story_hashes = r.zrange(user_manual_unread_stories_feed_key, 0, -1, withscores=True)
|
||||
|
||||
return story_hashes
|
||||
|
||||
@classmethod
|
||||
def switch_feed(cls, user_id, old_feed_id, new_feed_id):
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
p = r.pipeline()
|
||||
story_hashes = cls.get_stories_and_dates(user_id, old_feed_id, r=r)
|
||||
|
||||
for (story_hash, story_timestamp) in story_hashes:
|
||||
_, hash_story = MStory.split_story_hash(story_hash)
|
||||
new_story_hash = "%s:%s" % (new_feed_id, hash_story)
|
||||
# read_feed_key = "RS:%s:%s" % (user_id, new_feed_id)
|
||||
# user_manual_unread_stories_feed_key = f"uU:{user_id}:{new_feed_id}"
|
||||
cls.mark_unread(user_id, new_story_hash, story_timestamp, r=p)
|
||||
|
||||
p.execute()
|
||||
|
||||
if len(story_hashes) > 0:
|
||||
logging.info(" ---> %s archived unread stories" % len(story_hashes))
|
||||
|
||||
|
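The new RUserUnreadStory class stores manually-unread archive stories in two redis sorted sets, uU:<user_id> and uU:<user_id>:<feed_id>, scored by the story timestamp. A minimal sketch of that key layout, assuming a local redis connection and illustrative ids (NewsBlur itself pulls the connection from settings.REDIS_STORY_HASH_POOL):

    import time
    import redis

    r = redis.Redis()  # assumption: local redis; production uses settings.REDIS_STORY_HASH_POOL
    user_id, feed_id = 42, 123          # illustrative ids
    story_hash = "123:deadbeef"         # illustrative story hash
    score = int(time.time())

    # mark_unread(): add the hash to both the per-user and per-user-per-feed sets
    r.zadd(f"uU:{user_id}", {story_hash: score})
    r.zadd(f"uU:{user_id}:{feed_id}", {story_hash: score})

    # get_stories_and_dates(): read hashes back with their timestamps
    print(r.zrange(f"uU:{user_id}:{feed_id}", 0, -1, withscores=True))

    # mark_read(): remove the hash from both sets
    r.zrem(f"uU:{user_id}", story_hash)
    r.zrem(f"uU:{user_id}:{feed_id}", story_hash)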
|
|
|||
|
|
@ -5,6 +5,7 @@ urlpatterns = [
|
|||
url(r'^$', views.index),
|
||||
url(r'^buster', views.iframe_buster, name='iframe-buster'),
|
||||
url(r'^login_as', views.login_as, name='login_as'),
|
||||
url(r'^welcome', views.welcome_req, name='welcome'),
|
||||
url(r'^logout', views.logout, name='welcome-logout'),
|
||||
url(r'^login', views.login, name='welcome-login'),
|
||||
url(r'^autologin/(?P<username>\w+)/(?P<secret>\w+)/?', views.autologin, name='autologin'),
|
||||
|
|
@ -63,4 +64,5 @@ urlpatterns = [
|
|||
url(r'^save_search', views.save_search, name='save-search'),
|
||||
url(r'^delete_search', views.delete_search, name='delete-search'),
|
||||
url(r'^save_dashboard_river', views.save_dashboard_river, name='save-dashboard-river'),
|
||||
url(r'^remove_dashboard_river', views.remove_dashboard_river, name='remove-dashboard-river'),
|
||||
]
|
||||
|
|
|
|||
|
|
@ -37,7 +37,7 @@ from apps.analyzer.models import apply_classifier_titles, apply_classifier_feeds
|
|||
from apps.analyzer.models import apply_classifier_authors, apply_classifier_tags
|
||||
from apps.analyzer.models import get_classifiers_for_user, sort_classifiers_by_feed
|
||||
from apps.profile.models import Profile, MCustomStyling, MDashboardRiver
|
||||
from apps.reader.models import UserSubscription, UserSubscriptionFolders, RUserStory, Feature
|
||||
from apps.reader.models import UserSubscription, UserSubscriptionFolders, RUserStory, RUserUnreadStory, Feature
|
||||
from apps.reader.forms import SignupForm, LoginForm, FeatureForm
|
||||
from apps.rss_feeds.models import MFeedIcon, MStarredStoryCounts, MSavedSearch
|
||||
from apps.notifications.models import MUserFeedNotification
|
||||
|
|
@ -79,6 +79,8 @@ ALLOWED_SUBDOMAINS = [
|
|||
'discovery',
|
||||
'debug',
|
||||
'debug3',
|
||||
'staging2',
|
||||
'staging3',
|
||||
'nb',
|
||||
]
|
||||
|
||||
|
|
@ -116,9 +118,9 @@ def index(request, **kwargs):
|
|||
def dashboard(request, **kwargs):
|
||||
user = request.user
|
||||
feed_count = UserSubscription.objects.filter(user=request.user).count()
|
||||
recommended_feeds = RecommendedFeed.objects.filter(is_public=True,
|
||||
approved_date__lte=datetime.datetime.now()
|
||||
).select_related('feed')[:2]
|
||||
# recommended_feeds = RecommendedFeed.objects.filter(is_public=True,
|
||||
# approved_date__lte=datetime.datetime.now()
|
||||
# ).select_related('feed')[:2]
|
||||
unmoderated_feeds = []
|
||||
if user.is_staff:
|
||||
unmoderated_feeds = RecommendedFeed.objects.filter(is_public=False,
|
||||
|
|
@ -144,13 +146,18 @@ def dashboard(request, **kwargs):
|
|||
'custom_styling' : custom_styling,
|
||||
'dashboard_rivers' : dashboard_rivers,
|
||||
'account_images' : list(range(1, 4)),
|
||||
'recommended_feeds' : recommended_feeds,
|
||||
# 'recommended_feeds' : recommended_feeds,
|
||||
'unmoderated_feeds' : unmoderated_feeds,
|
||||
'statistics' : statistics,
|
||||
'social_profile' : social_profile,
|
||||
'debug' : settings.DEBUG,
|
||||
'debug_assets' : settings.DEBUG_ASSETS,
|
||||
}, "reader/dashboard.xhtml"
|
||||
|
||||
|
||||
@render_to('reader/dashboard.xhtml')
|
||||
def welcome_req(request, **kwargs):
|
||||
return welcome(request, **kwargs)
|
||||
|
||||
def welcome(request, **kwargs):
|
||||
user = get_user(request)
|
||||
statistics = MStatistics.all()
|
||||
|
|
@ -664,9 +671,9 @@ def load_single_feed(request, feed_id):
|
|||
# User must be subscribed to a newsletter in order to read it
|
||||
raise Http404
|
||||
|
||||
if page > 200:
|
||||
logging.user(request, "~BR~FK~SBOver page 200 on single feed: %s" % page)
|
||||
raise Http404
|
||||
if page > 400:
|
||||
logging.user(request, "~BR~FK~SBOver page 400 on single feed: %s" % page)
|
||||
assert False
|
||||
|
||||
if query:
|
||||
if user.profile.is_premium:
|
||||
|
|
@ -682,11 +689,10 @@ def load_single_feed(request, feed_id):
|
|||
story_feed_id=feed_id
|
||||
).order_by('%sstarred_date' % ('-' if order == 'newest' else ''))[offset:offset+limit]
|
||||
stories = Feed.format_stories(mstories)
|
||||
elif usersub and (read_filter == 'unread' or order == 'oldest'):
|
||||
stories = usersub.get_stories(order=order, read_filter=read_filter, offset=offset, limit=limit,
|
||||
default_cutoff_date=user.profile.unread_cutoff)
|
||||
elif usersub and read_filter == 'unread':
|
||||
stories = usersub.get_stories(order=order, read_filter=read_filter, offset=offset, limit=limit)
|
||||
else:
|
||||
stories = feed.get_stories(offset, limit)
|
||||
stories = feed.get_stories(offset, limit, order=order)
|
||||
|
||||
checkpoint1 = time.time()
|
||||
|
||||
|
|
@ -722,7 +728,6 @@ def load_single_feed(request, feed_id):
|
|||
unread_story_hashes = UserSubscription.story_hashes(user.pk, read_filter='unread',
|
||||
feed_ids=[usersub.feed_id],
|
||||
usersubs=[usersub],
|
||||
group_by_feed=False,
|
||||
cutoff_date=user.profile.unread_cutoff)
|
||||
story_hashes = [story['story_hash'] for story in stories if story['story_hash']]
|
||||
starred_stories = MStarredStory.objects(user_id=user.pk,
|
||||
|
|
@ -753,7 +758,7 @@ def load_single_feed(request, feed_id):
|
|||
story['long_parsed_date'] = format_story_link_date__long(story_date, nowtz)
|
||||
if usersub:
|
||||
story['read_status'] = 1
|
||||
if story['story_date'] < user.profile.unread_cutoff:
|
||||
if not user.profile.is_archive and story['story_date'] < user.profile.unread_cutoff:
|
||||
story['read_status'] = 1
|
||||
elif (read_filter == 'all' or query) and usersub:
|
||||
story['read_status'] = 1 if story['story_hash'] not in unread_story_hashes else 0
|
||||
|
|
@ -765,7 +770,7 @@ def load_single_feed(request, feed_id):
|
|||
starred_date = localtime_for_timezone(starred_story['starred_date'],
|
||||
user.profile.timezone)
|
||||
story['starred_date'] = format_story_link_date__long(starred_date, now)
|
||||
story['starred_timestamp'] = starred_date.strftime('%s')
|
||||
story['starred_timestamp'] = int(starred_date.timestamp())
|
||||
story['user_tags'] = starred_story['user_tags']
|
||||
story['user_notes'] = starred_story['user_notes']
|
||||
story['highlights'] = starred_story['highlights']
|
||||
|
|
@ -1024,7 +1029,7 @@ def load_starred_stories(request):
|
|||
story['long_parsed_date'] = format_story_link_date__long(story_date, nowtz)
|
||||
starred_date = localtime_for_timezone(story['starred_date'], user.profile.timezone)
|
||||
story['starred_date'] = format_story_link_date__long(starred_date, nowtz)
|
||||
story['starred_timestamp'] = starred_date.strftime('%s')
|
||||
story['starred_timestamp'] = int(starred_date.timestamp())
|
||||
story['read_status'] = 1
|
||||
story['starred'] = True
|
||||
story['intelligence'] = {
|
||||
|
|
@ -1164,7 +1169,7 @@ def folder_rss_feed(request, user_id, secret_token, unread_filter, folder_slug):
|
|||
feed_ids, folder_title = user_sub_folders.feed_ids_under_folder_slug(folder_slug)
|
||||
|
||||
usersubs = UserSubscription.subs_for_feeds(user.pk, feed_ids=feed_ids)
|
||||
if feed_ids and user.profile.is_premium:
|
||||
if feed_ids and user.profile.is_archive:
|
||||
params = {
|
||||
"user_id": user.pk,
|
||||
"feed_ids": feed_ids,
|
||||
|
|
@ -1261,12 +1266,13 @@ def folder_rss_feed(request, user_id, secret_token, unread_filter, folder_slug):
|
|||
if story['story_authors']:
|
||||
story_data['author_name'] = story['story_authors']
|
||||
rss.add_item(**story_data)
|
||||
|
||||
if not user.profile.is_premium:
|
||||
|
||||
# TODO: Remove below date hack to accommodate users who paid for premium but want folder rss
|
||||
if not user.profile.is_archive and (datetime.datetime.now() > datetime.datetime(2023, 7, 1)):
|
||||
story_data = {
|
||||
'title': "You must have a premium account on NewsBlur to have RSS feeds for folders.",
|
||||
'link': "https://%s" % domain,
|
||||
'description': "You must have a premium account on NewsBlur to have RSS feeds for folders.",
|
||||
'title': "You must have a premium archive subscription on NewsBlur to have RSS feeds for folders.",
|
||||
'link': "https://%s/?next=premium" % domain,
|
||||
'description': "You must have a premium archive subscription on NewsBlur to have RSS feeds for folders.",
|
||||
'unique_id': "https://%s/premium_only" % domain,
|
||||
'pubdate': localtime_for_timezone(datetime.datetime.now(), user.profile.timezone),
|
||||
}
|
||||
|
|
@ -1348,7 +1354,7 @@ def load_read_stories(request):
|
|||
starred_date = localtime_for_timezone(starred_story['starred_date'],
|
||||
user.profile.timezone)
|
||||
story['starred_date'] = format_story_link_date__long(starred_date, now)
|
||||
story['starred_timestamp'] = starred_date.strftime('%s')
|
||||
story['starred_timestamp'] = int(starred_date.timestamp())
|
||||
if story['story_hash'] in shared_stories:
|
||||
story['shared'] = True
|
||||
story['shared_comments'] = strip_tags(shared_stories[story['story_hash']]['comments'])
|
||||
|
|
@ -1420,7 +1426,6 @@ def load_river_stories__redis(request):
|
|||
mstories = stories
|
||||
unread_feed_story_hashes = UserSubscription.story_hashes(user.pk, feed_ids=feed_ids,
|
||||
read_filter="unread", order=order,
|
||||
group_by_feed=False,
|
||||
cutoff_date=user.profile.unread_cutoff)
|
||||
else:
|
||||
stories = []
|
||||
|
|
@ -1528,7 +1533,7 @@ def load_river_stories__redis(request):
|
|||
starred_date = localtime_for_timezone(starred_stories[story['story_hash']]['starred_date'],
|
||||
user.profile.timezone)
|
||||
story['starred_date'] = format_story_link_date__long(starred_date, now)
|
||||
story['starred_timestamp'] = starred_date.strftime('%s')
|
||||
story['starred_timestamp'] = int(starred_date.timestamp())
|
||||
story['user_tags'] = starred_stories[story['story_hash']]['user_tags']
|
||||
story['user_notes'] = starred_stories[story['story_hash']]['user_notes']
|
||||
story['highlights'] = starred_stories[story['story_hash']]['highlights']
|
||||
|
|
@ -1674,50 +1679,10 @@ def complete_river(request):
|
|||
if feed_ids:
|
||||
stories_truncated = UserSubscription.truncate_river(user.pk, feed_ids, read_filter, cache_prefix="dashboard:")
|
||||
|
||||
if page > 1:
|
||||
if page >= 1:
|
||||
logging.user(request, "~FC~BBRiver complete on page ~SB%s~SN, truncating ~SB%s~SN stories from ~SB%s~SN feeds" % (page, stories_truncated, len(feed_ids)))
|
||||
|
||||
return dict(code=1, message="Truncated %s stories from %s" % (stories_truncated, len(feed_ids)))
|
||||
|
||||
@json.json_view
|
||||
def unread_story_hashes__old(request):
|
||||
user = get_user(request)
|
||||
feed_ids = request.GET.getlist('feed_id') or request.GET.getlist('feed_id[]')
|
||||
feed_ids = [int(feed_id) for feed_id in feed_ids if feed_id]
|
||||
include_timestamps = is_true(request.GET.get('include_timestamps', False))
|
||||
usersubs = {}
|
||||
|
||||
if not feed_ids:
|
||||
usersubs = UserSubscription.objects.filter(Q(unread_count_neutral__gt=0) |
|
||||
Q(unread_count_positive__gt=0),
|
||||
user=user, active=True)
|
||||
feed_ids = [sub.feed_id for sub in usersubs]
|
||||
else:
|
||||
usersubs = UserSubscription.objects.filter(Q(unread_count_neutral__gt=0) |
|
||||
Q(unread_count_positive__gt=0),
|
||||
user=user, active=True, feed__in=feed_ids)
|
||||
|
||||
unread_feed_story_hashes = {}
|
||||
story_hash_count = 0
|
||||
|
||||
usersubs = dict((sub.feed_id, sub) for sub in usersubs)
|
||||
for feed_id in feed_ids:
|
||||
if feed_id in usersubs:
|
||||
us = usersubs[feed_id]
|
||||
else:
|
||||
continue
|
||||
if not us.unread_count_neutral and not us.unread_count_positive:
|
||||
continue
|
||||
unread_feed_story_hashes[feed_id] = us.get_stories(read_filter='unread', limit=500,
|
||||
withscores=include_timestamps,
|
||||
hashes_only=True,
|
||||
default_cutoff_date=user.profile.unread_cutoff)
|
||||
story_hash_count += len(unread_feed_story_hashes[feed_id])
|
||||
|
||||
logging.user(request, "~FYLoading ~FCunread story hashes~FY: ~SB%s feeds~SN (%s story hashes)" %
|
||||
(len(feed_ids), len(story_hash_count)))
|
||||
|
||||
return dict(unread_feed_story_hashes=unread_feed_story_hashes)
|
||||
|
||||
@json.json_view
|
||||
def unread_story_hashes(request):
|
||||
|
|
@ -1731,6 +1696,7 @@ def unread_story_hashes(request):
|
|||
story_hashes = UserSubscription.story_hashes(user.pk, feed_ids=feed_ids,
|
||||
order=order, read_filter=read_filter,
|
||||
include_timestamps=include_timestamps,
|
||||
group_by_feed=True,
|
||||
cutoff_date=user.profile.unread_cutoff)
|
||||
|
||||
logging.user(request, "~FYLoading ~FCunread story hashes~FY: ~SB%s feeds~SN (%s story hashes)" %
|
||||
|
|
@ -1817,6 +1783,9 @@ def mark_story_hashes_as_read(request):
|
|||
return dict(code=-1, message="Missing `story_hash` list parameter.")
|
||||
|
||||
feed_ids, friend_ids = RUserStory.mark_story_hashes_read(request.user.pk, story_hashes, username=request.user.username)
|
||||
|
||||
if request.user.profile.is_archive:
|
||||
RUserUnreadStory.mark_read(request.user.pk, story_hashes)
|
||||
|
||||
if friend_ids:
|
||||
socialsubs = MSocialSubscription.objects.filter(
|
||||
|
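For archive accounts, marking story hashes as read now also clears any matching manually-unread entries, so a story the user forced back to unread does not stay pinned forever. A rough sketch of that flow, using the names added in this diff (the user object is assumed to come from the request):

    from apps.reader.models import RUserStory, RUserUnreadStory

    def mark_hashes_read(user, story_hashes):
        # Sketch of the new branch in mark_story_hashes_as_read
        feed_ids, friend_ids = RUserStory.mark_story_hashes_read(
            user.pk, story_hashes, username=user.username)

        if user.profile.is_archive:
            # drop the uU:<user_id> and uU:<user_id>:<feed_id> entries created by mark_unread
            RUserUnreadStory.mark_read(user.pk, story_hashes)

        return feed_ids, friend_ids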
|
@ -1952,16 +1921,16 @@ def mark_story_as_unread(request):
|
|||
if not story:
|
||||
logging.user(request, "~FY~SBUnread~SN story in feed: %s (NOT FOUND)" % (feed))
|
||||
return dict(code=-1, message="Story not found.")
|
||||
|
||||
if usersub:
|
||||
data = usersub.invert_read_stories_after_unread_story(story, request)
|
||||
|
||||
message = RUserStory.story_can_be_marked_read_by_user(story, request.user)
|
||||
message = RUserStory.story_can_be_marked_unread_by_user(story, request.user)
|
||||
if message:
|
||||
data['code'] = -1
|
||||
data['message'] = message
|
||||
return data
|
||||
|
||||
if usersub:
|
||||
data = usersub.invert_read_stories_after_unread_story(story, request)
|
||||
|
||||
social_subs = MSocialSubscription.mark_dirty_sharing_story(user_id=request.user.pk,
|
||||
story_feed_id=feed_id,
|
||||
story_guid_hash=story.guid_hash)
|
||||
|
|
@ -1993,7 +1962,7 @@ def mark_story_hash_as_unread(request):
|
|||
return data
|
||||
else:
|
||||
datas.append(data)
|
||||
message = RUserStory.story_can_be_marked_read_by_user(story, request.user)
|
||||
message = RUserStory.story_can_be_marked_unread_by_user(story, request.user)
|
||||
if message:
|
||||
data = dict(code=-1, message=message, story_hash=story_hash)
|
||||
if not is_list:
|
||||
|
|
@ -2870,7 +2839,7 @@ def delete_search(request):
|
|||
def save_dashboard_river(request):
|
||||
river_id = request.POST['river_id']
|
||||
river_side = request.POST['river_side']
|
||||
river_order = request.POST['river_order']
|
||||
river_order = int(request.POST['river_order'])
|
||||
|
||||
logging.user(request, "~FCSaving dashboard river: ~SB%s~SN (%s %s)" % (river_id, river_side, river_order))
|
||||
|
||||
|
|
@ -2880,3 +2849,19 @@ def save_dashboard_river(request):
|
|||
return {
|
||||
'dashboard_rivers': dashboard_rivers,
|
||||
}
|
||||
|
||||
@required_params('river_id', 'river_side', 'river_order')
|
||||
@json.json_view
|
||||
def remove_dashboard_river(request):
|
||||
river_id = request.POST['river_id']
|
||||
river_side = request.POST['river_side']
|
||||
river_order = int(request.POST['river_order'])
|
||||
|
||||
logging.user(request, "~FRRemoving~FC dashboard river: ~SB%s~SN (%s %s)" % (river_id, river_side, river_order))
|
||||
|
||||
MDashboardRiver.remove_river(request.user.pk, river_side, river_order)
|
||||
dashboard_rivers = MDashboardRiver.get_user_rivers(request.user.pk)
|
||||
|
||||
return {
|
||||
'dashboard_rivers': dashboard_rivers,
|
||||
}
|
||||
|
|
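Both dashboard-river endpoints take the same three POST parameters (river_id, river_side, river_order). A quick way to exercise them from a script, assuming an already-authenticated session, a local host, and the /reader/ mount point (all three are assumptions, not part of this diff):

    import requests

    session = requests.Session()   # assumption: this session is already logged in
    base = "https://localhost"     # assumption: local dev host

    params = {"river_id": "river:global", "river_side": "left", "river_order": 0}  # illustrative values
    session.post(base + "/reader/save_dashboard_river", data=params)
    session.post(base + "/reader/remove_dashboard_river", data=params)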
|
|||
apps/rss_feeds/migrations/0003_auto_20220110_2105.py (new file, 38 lines)
|
|
@ -0,0 +1,38 @@
|
|||
# Generated by Django 3.1.10 on 2022-01-10 21:05
|
||||
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('rss_feeds', '0002_remove_mongo_types'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AlterField(
|
||||
model_name='feed',
|
||||
name='feed_address_locked',
|
||||
field=models.BooleanField(blank=True, default=False, null=True),
|
||||
),
|
||||
migrations.AlterField(
|
||||
model_name='feed',
|
||||
name='is_push',
|
||||
field=models.BooleanField(blank=True, default=False, null=True),
|
||||
),
|
||||
migrations.AlterField(
|
||||
model_name='feed',
|
||||
name='s3_icon',
|
||||
field=models.BooleanField(blank=True, default=False, null=True),
|
||||
),
|
||||
migrations.AlterField(
|
||||
model_name='feed',
|
||||
name='s3_page',
|
||||
field=models.BooleanField(blank=True, default=False, null=True),
|
||||
),
|
||||
migrations.AlterField(
|
||||
model_name='feed',
|
||||
name='search_indexed',
|
||||
field=models.BooleanField(blank=True, default=None, null=True),
|
||||
),
|
||||
]
|
||||
apps/rss_feeds/migrations/0003_mongo_version_4_0.py (new file, 26 lines)
|
|
@ -0,0 +1,26 @@
|
|||
# Generated by Django 3.1.10 on 2022-05-17 13:35
|
||||
|
||||
from django.db import migrations
|
||||
from django.conf import settings
|
||||
|
||||
def set_mongo_feature_compatibility_version(apps, schema_editor):
|
||||
new_version = "4.0"
|
||||
db = settings.MONGODB.admin
|
||||
doc = db.command({"getParameter": 1, "featureCompatibilityVersion": 1})
|
||||
old_version = doc["featureCompatibilityVersion"]["version"]
|
||||
print(f"\n ---> Current MongoDB featureCompatibilityVersion: {old_version}")
|
||||
|
||||
if old_version != new_version:
|
||||
db.command({"setFeatureCompatibilityVersion": new_version})
|
||||
print(f" ---> Updated MongoDB featureCompatibilityVersion: {new_version}")
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('rss_feeds', '0002_remove_mongo_types'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.RunPython(set_mongo_feature_compatibility_version, migrations.RunPython.noop)
|
||||
]
|
||||
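The migration above issues an admin command against MongoDB. The same check can be run by hand before a deploy, e.g. with pymongo (the connection string is an assumed local setup):

    import pymongo

    client = pymongo.MongoClient("mongodb://localhost:27017")  # assumption: local mongo
    admin = client.admin

    doc = admin.command({"getParameter": 1, "featureCompatibilityVersion": 1})
    current = doc["featureCompatibilityVersion"]["version"]
    print(f"featureCompatibilityVersion: {current}")

    # Mirror the migration: only bump when the version differs
    if current != "4.0":
        admin.command({"setFeatureCompatibilityVersion": "4.0"})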
apps/rss_feeds/migrations/0004_feed_pro_subscribers.py (new file, 18 lines)
|
|
@ -0,0 +1,18 @@
|
|||
# Generated by Django 3.1.10 on 2022-01-10 21:41
|
||||
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('rss_feeds', '0003_auto_20220110_2105'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AddField(
|
||||
model_name='feed',
|
||||
name='pro_subscribers',
|
||||
field=models.IntegerField(blank=True, default=0, null=True),
|
||||
),
|
||||
]
|
||||
apps/rss_feeds/migrations/0005_feed_archive_subscribers.py (new file, 18 lines)
|
|
@ -0,0 +1,18 @@
|
|||
# Generated by Django 3.1.10 on 2022-01-11 15:58
|
||||
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('rss_feeds', '0004_feed_pro_subscribers'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AddField(
|
||||
model_name='feed',
|
||||
name='archive_subscribers',
|
||||
field=models.IntegerField(blank=True, default=0, null=True),
|
||||
),
|
||||
]
|
||||
apps/rss_feeds/migrations/0006_feed_fs_size_bytes.py (new file, 18 lines)
|
|
@ -0,0 +1,18 @@
|
|||
# Generated by Django 3.1.10 on 2022-05-11 17:10
|
||||
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('rss_feeds', '0005_feed_archive_subscribers'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AddField(
|
||||
model_name='feed',
|
||||
name='fs_size_bytes',
|
||||
field=models.IntegerField(blank=True, null=True),
|
||||
),
|
||||
]
|
||||
apps/rss_feeds/migrations/0007_merge_20220517_1355.py (new file, 14 lines)
|
|
@ -0,0 +1,14 @@
|
|||
# Generated by Django 3.1.10 on 2022-05-17 13:55
|
||||
|
||||
from django.db import migrations
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('rss_feeds', '0006_feed_fs_size_bytes'),
|
||||
('rss_feeds', '0003_mongo_version_4_0'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
]
|
||||
apps/rss_feeds/migrations/0008_feed_archive_count.py (new file, 18 lines)
|
|
@ -0,0 +1,18 @@
|
|||
# Generated by Django 3.1.10 on 2022-06-06 19:45
|
||||
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('rss_feeds', '0007_merge_20220517_1355'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AddField(
|
||||
model_name='feed',
|
||||
name='archive_count',
|
||||
field=models.IntegerField(blank=True, null=True),
|
||||
),
|
||||
]
|
||||
|
|
@ -1,4 +1,5 @@
|
|||
import difflib
|
||||
import bson
|
||||
import requests
|
||||
import datetime
|
||||
import time
|
||||
|
|
@ -65,6 +66,8 @@ class Feed(models.Model):
|
|||
num_subscribers = models.IntegerField(default=-1)
|
||||
active_subscribers = models.IntegerField(default=-1, db_index=True)
|
||||
premium_subscribers = models.IntegerField(default=-1)
|
||||
archive_subscribers = models.IntegerField(default=0, null=True, blank=True)
|
||||
pro_subscribers = models.IntegerField(default=0, null=True, blank=True)
|
||||
active_premium_subscribers = models.IntegerField(default=-1)
|
||||
branch_from_feed = models.ForeignKey('Feed', blank=True, null=True, db_index=True, on_delete=models.CASCADE)
|
||||
last_update = models.DateTimeField(db_index=True)
|
||||
|
|
@ -90,6 +93,8 @@ class Feed(models.Model):
|
|||
s3_page = models.BooleanField(default=False, blank=True, null=True)
|
||||
s3_icon = models.BooleanField(default=False, blank=True, null=True)
|
||||
search_indexed = models.BooleanField(default=None, null=True, blank=True)
|
||||
fs_size_bytes = models.IntegerField(null=True, blank=True)
|
||||
archive_count = models.IntegerField(null=True, blank=True)
|
||||
|
||||
class Meta:
|
||||
db_table="feeds"
|
||||
|
|
@ -100,13 +105,17 @@ class Feed(models.Model):
|
|||
if not self.feed_title:
|
||||
self.feed_title = "[Untitled]"
|
||||
self.save()
|
||||
return "%s%s: %s - %s/%s/%s" % (
|
||||
return "%s%s: %s - %s/%s/%s/%s/%s %s stories (%s bytes)" % (
|
||||
self.pk,
|
||||
(" [B: %s]" % self.branch_from_feed.pk if self.branch_from_feed else ""),
|
||||
self.feed_title,
|
||||
self.num_subscribers,
|
||||
self.active_subscribers,
|
||||
self.active_premium_subscribers,
|
||||
self.archive_subscribers,
|
||||
self.pro_subscribers,
|
||||
self.archive_count,
|
||||
self.fs_size_bytes,
|
||||
)
|
||||
|
||||
@property
|
||||
|
|
@ -134,7 +143,7 @@ class Feed(models.Model):
|
|||
def favicon_url_fqdn(self):
|
||||
if settings.BACKED_BY_AWS['icons_on_s3'] and self.s3_icon:
|
||||
return self.favicon_url
|
||||
return "http://%s%s" % (
|
||||
return "https://%s%s" % (
|
||||
Site.objects.get_current().domain,
|
||||
self.favicon_url
|
||||
)
|
||||
|
|
@ -149,11 +158,27 @@ class Feed(models.Model):
|
|||
|
||||
@property
|
||||
def unread_cutoff(self):
|
||||
if self.active_premium_subscribers > 0:
|
||||
if self.archive_subscribers and self.archive_subscribers > 0:
|
||||
return datetime.datetime.utcnow() - datetime.timedelta(days=settings.DAYS_OF_UNREAD_ARCHIVE)
|
||||
if self.premium_subscribers > 0:
|
||||
return datetime.datetime.utcnow() - datetime.timedelta(days=settings.DAYS_OF_UNREAD)
|
||||
|
||||
return datetime.datetime.utcnow() - datetime.timedelta(days=settings.DAYS_OF_UNREAD_FREE)
|
||||
|
||||
@classmethod
|
||||
def days_of_story_hashes_for_feed(cls, feed_id):
|
||||
try:
|
||||
feed = cls.objects.only('archive_subscribers').get(pk=feed_id)
|
||||
return feed.days_of_story_hashes
|
||||
except cls.DoesNotExist:
|
||||
return settings.DAYS_OF_STORY_HASHES
|
||||
|
||||
@property
|
||||
def days_of_story_hashes(self):
|
||||
if self.archive_subscribers and self.archive_subscribers > 0:
|
||||
return settings.DAYS_OF_STORY_HASHES_ARCHIVE
|
||||
return settings.DAYS_OF_STORY_HASHES
|
||||
|
||||
@property
|
||||
def story_hashes_in_unread_cutoff(self):
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
|
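The archive-aware cutoffs above reduce to picking a day count per subscriber tier. A condensed sketch of that decision; the numeric day counts are illustrative stand-ins for the Django settings, not values taken from this diff:

    import datetime

    # Illustrative defaults; the real numbers live in settings.DAYS_OF_UNREAD_*
    DAYS_OF_UNREAD_FREE = 14
    DAYS_OF_UNREAD = 30
    DAYS_OF_UNREAD_ARCHIVE = 9999

    def unread_cutoff(archive_subscribers, premium_subscribers):
        # Mirrors Feed.unread_cutoff: archive subscribers win, then premium, then free
        if archive_subscribers and archive_subscribers > 0:
            days = DAYS_OF_UNREAD_ARCHIVE
        elif premium_subscribers > 0:
            days = DAYS_OF_UNREAD
        else:
            days = DAYS_OF_UNREAD_FREE
        return datetime.datetime.utcnow() - datetime.timedelta(days=days)

    print(unread_cutoff(archive_subscribers=1, premium_subscribers=0))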
|
@ -182,6 +207,8 @@ class Feed(models.Model):
|
|||
'num_subscribers': self.num_subscribers,
|
||||
'updated': relative_timesince(self.last_update),
|
||||
'updated_seconds_ago': seconds_timesince(self.last_update),
|
||||
'fs_size_bytes': self.fs_size_bytes,
|
||||
'archive_count': self.archive_count,
|
||||
'last_story_date': self.last_story_date,
|
||||
'last_story_seconds_ago': seconds_timesince(self.last_story_date),
|
||||
'stories_last_month': self.stories_last_month,
|
||||
|
|
@ -322,13 +349,9 @@ class Feed(models.Model):
|
|||
def expire_redis(self, r=None):
|
||||
if not r:
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
# if not r2:
|
||||
# r2 = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL2)
|
||||
|
||||
r.expire('F:%s' % self.pk, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
# r2.expire('F:%s' % self.pk, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
r.expire('zF:%s' % self.pk, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
# r2.expire('zF:%s' % self.pk, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
r.expire('F:%s' % self.pk, self.days_of_story_hashes*24*60*60)
|
||||
r.expire('zF:%s' % self.pk, self.days_of_story_hashes*24*60*60)
|
||||
|
||||
@classmethod
|
||||
def low_volume_feeds(cls, feed_ids, stories_per_month=30):
|
||||
|
|
@ -592,6 +615,7 @@ class Feed(models.Model):
|
|||
r.zremrangebyrank('error_feeds', 0, -1)
|
||||
else:
|
||||
logging.debug(" ---> No errored feeds to drain")
|
||||
|
||||
def update_all_statistics(self, has_new_stories=False, force=False):
|
||||
recount = not self.counts_converted_to_redis
|
||||
count_extra = False
|
||||
|
|
@ -604,6 +628,9 @@ class Feed(models.Model):
|
|||
if force or has_new_stories or count_extra:
|
||||
self.save_feed_stories_last_month()
|
||||
|
||||
if not self.fs_size_bytes or not self.archive_count:
|
||||
self.count_fs_size_bytes()
|
||||
|
||||
if force or (has_new_stories and count_extra):
|
||||
self.save_popular_authors()
|
||||
self.save_popular_tags()
|
||||
|
|
@ -630,8 +657,7 @@ class Feed(models.Model):
|
|||
|
||||
@classmethod
|
||||
def setup_feeds_for_premium_subscribers(cls, feed_ids):
|
||||
logging.info(" ---> ~SN~FMScheduling immediate premium setup of ~SB%s~SN feeds..." %
|
||||
len(feed_ids))
|
||||
logging.info(f" ---> ~SN~FMScheduling immediate premium setup of ~SB{len(feed_ids)}~SN feeds...")
|
||||
|
||||
feeds = Feed.objects.filter(pk__in=feed_ids)
|
||||
for feed in feeds:
|
||||
|
|
@ -639,7 +665,8 @@ class Feed(models.Model):
|
|||
|
||||
def setup_feed_for_premium_subscribers(self):
|
||||
self.count_subscribers()
|
||||
self.set_next_scheduled_update()
|
||||
self.set_next_scheduled_update(verbose=settings.DEBUG)
|
||||
self.sync_redis()
|
||||
|
||||
def check_feed_link_for_feed_address(self):
|
||||
@timelimit(10)
|
||||
|
|
@ -707,7 +734,7 @@ class Feed(models.Model):
|
|||
if status_code not in (200, 304):
|
||||
self.errors_since_good += 1
|
||||
self.count_errors_in_history('feed', status_code, fetch_history=fetch_history)
|
||||
self.set_next_scheduled_update()
|
||||
self.set_next_scheduled_update(verbose=settings.DEBUG)
|
||||
elif self.has_feed_exception or self.errors_since_good:
|
||||
self.errors_since_good = 0
|
||||
self.has_feed_exception = False
|
||||
|
|
@ -792,7 +819,6 @@ class Feed(models.Model):
|
|||
total_key = "s:%s" % self.original_feed_id
|
||||
premium_key = "sp:%s" % self.original_feed_id
|
||||
last_recount = r.zscore(total_key, -1) # Need to subtract this extra when counting subs
|
||||
last_recount = r.zscore(premium_key, -1) # Need to subtract this extra when counting subs
|
||||
|
||||
# Check for expired feeds with no active users who would have triggered a cleanup
|
||||
if last_recount and last_recount > subscriber_expire:
|
||||
|
|
@ -816,6 +842,8 @@ class Feed(models.Model):
|
|||
total = 0
|
||||
active = 0
|
||||
premium = 0
|
||||
archive = 0
|
||||
pro = 0
|
||||
active_premium = 0
|
||||
|
||||
# Include all branched feeds in counts
|
||||
|
|
@ -831,10 +859,14 @@ class Feed(models.Model):
|
|||
# now+1 ensures `-1` flag will be corrected for later with - 1
|
||||
total_key = "s:%s" % feed_id
|
||||
premium_key = "sp:%s" % feed_id
|
||||
archive_key = "sarchive:%s" % feed_id
|
||||
pro_key = "spro:%s" % feed_id
|
||||
pipeline.zcard(total_key)
|
||||
pipeline.zcount(total_key, subscriber_expire, now+1)
|
||||
pipeline.zcard(premium_key)
|
||||
pipeline.zcount(premium_key, subscriber_expire, now+1)
|
||||
pipeline.zcard(archive_key)
|
||||
pipeline.zcard(pro_key)
|
||||
|
||||
results = pipeline.execute()
|
||||
|
||||
|
|
@ -843,13 +875,17 @@ class Feed(models.Model):
|
|||
active += max(0, results[1] - 1)
|
||||
premium += max(0, results[2] - 1)
|
||||
active_premium += max(0, results[3] - 1)
|
||||
archive += max(0, results[4] - 1)
|
||||
pro += max(0, results[5] - 1)
|
||||
|
||||
original_num_subscribers = self.num_subscribers
|
||||
original_active_subs = self.active_subscribers
|
||||
original_premium_subscribers = self.premium_subscribers
|
||||
original_active_premium_subscribers = self.active_premium_subscribers
|
||||
logging.info(" ---> [%-30s] ~SN~FBCounting subscribers from ~FCredis~FB: ~FMt:~SB~FM%s~SN a:~SB%s~SN p:~SB%s~SN ap:~SB%s ~SN~FC%s" %
|
||||
(self.log_title[:30], total, active, premium, active_premium, "(%s branches)" % (len(feed_ids)-1) if len(feed_ids)>1 else ""))
|
||||
original_archive_subscribers = self.archive_subscribers
|
||||
original_pro_subscribers = self.pro_subscribers
|
||||
logging.info(" ---> [%-30s] ~SN~FBCounting subscribers from ~FCredis~FB: ~FMt:~SB~FM%s~SN a:~SB%s~SN p:~SB%s~SN ap:~SB%s~SN archive:~SB%s~SN pro:~SB%s ~SN~FC%s" %
|
||||
(self.log_title[:30], total, active, premium, active_premium, archive, pro, "(%s branches)" % (len(feed_ids)-1) if len(feed_ids)>1 else ""))
|
||||
else:
|
||||
from apps.reader.models import UserSubscription
|
||||
|
||||
|
|
@ -872,6 +908,22 @@ class Feed(models.Model):
|
|||
)
|
||||
original_premium_subscribers = self.premium_subscribers
|
||||
premium = premium_subs.count()
|
||||
|
||||
archive_subs = UserSubscription.objects.filter(
|
||||
feed__in=feed_ids,
|
||||
active=True,
|
||||
user__profile__is_archive=True
|
||||
)
|
||||
original_archive_subscribers = self.archive_subscribers
|
||||
archive = archive_subs.count()
|
||||
|
||||
pro_subs = UserSubscription.objects.filter(
|
||||
feed__in=feed_ids,
|
||||
active=True,
|
||||
user__profile__is_pro=True
|
||||
)
|
||||
original_pro_subscribers = self.pro_subscribers
|
||||
pro = pro_subs.count()
|
||||
|
||||
active_premium_subscribers = UserSubscription.objects.filter(
|
||||
feed__in=feed_ids,
|
||||
|
|
@ -881,8 +933,8 @@ class Feed(models.Model):
|
|||
)
|
||||
original_active_premium_subscribers = self.active_premium_subscribers
|
||||
active_premium = active_premium_subscribers.count()
|
||||
logging.debug(" ---> [%-30s] ~SN~FBCounting subscribers from ~FYpostgres~FB: ~FMt:~SB~FM%s~SN a:~SB%s~SN p:~SB%s~SN ap:~SB%s" %
|
||||
(self.log_title[:30], total, active, premium, active_premium))
|
||||
logging.debug(" ---> [%-30s] ~SN~FBCounting subscribers from ~FYpostgres~FB: ~FMt:~SB~FM%s~SN a:~SB%s~SN p:~SB%s~SN ap:~SB%s~SN archive:~SB%s~SN pro:~SB%s" %
|
||||
(self.log_title[:30], total, active, premium, active_premium, archive, pro))
|
||||
|
||||
if settings.DOCKERBUILD:
|
||||
# Local installs enjoy 100% active feeds
|
||||
|
|
@ -893,15 +945,20 @@ class Feed(models.Model):
|
|||
self.active_subscribers = active
|
||||
self.premium_subscribers = premium
|
||||
self.active_premium_subscribers = active_premium
|
||||
self.archive_subscribers = archive
|
||||
self.pro_subscribers = pro
|
||||
if (self.num_subscribers != original_num_subscribers or
|
||||
self.active_subscribers != original_active_subs or
|
||||
self.premium_subscribers != original_premium_subscribers or
|
||||
self.active_premium_subscribers != original_active_premium_subscribers):
|
||||
self.active_premium_subscribers != original_active_premium_subscribers or
|
||||
self.archive_subscribers != original_archive_subscribers or
|
||||
self.pro_subscribers != original_pro_subscribers):
|
||||
if original_premium_subscribers == -1 or original_active_premium_subscribers == -1:
|
||||
self.save()
|
||||
else:
|
||||
self.save(update_fields=['num_subscribers', 'active_subscribers',
|
||||
'premium_subscribers', 'active_premium_subscribers'])
|
||||
'premium_subscribers', 'active_premium_subscribers',
|
||||
'archive_subscribers', 'pro_subscribers'])
|
||||
|
||||
if verbose:
|
||||
if self.num_subscribers <= 1:
|
||||
|
|
@ -984,7 +1041,27 @@ class Feed(models.Model):
|
|||
return 'white'
|
||||
else:
|
||||
return 'black'
|
||||
|
||||
|
||||
def fill_out_archive_stories(self, force=False, starting_page=1):
|
||||
"""
|
||||
Starting from page 1 and iterating through N pages, determine whether
|
||||
page(i) matches page(i-1) and if there are any new stories.
|
||||
"""
|
||||
before_story_count = MStory.objects(story_feed_id=self.pk).count()
|
||||
|
||||
if not force and not self.archive_subscribers:
|
||||
logging.debug(" ---> [%-30s] ~FBNot filling out archive stories, no archive subscribers" % (
|
||||
self.log_title[:30]))
|
||||
return before_story_count, before_story_count
|
||||
|
||||
self.update(archive_page=starting_page)
|
||||
|
||||
after_story_count = MStory.objects(story_feed_id=self.pk).count()
|
||||
logging.debug(" ---> [%-30s] ~FCFilled out archive, ~FM~SB%s~SN new stories~FC, total of ~SB%s~SN stories" % (
|
||||
self.log_title[:30],
|
||||
after_story_count - before_story_count,
|
||||
after_story_count))
|
||||
|
||||
def save_feed_stories_last_month(self, verbose=False):
|
||||
month_ago = datetime.datetime.utcnow() - datetime.timedelta(days=30)
|
||||
stories_last_month = MStory.objects(story_feed_id=self.pk,
|
||||
|
|
@ -1188,7 +1265,8 @@ class Feed(models.Model):
|
|||
'debug': kwargs.get('debug'),
|
||||
'fpf': kwargs.get('fpf'),
|
||||
'feed_xml': kwargs.get('feed_xml'),
|
||||
'requesting_user_id': kwargs.get('requesting_user_id', None)
|
||||
'requesting_user_id': kwargs.get('requesting_user_id', None),
|
||||
'archive_page': kwargs.get('archive_page', None),
|
||||
}
|
||||
|
||||
if getattr(settings, 'TEST_DEBUG', False) and "NEWSBLUR_DIR" in self.feed_address:
|
||||
|
|
@ -1213,7 +1291,7 @@ class Feed(models.Model):
|
|||
feed = Feed.get_by_id(feed.pk)
|
||||
if feed:
|
||||
feed.last_update = datetime.datetime.utcnow()
|
||||
feed.set_next_scheduled_update()
|
||||
feed.set_next_scheduled_update(verbose=settings.DEBUG)
|
||||
r.zadd('fetched_feeds_last_hour', { feed.pk: int(datetime.datetime.now().strftime('%s')) })
|
||||
|
||||
if not feed or original_feed_id != feed.pk:
|
||||
|
|
@ -1469,10 +1547,10 @@ class Feed(models.Model):
|
|||
self.save_popular_authors(feed_authors=feed_authors[:-1])
|
||||
|
||||
@classmethod
|
||||
def trim_old_stories(cls, start=0, verbose=True, dryrun=False, total=0):
|
||||
def trim_old_stories(cls, start=0, verbose=True, dryrun=False, total=0, end=None):
|
||||
now = datetime.datetime.now()
|
||||
month_ago = now - datetime.timedelta(days=settings.DAYS_OF_STORY_HASHES)
|
||||
feed_count = Feed.objects.latest('pk').pk
|
||||
feed_count = end or Feed.objects.latest('pk').pk
|
||||
|
||||
for feed_id in range(start, feed_count):
|
||||
if feed_id % 1000 == 0:
|
||||
|
|
@ -1481,7 +1559,11 @@ class Feed(models.Model):
|
|||
feed = Feed.objects.get(pk=feed_id)
|
||||
except Feed.DoesNotExist:
|
||||
continue
|
||||
if feed.active_subscribers <= 0 and (not feed.last_story_date or feed.last_story_date < month_ago):
|
||||
# Ensure only feeds with no active subscribers are being trimmed
|
||||
if (feed.active_subscribers <= 0 and
|
||||
(not feed.archive_subscribers or feed.archive_subscribers <= 0) and
|
||||
(not feed.last_story_date or feed.last_story_date < month_ago)):
|
||||
# 1 month since last story = keep 5 stories, >6 months since, only keep 1 story
|
||||
months_ago = 6
|
||||
if feed.last_story_date:
|
||||
months_ago = int((now - feed.last_story_date).days / 30.0)
|
||||
|
|
@ -1501,6 +1583,12 @@ class Feed(models.Model):
|
|||
|
||||
@property
|
||||
def story_cutoff(self):
|
||||
return self.number_of_stories_to_store()
|
||||
|
||||
def number_of_stories_to_store(self, pre_archive=False):
|
||||
if self.archive_subscribers and self.archive_subscribers > 0 and not pre_archive:
|
||||
return 10000
|
||||
|
||||
cutoff = 500
|
||||
if self.active_subscribers <= 0:
|
||||
cutoff = 25
|
||||
|
|
@ -1526,6 +1614,8 @@ class Feed(models.Model):
|
|||
pipeline = r.pipeline()
|
||||
read_stories_per_week = []
|
||||
now = datetime.datetime.now()
|
||||
|
||||
# Check to see how many stories have been read each week since the feed's days of story hashes
|
||||
for weeks_back in range(2*int(math.floor(settings.DAYS_OF_STORY_HASHES/7))):
|
||||
weeks_ago = now - datetime.timedelta(days=7*weeks_back)
|
||||
week_of_year = weeks_ago.strftime('%Y-%U')
|
||||
|
|
@ -1533,7 +1623,7 @@ class Feed(models.Model):
|
|||
pipeline.get(feed_read_key)
|
||||
read_stories_per_week = pipeline.execute()
|
||||
read_stories_last_month = sum([int(rs) for rs in read_stories_per_week if rs])
|
||||
if read_stories_last_month == 0:
|
||||
if not pre_archive and read_stories_last_month == 0:
|
||||
original_cutoff = cutoff
|
||||
cutoff = min(cutoff, 10)
|
||||
try:
|
||||
|
|
@ -1545,13 +1635,50 @@ class Feed(models.Model):
|
|||
if getattr(settings, 'OVERRIDE_STORY_COUNT_MAX', None):
|
||||
cutoff = settings.OVERRIDE_STORY_COUNT_MAX
|
||||
|
||||
return cutoff
|
||||
return int(cutoff)
|
||||
|
||||
def trim_feed(self, verbose=False, cutoff=None):
|
||||
if not cutoff:
|
||||
cutoff = self.story_cutoff
|
||||
return MStory.trim_feed(feed=self, cutoff=cutoff, verbose=verbose)
|
||||
|
||||
|
||||
stories_removed = MStory.trim_feed(feed=self, cutoff=cutoff, verbose=verbose)
|
||||
|
||||
if not self.fs_size_bytes:
|
||||
self.count_fs_size_bytes()
|
||||
|
||||
return stories_removed
|
||||
|
||||
def count_fs_size_bytes(self):
|
||||
stories = MStory.objects.filter(story_feed_id=self.pk)
|
||||
sum_bytes = 0
|
||||
count = 0
|
||||
|
||||
for story in stories:
|
||||
count += 1
|
||||
story_with_content = story.to_mongo()
|
||||
if story_with_content.get('story_content_z', None):
|
||||
story_with_content['story_content'] = zlib.decompress(story_with_content['story_content_z'])
|
||||
del story_with_content['story_content_z']
|
||||
if story_with_content.get('original_page_z', None):
|
||||
story_with_content['original_page'] = zlib.decompress(story_with_content['original_page_z'])
|
||||
del story_with_content['original_page_z']
|
||||
if story_with_content.get('original_text_z', None):
|
||||
story_with_content['original_text'] = zlib.decompress(story_with_content['original_text_z'])
|
||||
del story_with_content['original_text_z']
|
||||
if story_with_content.get('story_latest_content_z', None):
|
||||
story_with_content['story_latest_content'] = zlib.decompress(story_with_content['story_latest_content_z'])
|
||||
del story_with_content['story_latest_content_z']
|
||||
if story_with_content.get('story_original_content_z', None):
|
||||
story_with_content['story_original_content'] = zlib.decompress(story_with_content['story_original_content_z'])
|
||||
del story_with_content['story_original_content_z']
|
||||
sum_bytes += len(bson.BSON.encode(story_with_content))
|
||||
|
||||
self.fs_size_bytes = sum_bytes
|
||||
self.archive_count = count
|
||||
self.save()
|
||||
|
||||
return sum_bytes
|
||||
|
||||
def purge_feed_stories(self, update=True):
|
||||
MStory.purge_feed_stories(feed=self, cutoff=self.story_cutoff)
|
||||
if update:
|
||||
|
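count_fs_size_bytes, added above, sizes a feed by decompressing each zlib-packed field and re-encoding the story as BSON. A stripped-down version of that measurement for a single document; the field contents here are illustrative:

    import bson
    import zlib

    # Illustrative document shaped like an MStory.to_mongo() result
    doc = {"story_title": "Example", "story_content_z": zlib.compress(b"<p>body</p>")}

    if doc.get("story_content_z"):
        doc["story_content"] = zlib.decompress(doc["story_content_z"])
        del doc["story_content_z"]

    # Bytes this one story contributes to fs_size_bytes
    print(len(bson.BSON.encode(doc)))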
|
@ -1581,8 +1708,11 @@ class Feed(models.Model):
|
|||
# print "db.stories.remove({\"story_feed_id\": %s, \"_id\": \"%s\"})" % (f, u)
|
||||
|
||||
|
||||
def get_stories(self, offset=0, limit=25, force=False):
|
||||
stories_db = MStory.objects(story_feed_id=self.pk)[offset:offset+limit]
|
||||
def get_stories(self, offset=0, limit=25, order="neweat", force=False):
|
||||
if order == "newest":
|
||||
stories_db = MStory.objects(story_feed_id=self.pk)[offset:offset+limit]
|
||||
elif order == "oldest":
|
||||
stories_db = MStory.objects(story_feed_id=self.pk).order_by('story_date')[offset:offset+limit]
|
||||
stories = self.format_stories(stories_db, self.pk)
|
||||
|
||||
return stories
|
||||
|
|
@ -2116,14 +2246,16 @@ class Feed(models.Model):
|
|||
# print 'New/updated story: %s' % (story),
|
||||
return story_in_system, story_has_changed
|
||||
|
||||
def get_next_scheduled_update(self, force=False, verbose=True, premium_speed=False):
|
||||
def get_next_scheduled_update(self, force=False, verbose=True, premium_speed=False, pro_speed=False):
|
||||
if self.min_to_decay and not force and not premium_speed:
|
||||
return self.min_to_decay
|
||||
|
||||
from apps.notifications.models import MUserFeedNotification
|
||||
|
||||
|
||||
if premium_speed:
|
||||
self.active_premium_subscribers += 1
|
||||
if pro_speed:
|
||||
self.pro_subscribers += 1
|
||||
|
||||
spd = self.stories_last_month / 30.0
|
||||
subs = (self.active_premium_subscribers +
|
||||
|
|
@ -2204,13 +2336,22 @@ class Feed(models.Model):
|
|||
# Twitter feeds get 2 hours minimum
|
||||
if 'twitter' in self.feed_address:
|
||||
total = max(total, 60*2)
|
||||
|
||||
|
||||
# Pro subscribers get absolute minimum
|
||||
if self.pro_subscribers and self.pro_subscribers >= 1:
|
||||
if self.stories_last_month == 0:
|
||||
total = min(total, 60)
|
||||
else:
|
||||
total = min(total, settings.PRO_MINUTES_BETWEEN_FETCHES)
|
||||
|
||||
if verbose:
|
||||
logging.debug(" ---> [%-30s] Fetched every %s min - Subs: %s/%s/%s Stories/day: %s" % (
|
||||
logging.debug(" ---> [%-30s] Fetched every %s min - Subs: %s/%s/%s/%s/%s Stories/day: %s" % (
|
||||
self.log_title[:30], total,
|
||||
self.num_subscribers,
|
||||
self.active_subscribers,
|
||||
self.active_premium_subscribers,
|
||||
self.archive_subscribers,
|
||||
self.pro_subscribers,
|
||||
spd))
|
||||
return total
|
||||
|
||||
|
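The new pro branch only ever tightens the fetch schedule. A tiny sketch of the clamp; the 15-minute floor is an assumed value standing in for settings.PRO_MINUTES_BETWEEN_FETCHES:

    PRO_MINUTES_BETWEEN_FETCHES = 15  # assumption: the real value comes from settings

    def clamp_for_pro(total_minutes, pro_subscribers, stories_last_month):
        # Mirrors the new branch: quiet pro feeds still wait an hour,
        # active pro feeds drop to the configured minimum
        if pro_subscribers and pro_subscribers >= 1:
            if stories_last_month == 0:
                return min(total_minutes, 60)
            return min(total_minutes, PRO_MINUTES_BETWEEN_FETCHES)
        return total_minutes

    print(clamp_for_pro(240, pro_subscribers=2, stories_last_month=12))  # -> 15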
|
@ -2258,7 +2399,7 @@ class Feed(models.Model):
|
|||
r = redis.Redis(connection_pool=settings.REDIS_FEED_UPDATE_POOL)
|
||||
if not self.num_subscribers:
|
||||
logging.debug(' ---> [%-30s] Not scheduling feed fetch immediately, no subs.' % (self.log_title[:30]))
|
||||
return
|
||||
return self
|
||||
|
||||
if verbose:
|
||||
logging.debug(' ---> [%-30s] Scheduling feed fetch immediately...' % (self.log_title[:30]))
|
||||
|
|
@ -2738,52 +2879,39 @@ class MStory(mongo.Document):
|
|||
def sync_redis(self, r=None):
|
||||
if not r:
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
# if not r2:
|
||||
# r2 = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL2)
|
||||
UNREAD_CUTOFF = datetime.datetime.now() - datetime.timedelta(days=settings.DAYS_OF_STORY_HASHES)
|
||||
feed = Feed.get_by_id(self.story_feed_id)
|
||||
|
||||
if self.id and self.story_date > UNREAD_CUTOFF:
|
||||
if self.id and self.story_date > feed.unread_cutoff:
|
||||
feed_key = 'F:%s' % self.story_feed_id
|
||||
r.sadd(feed_key, self.story_hash)
|
||||
r.expire(feed_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
# r2.sadd(feed_key, self.story_hash)
|
||||
# r2.expire(feed_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
r.expire(feed_key, feed.days_of_story_hashes*24*60*60)
|
||||
|
||||
r.zadd('z' + feed_key, { self.story_hash: time.mktime(self.story_date.timetuple()) })
|
||||
r.expire('z' + feed_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
# r2.zadd('z' + feed_key, self.story_hash, time.mktime(self.story_date.timetuple()))
|
||||
# r2.expire('z' + feed_key, settings.DAYS_OF_STORY_HASHES*24*60*60)
|
||||
r.expire('z' + feed_key, feed.days_of_story_hashes*24*60*60)
|
||||
|
||||
def remove_from_redis(self, r=None):
|
||||
if not r:
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
# if not r2:
|
||||
# r2 = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL2)
|
||||
if self.id:
|
||||
r.srem('F:%s' % self.story_feed_id, self.story_hash)
|
||||
# r2.srem('F:%s' % self.story_feed_id, self.story_hash)
|
||||
r.zrem('zF:%s' % self.story_feed_id, self.story_hash)
|
||||
# r2.zrem('zF:%s' % self.story_feed_id, self.story_hash)
|
||||
|
||||
@classmethod
|
||||
def sync_feed_redis(cls, story_feed_id):
|
||||
r = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL)
|
||||
# r2 = redis.Redis(connection_pool=settings.REDIS_STORY_HASH_POOL2)
|
||||
UNREAD_CUTOFF = datetime.datetime.now() - datetime.timedelta(days=settings.DAYS_OF_STORY_HASHES)
|
||||
feed = Feed.get_by_id(story_feed_id)
|
||||
stories = cls.objects.filter(story_feed_id=story_feed_id, story_date__gte=UNREAD_CUTOFF)
|
||||
r.delete('F:%s' % story_feed_id)
|
||||
# r2.delete('F:%s' % story_feed_id)
|
||||
r.delete('zF:%s' % story_feed_id)
|
||||
# r2.delete('zF:%s' % story_feed_id)
|
||||
stories = cls.objects.filter(story_feed_id=story_feed_id, story_date__gte=feed.unread_cutoff)
|
||||
|
||||
# Don't delete redis keys because they take time to rebuild and subs can
|
||||
# be counted incorrectly during that time.
|
||||
# r.delete('F:%s' % story_feed_id)
|
||||
# r.delete('zF:%s' % story_feed_id)
|
||||
|
||||
logging.info(" ---> [%-30s] ~FMSyncing ~SB%s~SN stories to redis" % (feed and feed.log_title[:30] or story_feed_id, stories.count()))
|
||||
p = r.pipeline()
|
||||
# p2 = r2.pipeline()
|
||||
for story in stories:
|
||||
story.sync_redis(r=p)
|
||||
p.execute()
|
||||
# p2.execute()
|
||||
|
||||
def count_comments(self):
|
||||
from apps.social.models import MSharedStory
|
||||
|
|
@ -2964,7 +3092,7 @@ class MStarredStory(mongo.DynamicDocument):
|
|||
story_tags = mongo.ListField(mongo.StringField(max_length=250))
|
||||
user_notes = mongo.StringField()
|
||||
user_tags = mongo.ListField(mongo.StringField(max_length=128))
|
||||
highlights = mongo.ListField(mongo.StringField(max_length=1024))
|
||||
highlights = mongo.ListField(mongo.StringField(max_length=16384))
|
||||
image_urls = mongo.ListField(mongo.StringField(max_length=1024))
|
||||
|
||||
meta = {
|
||||
|
|
|
|||
|
|
@ -81,7 +81,7 @@ class PageImporter(object):
|
|||
self.save_no_page(reason="Broken page")
|
||||
return
|
||||
elif any(s in feed_link.lower() for s in BROKEN_PAGE_URLS):
|
||||
self.save_no_page(reason="Broke page url")
|
||||
self.save_no_page(reason="Banned")
|
||||
return
|
||||
elif feed_link.startswith('http'):
|
||||
if urllib_fallback:
|
||||
|
|
@ -238,7 +238,7 @@ class PageImporter(object):
|
|||
logging.debug(' ---> [%-30s] ~FYNo original page: %s / %s' % (self.feed.log_title[:30], reason, self.feed.feed_link))
|
||||
self.feed.has_page = False
|
||||
self.feed.save()
|
||||
self.feed.save_page_history(404, "Feed has no original page.")
|
||||
self.feed.save_page_history(404, f"Feed has no original page: {reason}")
|
||||
|
||||
def rewrite_page(self, response):
|
||||
BASE_RE = re.compile(r'<head(.*?)>', re.I)
|
||||
|
|
|
|||
|
|
@ -8,7 +8,6 @@ from celery.exceptions import SoftTimeLimitExceeded
|
|||
from utils import log as logging
|
||||
from django.conf import settings
|
||||
from apps.profile.middleware import DBProfilerMiddleware
|
||||
from utils.mongo_raw_log_middleware import MongoDumpMiddleware
|
||||
from utils.redis_raw_log_middleware import RedisDumpMiddleware
|
||||
FEED_TASKING_MAX = 10000
|
||||
|
||||
|
|
@ -130,8 +129,7 @@ def UpdateFeeds(feed_pks):
|
|||
profiler = DBProfilerMiddleware()
|
||||
profiler_activated = profiler.process_celery()
|
||||
if profiler_activated:
|
||||
mongo_middleware = MongoDumpMiddleware()
|
||||
mongo_middleware.process_celery(profiler)
|
||||
settings.MONGO_COMMAND_LOGGER.process_celery(profiler)
|
||||
redis_middleware = RedisDumpMiddleware()
|
||||
redis_middleware.process_celery(profiler)
|
||||
|
||||
|
|
|
|||
|
|
@ -7,6 +7,7 @@ from requests.packages.urllib3.exceptions import LocationParseError
|
|||
from socket import error as SocketError
|
||||
from mongoengine.queryset import NotUniqueError
|
||||
from lxml.etree import ParserError
|
||||
from vendor.readability.readability import Unparseable
|
||||
from utils import log as logging
|
||||
from utils.feed_functions import timelimit, TimeoutError
|
||||
from OpenSSL.SSL import Error as OpenSSLError
|
||||
|
|
@ -137,7 +138,7 @@ class TextImporter:
|
|||
positive_keywords="post, entry, postProp, article, postContent, postField")
|
||||
try:
|
||||
content = original_text_doc.summary(html_partial=True)
|
||||
except (ParserError) as e:
|
||||
except (ParserError, Unparseable) as e:
|
||||
logging.user(self.request, "~SN~FRFailed~FY to fetch ~FGoriginal text~FY: %s" % e)
|
||||
return
|
||||
|
||||
|
|
|
|||
|
|
@ -1,5 +1,6 @@
|
|||
import datetime
|
||||
import base64
|
||||
import redis
|
||||
from urllib.parse import urlparse
|
||||
from utils import log as logging
|
||||
from django.shortcuts import get_object_or_404, render
|
||||
|
|
@ -80,7 +81,7 @@ def load_feed_favicon(request, feed_id):
|
|||
not_found = True
|
||||
|
||||
if not_found or not feed_icon.data:
|
||||
return HttpResponseRedirect(settings.MEDIA_URL + 'img/icons/circular/world.png')
|
||||
return HttpResponseRedirect(settings.MEDIA_URL + 'img/icons/nouns/world.svg')
|
||||
|
||||
icon_data = base64.b64decode(feed_icon.data)
|
||||
return HttpResponse(icon_data, content_type='image/png')
|
||||
|
|
@ -198,6 +199,8 @@ def assemble_statistics(user, feed_id):
|
|||
stats['last_update'] = relative_timesince(feed.last_update)
|
||||
stats['next_update'] = relative_timeuntil(feed.next_scheduled_update)
|
||||
stats['push'] = feed.is_push
|
||||
stats['fs_size_bytes'] = feed.fs_size_bytes
|
||||
stats['archive_count'] = feed.archive_count
|
||||
if feed.is_push:
|
||||
try:
|
||||
stats['push_expires'] = localtime_for_timezone(feed.push.lease_expires,
|
||||
|
|
@ -501,16 +504,35 @@ def exception_change_feed_link(request):
|
|||
|
||||
@login_required
|
||||
def status(request):
|
||||
if not request.user.is_staff:
|
||||
if not request.user.is_staff and not settings.DEBUG:
|
||||
logging.user(request, "~SKNON-STAFF VIEWING RSS FEEDS STATUS!")
|
||||
assert False
|
||||
return HttpResponseForbidden()
|
||||
minutes = int(request.GET.get('minutes', 1))
|
||||
now = datetime.datetime.now()
|
||||
hour_ago = now - datetime.timedelta(minutes=minutes)
|
||||
feeds = Feed.objects.filter(last_update__gte=hour_ago).order_by('-last_update')
|
||||
username = request.GET.get('user', '') or request.GET.get('username', '')
|
||||
if username:
|
||||
user = User.objects.get(username=username)
|
||||
else:
|
||||
user = request.user
|
||||
usersubs = UserSubscription.objects.filter(user=user)
|
||||
feed_ids = usersubs.values('feed_id')
|
||||
if minutes > 0:
|
||||
hour_ago = now + datetime.timedelta(minutes=minutes)
|
||||
feeds = Feed.objects.filter(pk__in=feed_ids, next_scheduled_update__lte=hour_ago).order_by('next_scheduled_update')
|
||||
else:
|
||||
hour_ago = now + datetime.timedelta(minutes=minutes)
|
||||
feeds = Feed.objects.filter(pk__in=feed_ids, last_update__gte=hour_ago).order_by('-last_update')
|
||||
|
||||
r = redis.Redis(connection_pool=settings.REDIS_FEED_UPDATE_POOL)
|
||||
queues = {
|
||||
'tasked_feeds': r.zcard('tasked_feeds'),
|
||||
'queued_feeds': r.scard('queued_feeds'),
|
||||
'scheduled_updates': r.zcard('scheduled_updates'),
|
||||
}
|
||||
return render(request, 'rss_feeds/status.xhtml', {
|
||||
'feeds': feeds
|
||||
'feeds': feeds,
|
||||
'queues': queues
|
||||
})
|
||||
|
||||
@json.json_view
|
||||
|
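The reworked status view reports queue depths straight from redis. The same numbers can be pulled in a shell for a quick health check; a local connection is assumed here, while production goes through settings.REDIS_FEED_UPDATE_POOL:

    import redis

    r = redis.Redis()  # assumption: local redis
    print({
        "tasked_feeds": r.zcard("tasked_feeds"),          # feeds already handed to celery
        "queued_feeds": r.scard("queued_feeds"),          # feeds waiting to be tasked
        "scheduled_updates": r.zcard("scheduled_updates"),
    })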
|
|
|||
|
|
@@ -431,6 +431,56 @@ class SearchStory:

return result_ids

@classmethod
def more_like_this(cls, feed_ids, story_hash, order, offset, limit):
try:
cls.ES().indices.flush(cls.index_name())
except elasticsearch.exceptions.NotFoundError as e:
logging.debug(f" ***> ~FRNo search server available: {e}")
return []

body = {
"query": {
"bool": {
"filter": [{
"more_like_this": {
"fields": [ "title", "content" ],
"like": [
{
"_index": cls.index_name(),
"_id": story_hash,
}
],
"min_term_freq": 3,
"min_doc_freq": 2,
"min_word_length": 4,
},
},{
"terms": { "feed_id": feed_ids[:2000] }
}],
}
},
'sort': [{'date': {'order': 'desc' if order == "newest" else "asc"}}],
'from': offset,
'size': limit
}
try:
results = cls.ES().search(body=body, index=cls.index_name(), doc_type=cls.doc_type())
except elasticsearch.exceptions.RequestError as e:
logging.debug(" ***> ~FRNo search server available for querying: %s" % e)
return []

logging.info(" ---> ~FG~SNMore like this ~FCstories~FG for: ~SB%s~SN, ~SB%s~SN results (across %s feed%s)" %
(story_hash, len(results['hits']['hits']), len(feed_ids), 's' if len(feed_ids) != 1 else ''))

try:
result_ids = [r['_id'] for r in results['hits']['hits']]
except Exception as e:
logging.info(" ---> ~FRInvalid search query \"%s\": %s" % (query, e))
return []

return result_ids


class SearchFeed:
7 apps/search/urls.py Normal file
@@ -0,0 +1,7 @@
from django.conf.urls import *
from apps.search import views

urlpatterns = [
# url(r'^$', views.index),
url(r'^more_like_this', views.more_like_this, name='more-like-this'),
]
@@ -1 +1,29 @@
# Create your views here.
from apps.rss_feeds.models import Feed, MStory
from apps.reader.models import UserSubscription
from apps.search.models import SearchStory
from utils import json_functions as json
from utils.view_functions import required_params
from utils.user_functions import get_user, ajax_login_required

# @required_params('story_hash')
@json.json_view
def more_like_this(request):
user = get_user(request)
get_post = getattr(request, request.method)
order = get_post.get('order', 'newest')
page = int(get_post.get('page', 1))
limit = int(get_post.get('limit', 10))
offset = limit * (page-1)
story_hash = get_post.get('story_hash')

feed_ids = [us.feed_id for us in UserSubscription.objects.filter(user=user)]
feed_ids, _ = MStory.split_story_hash(story_hash)
story_ids = SearchStory.more_like_this([feed_ids], story_hash, order, offset=offset, limit=limit)
stories_db = MStory.objects(
story_hash__in=story_ids
).order_by('-story_date' if order == "newest" else 'story_date')
stories = Feed.format_stories(stories_db)

return {
"stories": stories,
}
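A rough client-side sketch of how this new endpoint might be called. The `/search/` mount point, the base URL, and the story hash are assumptions for illustration; only the `more_like_this` route inside `apps/search/urls.py` appears in this diff.

```python
# Sketch: ask the more_like_this endpoint for stories related to one story.
# Base URL, URL prefix, session cookies, and the story hash are placeholder assumptions.
import requests

session = requests.Session()
# session.cookies would normally carry a logged-in NewsBlur session.
response = session.get(
    "https://newsblur.com/search/more_like_this",  # assumed mount point for apps.search.urls
    params={
        "story_hash": "42:deadbe",  # hypothetical feed_id:story_id hash
        "order": "newest",
        "page": 1,
        "limit": 10,
    },
    timeout=10,
)
print(response.json().get("stories", []))
```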
@@ -1355,7 +1355,9 @@ def shared_stories_rss_feed(request, user_id, username=None):
user = User.objects.get(pk=user_id)
except User.DoesNotExist:
raise Http404

limit = 25
offset = request.GET.get('page', 0) * limit
username = username and username.lower()
profile = MSocialProfile.get_user(user.pk)
params = {'username': profile.username_slug, 'user_id': user.pk}

@@ -1383,7 +1385,7 @@ def shared_stories_rss_feed(request, user_id, username=None):
)
rss = feedgenerator.Atom1Feed(**data)

shared_stories = MSharedStory.objects.filter(user_id=user.pk).order_by('-shared_date')[:25]
shared_stories = MSharedStory.objects.filter(user_id=user.pk).order_by('-shared_date')[offset:offset+limit]
for shared_story in shared_stories:
feed = Feed.get_by_id(shared_story.story_feed_id)
content = render_to_string('social/rss_story.xhtml', {
@@ -3,6 +3,7 @@ import mongoengine as mongo
import urllib.request, urllib.error, urllib.parse
import redis
import dateutil
import requests
from django.conf import settings
from apps.social.models import MSharedStory
from apps.profile.models import Profile
@@ -27,12 +28,18 @@ class MStatistics(mongo.Document):
return "%s: %s" % (self.key, self.value)

@classmethod
def get(cls, key, default=None):
def get(cls, key, default=None, set_default=False, expiration_sec=None):
obj = cls.objects.filter(key=key).first()
if not obj:
if set_default:
default = default()
cls.set(key, default, expiration_sec=expiration_sec)
return default
if obj.expiration_date and obj.expiration_date < datetime.datetime.now():
obj.delete()
if set_default:
default = default()
cls.set(key, default, expiration_sec=expiration_sec)
return default
return obj.value
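A minimal usage sketch for the new `set_default` and `expiration_sec` arguments, showing how a caller could cache an expensive computation. The `compute_user_counts()` helper and the `"user_counts"` key are hypothetical stand-ins, not part of this diff.

```python
# Sketch: cache an expensive aggregate in MStatistics for an hour.
# compute_user_counts() and the "user_counts" key are hypothetical examples.
from apps.statistics.models import MStatistics

def compute_user_counts():
    return {"premium": 0, "archive": 0, "free": 0}

stats = MStatistics.get(
    "user_counts",
    default=compute_user_counts,  # callable; only invoked when the key is missing or expired
    set_default=True,
    expiration_sec=60 * 60,
)
```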
@@ -298,8 +305,8 @@ class MFeedback(mongo.Document):
def collect_feedback(cls):
seen_posts = set()
try:
data = urllib.request.urlopen('https://forum.newsblur.com/posts.json').read()
except (urllib.error.HTTPError) as e:
data = requests.get('https://forum.newsblur.com/posts.json', timeout=3).content
except (urllib.error.HTTPError, requests.exceptions.ConnectTimeout) as e:
logging.debug(" ***> Failed to collect feedback: %s" % e)
return
data = json.decode(data).get('latest_posts', "")
37 blog/_posts/2022-07-01-dashboard-redesign-2022.md Normal file
@@ -0,0 +1,37 @@
---
layout: post
title: "2022 redesign: new dashboard layout, refreshed stories and story titles, and entirely redrawn icons"
tags: ['web']
---

The launch of the new [Premium Archive subscription tier](/2022/07/01/premium-archive-subscription/) also includes the 2022 redesign. You'll see a third dashboard layout which stretches out your dashboard rivers across the width of the screen.

<img src="/assets/premium-archive-dashboard-comfortable.png" style="width: calc(140%);margin: 12px 0 12px calc(-20%);max-width: none;border: none">

The latest redesign makes more accommodations for spacing and padding around each story title element. The result is a cleaner story title with easier-to-read headlines. The author has been moved and restyled to sit next to the story date. Favicons and unread status indicators have been swapped, and font sizes, colors, and weights have been adjusted.

<img src="/assets/premium-archive-dashboard-compact.png" style="width: calc(140%);margin: 12px 0 12px calc(-20%);max-width: none;border: none">

If you find the interface too airy, a setting in the main Manage menu lets you switch between Comfortable and Compact. The compact interface is denser than before, giving power users a highly detailed view.

Transitions have also been added to help you feel the difference, and many of the setting changes are now accompanied by new animations.

<p>
<video autoplay loop playsinline muted width="500" style="width: 500px;border: 2px solid rgba(0,0,0,0.1)">
<source src="/assets/premium-archive-grid.mp4" type="video/mp4">
</video>
</p>

And lastly, this redesign comes with a suite of all-new icons. The goal of this icon redesign is to bring a consistent weight to each icon and to vectorize them as SVGs so they look good at all resolutions.

<img src="/assets/premium-archive-manage-menu.png" style="width: 275px;border: 1px solid #A0A0A0;margin: 24px auto;display: block;">

A notable icon change is the unread indicator, which now uses different sizes for unread stories and focus stories, giving focus stories more depth.

<img src="/assets/premium-archive-unread-dark.png" style="width: 375px;border: 1px solid #A0A0A0;margin: 24px auto;display: block;">

Here's a screenshot that's only possible with the new premium archive, complete with a backfilled blog post from the year 2000, ready to be marked as unread.

<img src="/assets/premium-archive-unread.png" style="width: 100%;border: 1px solid #A0A0A0;margin: 24px auto;display: block;">

I tried to find every icon, so if you spot a dialog or menu that you'd like to see given some more love, reach out on the support forum.
38 blog/_posts/2022-07-01-premium-archive-subscription.md Normal file
@@ -0,0 +1,38 @@
---
layout: post
title: NewsBlur Premium Archive subscription keeps all of your stories searchable, shareable, and unread forever
tags: ['web', 'ios', 'android']
---

For $99/year, every story from every site you subscribe to will stay in NewsBlur's archive. This new premium tier also lets you mark any story as unread and choose when stories are automatically marked as read. You now have full control of your story archive, letting you search, share, and read stories forever without having to worry about them being deleted.

The NewsBlur Premium Archive subscription offers you the following:

* <img src="/assets/icons8/icons8-bursts-100.png" style="width: 16px;margin: 0 6px 0 0;display: inline-block;"> Everything in the premium subscription, of course
* <img src="/assets/icons8/icons8-relax-with-book-100.png" style="width: 16px;margin: 0 6px 0 0;display: inline-block;"> Choose when stories are automatically marked as read
* <img src="/assets/icons8/icons8-filing-cabinet-100.png" style="width: 16px;margin: 0 6px 0 0;display: inline-block;"> Every story from every site is archived and searchable forever
* <img src="/assets/icons8/icons8-quadcopter-100.png" style="width: 16px;margin: 0 6px 0 0;display: inline-block;"> Feeds that support paging are backfilled for a complete archive
* <img src="/assets/icons8/icons8-rss-100.png" style="width: 16px;margin: 0 6px 0 0;display: inline-block;"> Export trained stories from folders as RSS feeds
* <img src="/assets/icons8/icons8-calendar-100.png" style="width: 16px;margin: 0 6px 0 0;display: inline-block;"> Stories can stay unread forever

You can now enjoy a new preference for exactly when stories are marked as read:

<img src="/assets/premium-archive-mark-read-date.png" style="width: 100%;border: 1px solid #A0A0A0;margin: 24px auto;display: block;">

A technical note about the backfilling of your archive:

<blockquote>
<p>NewsBlur uses two techniques to retrieve older stories that are no longer in the RSS feed. The first strategy is to append `?page=2` and `?paged=2` to the RSS feed and see if we're able to blindly iterate through the blog's archive. For WordPress and a few other CMSs, this works great and gives us a full archive.</p>

<p>A second technique is to use <a href="https://datatracker.ietf.org/doc/html/rfc5005">RFC 5005</a>, which supports links embedded inside the RSS feed to denote next and previous pages of an archive.</p>
</blockquote>
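To make the two techniques concrete, here is a rough sketch of how a feed fetcher might probe for older pages. This is an editorial illustration under stated assumptions, not NewsBlur's actual fetcher; it leans on the `feedparser` and `requests` libraries and ignores feeds whose URLs already carry query strings.

```python
# Sketch: probe a feed for older stories using the two techniques described above.
# Illustration only; URLs are placeholders and query-string handling is naive.
import feedparser
import requests

def probe_archive(feed_url):
    older_pages = []

    # Technique 1: blind pagination, e.g. ?page=2 / ?paged=2 (WordPress and similar CMSs).
    for param in ("page", "paged"):
        candidate = f"{feed_url}?{param}=2"
        parsed = feedparser.parse(requests.get(candidate, timeout=10).content)
        if parsed.entries:
            older_pages.append(candidate)

    # Technique 2: RFC 5005 paging/archive links embedded in the feed itself.
    parsed = feedparser.parse(requests.get(feed_url, timeout=10).content)
    for link in parsed.feed.get("links", []):
        if link.get("rel") in ("next", "prev-archive", "next-archive"):
            older_pages.append(link["href"])

    return older_pages
```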

NewsBlur attempts both of these techniques on every single feed you've subscribed to, and when it's done backfilling stories, you'll receive an email showing how much your archive grew during the backfill.

The launch of the new Premium Archive subscription tier also contains the [2022 redesign](/2022/07/01/dashboard-redesign-2022/), which includes a new dashboard layout, a refreshed design for story titles and feed titles, and all new icons.

Here's a screenshot that's only possible with the new premium archive, complete with a backfilled blog post from the year 2000, ready to be marked as unread.

<img src="/assets/premium-archive-unread.png" style="width: 100%;border: 1px solid #A0A0A0;margin: 24px auto;display: block;">

How's that for an archive?
Some files were not shown because too many files have changed in this diff.