diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
deleted file mode 100644
index 4ecdd0b6..00000000
--- a/CONTRIBUTING.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Contributing to the Coronavirus Tracker API
-
-First off, thanks for taking the time to contribute!
-Every commit supports the open-source ecosystem's response to [COVID-19](https://en.wikipedia.org/wiki/2019%E2%80%9320_coronavirus_pandemic).
-
-## Testing
-
-We have a handful of unit tests covering most of the functions.
-Please write new test cases for new code you create.
-
-## Submitting changes
-
-* If you're unable to find an open issue, [open a new one](https://github.com/ExpDev07/coronavirus-tracker-api/issues/new). Be sure to include a **title and clear description**, as well as any other relevant information.
-* Open a new [GitHub Pull Request to coronavirus-tracker-api](https://github.com/ExpDev07/coronavirus-tracker-api/pulls) with a clear list of what you've done (read more about [pull requests](http://help.github.com/pull-requests/)). Include the relevant issue number if applicable.
-* We will love you forever if you include unit tests. We can always use more test coverage.
-* If you have updated the [Pipfile](./Pipfile), you also have to update `Pipfile.lock`, `requirements.txt` and `requirements-dev.txt`. See the section [Update requirements files](./README.md#update-requirements-files).
-
-## Your First Code Contribution
-
-Unsure where to begin contributing to coronavirus-tracker-api? You can start by looking through these issue labels:
-
-* [Enhancement issues](https://github.com/ExpDev07/coronavirus-tracker-api/labels/enhancement) - issues for new features or requests
-* [Help wanted issues](https://github.com/ExpDev07/coronavirus-tracker-api/labels/help%20wanted) - extra attention is needed
-* [Documentation issues](https://github.com/ExpDev07/coronavirus-tracker-api/labels/documentation) - improvements or additions to documentation
-
-## Styleguide
-
-Please follow the [PEP8](https://www.python.org/dev/peps/pep-0008/) style guide.
-See the [Running Tests](./README.md#running-tests), [Linting](./README.md#linting) and [Formatting](./README.md#formatting) sections for further instructions on validating your changes.
-
-
-We encourage you to pitch in and join the [Coronavirus Tracker API Team](https://github.com/ExpDev07/coronavirus-tracker-api#contributors-)!
-
-Thanks! :heart: :heart: :heart:
-
-[Coronavirus Tracker API Team](https://github.com/ExpDev07/coronavirus-tracker-api#contributors-)
diff --git a/README.md b/README.md
deleted file mode 100644
index 7355d0af..00000000
--- a/README.md
+++ /dev/null
@@ -1,529 +0,0 @@
-# Coronavirus Tracker API
-
-Provides up-to-date data about the Coronavirus outbreak, including numbers of confirmed cases, deaths, and recoveries.
-Supports multiple data-sources.
-
-**Live global stats (provided by [fight-covid19/bagdes](https://github.com/fight-covid19/bagdes)) from this API.**
-
-## New York Times is now available as a source!
-
-**Specify the source parameter with `?source=nyt`. NYT also provides a timeseries! To view timelines of cases by US county, use `?source=nyt&timelines=true`.**
-
-## Recovered cases showing 0
-
-**JHU (our main data provider) [no longer provides data on the number of recoveries](https://github.com/CSSEGISandData/COVID-19/issues/1250), and as a result, the API will show 0 for this statistic. Apologies for any inconvenience. Hopefully we'll be able to find an alternative data-source that offers this.**
-
-## Available data-sources:
-
-Currently 3 different data-sources are available to retrieve the data:
-
-* **jhu** - https://github.com/CSSEGISandData/COVID-19 - Worldwide Data repository operated by the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE).
-
-* **csbs** - https://www.csbs.org/information-covid-19-coronavirus - U.S. County data that comes from the Conference of State Bank Supervisors.
-
-* **nyt** - https://github.com/nytimes/covid-19-data - The New York Times is releasing a series of data files with cumulative counts of coronavirus cases in the United States. This API provides the timeseries at the US county level.
-
-The __jhu__ data-source is used by default if you don't specify a *source parameter* in your request.
-
-## API Reference
-
-All endpoints are located at ``coronavirus-tracker-api.herokuapp.com/v2/`` and are accessible via HTTPS. For instance, you can get data per location by using this URL:
-*[https://coronavirus-tracker-api.herokuapp.com/v2/locations](https://coronavirus-tracker-api.herokuapp.com/v2/locations)*
-
-You can open the URL in your browser to further inspect the response. Or you can make this curl call in your terminal to see the prettified response:
-
-```
-curl https://coronavirus-tracker-api.herokuapp.com/v2/locations | json_pp
-```
-
-### Swagger/OpenAPI
-
-Consume our API through [our super awesome and interactive SwaggerUI](https://coronavirus-tracker-api.herokuapp.com/) (on mobile, use the [mobile-friendly ReDocs](https://coronavirus-tracker-api.herokuapp.com/docs) instead for the best experience).
-
-
-The [OpenAPI](https://swagger.io/docs/specification/about/) JSON definition can be downloaded at https://coronavirus-tracker-api.herokuapp.com/openapi.json
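-
-For example, to save a local copy of the definition:
-
-```bash
-curl -o openapi.json https://coronavirus-tracker-api.herokuapp.com/openapi.json
-```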
-
-## API Endpoints
-
-### Sources Endpoint
-
-Getting the data-sources that are currently available to the Coronavirus Tracker API for retrieving pandemic data.
-
-```http
-GET /v2/sources
-```
-
-__Sample response__
-```json
-{
- "sources": [
- "jhu",
- "csbs",
- "nyt"
- ]
-}
-```
-
-### Latest Endpoint
-
-Getting the latest totals of confirmed cases, deaths, and recoveries.
-
-```http
-GET /v2/latest
-```
-
-__Query String Parameters__
-| __Query string parameter__ | __Description__ | __Type__ |
-| -------------------------- | -------------------------------------------------------------------------------- | -------- |
-| source                     | The data-source to retrieve data from *(jhu/csbs/nyt)*. Default is *jhu*          | String   |
-
-__Sample response__
-```json
-{
- "latest": {
- "confirmed": 197146,
- "deaths": 7905,
- "recovered": 80840
- }
-}
-```
-
-### Locations Endpoint
-
-Getting the latest confirmed cases, deaths, and recoveries per location.
-
-#### The Location Object
-```http
-GET /v2/locations/:id
-```
-
-__Path Parameters__
-| __Path parameter__ | __Required/Optional__ | __Description__ | __Type__ |
-| ------------------ | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------- |
-| id | OPTIONAL | The unique location id for which you want to call the Locations Endpoint. The list of valid location IDs (:id) can be found in the locations response: ``/v2/locations`` | Integer |
-
-__Query String Parameters__
-| __Query string parameter__ | __Description__ | __Type__ |
-| -------------------------- | -------------------------------------------------------------------------------- | -------- |
-| source                     | The data-source to retrieve data from *(jhu/csbs/nyt)*. Default is *jhu*          | String   |
-
-#### Example Request
-```http
-GET /v2/locations/39
-```
-
-__Sample response__
-```json
-{
- "location": {
- "id": 39,
- "country": "Norway",
- "country_code": "NO",
- "country_population": 5009150,
- "province": "",
- "county": "",
- "last_updated": "2020-03-21T06:59:11.315422Z",
- "coordinates": { },
- "latest": { },
- "timelines": {
- "confirmed": {
- "latest": 1463,
- "timeline": {
- "2020-03-16T00:00:00Z": 1333,
- "2020-03-17T00:00:00Z": 1463
- }
- },
- "deaths": { },
- "recovered": { }
- }
- }
-}
-```
-
-#### List of all locations
-```http
-GET /v2/locations
-```
-
-__Query String Parameters__
-| __Query string parameter__ | __Description__ | __Type__ |
-| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ | -------- |
-| source                     | The data-source to retrieve data from. __Value__ can be: *jhu/csbs/nyt*. __Default__ is *jhu*                                                     | String   |
-| country_code | The ISO ([alpha-2 country_code](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)) to the Country/Province for which you're calling the Endpoint | String |
-| timelines                  | To set the visibility of timelines (*daily tracking*). __Value__ can be: *0/1*. __Default__ is *0* (timelines are not visible)                    | Integer  |
-
-__Sample response__
-```json
-{
- "latest": {
- "confirmed": 272166,
- "deaths": 11299,
- "recovered": 87256
- },
- "locations": [
- {
- "id": 0,
- "country": "Thailand",
- "country_code": "TH",
- "country_population": 67089500,
- "province": "",
- "county": "",
- "last_updated": "2020-03-21T06:59:11.315422Z",
- "coordinates": {
- "latitude": "15",
- "longitude": "101"
- },
- "latest": {
- "confirmed": 177,
- "deaths": 1,
- "recovered": 41
- }
- },
- {
- "id": 39,
- "country": "Norway",
- "country_code": "NO",
- "province": "",
- "county": "",
- "last_updated": "2020-03-21T06:59:11.315422Z",
- "coordinates": {
- "latitude": "60.472",
- "longitude": "8.4689"
- },
- "latest": {
- "confirmed": 1463,
- "deaths": 3,
- "recovered": 1
- }
- }
- ]
-}
-```
-
-__Response definitions__
-| __Response Item__ | __Description__ | __Type__ |
-| ---------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ | -------- |
-| {latest} | The total amount of confirmed cases, deaths and recovered for all the locations | Object |
-| {latest}/confirmed | The up-to-date total number of confirmed cases for all the locations within the data-source | Integer |
-| {latest}/deaths | The up-to-date total amount of deaths for all the locations within the data-source | Integer |
-| {latest}/recovered | The up-to-date total amount of recovered for all the locations within the data-source | Integer |
-| {locations} | The collection of locations contained within the data-source | Object |
-| {location} | Information that identifies a location | Object |
-| {latest} | The amount of confirmed cases, deaths and recovered related to the specific location | Object |
-| {locations}/{location}/{latest}/confirmed | The up-to-date number of confirmed cases related to the specific location | Integer |
-| {locations}/{location}/{latest}/deaths | The up-to-date number of deaths related to the specific location | Integer |
-| {locations}/{location}/{latest}/recovered | The up-to-date number of recovered related to the specific location | Integer |
-| {locations}/{location}/id | The location id. This unique id is assigned to the location by the data-source. | Integer |
-| {locations}/{location}/country | The Country name | String |
-| {locations}/{location}/country_code | The [ISO alpha-2 country_code](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) Country code for the location. | String |
-| {locations}/{location}/province                | The province the location belongs to (used for US locations coming from the __csbs data-source__). __Empty__ when the *jhu data-source* is used   | String   |
-| {locations}/{location}/{coordinates}/latitude | The location latitude | Float |
-| {locations}/{location}/{coordinates}/longitude | The location longitude | Float |
-
-
-### Example Requests with parameters
-
-__Parameter: country_code__
-
-Getting data for the country specified by the *country_code parameter*, in this case Italy (IT)
-
-```http
-GET /v2/locations?country_code=IT
-```
-
-__Sample Response__
-```json
-{
- "latest": {
- "confirmed": 59138,
- "deaths": 5476,
- "recovered": 7024
- },
- "locations": [
- {
- "id": 16,
- "country": "Italy",
- "country_code": "IT",
- "country_population": 60340328,
- "province": "",
- "county": "",
- "last_updated": "2020-03-23T13:32:23.913872Z",
- "coordinates": {
- "latitude": "43",
- "longitude": "12"
- },
- "latest": {
- "confirmed": 59138,
- "deaths": 5476,
- "recovered": 7024
- }
- }
- ]
-}
-```
-
-__Parameter: source__
-
-Getting the data from the data-source specified by the *source parameter*, in this case [csbs](https://www.csbs.org/information-covid-19-coronavirus)
-
-
-```http
-GET /v2/locations?source=csbs
-```
-
-__Sample Response__
-```json
-{
- "latest": {
- "confirmed": 7596,
- "deaths": 43,
- "recovered": 0
- },
- "locations": [
- {
- "id": 0,
- "country": "US",
- "country_code": "US",
- "country_population": 310232863,
- "province": "New York",
- "state": "New York",
- "county": "New York",
- "last_updated": "2020-03-21T14:00:00Z",
- "coordinates": {
- "latitude": 40.71455,
- "longitude": -74.00714
- },
- "latest": {
- "confirmed": 6211,
- "deaths": 43,
- "recovered": 0
- }
- },
- {
- "id": 1,
- "country": "US",
- "country_code": "US",
- "country_population": 310232863,
- "province": "New York",
- "state": "New York",
- "county": "Westchester",
- "last_updated": "2020-03-21T14:00:00Z",
- "coordinates": {
- "latitude": 41.16319759,
- "longitude": -73.7560629
- },
- "latest": {
- "confirmed": 1385,
- "deaths": 0,
- "recovered": 0
-      }
- }
- ]
-}
-```
-
-__Parameter: timelines__
-
-Getting the data for all locations, including the daily tracking of confirmed cases, deaths, and recoveries per location.
-
-```http
-GET /v2/locations?timelines=1
-```
-Explore the response by opening the URL in your browser [https://coronavirus-tracker-api.herokuapp.com/v2/locations?timelines=1](https://coronavirus-tracker-api.herokuapp.com/v2/locations?timelines=1) or make the following curl call in your terminal:
-
-```
-curl https://coronavirus-tracker-api.herokuapp.com/v2/locations?timelines=1 | json_pp
-```
-
-__NOTE:__ Timeline tracking starts on 22 January 2020 and ends at the last day available in the data-source.
-
-
-
-## Wrappers
-
-These are the available API wrappers created by the community. They are not necessarily maintained by any of this project's authors or contributors.
-
-### PHP
-
-* [CovidPHP by @o-ba](https://github.com/o-ba/covid-php).
-
-### Golang
-
-* [Go-corona by @itsksaurabh](https://github.com/itsksaurabh/go-corona).
-
-### C#
-
-* [CovidSharp by @Abdirahiim](https://github.com/Abdirahiim/covidtrackerapiwrapper)
-* [Covid19Tracker.NET by @egbakou](https://github.com/egbakou/Covid19Tracker.NET)
-* [CovidDotNet by @degant](https://github.com/degant/CovidDotNet)
-
-### Python
-
-* [COVID19Py by @Kamaropoulos](https://github.com/Kamaropoulos/COVID19Py).
-
-### Java
-
-* [Coronavirus by @mew](https://github.com/mew/Coronavirus).
-
-### Node.js
-
-* [jhucsse.covid by @Sem1084](https://www.npmjs.com/package/jhucsse.covid).
-
-### Ruby
-
-* [covid19-data-ruby by @jaerodyne](https://github.com/jaerodyne/covid19-data-ruby).
-
-### Lua
-
-* [lua-covid-data by @imolein](https://codeberg.org/imo/lua-covid-data).
-
-## Prerequisites
-
-You will need the following things properly installed on your computer.
-
-* [Python 3](https://www.python.org/downloads/) (with pip)
-* [pipenv](https://pypi.org/project/pipenv/)
-
-## Installation
-
-* `git clone https://github.com/ExpDev07/coronavirus-tracker-api.git`
-* `cd coronavirus-tracker-api`
-
-1. Make sure you have [`python3.8` installed and on your `PATH`](https://docs.python-guide.org/starting/installation/).
-2. [Install the `pipenv` dependency manager](https://pipenv.readthedocs.io/en/latest/install/#installing-pipenv)
- * with [pipx](https://pipxproject.github.io/pipx/) `$ pipx install pipenv`
- * with [Homebrew/Linuxbrew](https://pipenv.readthedocs.io/en/latest/install/#homebrew-installation-of-pipenv) `$ brew install pipenv`
- * with [pip/pip3 directly](https://pipenv.readthedocs.io/en/latest/install/#pragmatic-installation-of-pipenv) `$ pip install --user pipenv`
-3. Create virtual environment and install all dependencies `$ pipenv sync --dev`
-4. Activate/enter the virtual environment `$ pipenv shell`
-
-And don't despair if you don't get the Python setup working on the first try. No one did. Guido got pretty close... once. But that's another story. Good luck.
-
-## Running / Development
-
-For live reloading on code changes:
-
-* `pipenv run dev`
-
-Without live reloading:
-
-* `pipenv run start`
-
-Visit your app at [http://localhost:8000](http://localhost:8000).
-
-Alternatively run our API with Docker.
-
-### Running Tests
-> [pytest](https://docs.pytest.org/en/latest/)
-
-```bash
-pipenv run test
-```
-
-
-### Linting
-> [pylint](https://www.pylint.org/)
-
-```bash
-pipenv run lint
-```
-
-### Formatting
-> [black](https://black.readthedocs.io/en/stable/)
-
-```bash
-pipenv run fmt
-```
-
-### Update requirements files
-
-```bash
-invoke generate-reqs
-```
-
-[Pipfile.lock](./Pipfile.lock) will be automatically updated during `pipenv install`.
-
-### Docker
-
-Our Docker image is based on [tiangolo/uvicorn-gunicorn-fastapi/](https://hub.docker.com/r/tiangolo/uvicorn-gunicorn-fastapi/).
-
-```bash
-invoke docker --build
-```
-
-Run with `docker run` or `docker-compose`.
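-
-For example, a minimal sketch (the image name below is illustrative; use whatever tag your build produced, and note that the base image serves on port 80):
-
-```bash
-docker run -p 8000:80 coronavirus-tracker-api
-```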
-
-#### Alternate Docker images
-
-If a full `gunicorn` deployment is unnecessary or [impractical on your hardware](https://fastapi.tiangolo.com/deployment/#raspberry-pi-and-other-architectures) consider using our single instance [`Uvicorn`](https://www.uvicorn.org/) based [Dockerfile](uvicorn.Dockerfile).
-
-
-### Invoke
-
-Additional developer commands can be run by calling them with the [python `invoke` task runner](http://www.pyinvoke.org/).
-```bash
-invoke --list
-```
-
-### Deploying
-
-## Contributors ✨
-
-Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
-
-## License
-
-See [LICENSE.md](LICENSE.md) for the license. Please link to this repo somewhere in your project :).
diff --git a/app/data/__init__.py b/app/data/__init__.py
deleted file mode 100644
index 60a75dac..00000000
--- a/app/data/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-"""app.data"""
-from ..services.location.csbs import CSBSLocationService
-from ..services.location.jhu import JhuLocationService
-from ..services.location.nyt import NYTLocationService
-
-# Mapping of services to data-sources.
-DATA_SOURCES = {
- "jhu": JhuLocationService(),
- "csbs": CSBSLocationService(),
- "nyt": NYTLocationService(),
-}
-
-
-def data_source(source):
- """
- Retrieves the provided data-source service.
-
- :returns: The service.
- :rtype: LocationService
- """
- return DATA_SOURCES.get(source.lower())
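-
-
-# Usage sketch (illustrative only): lookup is case-insensitive, and an unknown
-# source yields None, which callers treat as "source not found":
-#
-#     data_source("JHU")      # -> the shared JhuLocationService instance
-#     data_source("unknown")  # -> None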
diff --git a/app/io.py b/app/io.py
index 2a563b15..9bf0ba50 100644
--- a/app/io.py
+++ b/app/io.py
@@ -2,36 +2,103 @@
+from __future__ import annotations
import json
import pathlib
from typing import Dict, List, Union
+from abc import ABC
import aiofiles
-HERE = pathlib.Path(__file__)
-DATA = HERE.joinpath("..", "data").resolve()
-
-
-def save(
- name: str,
- content: Union[str, Dict, List],
- write_mode: str = "w",
- indent: int = 2,
- **json_dumps_kwargs,
-) -> pathlib.Path:
- """Save content to a file. If content is a dictionary, use json.dumps()."""
- path = DATA / name
- if isinstance(content, (dict, list)):
- content = json.dumps(content, indent=indent, **json_dumps_kwargs)
- with open(DATA / name, mode=write_mode) as f_out:
- f_out.write(content)
- return path
-
-
-def load(name: str, **json_kwargs) -> Union[str, Dict, List]:
- """Loads content from a file. If file ends with '.json', call json.load() and return a Dictionary."""
- path = DATA / name
- with open(path) as f_in:
- if path.suffix == ".json":
- return json.load(f_in, **json_kwargs)
- return f_in.read()
+
+
+class Context:
+    """Holds the current State and delegates save/load requests to it."""
+
+    _state = None
+
+    def __init__(self, state: State):
+        self.transition_to(state)
+
+    def transition_to(self, state: State):
+        """Switch to a new state and point it back at this context."""
+        self._state = state
+        self._state.context = self
+
+    def request_save(self, *args, **kwargs):
+        # Forward all arguments so states with different signatures still work.
+        return self._state.handle_save(*args, **kwargs)
+
+    def request_load(self, *args, **kwargs):
+        return self._state.handle_load(*args, **kwargs)
+
+
+class State(ABC):
+    """Base state: holds the data directory and a back-reference to its Context."""
+
+    HERE = pathlib.Path(__file__)
+    DATA = HERE.joinpath("..", "data").resolve()
+
+    @property
+    def context(self) -> Context:
+        return self._context
+
+    @context.setter
+    def context(self, context: Context):
+        self._context = context
+
+    # Defaults raise instead of being @abstractmethod so that a state
+    # implementing only one of the two operations can still be instantiated.
+    def handle_save(self, *args, **kwargs):
+        raise NotImplementedError("current state does not support saving")
+
+    def handle_load(self, *args, **kwargs):
+        raise NotImplementedError("current state does not support loading")
+
+
+class SaveState(State):
+    def handle_save(
+        self,
+        name: str,
+        content: Union[str, Dict, List],
+        write_mode: str = "w",
+        indent: int = 2,
+        **json_dumps_kwargs,
+    ) -> pathlib.Path:
+        """Save content to a file. If content is a dict or list, serialize it with json.dumps()."""
+        path = self.DATA / name
+        if isinstance(content, (dict, list)):
+            content = json.dumps(content, indent=indent, **json_dumps_kwargs)
+        with open(path, mode=write_mode) as f_out:
+            f_out.write(content)
+        return path
+
+
+class LoadState(State):
+    def handle_load(self, name: str, **json_kwargs) -> Union[str, Dict, List]:
+        """Load content from a file. For '.json' files, parse with json.load()."""
+        path = self.DATA / name
+        with open(path) as f_in:
+            if path.suffix == ".json":
+                return json.load(f_in, **json_kwargs)
+            return f_in.read()
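+
+
+# Minimal usage sketch (illustrative only; relies on the argument-forwarding
+# request_save/request_load defined on Context above):
+#
+#     ctx = Context(SaveState())
+#     ctx.request_save("latest.json", {"confirmed": 0})
+#     ctx.transition_to(LoadState())
+#     data = ctx.request_load("latest.json")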
+
+
class AIO:
diff --git a/app/main.py b/app/main.py
deleted file mode 100644
index b9aff949..00000000
--- a/app/main.py
+++ /dev/null
@@ -1,121 +0,0 @@
-"""
-app.main.py
-"""
-import logging
-
-import pydantic
-import sentry_sdk
-import uvicorn
-from fastapi import FastAPI, Request, Response
-from fastapi.middleware.cors import CORSMiddleware
-from fastapi.middleware.gzip import GZipMiddleware
-from fastapi.responses import JSONResponse
-from scout_apm.async_.starlette import ScoutMiddleware
-from sentry_sdk.integrations.asgi import SentryAsgiMiddleware
-
-from .config import get_settings
-from .data import data_source
-from .routers import V1, V2
-from .utils.httputils import setup_client_session, teardown_client_session
-
-# ############
-# FastAPI App
-# ############
-LOGGER = logging.getLogger("api")
-
-SETTINGS = get_settings()
-
-if SETTINGS.sentry_dsn: # pragma: no cover
- sentry_sdk.init(dsn=SETTINGS.sentry_dsn)
-
-APP = FastAPI(
- title="Coronavirus Tracker",
- description=(
- "API for tracking the global coronavirus (COVID-19, SARS-CoV-2) outbreak."
- " Project page: https://github.com/ExpDev07/coronavirus-tracker-api."
- ),
- version="2.0.4",
- docs_url="/",
- redoc_url="/docs",
- on_startup=[setup_client_session],
- on_shutdown=[teardown_client_session],
-)
-
-# #####################
-# Middleware
-#######################
-
-# Scout APM
-if SETTINGS.scout_name: # pragma: no cover
- LOGGER.info(f"Adding Scout APM middleware for `{SETTINGS.scout_name}`")
- APP.add_middleware(ScoutMiddleware)
-else:
- LOGGER.debug("No SCOUT_NAME config")
-
-# Sentry Error Tracking
-if SETTINGS.sentry_dsn: # pragma: no cover
- LOGGER.info("Adding Sentry middleware")
- APP.add_middleware(SentryAsgiMiddleware)
-
-# Enable CORS.
-APP.add_middleware(
- CORSMiddleware,
- allow_credentials=True,
- allow_origins=["*"],
- allow_methods=["*"],
- allow_headers=["*"],
-)
-APP.add_middleware(GZipMiddleware, minimum_size=1000)
-
-
-@APP.middleware("http")
-async def add_datasource(request: Request, call_next):
- """
- Attach the data source to the request.state.
- """
-    # Retrieve the data source from query param.
- source = data_source(request.query_params.get("source", default="jhu"))
-
- # Abort with 404 if source cannot be found.
- if not source:
- return Response("The provided data-source was not found.", status_code=404)
-
- # Attach source to request.
- request.state.source = source
-
- # Move on...
- LOGGER.debug(f"source provided: {source.__class__.__name__}")
- response = await call_next(request)
- return response
-
-
-# ################
-# Exception Handler
-# ################
-
-
-@APP.exception_handler(pydantic.error_wrappers.ValidationError)
-async def handle_validation_error(
- request: Request, exc: pydantic.error_wrappers.ValidationError
-): # pylint: disable=unused-argument
- """
- Handles validation errors.
- """
- return JSONResponse({"message": exc.errors()}, status_code=422)
-
-
-# ################
-# Routing
-# ################
-
-
-# Include routers.
-APP.include_router(V1, prefix="", tags=["v1"])
-APP.include_router(V2, prefix="/v2", tags=["v2"])
-
-
-# Running of app.
-if __name__ == "__main__":
- uvicorn.run(
- "app.main:APP", host="127.0.0.1", port=SETTINGS.port, log_level="info",
- )
diff --git a/app/routers/v2.py b/app/routers/v2.py
deleted file mode 100644
index 31eb408c..00000000
--- a/app/routers/v2.py
+++ /dev/null
@@ -1,110 +0,0 @@
-"""app.routers.v2"""
-import enum
-
-from fastapi import APIRouter, HTTPException, Request
-
-from ..data import DATA_SOURCES
-from ..models import LatestResponse, LocationResponse, LocationsResponse
-
-V2 = APIRouter()
-
-
-class Sources(str, enum.Enum):
- """
- A source available for retrieving data.
- """
-
- JHU = "jhu"
- CSBS = "csbs"
- NYT = "nyt"
-
-
-@V2.get("/latest", response_model=LatestResponse)
-async def get_latest(
- request: Request, source: Sources = Sources.JHU
-): # pylint: disable=unused-argument
- """
-    Getting the latest totals of confirmed cases, deaths, and recoveries.
- """
- locations = await request.state.source.get_all()
- return {
- "latest": {
- "confirmed": sum(map(lambda location: location.confirmed, locations)),
- "deaths": sum(map(lambda location: location.deaths, locations)),
- "recovered": sum(map(lambda location: location.recovered, locations)),
- }
- }
-
-
-# pylint: disable=unused-argument,too-many-arguments,redefined-builtin
-@V2.get("/locations", response_model=LocationsResponse, response_model_exclude_unset=True)
-async def get_locations(
- request: Request,
-    source: Sources = Sources.JHU,
- country_code: str = None,
- province: str = None,
- county: str = None,
- timelines: bool = False,
-):
- """
- Getting the locations.
- """
-    # All query parameters.
- params = dict(request.query_params)
-
- # Remove reserved params.
- params.pop("source", None)
- params.pop("timelines", None)
-
- # Retrieve all the locations.
- locations = await request.state.source.get_all()
-
- # Attempt to filter out locations with properties matching the provided query params.
- for key, value in params.items():
- # Clean keys for security purposes.
- key = key.lower()
- value = value.lower().strip("__")
-
- # Do filtering.
- try:
- locations = [
- location
- for location in locations
- if str(getattr(location, key)).lower() == str(value)
- ]
- except AttributeError:
- pass
- if not locations:
- raise HTTPException(
- 404, detail=f"Source `{source}` does not have the desired location data.",
- )
-
- # Return final serialized data.
- return {
- "latest": {
- "confirmed": sum(map(lambda location: location.confirmed, locations)),
- "deaths": sum(map(lambda location: location.deaths, locations)),
- "recovered": sum(map(lambda location: location.recovered, locations)),
- },
- "locations": [location.serialize(timelines) for location in locations],
- }
-
-
-# pylint: disable=invalid-name
-@V2.get("/locations/{id}", response_model=LocationResponse)
-async def get_location_by_id(
- request: Request, id: int, source: Sources = Sources.JHU, timelines: bool = True
-):
- """
-    Getting a specific location by id.
- """
- location = await request.state.source.get(id)
- return {"location": location.serialize(timelines)}
-
-
-@V2.get("/sources")
-async def sources():
- """
-    Retrieves a list of data-sources that are available to use.
- """
- return {"sources": list(DATA_SOURCES.keys())}
diff --git a/app/services/location/__init__.py b/app/services/location/__init__.py
deleted file mode 100644
index 6d292b54..00000000
--- a/app/services/location/__init__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-"""app.services.location"""
-from abc import ABC, abstractmethod
-
-
-class LocationService(ABC):
- """
- Service for retrieving locations.
- """
-
- @abstractmethod
- async def get_all(self):
- """
- Gets and returns all of the locations.
-
- :returns: The locations.
- :rtype: List[Location]
- """
- raise NotImplementedError
-
- @abstractmethod
- async def get(self, id): # pylint: disable=redefined-builtin,invalid-name
- """
- Gets and returns location with the provided id.
-
- :returns: The location.
- :rtype: Location
- """
- raise NotImplementedError
diff --git a/app/services/location/csbs.py b/app/services/location/csbs.py
deleted file mode 100644
index 444ebad6..00000000
--- a/app/services/location/csbs.py
+++ /dev/null
@@ -1,102 +0,0 @@
-"""app.services.location.csbs.py"""
-import csv
-import logging
-from datetime import datetime
-
-from asyncache import cached
-from cachetools import TTLCache
-
-from ...caches import check_cache, load_cache
-from ...coordinates import Coordinates
-from ...location.csbs import CSBSLocation
-from ...utils import httputils
-from . import LocationService
-
-LOGGER = logging.getLogger("services.location.csbs")
-
-
-class CSBSLocationService(LocationService):
- """
- Service for retrieving locations from csbs
- """
-
- async def get_all(self):
- # Get the locations.
- locations = await get_locations()
- return locations
-
- async def get(self, loc_id): # pylint: disable=arguments-differ
- # Get location at the index equal to the provided id.
- locations = await self.get_all()
- return locations[loc_id]
-
-
-# Base URL for fetching data
-BASE_URL = "https://facts.csbs.org/covid-19/covid19_county.csv"
-
-
-@cached(cache=TTLCache(maxsize=1, ttl=1800))
-async def get_locations():
- """
-    Retrieves county locations; locations are cached for 30 minutes.
-
-    :returns: The locations.
-    :rtype: List[CSBSLocation]
- """
- data_id = "csbs.locations"
- LOGGER.info(f"{data_id} Requesting data...")
- # check shared cache
- cache_results = await check_cache(data_id)
- if cache_results:
- LOGGER.info(f"{data_id} using shared cache results")
- locations = cache_results
- else:
- LOGGER.info(f"{data_id} shared cache empty")
- async with httputils.CLIENT_SESSION.get(BASE_URL) as response:
- text = await response.text()
-
- LOGGER.debug(f"{data_id} Data received")
-
- data = list(csv.DictReader(text.splitlines()))
- LOGGER.debug(f"{data_id} CSV parsed")
-
- locations = []
-
- for i, item in enumerate(data):
- # General info.
- state = item["State Name"]
- county = item["County Name"]
-
-            # Ensure county is specified.
- if county in {"Unassigned", "Unknown"}:
- continue
-
- # Date string without "EDT" at end.
- last_update = " ".join(item["Last Update"].split(" ")[0:2])
-
- # Append to locations.
- locations.append(
- CSBSLocation(
- # General info.
- i,
- state,
- county,
- # Coordinates.
- Coordinates(item["Latitude"], item["Longitude"]),
- # Last update (parse as ISO).
- datetime.strptime(last_update, "%Y-%m-%d %H:%M").isoformat() + "Z",
- # Statistics.
- int(item["Confirmed"] or 0),
- int(item["Death"] or 0),
- )
- )
- LOGGER.info(f"{data_id} Data normalized")
- # save the results to distributed cache
- # TODO: fix json serialization
- try:
- await load_cache(data_id, locations)
- except TypeError as type_err:
- LOGGER.error(type_err)
-
- # Return the locations.
- return locations
diff --git a/app/services/location/jhu.py b/app/services/location/jhu.py
deleted file mode 100644
index ebed3960..00000000
--- a/app/services/location/jhu.py
+++ /dev/null
@@ -1,228 +0,0 @@
-"""app.services.location.jhu.py"""
-import csv
-import logging
-import os
-from datetime import datetime
-from pprint import pformat as pf
-
-from asyncache import cached
-from cachetools import TTLCache
-
-from ...caches import check_cache, load_cache
-from ...coordinates import Coordinates
-from ...location import TimelinedLocation
-from ...models import Timeline
-from ...utils import countries
-from ...utils import date as date_util
-from ...utils import httputils
-from . import LocationService
-
-LOGGER = logging.getLogger("services.location.jhu")
-PID = os.getpid()
-
-
-class JhuLocationService(LocationService):
- """
- Service for retrieving locations from Johns Hopkins CSSE (https://github.com/CSSEGISandData/COVID-19).
- """
-
- async def get_all(self):
- # Get the locations.
- locations = await get_locations()
- return locations
-
- async def get(self, loc_id): # pylint: disable=arguments-differ
- # Get location at the index equal to provided id.
- locations = await self.get_all()
- return locations[loc_id]
-
-
-# ---------------------------------------------------------------
-
-
-# Base URL for fetching category.
-BASE_URL = "https://raw.githubusercontent.com/CSSEGISandData/2019-nCoV/master/csse_covid_19_data/csse_covid_19_time_series/"
-
-
-@cached(cache=TTLCache(maxsize=4, ttl=1800))
-async def get_category(category):
- """
- Retrieves the data for the provided category. The data is cached for 30 minutes locally, 1 hour via shared Redis.
-
- :returns: The data for category.
- :rtype: dict
- """
- # Adhere to category naming standard.
- category = category.lower()
- data_id = f"jhu.{category}"
-
- # check shared cache
- cache_results = await check_cache(data_id)
- if cache_results:
- LOGGER.info(f"{data_id} using shared cache results")
- results = cache_results
- else:
- LOGGER.info(f"{data_id} shared cache empty")
- # URL to request data from.
- url = BASE_URL + "time_series_covid19_%s_global.csv" % category
-
- # Request the data
- LOGGER.info(f"{data_id} Requesting data...")
- async with httputils.CLIENT_SESSION.get(url) as response:
- text = await response.text()
-
- LOGGER.debug(f"{data_id} Data received")
-
- # Parse the CSV.
- data = list(csv.DictReader(text.splitlines()))
- LOGGER.debug(f"{data_id} CSV parsed")
-
- # The normalized locations.
- locations = []
-
- for item in data:
- # Filter out all the dates.
- dates = dict(filter(lambda element: date_util.is_date(element[0]), item.items()))
-
- # Make location history from dates.
- history = {date: int(float(amount or 0)) for date, amount in dates.items()}
-
- # Country for this location.
- country = item["Country/Region"]
-
- # Latest data insert value.
- latest = list(history.values())[-1]
-
- # Normalize the item and append to locations.
- locations.append(
- {
- # General info.
- "country": country,
- "country_code": countries.country_code(country),
- "province": item["Province/State"],
- # Coordinates.
- "coordinates": {"lat": item["Lat"], "long": item["Long"],},
- # History.
- "history": history,
- # Latest statistic.
- "latest": int(latest or 0),
- }
- )
- LOGGER.debug(f"{data_id} Data normalized")
-
- # Latest total.
- latest = sum(map(lambda location: location["latest"], locations))
-
- # Return the final data.
- results = {
- "locations": locations,
- "latest": latest,
- "last_updated": datetime.utcnow().isoformat() + "Z",
- "source": "https://github.com/ExpDev07/coronavirus-tracker-api",
- }
- # save the results to distributed cache
- await load_cache(data_id, results)
-
- LOGGER.info(f"{data_id} results:\n{pf(results, depth=1)}")
- return results
-
-
-@cached(cache=TTLCache(maxsize=1, ttl=1800))
-async def get_locations():
- """
-    Retrieves the locations from the categories. The locations are cached for 30 minutes.
-
- :returns: The locations.
- :rtype: List[Location]
- """
- data_id = "jhu.locations"
- LOGGER.info(f"pid:{PID}: {data_id} Requesting data...")
- # Get all of the data categories locations.
- confirmed = await get_category("confirmed")
- deaths = await get_category("deaths")
- recovered = await get_category("recovered")
-
- locations_confirmed = confirmed["locations"]
- locations_deaths = deaths["locations"]
- locations_recovered = recovered["locations"]
-
- # Final locations to return.
- locations = []
- # ***************************************************************************
-    # TODO: This iteration approach assumes the indexes remain the same
-    # and opens us to a CRITICAL ERROR. The removal of a column in the data source
-    # would break the API or SHIFT all the confirmed, deaths, and recovered data,
-    # producing incorrect data for consumers.
- # ***************************************************************************
- # Go through locations.
- for index, location in enumerate(locations_confirmed):
- # Get the timelines.
-
- # TEMP: Fix for merging recovery data. See TODO above for more details.
- key = (location["country"], location["province"])
-
- timelines = {
- "confirmed": location["history"],
- "deaths": parse_history(key, locations_deaths, index),
- "recovered": parse_history(key, locations_recovered, index),
- }
-
- # Grab coordinates.
- coordinates = location["coordinates"]
-
- # Create location (supporting timelines) and append.
- locations.append(
- TimelinedLocation(
- # General info.
- index,
- location["country"],
- location["province"],
- # Coordinates.
- Coordinates(latitude=coordinates["lat"], longitude=coordinates["long"]),
- # Last update.
- datetime.utcnow().isoformat() + "Z",
- # Timelines (parse dates as ISO).
- {
- "confirmed": Timeline(
- timeline={
- datetime.strptime(date, "%m/%d/%y").isoformat() + "Z": amount
- for date, amount in timelines["confirmed"].items()
- }
- ),
- "deaths": Timeline(
- timeline={
- datetime.strptime(date, "%m/%d/%y").isoformat() + "Z": amount
- for date, amount in timelines["deaths"].items()
- }
- ),
- "recovered": Timeline(
- timeline={
- datetime.strptime(date, "%m/%d/%y").isoformat() + "Z": amount
- for date, amount in timelines["recovered"].items()
- }
- ),
- },
- )
- )
- LOGGER.info(f"{data_id} Data normalized")
-
- # Finally, return the locations.
- return locations
-
-
-def parse_history(key: tuple, locations: list, index: int):
- """
-    Helper for validating and extracting history content from
-    locations data based on index. Validates against the current country/province
-    key to make sure there is no index/column mismatch.
-
-    TEMP: solution until we implement a more efficient and better approach in the refactor.
- """
- location_history = {}
- try:
- if key == (locations[index]["country"], locations[index]["province"]):
- location_history = locations[index]["history"]
- except (IndexError, KeyError):
- LOGGER.debug(f"iteration data merge error: {index} {key}")
-
- return location_history
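-
-
-# Illustrative example (names taken from the README samples): if
-# locations_deaths[39] no longer matches the key ("Norway", ""), parse_history
-# returns {} rather than merging the shifted column, so consumers see a
-# missing timeline instead of silently wrong numbers.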
diff --git a/app/services/location/nyt.py b/app/services/location/nyt.py
deleted file mode 100644
index 1f25ec34..00000000
--- a/app/services/location/nyt.py
+++ /dev/null
@@ -1,145 +0,0 @@
-"""app.services.location.nyt.py"""
-import csv
-import logging
-from datetime import datetime
-
-from asyncache import cached
-from cachetools import TTLCache
-
-from ...caches import check_cache, load_cache
-from ...coordinates import Coordinates
-from ...location.nyt import NYTLocation
-from ...models import Timeline
-from ...utils import httputils
-from . import LocationService
-
-LOGGER = logging.getLogger("services.location.nyt")
-
-
-class NYTLocationService(LocationService):
- """
- Service for retrieving locations from New York Times (https://github.com/nytimes/covid-19-data).
- """
-
- async def get_all(self):
- # Get the locations.
- locations = await get_locations()
- return locations
-
- async def get(self, loc_id): # pylint: disable=arguments-differ
- # Get location at the index equal to provided id.
- locations = await self.get_all()
- return locations[loc_id]
-
-
-# ---------------------------------------------------------------
-
-
-# Base URL for fetching category.
-BASE_URL = "https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv"
-
-
-def get_grouped_locations_dict(data):
- """
- Helper function to group history for locations into one dict.
-
-    :returns: The complete data for each unique US county
-    :rtype: dict
- """
- grouped_locations = {}
-
- # in increasing order of dates
- for row in data:
- county_state = (row["county"], row["state"])
- date = row["date"]
- confirmed = row["cases"]
- deaths = row["deaths"]
-
- # initialize if not existing
- if county_state not in grouped_locations:
- grouped_locations[county_state] = {"confirmed": [], "deaths": []}
-
- # append confirmed tuple to county_state (date, # confirmed)
- grouped_locations[county_state]["confirmed"].append((date, confirmed))
- # append deaths tuple to county_state (date, # deaths)
- grouped_locations[county_state]["deaths"].append((date, deaths))
-
- return grouped_locations
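-
-# Illustrative shape of the returned mapping (values are still CSV strings here):
-#     {("Snohomish", "Washington"): {"confirmed": [("2020-01-21", "1"), ...],
-#                                    "deaths": [("2020-01-21", "0"), ...]}}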
-
-
-@cached(cache=TTLCache(maxsize=1, ttl=1800))
-async def get_locations():
- """
-    Returns a list containing parsed NYT data by US county. The data is cached for 30 minutes.
-
-    :returns: The locations for US counties.
-    :rtype: List[NYTLocation]
- """
- data_id = "nyt.locations"
- # Request the data.
- LOGGER.info(f"{data_id} Requesting data...")
- # check shared cache
- cache_results = await check_cache(data_id)
- if cache_results:
- LOGGER.info(f"{data_id} using shared cache results")
- locations = cache_results
- else:
- LOGGER.info(f"{data_id} shared cache empty")
- async with httputils.CLIENT_SESSION.get(BASE_URL) as response:
- text = await response.text()
-
- LOGGER.debug(f"{data_id} Data received")
-
- # Parse the CSV.
- data = list(csv.DictReader(text.splitlines()))
- LOGGER.debug(f"{data_id} CSV parsed")
-
- # Group together locations (NYT data ordered by dates not location).
- grouped_locations = get_grouped_locations_dict(data)
-
- # The normalized locations.
- locations = []
-
- for idx, (county_state, histories) in enumerate(grouped_locations.items()):
- # Make location history for confirmed and deaths from dates.
- # List is tuples of (date, amount) in order of increasing dates.
- confirmed_list = histories["confirmed"]
- confirmed_history = {date: int(amount or 0) for date, amount in confirmed_list}
-
- deaths_list = histories["deaths"]
- deaths_history = {date: int(amount or 0) for date, amount in deaths_list}
-
- # Normalize the item and append to locations.
- locations.append(
- NYTLocation(
- id=idx,
- state=county_state[1],
- county=county_state[0],
- coordinates=Coordinates(None, None), # NYT does not provide coordinates
- last_updated=datetime.utcnow().isoformat() + "Z", # since last request
- timelines={
- "confirmed": Timeline(
- timeline={
- datetime.strptime(date, "%Y-%m-%d").isoformat() + "Z": amount
- for date, amount in confirmed_history.items()
- }
- ),
- "deaths": Timeline(
- timeline={
- datetime.strptime(date, "%Y-%m-%d").isoformat() + "Z": amount
- for date, amount in deaths_history.items()
- }
- ),
- "recovered": Timeline(),
- },
- )
- )
- LOGGER.info(f"{data_id} Data normalized")
- # save the results to distributed cache
- # TODO: fix json serialization
- try:
- await load_cache(data_id, locations)
- except TypeError as type_err:
- LOGGER.error(type_err)
-
- return locations