Automatically add transactions from all major Israeli banks and credit card companies to an online worksheet
Internally we use israeli-bank-scrapers to scrape the data.
Having all your data in one place lets you view all of your expenses in a beautiful dashboard such as Google Data Studio, Azure Data Explorer dashboards, or Microsoft Power BI
This app requires some technical skills; if you prefer a GUI app, you can use Caspion instead.
Important: The current implementation assumes that you run the code on secure and trusted computers.
It’s a bad idea to put all your financial data and passwords in one place, especially with more than read-only access.
By using moneyman, you acknowledge that you are taking full responsibility for the code quality and will use it only after you review the code and validate that it’s secure.
Please use a proper secret management solution to save and pass the environment variables.
Moneyman can be configured to run automatically on a schedule, using the `scrape` GitHub workflow.
By default, this workflow runs every other day.
Since logs are public for public repos, most logs are off by default, and progress and error messages are sent via Telegram.
- Fork the moneyman repo to your account
- Add the following secrets to the forked repo's Actions secrets:
  - `ACCOUNTS_JSON` - so moneyman can log in to your accounts
  - `TELEGRAM_API_KEY` and `TELEGRAM_CHAT_ID` - so moneyman can send private logs and errors
  - The environment variables of the storage you want to use
- Wait for the workflow to be triggered by GitHub
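The steps above rely on a scheduled workflow file in the repo. A rough sketch of what such a workflow might look like is shown below - the cron expression, step names, and file layout here are assumptions for illustration, not the repo's actual workflow definition:

```yaml
# Hypothetical sketch of a scheduled scrape workflow (not the repo's real file)
name: scrape
on:
  schedule:
    - cron: "0 0 */2 * *" # assumed: runs every other day at midnight
  workflow_dispatch: # allow manual runs from the Actions tab
jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install && npm run build
      - run: npm run start
        env:
          ACCOUNTS_JSON: ${{ secrets.ACCOUNTS_JSON }}
          TELEGRAM_API_KEY: ${{ secrets.TELEGRAM_API_KEY }}
          TELEGRAM_CHAT_ID: ${{ secrets.TELEGRAM_CHAT_ID }}
```

The secrets referenced with `${{ secrets.… }}` are the ones added to the forked repo's Actions secrets in the steps above.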
- Clone this repo
- Run `npm install`
- Run `npm run build`
- Add your env variables (you can add them in a `.env` file in the project's root directory)
- Run `npm run start`
- Define the environment variables in a `.env` file
- Run `docker run --rm --env-file ".env" ghcr.io/daniel-hauser/moneyman:latest`

Note: docker doesn't support multiline environment variables (e.g. `GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY`); in that case you can run `docker-compose up` instead.
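A minimal `docker-compose.yml` for that case might look like the sketch below - the service name and file layout are assumptions, not the repo's actual compose file:

```yaml
# Hypothetical compose file; compose handles multiline values from .env
services:
  moneyman:
    image: ghcr.io/daniel-hauser/moneyman:latest
    env_file:
      - .env
```

With this file next to your `.env`, `docker-compose up` runs the same image as the `docker run` command above.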
We use the debug package for debug messages under the `moneyman:` namespace.
If you want to see them, use the `DEBUG` environment variable with the value `moneyman:*`.
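For example, a local run with all of moneyman's debug namespaces enabled could look like this:

```shell
# Enable all of moneyman's debug namespaces for this shell session
export DEBUG='moneyman:*'
# Then run the app as usual, e.g.: npm run start (or the docker command above)
echo "$DEBUG"
```

The `debug` package matches namespaces by prefix, so `moneyman:*` enables every logger under the `moneyman:` namespace.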
Use the following env vars to set up the data fetching:

A JSON array of accounts following this schema, with an additional `companyId` field whose value is a `companyType`.
Example:

```json
[
  { "companyId": "hapoalim", "userCode": "AB1234", "password": "p@ssword" },
  { "companyId": "visaCal", "username": "Ploni Almoni", "password": "p@ssword" }
]
```
| env variable name | default | description |
| --- | --- | --- |
| `ACCOUNTS_TO_SCRAPE` | `[]` | A comma-separated list of providers to take from `ACCOUNTS_JSON`; if empty, all accounts will be used |
| `DAYS_BACK` | `10` | The number of days back to scrape |
| `TZ` | `'Asia/Jerusalem'` | A timezone for the process - used for formatting the timestamp |
| `FUTURE_MONTHS` | `1` | The number of future months to scrape, starting from the day calculated using `DAYS_BACK` |
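Putting the fetching options together, a `.env` fragment might look like this - all values here are illustrative examples, not defaults:

```ini
# Hypothetical example values - adjust to your own accounts
ACCOUNTS_JSON=[{ "companyId": "hapoalim", "userCode": "AB1234", "password": "p@ssword" }]
ACCOUNTS_TO_SCRAPE=hapoalim
DAYS_BACK=30
TZ=Asia/Jerusalem
FUTURE_MONTHS=1
```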
We use Telegram to send you the update status.

- Create your bot following this
- Open this url: `https://api.telegram.org/bot<TELEGRAM_API_KEY>/getUpdates`
- Send a message to your bot and find the chat id
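The `getUpdates` response is JSON; the chat id can be pulled out of it like this. The payload below is a trimmed sample used for illustration, not a live API call - real responses contain many more fields:

```shell
# Trimmed sample of a getUpdates response (shape only)
RESPONSE='{"ok":true,"result":[{"message":{"chat":{"id":123456789}}}]}'
# Print every chat id in the response - this number is your TELEGRAM_CHAT_ID
echo "$RESPONSE" | python3 -c '
import json, sys
for update in json.load(sys.stdin)["result"]:
    print(update["message"]["chat"]["id"])
'
```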
Use the following env vars to set up:

| env variable name | description |
| --- | --- |
| `TELEGRAM_API_KEY` | The super secret api key you got from BotFather |
| `TELEGRAM_CHAT_ID` | The chat id |
TODO: Add a way to send a message to the bot to connect?
- Create a new data explorer cluster (can be done for free here)
- Create a database within your cluster
- Create an Azure Service Principal following steps 1-7 here
- Allow the service to ingest data into the database by running this:

  ```kusto
  .execute database script <|
  .add database ['<ADE_DATABASE_NAME>'] ingestors ('aadapp=<AZURE_APP_ID>;<AZURE_TENANT_ID>')
  ```

- Create a table and ingestion mapping by running this (replace `<ADE_TABLE_NAME>` and `<ADE_INGESTION_MAPPING>`):

  ````kusto
  .execute database script <|
  .drop table <ADE_TABLE_NAME> ifexists
  .create table <ADE_TABLE_NAME> (
      metadata: dynamic,
      transaction: dynamic
  )
  .create table <ADE_TABLE_NAME> ingestion json mapping '<ADE_INGESTION_MAPPING>' ```
  [
      { "column": "transaction", "path": "$.transaction" },
      { "column": "metadata", "path": "$.metadata" }
  ]
  ```
  ````
Feel free to add more columns to the table and the ingestion JSON mapping.
Use the following env vars to set up:

| env variable name | description |
| --- | --- |
| `AZURE_APP_ID` | The Azure application ID |
| `AZURE_APP_KEY` | The Azure application secret key |
| `AZURE_TENANT_ID` | The tenant ID of your Azure application |
| `ADE_DATABASE_NAME` | The name of the database |
| `ADE_TABLE_NAME` | The name of the table |
| `ADE_INGESTION_MAPPING` | The name of the JSON ingestion mapping |
| `ADE_INGEST_URI` | The ingest URI of the cluster |
Export transactions to a JSON file.

Use the following env vars to set up:

| env variable name | description |
| --- | --- |
| `LOCAL_JSON_STORAGE` | If truthy, all transactions will be saved to a `<process cwd>/output/<ISO timestamp>.json` file |
WIP
- Follow the instructions here to create a google service account.
- Create a new sheet and share it with your service account using the `GOOGLE_SERVICE_ACCOUNT_EMAIL`.
Use the following env vars to set up:

| env variable name | description |
| --- | --- |
| `GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY` | The super secret api key of your service account |
| `GOOGLE_SERVICE_ACCOUNT_EMAIL` | The service account's email address |
| `GOOGLE_SHEET_ID` | The id of the spreadsheet you shared with the service account |
| `WORKSHEET_NAME` | The name of the sheet you want to add the transactions to |
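A hypothetical `.env` fragment for this storage might look like the sketch below - every value is a made-up placeholder (the sheet id is the long token in the spreadsheet's URL). Note that the private key is multiline, which is exactly the case where plain `docker run` falls short and `docker-compose` is needed:

```ini
# Hypothetical placeholder values - replace with your service account's details
GOOGLE_SERVICE_ACCOUNT_EMAIL=my-bot@my-project.iam.gserviceaccount.com
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
GOOGLE_SHEET_ID=<the long id from the spreadsheet URL>
WORKSHEET_NAME=transactions
```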