Shadowfork devnet #191

Open
colinlyguo wants to merge 87 commits into develop from shadowfork-devnet
87 commits
c373a7f
disable deploying contracts & enable forking anvil
colinlyguo Sep 24, 2024
af7c4f0
rename
colinlyguo Sep 24, 2024
cff5d63
tmp commit
colinlyguo Sep 24, 2024
701e37e
tmp commit
colinlyguo Sep 24, 2024
b0deb29
bump versions
colinlyguo Sep 24, 2024
e9155c4
update scripts
colinlyguo Sep 25, 2024
d2459c5
add scripts
colinlyguo Sep 25, 2024
c0be49e
bug fixes
colinlyguo Sep 25, 2024
b6f19e7
bump l1-devnet version
colinlyguo Sep 25, 2024
2d6de58
fix versions
colinlyguo Sep 25, 2024
2af5803
bump version
colinlyguo Sep 26, 2024
ab783f2
update
colinlyguo Sep 26, 2024
0d979bb
update envs
colinlyguo Sep 26, 2024
ea97c7e
update script
colinlyguo Sep 26, 2024
0401b91
tmp commit
colinlyguo Sep 26, 2024
86dcd80
bump version
colinlyguo Sep 26, 2024
654f0f5
bump version
colinlyguo Sep 26, 2024
e8ee711
bump rollup-node version
colinlyguo Sep 26, 2024
a495038
update
colinlyguo Sep 26, 2024
e8944f2
fixes
colinlyguo Sep 26, 2024
f8818e8
fix
colinlyguo Sep 26, 2024
8ee6218
update L1_SCROLL_CHAIN_PROXY_ADDR
colinlyguo Sep 27, 2024
a023748
fix a bug
colinlyguo Sep 27, 2024
cf97c87
bump version
colinlyguo Sep 27, 2024
1b7a140
fix bugs
colinlyguo Sep 27, 2024
e4a2000
bump version
colinlyguo Sep 28, 2024
9bb3828
bump version
colinlyguo Sep 28, 2024
9d66f87
fix
colinlyguo Sep 28, 2024
2f8ffc6
tweak
colinlyguo Sep 28, 2024
2ef78cc
fix
colinlyguo Sep 28, 2024
9aae560
fixes
colinlyguo Sep 28, 2024
3e36713
fix
colinlyguo Sep 28, 2024
bdf8c93
bump version
colinlyguo Sep 29, 2024
421ce64
fixes
colinlyguo Sep 29, 2024
d0616cc
update
colinlyguo Sep 29, 2024
a8787d7
bump version
colinlyguo Sep 29, 2024
7774e9b
bump version
colinlyguo Sep 29, 2024
934bb40
disable unused services
colinlyguo Sep 30, 2024
d1a5afe
fix
colinlyguo Oct 8, 2024
4358bf4
bump version
colinlyguo Oct 8, 2024
33581d9
bump version
colinlyguo Oct 8, 2024
4358979
bump version
colinlyguo Oct 8, 2024
0cf0687
fix bugs
colinlyguo Oct 8, 2024
7e47df9
update scripts
colinlyguo Oct 8, 2024
3542d92
update scripts
colinlyguo Oct 8, 2024
af7518d
fix
colinlyguo Oct 8, 2024
9d16a07
update base image of foundry scripts
colinlyguo Oct 8, 2024
135a12b
fix bugs
colinlyguo Oct 8, 2024
31465b3
fix
colinlyguo Oct 8, 2024
254ef2e
update env
colinlyguo Oct 8, 2024
8dc4ad1
fix
colinlyguo Oct 8, 2024
705c8f9
update env
colinlyguo Oct 8, 2024
4bb0207
bump version
colinlyguo Oct 8, 2024
cf42cb0
fix
colinlyguo Oct 8, 2024
e6ef1c6
update configs
colinlyguo Oct 9, 2024
2a1d085
fix bugs
colinlyguo Oct 9, 2024
6478f1d
bump version
colinlyguo Oct 9, 2024
28397a4
bump version
colinlyguo Oct 9, 2024
9b585f0
bump version
colinlyguo Oct 9, 2024
7501829
bump version
colinlyguo Oct 9, 2024
06444f0
fix bugs
colinlyguo Oct 9, 2024
81724e3
fix
colinlyguo Oct 9, 2024
bb07fd5
fix a bug
colinlyguo Oct 9, 2024
8facd5e
revert
colinlyguo Oct 9, 2024
9e4ccc5
Merge branch 'develop' into shadowfork-devnet
colinlyguo Oct 9, 2024
3260f88
change param
colinlyguo Oct 9, 2024
e0241e7
fix a bug
colinlyguo Oct 9, 2024
c410581
fix bugs
colinlyguo Oct 9, 2024
429f924
fix
colinlyguo Oct 9, 2024
881dda7
add curl install
colinlyguo Oct 9, 2024
1c1f23f
fix
colinlyguo Oct 9, 2024
647b02e
fix a bug
colinlyguo Oct 9, 2024
2c201fd
fix
colinlyguo Oct 9, 2024
bcb9785
fix a bug
colinlyguo Oct 9, 2024
215bb85
fix
colinlyguo Oct 9, 2024
b261f72
change configs
colinlyguo Oct 10, 2024
469d71d
fix cmd
colinlyguo Oct 10, 2024
7b6fe27
increase resources
colinlyguo Oct 10, 2024
ebb7dea
remove init datadir by genesis.json
colinlyguo Oct 10, 2024
dc4aa08
add init genesis back
colinlyguo Oct 10, 2024
f1becfc
fix
colinlyguo Oct 10, 2024
4182346
fix
colinlyguo Oct 10, 2024
0bd7870
fix
colinlyguo Oct 10, 2024
b21e51c
fix cmds
colinlyguo Oct 10, 2024
2fec7c9
fix script
colinlyguo Oct 10, 2024
0f39c53
remove --rollup.verify
colinlyguo Oct 10, 2024
ad80930
add block time
colinlyguo Oct 10, 2024
2 changes: 1 addition & 1 deletion charts/l1-devnet/Chart.yaml
@@ -2,7 +2,7 @@
apiVersion: v2
description: l1-devnet helm charts
name: l1-devnet
version: 0.0.3
version: 0.0.9
appVersion: v0.1.0
kubeVersion: ">=1.22.0-0"
maintainers:
2 changes: 1 addition & 1 deletion charts/l1-devnet/values.yaml
@@ -14,7 +14,7 @@ image:
tag: v0.0.4

command:
["/bin/bash", "-c", "anvil --host 0.0.0.0 --port 8545 --chain-id ${CHAIN_ID} --state /data/state.json --state-interval 60 --slots-in-an-epoch 3"]
["/bin/bash", "-c", "anvil --host 0.0.0.0 --port 8545 --chain-id ${CHAIN_ID} --fork-url ${L1_FULLNODE_RPC_ENDPOINT} --fork-block-number ${L1_SHADOWFORK_BLOCK_NUMBER} --state /data/state.json --state-interval 60 --slots-in-an-epoch 3 --block-time 12"]

envFrom:
- configMapRef:
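
For orientation, the new l1-devnet command simply runs anvil against a fork of an existing L1 node instead of a fresh chain. A minimal local sketch of the same invocation follows; the chain id, fork endpoint, and block number below are illustrative placeholders, not values from this PR (in the chart they come from the referenced ConfigMap).

# Illustrative values only; in the chart these come from the ConfigMap referenced under envFrom.
export CHAIN_ID=1
export L1_FULLNODE_RPC_ENDPOINT="https://l1-fullnode.example.invalid"
export L1_SHADOWFORK_BLOCK_NUMBER=20000000
anvil --host 0.0.0.0 --port 8545 \
  --chain-id "${CHAIN_ID}" \
  --fork-url "${L1_FULLNODE_RPC_ENDPOINT}" \
  --fork-block-number "${L1_SHADOWFORK_BLOCK_NUMBER}" \
  --state /data/state.json --state-interval 60 \
  --slots-in-an-epoch 3 --block-time 12
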
2 changes: 1 addition & 1 deletion charts/l2-sequencer/Chart.yaml
@@ -2,7 +2,7 @@
apiVersion: v2
description: l2-sequencer helm charts
name: l2-sequencer
version: 0.0.11
version: 0.0.41
appVersion: v0.1.0
kubeVersion: ">=1.22.0-0"
maintainers:
28 changes: 21 additions & 7 deletions charts/l2-sequencer/values.yaml
@@ -32,6 +32,8 @@ env:
value: "1000000"
- name: VERBOSITY
value: "3"
- name: L2_SHADOWFORK_BLOCK_NUMBER
value: "0xe9a5"

envFrom:
- configMapRef:
@@ -41,19 +43,26 @@ command:
[
"bash",
"-c",
"mkdir -p /l2geth/data/keystore && \
"(until geth attach http://localhost:$L2GETH_RPC_HTTP_PORT --exec \"eth.blockNumber\" > /dev/null 2>&1; do \
echo \"Waiting for RPC to become available...\"; \
sleep 5; \
done && \
echo \"RPC is now available. Executing debug_setHead...\" && \
geth attach http://localhost:$L2GETH_RPC_HTTP_PORT --exec \"debug.setHead('${L2_SHADOWFORK_BLOCK_NUMBER}')\" && \
echo \"debug_setHead executed.\") & \
mkdir -p /l2geth/data/keystore && \
mkdir -p /l2geth/data/geth && \
echo \"[Node.P2P] StaticNodes = $L2GETH_PEER_LIST\" > \"/l2geth/config.toml\" && \
cp -r /host/snapshot/* /l2geth/data/ && \
echo ${L2GETH_PASSWORD} > /l2geth/password && \
echo ${L2GETH_KEYSTORE} > /l2geth/data/keystore/keystore.json && \
echo ${L2GETH_NODEKEY} > /l2geth/data/geth/nodekey && \
geth --datadir \"/l2geth/data\" init /l2geth/genesis/genesis.json && \
geth --datadir \"/l2geth/data\" \
--port \"$L2GETH_P2P_PORT\" --nodiscover --syncmode full --networkid \"$CHAIN_ID\" \
--config \"/l2geth/config.toml\" \
--http --http.port \"$L2GETH_RPC_HTTP_PORT\" --http.addr \"0.0.0.0\" --http.vhosts=\"*\" --http.corsdomain \"*\" --http.api \"eth,scroll,net,web3,debug\" \
--pprof --pprof.addr \"0.0.0.0\" --pprof.port 6060 \
--ws --ws.port \"$L2GETH_RPC_WS_PORT\" --ws.addr \"0.0.0.0\" --ws.api \"eth,scroll,net,web3,debug\" \
--net.shadowforkpeers empty \
--unlock \"$L2GETH_SIGNER_ADDRESS\" --password \"/l2geth/password\" --allow-insecure-unlock --mine \
--ccc --ccc.numworkers 5 \
--gcmode archive \
Expand All @@ -64,7 +73,6 @@ command:
--gpo.percentile 20 \
--gpo.blocks 100 \
--l1.endpoint \"$L2GETH_L1_ENDPOINT\" --l1.confirmations \"$L2GETH_L1_WATCHER_CONFIRMATIONS\" --l1.sync.startblock \"$L2GETH_L1_CONTRACT_DEPLOYMENT_BLOCK\" \
--rollup.verify \
--metrics --metrics.expensive \
$L2GETH_EXTRA_PARAMS"
]
@@ -107,11 +115,11 @@ service:

resources:
requests:
memory: "150Mi"
cpu: "50m"
memory: "4Gi"
cpu: "1"
limits:
memory: "8Gi"
cpu: "100m"
cpu: "2"

defaultProbes: &default_probes
enabled: true
@@ -152,6 +160,12 @@ persistence:
type: configMap
name: wait-for-l1-script
defaultMode: "0777"
snapshot:
enabled: true
type: hostPath
hostPath: /var/scroll/l2geth/snapshot
mountPath: /host/snapshot
hostPathType: Directory

serviceMonitor:
main:
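
The most notable addition to the sequencer command is the backgrounded loop that waits for the local l2geth RPC to come up and then rewinds the head to L2_SHADOWFORK_BLOCK_NUMBER via debug.setHead. A standalone sketch of that pattern, assuming geth is reachable on localhost; the port and block number are placeholders:

# Minimal sketch of the rewind pattern; port and block number are illustrative.
L2GETH_RPC_HTTP_PORT=8545
L2_SHADOWFORK_BLOCK_NUMBER=0xe9a5
until geth attach "http://localhost:${L2GETH_RPC_HTTP_PORT}" --exec "eth.blockNumber" > /dev/null 2>&1; do
  echo "Waiting for RPC to become available..."
  sleep 5
done
echo "RPC is now available. Rewinding head to ${L2_SHADOWFORK_BLOCK_NUMBER}..."
geth attach "http://localhost:${L2GETH_RPC_HTTP_PORT}" --exec "debug.setHead('${L2_SHADOWFORK_BLOCK_NUMBER}')"

Backgrounding this loop in the chart lets geth start and restore from the hostPath snapshot first; the rewind only fires once the HTTP API answers.
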
2 changes: 1 addition & 1 deletion charts/rollup-node/Chart.yaml
@@ -2,7 +2,7 @@
apiVersion: v2
description: rollup-node helm charts
name: rollup-node
version: 0.0.11
version: 0.0.25
appVersion: v0.1.0
kubeVersion: ">=1.22.0-0"
maintainers:
194 changes: 194 additions & 0 deletions charts/rollup-node/templates/get-db-info-script.yaml
@@ -0,0 +1,194 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: get-db-info-script
data:
get-db-info.sh: |
#!/bin/bash

echo "Waiting for L1 contract to be ready..."
while true; do
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X POST --data '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}' -H "Content-Type: application/json" $L1_RPC_ENDPOINT)
if [ "$HTTP_CODE" -eq 200 ]; then
echo "L1 contract is ready!"
break
else
echo "L1 contract is not responding, HTTP code: $HTTP_CODE. Retrying in 5 seconds..."
sleep 5
fi
done

# Get last finalized batch
export LAST_FINALIZED_BATCH=$(cast call "${L1_SCROLL_CHAIN_PROXY_ADDR}" "lastFinalizedBatchIndex()(uint256)" --rpc-url ${L1_RPC_ENDPOINT} | awk '{print $1}')

# Calculate last committed batch
FIRST_UNCOMMITTED_BATCH=$((LAST_FINALIZED_BATCH + 1))
echo "Starting search from batch: $FIRST_UNCOMMITTED_BATCH"

LOOP_COUNT=0
MAX_LOOPS=1000
while [ $LOOP_COUNT -lt $MAX_LOOPS ]; do
BATCH_HASH=$(cast call "${L1_SCROLL_CHAIN_PROXY_ADDR}" "committedBatches(uint256)(bytes32)" "$FIRST_UNCOMMITTED_BATCH" --rpc-url ${L1_RPC_ENDPOINT})
echo "Batch $FIRST_UNCOMMITTED_BATCH - Hash: $BATCH_HASH"
if [[ $BATCH_HASH == "0x0000000000000000000000000000000000000000000000000000000000000000" ]]; then
echo "Found first uncommitted batch: $FIRST_UNCOMMITTED_BATCH"
break
fi
FIRST_UNCOMMITTED_BATCH=$((FIRST_UNCOMMITTED_BATCH + 1))
LOOP_COUNT=$((LOOP_COUNT + 1))
done

if [ $LOOP_COUNT -eq $MAX_LOOPS ]; then
echo "Reached maximum number of iterations ($MAX_LOOPS). Exiting loop."
exit 1
fi

export LAST_COMMITTED_BATCH=$((FIRST_UNCOMMITTED_BATCH - 1))
echo "Last committed batch: $LAST_COMMITTED_BATCH"

echo "SHADOW: Last Finalized Batch: $LAST_FINALIZED_BATCH Last Committed Batch: $LAST_COMMITTED_BATCH"

# Export the variables to be used by other processes
echo "export LAST_FINALIZED_BATCH=$LAST_FINALIZED_BATCH"
echo "export LAST_COMMITTED_BATCH=$LAST_COMMITTED_BATCH"

# Database copy logic
local_sql_run() {
psql "${SCROLL_ROLLUP_DB_CONFIG_DSN}" -Aqt -c "$@"
}

remote_sql_run() {
psql "${SCROLL_RDS_ROLLUP_NODE_DSN}" -Aqt -c "$@"
}

# Install PostgreSQL client
apt-get install -y postgresql-client

# Check database connections
echo "Checking database connections..."
if ! local_sql_run "SELECT 1;" > /dev/null 2>&1; then
echo "Error: Cannot connect to local PostgreSQL database. Please check your connection string."
echo "DSN: $SCROLL_ROLLUP_DB_CONFIG_DSN"
exit 1
fi

if ! remote_sql_run "SELECT 1;" > /dev/null 2>&1; then
echo "Error: Cannot connect to remote PostgreSQL database. Please check your connection string."
echo "DSN: $SCROLL_RDS_ROLLUP_NODE_DSN"
exit 1
fi

echo "Database connections successful. Proceeding with operations..."

# Get the bundle containing the last finalized batch
BUNDLE_INDEX=$(remote_sql_run "SELECT index FROM bundle WHERE end_batch_index = $LAST_FINALIZED_BATCH")

if [ -z "$BUNDLE_INDEX" ]; then
echo "Warning: No bundle found for the last finalized batch. Skipping bundle copy."
else
# Copy bundles
echo "SHADOW: Copying bundle containing last finalized batch (index $BUNDLE_INDEX)"
remote_sql_run "COPY (SELECT * FROM bundle WHERE index = $BUNDLE_INDEX) TO STDOUT WITH CSV HEADER" | local_sql_run "COPY bundle FROM STDIN WITH CSV HEADER"
fi

# Copy batches
echo "SHADOW: Copying batches [$LAST_FINALIZED_BATCH, $LAST_COMMITTED_BATCH]"
remote_sql_run "COPY (SELECT * FROM batch WHERE index >= $LAST_FINALIZED_BATCH AND index <= $LAST_COMMITTED_BATCH) TO STDOUT WITH CSV HEADER" | local_sql_run "COPY batch FROM STDIN WITH CSV HEADER"
local_sql_run "UPDATE batch SET rollup_status = 3, finalize_tx_hash = NULL, finalized_at = NULL, committed_at = NOW() WHERE index > $LAST_FINALIZED_BATCH"
local_sql_run "UPDATE batch SET proving_status = 1, prover_assigned_at = NULL, total_attempts = 0, active_attempts = 0, chunk_proofs_status = 1"

# Get the start_chunk_index for the batch between the last finalized batch and end_chunk_index for the last committed batch
# Note: The range [finalized, committed] is used to handle corner cases and ensure data integrity.
# This approach ensures that even when committed == finalized, we still retrieve the necessary parent chunk data.
# It helps to maintain data consistency and prevents potential issues in edge scenarios.
CHUNK_INDICES=$(local_sql_run "
SELECT
MIN(start_chunk_index) as start_chunk_index,
MAX(end_chunk_index) as end_chunk_index
FROM batch
WHERE index BETWEEN $LAST_FINALIZED_BATCH AND $LAST_COMMITTED_BATCH
")

# Extract start_chunk_index (first field) from CHUNK_INDICES
START_CHUNK_INDEX=$(echo $CHUNK_INDICES | cut -d '|' -f1)

# Extract end_chunk_index (second field) from CHUNK_INDICES
END_CHUNK_INDEX=$(echo $CHUNK_INDICES | cut -d '|' -f2)

# Print the values of START_CHUNK_INDEX and END_CHUNK_INDEX for verification
echo "Chunk index range: $START_CHUNK_INDEX to $END_CHUNK_INDEX"

# Copy chunks
echo "SHADOW: Copying chunks [$START_CHUNK_INDEX, $END_CHUNK_INDEX]"
remote_sql_run "COPY (SELECT * FROM chunk WHERE index >= $START_CHUNK_INDEX AND index <= $END_CHUNK_INDEX) TO STDOUT WITH CSV HEADER" | local_sql_run "COPY chunk FROM STDIN WITH CSV HEADER"

# Get the batch hash for the last finalized batch
LAST_FINALIZED_BATCH_HASH=$(local_sql_run "
SELECT hash
FROM batch
WHERE index = $LAST_FINALIZED_BATCH
")
echo "Last finalized batch hash: $LAST_FINALIZED_BATCH_HASH"

# Reset chunk status for chunks not in the last finalized batch
local_sql_run "UPDATE chunk SET proving_status = 1, prover_assigned_at = NULL, total_attempts = 0, active_attempts = 0 WHERE batch_hash != '$LAST_FINALIZED_BATCH_HASH'"

# Get total chunk count
TOTAL_CHUNKS=$(local_sql_run "SELECT COUNT(*) FROM chunk")

# Get unassigned chunks info
UNASSIGNED_CHUNKS_INFO=$(local_sql_run "
SELECT MIN(index), MAX(index), COUNT(*)
FROM chunk
WHERE proving_status = 1
")

MIN_UNASSIGNED_CHUNK_INDEX=$(echo $UNASSIGNED_CHUNKS_INFO | cut -d '|' -f1)
MAX_UNASSIGNED_CHUNK_INDEX=$(echo $UNASSIGNED_CHUNKS_INFO | cut -d '|' -f2)
UNASSIGNED_CHUNKS_COUNT=$(echo $UNASSIGNED_CHUNKS_INFO | cut -d '|' -f3)

# Check how many chunks still have verified status
VERIFIED_CHUNKS_INFO=$(local_sql_run "
SELECT MIN(index), MAX(index), COUNT(*)
FROM chunk
WHERE proving_status = 4
")

MIN_VERIFIED_CHUNK_INDEX=$(echo $VERIFIED_CHUNKS_INFO | cut -d '|' -f1)
MAX_VERIFIED_CHUNK_INDEX=$(echo $VERIFIED_CHUNKS_INFO | cut -d '|' -f2)
VERIFIED_CHUNKS_COUNT=$(echo $VERIFIED_CHUNKS_INFO | cut -d '|' -f3)

# Print summary
echo "----------------------------------------"
echo "Chunk Statistics Summary:"
echo "----------------------------------------"
echo "Total chunks: $TOTAL_CHUNKS"
echo ""
echo "Verified chunks:"
echo " Count: $VERIFIED_CHUNKS_COUNT"
if [ -n "$MIN_VERIFIED_CHUNK_INDEX" ] && [ -n "$MAX_VERIFIED_CHUNK_INDEX" ]; then
echo " Range: [$MIN_VERIFIED_CHUNK_INDEX, $MAX_VERIFIED_CHUNK_INDEX]"
else
echo " Range: N/A (no verified chunks)"
fi
echo ""
echo "Unassigned chunks:"
echo " Count: $UNASSIGNED_CHUNKS_COUNT"
if [ -n "$MIN_UNASSIGNED_CHUNK_INDEX" ] && [ -n "$MAX_UNASSIGNED_CHUNK_INDEX" ]; then
echo " Range: [$MIN_UNASSIGNED_CHUNK_INDEX, $MAX_UNASSIGNED_CHUNK_INDEX]"
else
echo " Range: N/A (no unassigned chunks)"
fi
echo "----------------------------------------"

# Get the start and end block numbers for the copied chunks
BLOCK_NUMBERS=$(local_sql_run "SELECT MIN(start_block_number), MAX(end_block_number) FROM chunk WHERE index >= $START_CHUNK_INDEX AND index <= $END_CHUNK_INDEX")
START_BLOCK_NUM=$(echo $BLOCK_NUMBERS | cut -d '|' -f1)
END_BLOCK_NUM=$(echo $BLOCK_NUMBERS | cut -d '|' -f2)

# Print block number range for verification
echo "Block number range: $START_BLOCK_NUM to $END_BLOCK_NUM"

# Copy blocks
echo "SHADOW: Copying blocks [$START_BLOCK_NUM, $END_BLOCK_NUM]"
remote_sql_run "COPY (SELECT * FROM l2_block WHERE number >= $START_BLOCK_NUM AND number <= $END_BLOCK_NUM) TO STDOUT WITH CSV HEADER" | local_sql_run "COPY l2_block FROM STDIN WITH CSV HEADER"
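
After the copy and status resets above, a quick sanity check against the local database can confirm the expected rows landed; a sketch reusing the script's DSN, tables, and exported batch indices (run in the same shell session, since it relies on those variables):

# Optional sanity check, reusing the script's DSN and the exported batch indices.
psql "${SCROLL_ROLLUP_DB_CONFIG_DSN}" -Aqt -c \
  "SELECT COUNT(*) FROM batch WHERE index BETWEEN ${LAST_FINALIZED_BATCH} AND ${LAST_COMMITTED_BATCH};"
psql "${SCROLL_ROLLUP_DB_CONFIG_DSN}" -Aqt -c \
  "SELECT COUNT(*) FROM chunk WHERE proving_status = 1;"
psql "${SCROLL_ROLLUP_DB_CONFIG_DSN}" -Aqt -c \
  "SELECT MIN(number), MAX(number) FROM l2_block;"
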
@@ -0,0 +1,59 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: take-over-commit-and-finalize-senders-script
data:
take-over-commit-and-finalize-senders.sh: |
#!/bin/bash

# Function to check if a variable is set
check_var() {
if [[ -z "${!1}" ]]; then
echo "Error: $1 is not set" >&2
exit 1
fi
}

# Check required variables
check_var L1_RPC_ENDPOINT
check_var L1_MAINNET_SCROLL_OWNER_ADDR
check_var L1_SCROLL_CHAIN_PROXY_ADDR
check_var L1_COMMIT_SENDER_ADDR
check_var L1_FINALIZE_SENDER_ADDR

# Set the balance of the Scroll owner
# We're setting it to 1000 ETH (0x3635C9ADC5DEA00000 in wei)
cast rpc --rpc-url "${L1_RPC_ENDPOINT}" anvil_setBalance "${L1_MAINNET_SCROLL_OWNER_ADDR}" 0x3635C9ADC5DEA00000

# Start impersonating the Scroll owner account
cast rpc --rpc-url "${L1_RPC_ENDPOINT}" anvil_impersonateAccount "${L1_MAINNET_SCROLL_OWNER_ADDR}"

# Add a new sequencer
cast send "${L1_SCROLL_CHAIN_PROXY_ADDR}" --rpc-url "${L1_RPC_ENDPOINT}" \
--from "${L1_MAINNET_SCROLL_OWNER_ADDR}" \
--unlocked \
"addSequencer(address)" \
"${L1_COMMIT_SENDER_ADDR}"

# Add a new prover
cast send "${L1_SCROLL_CHAIN_PROXY_ADDR}" --rpc-url "${L1_RPC_ENDPOINT}" \
--from "${L1_MAINNET_SCROLL_OWNER_ADDR}" \
--unlocked \
"addProver(address)" \
"${L1_FINALIZE_SENDER_ADDR}"

# Stop impersonating the L1_MAINNET_SCROLL_OWNER_ADDR account
cast rpc --rpc-url "${L1_RPC_ENDPOINT}" anvil_stopImpersonatingAccount "${L1_MAINNET_SCROLL_OWNER_ADDR}"

# Check if the new prover was successfully added
echo "SHADOW: isProver($L1_FINALIZE_SENDER_ADDR) = $(cast call "${L1_SCROLL_CHAIN_PROXY_ADDR}" --rpc-url "${L1_RPC_ENDPOINT}" \
"isProver(address)(bool)" \
"${L1_FINALIZE_SENDER_ADDR}")"

# Check if the new sequencer was successfully added
echo "SHADOW: isSequencer($L1_COMMIT_SENDER_ADDR) = $(cast call "${L1_SCROLL_CHAIN_PROXY_ADDR}" --rpc-url "${L1_RPC_ENDPOINT}" \
"isSequencer(address)(bool)" \
"${L1_COMMIT_SENDER_ADDR}")"

# Indicate that the process is complete
echo "SHADOW: done taking over L1 contracts"