
Cassandra

  1. Installation
  2. Admin commands
    1. Create repository
    2. Create snapshot
  3. Cluster commands
  4. Backup and restore
  5. Troubleshooting


2. Admin commands

  • Workshop

    https://github.com/DataStax-Academy/cassandra-workshop-series
    https://youkudbhelper.wordpress.com/2021/09/01/list-of-handful-cassandra-commands-to-improve-productivity/

  • Query without paging the output through "more"

    PAGING OFF;
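
    For example, in cqlsh (a small sketch; killrvideo.videos_by_tag is the table used elsewhere on this page):

    PAGING OFF;
    SELECT * FROM killrvideo.videos_by_tag;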

  • Create / drop index

    CREATE INDEX DeptIndex ON University.Student(dept);
    DROP INDEX IF EXISTS University.DeptIndex;
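
    With the index in place, the indexed column can be queried directly (a sketch reusing the table above; 'CS' is just an example value):

    SELECT * FROM University.Student WHERE dept = 'CS';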

  • Security / users

    CREATE USER robin WITH PASSWORD 'manager' SUPERUSER;
    CREATE USER laura WITH PASSWORD 'newhire';

    LIST USERS;
    LIST ALL PERMISSIONS ON dev.emp;
    LIST ALL PERMISSIONS OF laura;
    SELECT role FROM system_auth.roles;

    DESC KEYSPACES;
    LIST ROLES;
    SELECT keyspace_name, replication FROM system_schema.keyspaces;

    COPY system_auth.role_permissions (role, resource) TO '/media/data/output.csv' WITH HEADER = true;

    SELECT role, permissions FROM system_auth.role_permissions WHERE resource = 'data/ks_adcs_pbcdv01aws' ALLOW FILTERING;
    SELECT cluster_name, cql_version, data_center, dse_version, listen_address, native_transport_address FROM system.local;
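
    The LIST ... PERMISSIONS output is empty until something has been granted; a small sketch (dev.emp and laura reuse the names above):

    GRANT SELECT ON dev.emp TO laura;
    GRANT MODIFY ON KEYSPACE dev TO laura;
    LIST ALL PERMISSIONS OF laura;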


  • Check which hosts hold the data for a given partition key

    nodetool getendpoints killrvideo videos_by_tag 'cassandra'
    nodetool getendpoints killrvideo videos_by_tag 'datastax'
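
    To check several partition keys at once, a small shell loop (the tag values are just examples):

    for tag in cassandra datastax; do
        echo "== $tag =="
        nodetool getendpoints killrvideo videos_by_tag "$tag"
    done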


  • Hint query (pending hints metric)

    http://192.168.56.11:8888/MyCluster/cluster-metrics/lta/pending-hint-dispatcher
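
    Hints can also be checked from the node itself; a rough sketch using standard nodetool commands:

    nodetool statushandoff             # is hinted handoff enabled/running
    nodetool tpstats | grep -i hint    # hint-related thread pools and dropped counts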

  • Config files

Cassandra home: /var/lib/cassandra
config:         /var/lib/cassandra/resources/cassandra/conf/cassandra.yaml
dse:            /var/lib/cassandra/resources/dse
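
A quick way to confirm the key settings in that cassandra.yaml (path as listed above):

grep -E 'cluster_name|listen_address|endpoint_snitch|seeds' /var/lib/cassandra/resources/cassandra/conf/cassandra.yaml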


  • Change DC name

    Add the line below to resources/cassandra/conf/cassandra-env.sh and restart the node:

    JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_rack=true -Dcassandra.ignore_dc=true"
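
    Applying the change from the shell could look like this (a sketch; the path and the dse start/stop commands are the ones used elsewhere on this page):

    echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_rack=true -Dcassandra.ignore_dc=true"' >> /var/lib/cassandra/resources/cassandra/conf/cassandra-env.sh
    /var/lib/cassandra/bin/dse cassandra-stop
    /var/lib/cassandra/bin/dse cassandra -k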

 

  • Check keyspace size

#!/bin/sh
# Summarise live/total disk usage per table from `nodetool cfstats -H`.
nodetool cfstats -H | awk -F ' *: *' '
BEGIN { print "Keyspace,Table,Live,Total" }
/^Keyspace : /        { ks = $2 }   # start of a keyspace section
/\tTable:/            { t  = $2 }   # table name
/\tSpace used .live/  { sp = $2 }   # "Space used (live)"
/\tSpace used .total/ { print ks "," t "," sp "," $2 }
' | column -s , -t
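
For a quick cross-check that does not go through nodetool, disk usage per keyspace can also be read straight from the data directory (the path below is an assumption; use the data_file_directories value from cassandra.yaml):

du -sh /var/lib/cassandra/data/*/ | sort -h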

3. Cluster tasks

  • 3.1 Start/Stop DSE

     nohup /var/lib/cassandra/bin/dse cassandra -k </dev/null >/dev/null 2>&1 &   # start in background (-k enables DSE Analytics/Spark)
    /var/lib/cassandra/bin/dse cassandra -k
    /var/lib/cassandra/bin/dse cassandra-stop
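
    A small sketch to confirm the node has come up after a start (assumes nodetool is on the PATH and this is run on the node itself):

    until nodetool status 2>/dev/null | grep -q '^UN'; do sleep 5; done; echo "node is Up/Normal"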

    3.2 Check sync status

    nodetool repair
    nodetool cfstats              -> compare partition counts
    nodetool status keyspace_name
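
    One way to spot-check that replicas agree (a sketch; killrvideo.videos_by_tag is the example table used above):

    cqlsh -e "CONSISTENCY ALL; SELECT count(*) FROM killrvideo.videos_by_tag;"
    cqlsh -e "CONSISTENCY ONE; SELECT count(*) FROM killrvideo.videos_by_tag;"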
    3.3 Adding new DC to existing cluster

Existing DC (itsc):

192.168.56.201    cass01
192.168.56.202    cass02
192.168.56.203    cass03

New DC (lta):

192.168.56.211    lta01
192.168.56.212    lta02
192.168.56.213    lta03

  1. On the new nodes, add the new seed to cassandra.yaml. Example: 192.168.56.211 (lta01) is a seed node in the new DC.

    seeds: '192.168.56.201,192.168.56.211'
    listen_address: lta01    (the node's own hostname/IP, not localhost)
    endpoint_snitch: GossipingPropertyFileSnitch
    # comment out the initial_token property

  2. Modify cassandra-rackdc.properties and set the new DC and rack.

    dc=newdc
    rack=newrack

    Migration note: the GossipingPropertyFileSnitch always loads cassandra-topology.properties when that file is present. Remove the file from every node of any new cluster, or of any cluster migrated from the PropertyFileSnitch.

  3. Make the following changes in the existing datacenter.

    On the nodes in the existing datacenter (cass01, cass02, cass03), update the seeds list in /var/lib/cassandra/resources/cassandra/conf/cassandra.yaml to include the seed nodes of the new datacenter:

    • seeds: "192.168.56.201,192.168.56.202,192.168.56.203,192.168.56.211,192.168.56.212,192.168.56.213"

    Add the new datacenter definition to the properties file that matches the snitch used in the cluster (cassandra-rackdc.properties for GossipingPropertyFileSnitch, cassandra-topology.properties for PropertyFileSnitch; see 3.7). If changing snitches, complete the snitch-switching procedure before continuing.

  4. Slowly restart the existing nodes one by one and make sure all nodes come back up.

    nodetool status
    nodetool ring

  5. Modify the existing keyspaces to replicate to the new datacenter.

    ALTER KEYSPACE system_auth WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'itsc': 2, 'lta': 3};

  6. On each node in the existing datacenter, run:

    nodetool repair

  7. Check replication of the remaining system and DSE keyspaces.

  8. Update their replication for the new datacenter (replace dc1/dc2 with the actual datacenter names) and repair each one:

    ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '3', 'dc2': '3'};

    nodetool repair -pr system_auth 

    ALTER KEYSPACE dse_perf WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '2', 'dc2': '2'}; 

    ALTER KEYSPACE system_traces WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '1', 'dc2': '1'}; 

    ALTER KEYSPACE dse_security WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '3', 'dc2': '3'}; 

    ALTER KEYSPACE dse_leases WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '3', 'dc2': '3'}; 

    ALTER KEYSPACE system_distributed WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '3', 'dc2': '3'};


  9. Repair each of the updated keyspaces (a scripted form is sketched below):

    nodetool repair -pr dse_perf
    nodetool repair -pr dse_security
    nodetool repair -pr dse_leases
    nodetool repair -pr system_distributed
    nodetool repair -pr system_traces
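
    A minimal loop form of steps 8-9 (keyspace list taken from above; assumes nodetool is on the PATH):

    for ks in system_auth dse_perf dse_security dse_leases system_distributed system_traces; do
        nodetool repair -pr "$ks"
    done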

    3.4 Remove Node

On the node you want to remove, run:

nodetool decommission --force


       3.5 Remove a node (node still up)

nodetool decommission streams the node's data to the remaining replicas and makes the node ready to go offline.

nodetool -u cassandra -pw password decommission
or
nodetool -h remove_hostname decommission
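
Decommission can take a while; a rough way to watch the streaming progress while it runs (standard nodetool command, run on the same node):

watch -n 10 nodetool netstats    # outbound streams disappear once decommission finishes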


       3.6 Node already offline (e.g. hardware failure)

Get the host ID of the dead node from nodetool status (Host ID column), then:

nodetool removenode 2083c0e3-ac61-44e6-938a-d067b2d7d3ce

nodetool assassinate <ip_address>   -> last resort, like kill -9, when removenode cannot complete
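
removenode streams data between the remaining replicas before it completes; a sketch for checking on it:

nodetool removenode status
nodetool removenode force    # only if the removal is stuck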


      3.7 Convert to cluster


datacenter/rack config: /etc/cassandra/default.conf/cassandra-rackdc.properties
cassandra.yaml:         /etc/cassandra/default.conf/cassandra.yaml

The endpoint_snitch setting in cassandra.yaml determines which properties file is read:

GossipingPropertyFileSnitch -> cassandra-rackdc.properties
PropertyFileSnitch          -> cassandra-topology.properties
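
A minimal cassandra-rackdc.properties for the GossipingPropertyFileSnitch case (the DC/rack names here are only examples):

# /etc/cassandra/default.conf/cassandra-rackdc.properties
dc=dc1
rack=rack1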


      3.8  Configure Node SSL


Prepare the CA certificate configuration file (file name: gen_ca_cert.conf) with the content below:

    [ req ]
    distinguished_name = req_distinguished_name
    prompt             = no
    output_password    = cassandra
    default_bits       = 2048

    [ req_distinguished_name ]
    C  = US
    ST = WA
    L  = SEA
    O  = DataStax Academy
    OU = KillrVideoCluster
    CN = KillrVideoClusterMasterCA
    emailAddress = hanh@abc.com

 

    openssl req -config gen_ca_cert.conf -new -x509 -keyout ca_key -out ca_cert -days 365

    

    Generating a 2048 bit RSA private key

    ...................+++

    ........................................................................................................+++

    writing new private key to 'ca_key'

    -----

On the same machine, create a keystore file for each node:
 

keytool -genkeypair -keyalg RSA -alias node1 -keystore node1-server-keystore.jks -storepass cassandra -keypass cassandra -validity 365 -keysize 2048 -dname "CN=node1, OU=SSL-verification-cluster, O=KillrVideo, C=US"

keytool -genkeypair -keyalg RSA -alias node3 -keystore node3-server-keystore.jks -storepass cassandra -keypass cassandra -validity 365 -keysize 2048 -dname "CN=node3, OU=SSL-verification-cluster, O=KillrVideo, C=US"
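
The keystores above still contain self-signed certificates; the usual next step is to sign each node certificate with the CA created earlier and import the chain back into the keystore. A sketch for node1 (the intermediate file names are assumptions, not from the original page):

keytool -certreq -alias node1 -file node1_cert_sr -keystore node1-server-keystore.jks -storepass cassandra
openssl x509 -req -CA ca_cert -CAkey ca_key -in node1_cert_sr -out node1_cert_signed -days 365 -CAcreateserial -passin pass:cassandra
keytool -importcert -alias rootCa -file ca_cert -keystore node1-server-keystore.jks -storepass cassandra -noprompt
keytool -importcert -alias node1 -file node1_cert_signed -keystore node1-server-keystore.jks -storepass cassandra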

 

