Building and Running Cloud Native Cassandra

This session will re-evaluate Cassandra’s relationship with runtime and build systems, pointing out ways that the existing systems fall down and identifying avenues for improvement. Over the past few years, a number of platforms have emerged for running user code. Container runtimes like Docker, container orchestrators such as Kubernetes, and metrics collection agents like Prometheus and Spectator have all gained popularity and mind-share. Cassandra functionality such as metrics, bootstrapping, and monitoring integrates with the newer paradigms, but in an ad-hoc and improvised fashion. By taking a purposeful approach to integrating with these new methods of deployment, the Cassandra community can more fully benefit from their advertised strengths. The Cassandra build system, based on Ant+Ivy, dates to the early 2000s and reflects legacy complexity that could be avoided with modern build systems. Cassandra’s system package builds are not much better and often fail to integrate with industry standards such as systemd. Iterating on the existing systems is difficult, and this technical debt slows innovation in our build systems. In this talk, we propose solutions to make building, deploying and monitoring Cassandra easy and low-overhead, while taking advantage of cloud advancements wherever possible.

  1. 1. Building and Running Cloud Native Cassandra. Vinay Chella, Joey Lynch, Distributed Database Engineers, Netflix. NGCC 2019
  2. 2. Speakers: Vinay Chella, Distributed Systems Engineer, focusing on Apache Cassandra and data abstractions, Cloud Data Engineering, Netflix. Joey Lynch, Distributed Systems Engineer, distributed system addict and data wrangler, Cloud Data Engineering, Netflix.
  3. 3. Goal: help align the project with cloud principles. Agenda Outline — What is Cloud Native? — Cassandra’s Rough Edges ◆ Development ◆ Packaging ◆ Starting a Cluster ◆ Running a Cluster — Our Proposed Solutions
  4. 4. What even is “Cloud Native”?
  5. 5. Any hardware configuration. Any operating system. Cattle, not pets. Will die constantly.
  6. 6. Clouds provide mappings on top of “datacenters”. Zones will fail. Regions will fail. “Durable” storage is not so durable.
  7. 7. Develop -> Package -> Starting -> Scaling
  8. 8. Develop -> Package -> Starting -> Scaling
  9. 9. Friction for New Contributions: docs (discovery, building a mental model), code (understand and modify), build (create artifacts: jar, pkg, container), test (confidence they haven’t broken something)
  10. 10. First interactions? — I don’t know how the database works — Where can I run v4.0?
  11. 11. How Netflix Does This
  12. 12. How Netflix Does This
  13. 13. How Netflix Does This: Markdown, Pull Request, Immediate Deploy
  14. 14. Proposal (docs / code / build / test): Move the svn website to a git branch in the main repo. Replace sphinx with markdown (pandoc)? Automatically build and publish docs (jenkins + docker?)
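A rough sketch of what a sphinx-free docs build could look like, assuming markdown sources and a pandoc container; the doc/source and build/html paths are illustrative, not the project's actual layout:

    # Illustrative only: render markdown pages to standalone HTML with pandoc in Docker
    mkdir -p build/html
    for page in doc/source/*.md; do
        docker run --rm -v "$PWD:/work" -w /work pandoc/core \
            --standalone --to html5 "$page" -o "build/html/$(basename "${page%.md}").html"
    done

A Jenkins (or any CI) job could run the same loop in a container and publish build/html to the website branch.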
  15. 15. Proposal docs code build test
  16. 16. Non-standard layout. 2260-line XML build.xml. Checked-in library .jars. Checked-in python library.
  17. 17. Can use Gradle for Cassandra too: import build.xml, add any jars we want at build time.
  18. 18. Can use Gradle for Cassandra too: build debs directly.
  19. 19. Proposal (docs / code / build / test): Start by importing build.xml. Gradually modernize the build system via Gradle multi-project builds. Use out-of-the-box dependency locking.
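A minimal sketch of that first step, assuming a top-level build.gradle next to the existing build.xml; the target-name prefix and the example dependency are illustrative:

    // build.gradle (illustrative sketch only)
    plugins { id 'java' }

    // Reuse the existing Ant targets while the build is migrated incrementally;
    // prefix them so they do not clash with the java plugin's own task names.
    ant.importBuild('build.xml') { antTargetName ->
        'ant-' + antTargetName
    }

    repositories { mavenCentral() }

    // Declare jars at build time instead of checking them into lib/
    dependencies {
        implementation 'com.google.guava:guava:27.0-jre'
    }

    // Gradle's out-of-the-box dependency locking for reproducible builds
    dependencyLocking {
        lockAllConfigurations()
    }

With something like this in place, gradle ant-jar would drive the existing Ant target while new Gradle sub-projects are introduced one at a time.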
  20. 20. Proposal docs code build test
  21. 21. Jenkins: 10-hour builds. Who can run them?
  22. 22. CircleCI: 20-minute builds. Anyone can run them.
  23. 23. Proposal (docs / code / build / test): Keep moving with the CircleCI integration for testing (artifacts stay on Jenkins). Run unit tests on every pull request (5 minutes, doable with the free tier).
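Cassandra already carries a CircleCI configuration in-tree; a stripped-down sketch of what a per-pull-request unit-test job might look like (the image, job names, and install step are illustrative, not the project's actual config):

    # .circleci/config.yml (illustrative sketch only)
    version: 2.1
    jobs:
      unit-test:
        docker:
          - image: cimg/openjdk:8.0
        steps:
          - checkout
          - run: sudo apt-get update && sudo apt-get install -y ant
          - run: ant test
    workflows:
      per-pr:
        jobs:
          - unit-test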
  24. 24. Develop -> Distribute -> Launch -> Scale
  25. 25. Cloud Native Distribution (source / packages / containers): Building from source must be easy. Integration with package managers: apt, yum, brew ... Having Docker containers for testing as well as production.
  26. 26. Cloud Native Distribution (source / packages / containers): Already talked about this; we can improve in this area with the previous proposals.
  27. 27. Cloud Native Distribution (source / packages / containers): Already have pretty great deb and rpm integration. Custom packages are slightly difficult.
  28. 28. Autostart is not Cloud Native
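What that means in practice, sketched with stock Debian mechanisms (illustrative only; the real fix belongs in the packaging itself, per the proposal on slide 31):

    # Illustrative: keep apt from starting Cassandra during install (e.g. while baking an image)
    printf '#!/bin/sh\nexit 101\n' > /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d
    apt-get install -y cassandra
    rm /usr/sbin/policy-rc.d
    systemctl disable cassandra   # no start on boot; start explicitly once the node is configured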
  29. 29. Cloud Native Distribution (source / packages / containers): Not doing well at containers.
  30. 30. How Netflix Does Testing Images: ~5s startup*, easy to customize, minimal memory footprint. (*skip_wait_for_gossip_to_settle=0, disable vnodes, etc.)
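The Netflix test image itself is internal, but a rough equivalent can be sketched with the public Docker Hub image; the flags below mirror the tweaks in the footnote (a single token instead of vnodes, skip the gossip settle wait):

    # Illustrative single-node test container using the public cassandra image
    docker run -d --name cassandra-test \
        -e CASSANDRA_NUM_TOKENS=1 \
        -e JVM_EXTRA_OPTS="-Dcassandra.skip_wait_for_gossip_to_settle=0" \
        -p 9042:9042 \
        cassandra:3.11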
  31. 31. Proposal (source / packages / containers): Do the earlier suggestions. Stop autostarting Cassandra from the packages. Publish an official testing container. Publish official kubernetes/swarm containers.
  32. 32. Develop -> Distribute -> Launch -> Scale
  33. 33. Launching Nodes (seeds / tokens / process): Which nodes will initiate the cluster? How will we assign tokens? How do we manage the Cassandra process?
  34. 34. How does Netflix launch? (seeds / tokens / process): Ask the AWS cloud control plane. Ask the CDE cluster control plane. Use systemd.
  35. 35. Declarative Control Plane
  36. 36. Data Migration? Avoid Cassandra streaming. Use S3, use EBS. Use direct file copy (sendfile). Verify with xxHash.
  37. 37. Data Migration: The Numbers. Naive solution: sequential transfer. Parallel S3 download, sequential fixup. Parallel download and zone-wide fixup.
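Netflix's actual tooling is internal, but the shape of the S3 path can be sketched with stock tools (the bucket, paths, and digest manifest are hypothetical): pull the snapshot down in parallel, then verify it before the node joins the ring.

    # Illustrative sketch only: parallel snapshot download from S3, then xxHash verification
    aws s3 sync s3://example-backups/cluster1/snapshot-2019-09-01/ /var/lib/cassandra/data/
    xxhsum --check /var/lib/cassandra/data/snapshot.xxh64   # hypothetical manifest of per-file digests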
  38. 38. Process Control? Just use systemd: handles automatic restarts, handles console logs. It’s great.
     ## /lib/systemd/system/cassandra.service
     [Unit]
     Description=Cassandra
     # Don't ever give up on starting Cassandra
     StartLimitInterval=0
     StartLimitIntervalSec=1800

     [Service]
     RuntimeDirectory=cassandra
     ExecStart=/usr/bin/cassandra_wrapper
     LimitNOFILE=1048576
     LimitNPROC=131072
     LimitMEMLOCK=infinity
     # Always restart Cassandra.
     # If you want to stop Cassandra, either do:
     #  * nodetool drain
     #  * systemctl stop cassandra
     Restart=always
     RestartSec=30
     StartLimitBurst=4
  39. 39. Process Control? Reload the OS page cache with happycache. Guarantee a single running process with flocked pidfiles.
     ## /usr/bin/cassandra_wrapper

     # OS page cache loading
     cd ${CASSANDRA_HOME}/data
     set +e
     if [ -e .happycache.gz ]; then
         happycache load
         # Delete the OS cache file even if the load fails
         rm -f .happycache.gz
     fi
     set -e

     ## Pid file for preventing multiple processes starting at once
     PIDFILE=/run/cassandra/cassandra.pid
     exec 3<> "${PIDFILE}"
     if ! flock -n 3; then
         PID=`cat ${PIDFILE}`
         echo "Cassandra already running (best guess is PID: ${PID}), cannot start"
         exit 1
     fi
     echo $$ > ${PIDFILE}

     # Start Cassandra
     ...
  40. 40. Proposal (seeds / tokens / process): Sidecar + native cloud providers. Add a TokenProvider interface; should happen as part of (#13701). More full-SSTable streaming. Just add a systemd unit.
  41. 41. Develop -> Distribute -> Launch -> Scale
  42. 42. Scaling Cassandra (metrics / monitor / integrity / administer): Operational insights into performance. What is happening with the cluster? Data integrity (failure, corruption). Must be easy to configure and restart.
  43. 43. Scaling Cassandra (metrics / monitor / backup / administer): Interoperable with JMX, but not statsd, prometheus, or spectator. Performant export is a problem.
  44. 44. How does Netflix get metrics?
     # cassandra-env.sh patch
     # Pull in any agents present in CASSANDRA_HOME
     for agent_file in ${CASSANDRA_HOME}/agents/*.jar; do
         if [ -e "${agent_file}" ]; then
             base_file="${agent_file%.jar}"
             if [ -s "${base_file}.options" ]; then
                 options=`cat ${base_file}.options`
                 agent_file="${agent_file}=${options}"
             fi
             JVM_OPTS="$JVM_OPTS -javaagent:${agent_file}"
         fi
     done

     for agent_file in ${CASSANDRA_HOME}/agents/*.so; do
         if [ -e "${agent_file}" ]; then
             base_file="${agent_file%.so}"
             if [ -s "${base_file}.options" ]; then
                 options=`cat ${base_file}.options`
                 agent_file="${agent_file}=${options}"
             fi
             JVM_OPTS="$JVM_OPTS -agentpath:${agent_file}"
         fi
     done
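With the patch above, wiring up an exporter becomes a file drop. For example, the Prometheus JMX exporter agent could be added like this (the version, port, and config path are illustrative):

    # Illustrative: add the Prometheus JMX exporter via the agents/ convention above
    cp jmx_prometheus_javaagent-0.12.0.jar ${CASSANDRA_HOME}/agents/
    # The .options file becomes the "=<options>" suffix of the -javaagent flag
    echo "7070:${CASSANDRA_HOME}/conf/jmx_exporter.yaml" \
        > ${CASSANDRA_HOME}/agents/jmx_prometheus_javaagent-0.12.0.options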
  45. 45. Proposal (metrics / monitor / integrity / administer): Allow pluggability of histograms. Official exporter agents + pluggable agents. HTTP metrics endpoint on the sidecar.
  46. 46. Scaling Cassandra (metrics / monitor / integrity / administer): No real way to know the cluster view. Is the ring healthy? Where is my data located?
  47. 47. How does Netflix monitor? Cluster Health
  48. 48. How does Netflix monitor?
  49. 49. How do other DBs do this? CockroachDB out of the box
  50. 50. How do other DBs do this? Elasticsearch plugs in Cerebro
  51. 51. Proposal (metrics / monitor / integrity / administer): Ship out-of-the-box health and status dashboards. JSON metrics/status may be sufficient. The sidecar may be a great place for this.
  52. 52. Scaling Cassandra (metrics / monitor / integrity / administer): Nodes and clusters are constantly failing. Lacks critical backup, restore, and repair scheduling functionality. No standard way to configure and restart nodes in a cluster.
  53. 53. How does Netflix back up? — Point in time ◆ Incremental snapshot ◆ Optimal performance — Plugins for different clouds
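For context, a hedged sketch of the open-source primitives such a point-in-time scheme builds on (the keyspace name and tag are illustrative):

    # Illustrative: the built-in Cassandra primitives behind point-in-time backups
    nodetool snapshot -t backup-2019-09-01 my_keyspace    # hard-linked, point-in-time snapshot
    # plus incremental_backups: true in cassandra.yaml to hard-link each flushed SSTable

The cloud-specific plugins then ship those files to object storage.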
  54. 54. How does Netflix repair? — Distributed Control Plane — Every node follows repair state machine (#14346)
  55. 55. How does Netflix administer? Declarative, desire-based.
  56. 56. Distributed State Machines
  57. 57. Proposal (metrics / monitor / integrity / administer): Add native cloud backup plugins. Continue investing in the Apache Cassandra Sidecar project: backups, repairs, restarts.
  58. 58. Develop -> Package -> Starting -> Scaling
  59. 59. Our users need better solutions. We need the community’s help!
  60. 60. With targeted investment we can solve this problem
  61. 61. Discussion?
