Train-Wreck to Bullet-Train: Dashboard Evolution for Very Large Datasets
Do you have dashboards that load too slowly to use comfortably, especially with wide time-picker windows? Do you understand how the Splunk dashboard architecture works? Have you performed a tradeoff analysis of caches vs. lookups vs. data models vs. data model acceleration? Come hear a "pain-by-numbers" use case about the evolution of a complex dashboard that processes very large datasets. Yes, this is related to the previous DASUG event/project.
* General requirements and architecture
* Base/Post-process Searches and Data Cubes
* The Rumsfeld Doctrine: Unknowns going into the design (future-casting)
* Initial dashboard (cache-based)
* Performance Problems
* Tradeoff Analysis
* Data Model Acceleration (DMA)
* Performance Improvements
* Dashboard Tradeoffs / Lessons Learned
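One of the agenda topics, base/post-process searches, can be sketched in Splunk Simple XML: a single base search runs once, and each panel post-processes its cached results instead of issuing its own search. The index, sourcetype, and field names below are placeholders, not from the talk itself.

```xml
<dashboard>
  <label>Base/Post-Process Sketch</label>
  <!-- Base search: runs once; results are shared by all panels that reference it.
       Best practice is a transforming command (stats) so only the summarized
       cube, not raw events, is held for post-processing. -->
  <search id="base">
    <query>index=web sourcetype=access_combined | stats count by status, host</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <title>Requests by Status</title>
        <!-- Post-process search: re-aggregates the base results; no new search job. -->
        <search base="base">
          <query>stats sum(count) AS requests by status</query>
        </search>
      </chart>
    </panel>
    <panel>
      <table>
        <title>Requests by Host</title>
        <search base="base">
          <query>stats sum(count) AS requests by host | sort - requests</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```

The tradeoff the talk explores: post-processing is fast because it reuses one result set, but that result set must stay small and pre-aggregated, which is exactly where data model acceleration enters the analysis for very large datasets.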
Greg Smith has 34 years of industry experience with proven abilities to rapidly assess the magnitude and scope of complex projects, develop customer relationships, work with customers to shape needs and requirements, and then either implement or drive teams to satisfy those requirements. His many years of systems engineering and technical management have made him adept at seeing the big picture, understanding constraints (technical, cost, schedule, resources, "turf"), seeing "white space", "herding cats", and solving complex problems. He is a certified Splunk Admin with significant training toward Splunk Sales Engineer, Splunk Architect, and Splunk Core Consultant. He tries to remain fit through swimming and hiking, adores his wife of 39 years, dotes on his daughter, and loves helping people, starting from where they are.
Gregg Woodcock is a gun-toting, Christian, homeschooling father of three whose 30+ years of IT experience (primarily in telecom) and early adoption of Splunk (v3) have positioned him on the leading edge of the Big Data explosion and uniquely qualified him to launch "Splunxter", a Splunk-focused professional services and contracting company headquartered in the Dallas area. He is the founder and chairman of the Dallas-area Splunk User Group, a two-time speaker at "Splunk Live!", a twice-invited speaker for LTE North America, an instructor with Global Big Data Boot Camps, an occasional street preacher, and the current Chairman of the Constitution Party of Texas. He is a genuine evangelist of all the best things in life, and that of course includes Splunk!
Leader, Dallas Area Splunk User Group