Boost Productivity with Big:eye Pro: Best Practices

How to Use Big:eye Pro: Tips, Tricks, and Setup

What Big:eye Pro does

Big:eye Pro is a monitoring and observability tool for applications and infrastructure. It collects metrics, traces, and logs so you can detect issues sooner, optimize performance, and troubleshoot faster.

Quick setup (assumes Linux server + web UI)

  1. Sign up and create a project

    • Visit the Big:eye Pro web console and create an account (or sign in to your organization). Create a new project/environment for the service you’ll monitor.
  2. Install the agent

    • On the target host, download the official Big:eye Pro agent package (RPM/DEB or tarball).
    • Install and start the agent as a service:

      ```bash
      # Debian/Ubuntu (example)
      sudo dpkg -i bigeyepro-agent.deb
      sudo systemctl enable --now bigeyepro-agent

      # RHEL/CentOS (example)
      sudo rpm -ivh bigeyepro-agent-<version>.rpm
      sudo systemctl enable --now bigeyepro-agent
      ```
    • Confirm the agent is running:

      ```bash
      sudo systemctl status bigeyepro-agent
      ```
  3. Authenticate the agent with an API key

    • In the Big:eye Pro console, create an API key for the project.
    • Place the key in the agent config (commonly /etc/bigeyepro/agent.conf) or export as an env var:

      ```bash
      export BIGEYE_API_KEY="your_api_key_here"
      sudo systemctl restart bigeyepro-agent
      ```
  4. Instrument your application

    • Use Big:eye Pro SDKs or exporters for your language/framework (Node.js, Python, Java, Go). Example Node.js:

      ```bash
      npm install @bigeyepro/sdk
      ```

      ```js
      const bigeye = require('@bigeyepro/sdk');

      bigeye.init({
        apiKey: process.env.BIGEYE_API_KEY,
        serviceName: 'my-service',
      });
      ```
    • Add tracing spans and custom metrics where useful (DB calls, external requests, heavy loops).
  5. Configure integrations

    • Enable integrations for databases, message queues, cloud providers, and web servers from the console or agent config (e.g., PostgreSQL, Redis, AWS CloudWatch).
    • Verify metrics/logs/traces appear within 2–5 minutes.
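The spans you add in step 4 are easier to reason about if you see what one actually records: a span is essentially a named timer with tags attached. The sketch below is plain Node.js, not the @bigeyepro/sdk API; it only illustrates the pattern the SDK automates, and the `startSpan`/`queryUsers` names are made up for this example.

```js
// Minimal illustration of what a tracing span records: a name, a
// start/end duration, and tags. NOT the @bigeyepro/sdk API.
function startSpan(name, tags = {}) {
  const start = process.hrtime.bigint();
  return {
    end() {
      const durationMs = Number(process.hrtime.bigint() - start) / 1e6;
      return { name, tags, durationMs };
    },
  };
}

// Wrap a "DB call" in a span, as step 4 suggests for heavy operations.
async function queryUsers() {
  const span = startSpan('db.query', { table: 'users' });
  await new Promise((r) => setTimeout(r, 25)); // stand-in for real work
  const record = span.end();
  console.log(`${record.name} took ${record.durationMs.toFixed(1)}ms`);
  return record;
}

queryUsers();
```

With a real SDK the `end()` call would ship the record to the agent instead of returning it; the shape of the data is the same.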

Core UI workflows

  • Dashboards: Use default dashboards for CPU, memory, latency, error rate. Create custom dashboards by combining metrics and traces.
  • Alerts: Set alert rules on thresholds (e.g., 90th-percentile latency > 500ms for 5m). Configure notification channels (email, Slack, PagerDuty).
  • Trace search: Filter traces by service, operation, status code, or duration to find slow requests.
  • Log correlation: Link logs to traces for root-cause analysis — open a trace span and view associated log lines.
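The alert example above (95th-percentile latency > 500 ms) is worth unpacking: a percentile rule sorts the window of samples and checks one rank, so a handful of slow requests can fire it even when the average looks healthy. A minimal sketch of that evaluation in plain Node.js (the nearest-rank method and the 500 ms threshold are illustrative, not Big:eye Pro's internal logic):

```js
// Nearest-rank percentile over a window of latency samples.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// "p95 latency > threshold" alert rule from the example above.
function shouldAlert(latenciesMs, { p = 95, thresholdMs = 500 } = {}) {
  return percentile(latenciesMs, p) > thresholdMs;
}

// 90 fast requests plus a slow tail: the mean is under 200 ms,
// but p95 lands on the tail and the rule fires.
const samples = [...Array(90).fill(120), ...Array(10).fill(900)];
console.log(percentile(samples, 95)); // → 900
console.log(shouldAlert(samples));    // → true
```

A real evaluator would apply this over a rolling time window (the "for 5m" part) rather than a fixed array, but the rank-and-compare core is the same.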

Practical tips & best practices

  • Start small: Monitor a few critical services and the most important metrics first (latency, error rate, CPU, memory). Expand gradually.
  • Use percentiles, not averages: Track p50/p95/p99 for latency to surface tail latency issues.
  • Tag consistently: Add environment, service, and role tags to metrics and logs to enable focused dashboards and alerts.
  • Alert on symptoms, not causes: Alert on user-impacting metrics (error rate, availability, high latency) rather than internal counters alone.
  • Reduce noise: Use multi-condition alerts (e.g., high CPU + high load) and mute flapping alerts during deploy windows.
  • Instrument key flows: Add tracing to authentication, payment, and heavy DB operations for targeted troubleshooting.
  • Retention strategy: Keep high-resolution metrics for critical services longer; downsample or aggregate lower-value metrics to reduce cost.
  • Secure keys and access: Rotate API keys periodically and use least-privilege roles for team members.

Troubleshooting common issues

  • Agent not reporting: check agent logs (/var/log/bigeyepro/agent.log), validate API key, confirm outbound network to Big:eye Pro endpoints.
  • Missing traces: ensure SDK initialized before service start, confirm sampling settings (increase sampling for dev).
  • Alert spam: raise thresholds, increase evaluation window, use grouping and suppression rules.
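The sampling note above ("increase sampling for dev") refers to head-based probabilistic sampling: the keep/drop decision is made once, when a trace starts, with probability equal to the sample rate. A minimal sketch (the rates are illustrative defaults, not Big:eye Pro's):

```js
// Head-based sampler: decide at trace start whether to keep the trace.
// rate 1.0 keeps everything (handy in dev, where you miss traces
// otherwise); rate 0.1 keeps roughly 10% (a typical prod-style rate).
function makeSampler(rate) {
  return () => Math.random() < rate;
}

const devSampler = makeSampler(1.0);  // never drops a trace
const prodSampler = makeSampler(0.1); // drops ~90% of traces

const kept = Array.from({ length: 1000 }, () => prodSampler())
  .filter(Boolean).length;
console.log(devSampler());          // → true
console.log(`kept ~${kept}/1000`);  // roughly 100, varies run to run
```

If traces are missing in dev, the fastest check is whether the SDK's rate is effectively below 1.0; bumping it to 1.0 removes sampling as a variable.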

Example checklist for first 24 hours

  1. Create project and generate API key.
  2. Install agent on one host and verify it registers.
  3. Instrument a sample app and confirm traces appear.
  4. Import or create a dashboard for key metrics.
  5. Create one critical alert (e.g., error rate > 5% for 5m to Slack).
  6. Run a load test or simulate errors and verify detection and notification.
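Steps 5 and 6 pair an alert rule with a deliberate error injection, so it helps to see exactly what the rule computes. A minimal sketch of the check itself, using the example values from step 5 (treating HTTP 5xx as errors is an assumption for this illustration):

```js
// Fraction of requests in the window that returned a server error (5xx).
function errorRate(statuses) {
  const errors = statuses.filter((s) => s >= 500).length;
  return errors / statuses.length;
}

// "error rate > 5%" -- the rule from step 5 of the checklist.
function errorAlertFires(statuses, threshold = 0.05) {
  return errorRate(statuses) > threshold;
}

// Simulate step 6: 100 requests, 8 of them 500s. The alert should fire.
const statuses = [...Array(92).fill(200), ...Array(8).fill(500)];
console.log(errorRate(statuses));       // → 0.08
console.log(errorAlertFires(statuses)); // → true
```

In practice the window is time-based ("for 5m") and the notification goes to Slack, but verifying this arithmetic against your injected errors confirms the rule is wired up correctly.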

Where to go next

  • Expand integrations to databases and cloud services.
  • Create role-based dashboards for SREs, developers, and managers.
  • Automate alerts and runbooks for common incidents.

