| phase | plan | type | wave | depends_on | files_modified | autonomous | requirements | must_haves |
|---|---|---|---|---|---|---|---|---|
| 03-output-distribution | 01 | execute | 1 | | | true | | |
Purpose: Implements the `--log` flag, which runs tcptop in headless mode (no TUI) and writes periodic CSV snapshots of all active connections. This is the primary output feature for offline analysis (OUTP-01, OUTP-02). Output: csv_writer.rs module, headless event loop in main.rs, CSV tests, new dependencies (csv, serde, chrono).
<execution_context> @/Users/zrowitsch/local_src/tcptop/.claude/get-shit-done/workflows/execute-plan.md @/Users/zrowitsch/local_src/tcptop/.claude/get-shit-done/templates/summary.md </execution_context>
@.planning/PROJECT.md @.planning/ROADMAP.md @.planning/STATE.md

From tcptop/src/model.rs:
```rust
pub enum Protocol { Tcp, Udp }

pub struct ConnectionKey {
    pub protocol: Protocol,
    pub local_addr: IpAddr,
    pub local_port: u16,
    pub remote_addr: IpAddr,
    pub remote_port: u16,
}

pub struct ConnectionRecord {
    pub key: ConnectionKey,
    pub pid: u32,
    pub process_name: String,
    pub tcp_state: Option<TcpState>,
    pub bytes_in: u64,
    pub bytes_out: u64,
    pub packets_in: u64,
    pub packets_out: u64,
    pub rate_in: f64,
    pub rate_out: f64,
    pub prev_bytes_in: u64,
    pub prev_bytes_out: u64,
    pub rtt_us: Option<u32>,
    pub last_seen: Instant,
    pub is_partial: bool,
    pub is_closed: bool,
}

impl TcpState {
    pub fn as_str(&self) -> &'static str;
}
```
From tcptop/src/aggregator.rs:
```rust
pub struct ConnectionTable { ... }

impl ConnectionTable {
    pub fn new() -> Self;
    pub fn seed(&mut self, records: Vec<ConnectionRecord>);
    pub fn update(&mut self, event: CollectorEvent);
    pub fn tick(&mut self) -> (Vec<&ConnectionRecord>, Vec<ConnectionRecord>);
}
```
From tcptop/src/main.rs (current Cli struct, line 16-46):
```rust
#[derive(Parser, Debug)]
#[command(name = "tcptop", about = "Real-time per-connection network monitor")]
struct Cli {
    #[arg(long)] port: Option<u16>,
    #[arg(long)] pid: Option<u32>,
    #[arg(long)] process: Option<String>,
    #[arg(long, short = 'i')] interface: Option<String>,
    #[arg(long)] tcp: bool,
    #[arg(long)] udp: bool,
    #[arg(long, default_value = "1")] interval: u64,
}
```
**Step 2: Add crate dependencies to tcptop/Cargo.toml**:
Add under `[dependencies]`:
```toml
serde = { workspace = true }
csv = { workspace = true }
chrono = { workspace = true }
```
Also add under `[dev-dependencies]`:
```toml
tempfile = "3"
```
**Step 3: Create tcptop/src/csv_writer.rs** with:
- `CsvRow` struct with `#[derive(Serialize)]` containing exactly these 16 fields (per D-02, D-05):
`timestamp: String`, `protocol: &'static str`, `local_addr: String`, `local_port: u16`, `remote_addr: String`, `remote_port: u16`, `pid: u32`, `process_name: String`, `state: String`, `bytes_in: u64`, `bytes_out: u64`, `packets_in: u64`, `packets_out: u64`, `rate_in_bytes_sec: f64`, `rate_out_bytes_sec: f64`, `rtt_us: String`
- `CsvRow::from_record(record: &ConnectionRecord, timestamp: &str) -> Self`:
- `protocol`: `"TCP"` for Tcp, `"UDP"` for Udp
- `state`: `record.tcp_state.map_or("UDP".to_string(), |s| s.as_str().to_string())`
- `rate_in_bytes_sec`: `(record.rate_in * 100.0).round() / 100.0` (2 decimal places per Pitfall 5)
- `rate_out_bytes_sec`: `(record.rate_out * 100.0).round() / 100.0`
- `rtt_us`: `record.rtt_us.map_or("N/A".to_string(), |v| v.to_string())`
- `CsvLogger` struct wrapping `csv::Writer<std::fs::File>`
- `CsvLogger::new(path: &Path) -> Result<Self>`: uses `csv::Writer::from_path(path)` which creates/overwrites (per D-04)
- `CsvLogger::write_snapshot(&mut self, records: &[&ConnectionRecord], timestamp: &str) -> Result<()>`: iterates records, serializes CsvRow::from_record for each, calls `self.writer.flush()` after all rows (per Pitfall 1)
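The tricky field mappings above (rate rounding and `Option` formatting) can be sketched with plain std code. This is not the module itself: the real implementation uses `csv::Writer` and `#[derive(Serialize)]` as specified, and `MiniRecord` below is a hypothetical stand-in for `ConnectionRecord`:

```rust
// Hypothetical stand-in for the ConnectionRecord fields that need mapping.
struct MiniRecord {
    rate_in: f64,
    rtt_us: Option<u32>,
    tcp_state: Option<&'static str>, // result of TcpState::as_str() in the real code
}

// Round a rate to 2 decimal places (Pitfall 5).
fn round2(v: f64) -> f64 {
    (v * 100.0).round() / 100.0
}

// rtt_us column: "N/A" when no RTT sample is available.
fn rtt_field(rtt_us: Option<u32>) -> String {
    rtt_us.map_or("N/A".to_string(), |v| v.to_string())
}

// state column: literal "UDP" for connectionless records.
fn state_field(tcp_state: Option<&'static str>) -> String {
    tcp_state.map_or("UDP".to_string(), |s| s.to_string())
}

fn main() {
    let tcp = MiniRecord { rate_in: 1234.5678, rtt_us: Some(5000), tcp_state: Some("ESTABLISHED") };
    let udp = MiniRecord { rate_in: 567.891, rtt_us: None, tcp_state: None };
    assert_eq!(round2(tcp.rate_in), 1234.57);
    assert_eq!(round2(udp.rate_in), 567.89);
    assert_eq!(rtt_field(tcp.rtt_us), "5000");
    assert_eq!(rtt_field(udp.rtt_us), "N/A");
    assert_eq!(state_field(udp.tcp_state), "UDP");
    println!("field mappings ok");
}
```

With serde serialization, the header row comes from the struct field names in declaration order, i.e. `timestamp,protocol,local_addr,local_port,remote_addr,remote_port,pid,process_name,state,bytes_in,bytes_out,packets_in,packets_out,rate_in_bytes_sec,rate_out_bytes_sec,rtt_us`.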
**Step 4: Add `pub mod csv_writer;` to tcptop/src/lib.rs**
**Step 5: Create tcptop/tests/csv_test.rs** with tests using `tempfile::NamedTempFile` (same pattern as pipeline_test.rs). Helper function `create_test_record()` builds a synthetic ConnectionRecord with:
- protocol: Tcp, local: 10.0.0.1:12345, remote: 93.184.216.34:443
- pid: 1234, process_name: "curl", tcp_state: Some(TcpState::Established)
- bytes_in: 5000, bytes_out: 1500, packets_in: 10, packets_out: 5
- rate_in: 1234.5678, rate_out: 567.891, rtt_us: Some(5000)
Also `create_test_udp_record()` with protocol: Udp, tcp_state: None, rtt_us: None.
Tests (write RED first, then GREEN):
1. `test_csv_header_row` - verify header line contains all 16 column names
2. `test_csv_data_row_field_count` - verify data row has 16 fields
3. `test_csv_overwrite_existing` - write "old,data\n" to file, create CsvLogger, verify old content gone (D-04)
4. `test_csv_timestamp_consistency` - write snapshot with 2 records, verify all data rows start with the same timestamp
5. `test_csv_rate_precision` - verify rate values are rounded (1234.57 not 1234.5678)
6. `test_csv_tcp_state_and_rtt` - verify TCP record has "ESTABLISHED" and "5000", UDP record has "UDP" and "N/A"
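Test 3's overwrite expectation (D-04) reduces to truncate-on-create semantics. A std-only sketch of that check, with a file in the system temp dir standing in for `tempfile::NamedTempFile` and `File::create` standing in for `CsvLogger::new`:

```rust
use std::fs;

fn main() {
    let path = std::env::temp_dir().join("tcptop_overwrite_sketch.csv");
    fs::write(&path, "old,data\n").unwrap(); // simulate a stale log file
    // csv::Writer::from_path (used by CsvLogger::new) creates/truncates like File::create.
    let _f = fs::File::create(&path).unwrap();
    let contents = fs::read_to_string(&path).unwrap();
    assert!(!contents.contains("old,data")); // old content gone (D-04)
    fs::remove_file(&path).ok();
    println!("overwrite semantics ok");
}
```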
```sh
cd /Users/zrowitsch/local_src/tcptop && cargo test --package tcptop --test csv_test -- --nocapture 2>&1 | tail -20
```
- tcptop/src/csv_writer.rs exists and contains `pub struct CsvRow` with `#[derive(Serialize)]`
- tcptop/src/csv_writer.rs contains `pub struct CsvLogger`
- tcptop/src/csv_writer.rs contains `pub fn from_record(record: &ConnectionRecord, timestamp: &str) -> Self`
- tcptop/src/csv_writer.rs contains `pub fn write_snapshot(&mut self, records: &[&ConnectionRecord], timestamp: &str) -> Result<()>`
- tcptop/src/csv_writer.rs contains `.flush()`
- tcptop/src/csv_writer.rs contains `(record.rate_in * 100.0).round() / 100.0`
- tcptop/src/lib.rs contains `pub mod csv_writer;`
- Cargo.toml (workspace root) contains `csv = "1.4"`
- Cargo.toml (workspace root) contains `serde = { version = "1", features = ["derive"] }`
- Cargo.toml (workspace root) contains `chrono =`
- tcptop/Cargo.toml contains `serde = { workspace = true }`
- tcptop/Cargo.toml contains `csv = { workspace = true }`
- tcptop/Cargo.toml contains `chrono = { workspace = true }`
- tcptop/tests/csv_test.rs contains `test_csv_header_row`
- tcptop/tests/csv_test.rs contains `test_csv_overwrite_existing`
- tcptop/tests/csv_test.rs contains `test_csv_timestamp_consistency`
- `cargo test --package tcptop --test csv_test` exits 0
CsvRow and CsvLogger are implemented, all 6+ CSV tests pass, dependencies added to workspace
Task 2: Add --log flag to Cli and implement headless event loop in main.rs
tcptop/src/main.rs
- tcptop/src/main.rs (current Cli struct and run_linux function)
- tcptop/src/csv_writer.rs (CsvLogger API from Task 1)
- tcptop/src/aggregator.rs (ConnectionTable::new, seed, update, tick)
- tcptop/src/collector/mod.rs (CollectorEvent, NetworkCollector trait)
**Step 1: Add --log field to Cli struct** (after the `interval` field):
```rust
/// Log connection data to CSV file (headless mode, no TUI)
#[arg(long)]
log: Option<String>,
```
**Step 2: Modify the `#[cfg(target_os = "linux")]` block in main()** to branch on `cli.log` BEFORE `ratatui::init()` (per Pattern 2 from RESEARCH.md, critical to avoid terminal corruption):
```rust
#[cfg(target_os = "linux")]
{
let cli = Cli::parse();
if let Some(ref log_path) = cli.log {
run_headless(&cli, log_path).await?;
} else {
let mut terminal = ratatui::init();
let result = run_linux(&mut terminal, &cli).await;
ratatui::restore();
result?;
}
}
```
**Step 3: Create `run_headless` async function** (per D-01: TUI and CSV are mutually exclusive):
```rust
#[cfg(target_os = "linux")]
async fn run_headless(cli: &Cli, log_path: &str) -> Result<()> {
use tcptop::csv_writer::CsvLogger;
use chrono::Utc;
use std::path::Path;
let mut collector = LinuxCollector::new()?;
let mut table = ConnectionTable::new();
// Bootstrap pre-existing connections (same as TUI mode)
match collector.bootstrap_existing() {
Ok(existing) => {
log::info!("Bootstrapped {} pre-existing connections", existing.len());
table.seed(existing);
}
Err(e) => {
log::warn!("Failed to bootstrap existing connections: {}", e);
}
}
let (tx, mut rx) = mpsc::channel(4096);
let collector_handle = tokio::spawn(async move {
if let Err(e) = collector.start(tx).await {
log::error!("Collector error: {}", e);
}
});
// Create CSV logger (overwrites existing file per D-04)
let mut csv_logger = CsvLogger::new(Path::new(log_path))?;
// Use CLI-specified interval (D-03: same cadence as TUI)
let mut tick = interval(Duration::from_secs(cli.interval));
// Signal handlers for graceful shutdown
let mut sigint = tokio::signal::unix::signal(tokio::signal::unix::SignalKind::interrupt())?;
let mut sigterm = tokio::signal::unix::signal(tokio::signal::unix::SignalKind::terminate())?;
eprintln!("tcptop: logging to {} (interval: {}s, Ctrl-C to stop)", log_path, cli.interval);
loop {
tokio::select! {
Some(event) = rx.recv() => {
table.update(event);
}
_ = tick.tick() => {
let (active, _closed) = table.tick();
// Generate timestamp ONCE per tick (Pitfall 6)
let timestamp = Utc::now().to_rfc3339();
csv_logger.write_snapshot(&active, &timestamp)?;
}
_ = sigint.recv() => break,
_ = sigterm.recv() => break,
}
}
collector_handle.abort();
eprintln!("tcptop: logging stopped, CSV written to {}", log_path);
Ok(())
}
```
**Important:** The `run_headless` function must NOT import or use ratatui, crossterm::event::EventStream, or the tui module. It is completely separate from the TUI code path (per Pitfall 2 from RESEARCH.md).
**Step 4: Add necessary imports at the top of main.rs** (inside #[cfg(target_os = "linux")] blocks as needed):
- `use chrono::Utc;` (inside run_headless)
- `use std::path::Path;` (inside run_headless)
- `use tcptop::csv_writer::CsvLogger;` (inside run_headless)
```sh
cd /Users/zrowitsch/local_src/tcptop && cargo build --package tcptop 2>&1 | tail -5
```
- tcptop/src/main.rs contains `#[arg(long)]` followed by `log: Option<String>`
- tcptop/src/main.rs contains `if let Some(ref log_path) = cli.log`
- tcptop/src/main.rs contains `async fn run_headless`
- tcptop/src/main.rs run_headless does NOT contain `ratatui::init` or `EventStream`
- tcptop/src/main.rs run_headless contains `CsvLogger::new`
- tcptop/src/main.rs run_headless contains `Utc::now().to_rfc3339()`
- tcptop/src/main.rs run_headless contains `csv_logger.write_snapshot`
- tcptop/src/main.rs contains `eprintln!("tcptop: logging to`
- `cargo build --package tcptop` exits 0
--log flag added, headless event loop implemented, cargo build succeeds, TUI mode unchanged
1. `cargo test --package tcptop --test csv_test` -- all CSV tests pass
2. `cargo test --package tcptop --test pipeline_test` -- existing tests still pass
3. `cargo build --package tcptop` -- compiles cleanly
4. `cargo build --package tcptop 2>&1 | grep -i warning` -- no new warnings
<success_criteria>
- tcptop --log output.csv compiles and the headless code path is reachable
- CSV writer produces files with correct header (16 columns per D-02 + D-05)
- CSV writer overwrites existing files (D-04)
- All timestamps within a snapshot are identical (D-05, Pitfall 6)
- Rate values are rounded to 2 decimal places (Pitfall 5)
- 6+ CSV-specific tests pass
- Existing pipeline_test.rs tests still pass
</success_criteria>