Aug 13 03:19:52.964083 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025 Aug 13 03:19:52.964130 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 03:19:52.964164 kernel: BIOS-provided physical RAM map: Aug 13 03:19:52.964174 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 03:19:52.964184 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 03:19:52.964193 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 03:19:52.964217 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Aug 13 03:19:52.964228 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Aug 13 03:19:52.964238 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 03:19:52.964248 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 03:19:52.964262 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 03:19:52.964272 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 03:19:52.964282 kernel: NX (Execute Disable) protection: active Aug 13 03:19:52.964292 kernel: SMBIOS 2.8 present. Aug 13 03:19:52.964305 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Aug 13 03:19:52.964316 kernel: Hypervisor detected: KVM Aug 13 03:19:52.964330 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 03:19:52.964341 kernel: kvm-clock: cpu 0, msr 3e19e001, primary cpu clock Aug 13 03:19:52.964352 kernel: kvm-clock: using sched offset of 4833924314 cycles Aug 13 03:19:52.964364 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 03:19:52.964375 kernel: tsc: Detected 2500.032 MHz processor Aug 13 03:19:52.964386 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 03:19:52.964397 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 03:19:52.964408 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Aug 13 03:19:52.964418 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 03:19:52.964433 kernel: Using GB pages for direct mapping Aug 13 03:19:52.964444 kernel: ACPI: Early table checksum verification disabled Aug 13 03:19:52.964455 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Aug 13 03:19:52.964466 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 03:19:52.964477 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 03:19:52.964487 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 03:19:52.964498 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Aug 13 03:19:52.964509 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 03:19:52.964520 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 03:19:52.964535 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 
00000001) Aug 13 03:19:52.964546 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 03:19:52.964556 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Aug 13 03:19:52.964567 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Aug 13 03:19:52.964578 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Aug 13 03:19:52.964589 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Aug 13 03:19:52.964614 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Aug 13 03:19:52.964629 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Aug 13 03:19:52.964641 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Aug 13 03:19:52.964652 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 03:19:52.964664 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 03:19:52.964675 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Aug 13 03:19:52.964686 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Aug 13 03:19:52.964698 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Aug 13 03:19:52.964713 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Aug 13 03:19:52.964725 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Aug 13 03:19:52.964736 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Aug 13 03:19:52.964747 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Aug 13 03:19:52.964758 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Aug 13 03:19:52.964777 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Aug 13 03:19:52.964789 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Aug 13 03:19:52.964800 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Aug 13 03:19:52.964812 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Aug 13 03:19:52.964853 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Aug 13 03:19:52.964874 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Aug 13 03:19:52.964885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 03:19:52.964897 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 03:19:52.964908 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Aug 13 03:19:52.964920 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Aug 13 03:19:52.964932 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Aug 13 03:19:52.964943 kernel: Zone ranges: Aug 13 03:19:52.964955 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 03:19:52.964966 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Aug 13 03:19:52.964982 kernel: Normal empty Aug 13 03:19:52.964994 kernel: Movable zone start for each node Aug 13 03:19:52.965005 kernel: Early memory node ranges Aug 13 03:19:52.965017 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 03:19:52.965028 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Aug 13 03:19:52.965039 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Aug 13 03:19:52.965051 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 03:19:52.965062 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 03:19:52.965074 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Aug 13 03:19:52.965090 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 03:19:52.965101 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 03:19:52.965112 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 03:19:52.965124 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 03:19:52.965135 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 03:19:52.965147 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 03:19:52.965158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 03:19:52.965169 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 03:19:52.965181 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 03:19:52.965196 kernel: TSC deadline timer available Aug 13 03:19:52.965208 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Aug 13 03:19:52.965219 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 03:19:52.965231 kernel: Booting paravirtualized kernel on KVM Aug 13 03:19:52.965242 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 03:19:52.965254 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Aug 13 03:19:52.965266 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Aug 13 03:19:52.965277 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Aug 13 03:19:52.965288 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Aug 13 03:19:52.965304 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0 Aug 13 03:19:52.965315 kernel: kvm-guest: PV spinlocks enabled Aug 13 03:19:52.965327 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 03:19:52.965338 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Aug 13 03:19:52.965349 kernel: Policy zone: DMA32 Aug 13 03:19:52.965362 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 03:19:52.965374 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 03:19:52.965386 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 03:19:52.965402 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 03:19:52.965413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 03:19:52.965425 kernel: Memory: 1903832K/2096616K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 192524K reserved, 0K cma-reserved) Aug 13 03:19:52.965437 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Aug 13 03:19:52.965448 kernel: Kernel/User page tables isolation: enabled Aug 13 03:19:52.965460 kernel: ftrace: allocating 34608 entries in 136 pages Aug 13 03:19:52.965471 kernel: ftrace: allocated 136 pages with 2 groups Aug 13 03:19:52.965482 kernel: rcu: Hierarchical RCU implementation. Aug 13 03:19:52.965494 kernel: rcu: RCU event tracing is enabled. Aug 13 03:19:52.965510 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Aug 13 03:19:52.965522 kernel: Rude variant of Tasks RCU enabled. Aug 13 03:19:52.965534 kernel: Tracing variant of Tasks RCU enabled. Aug 13 03:19:52.965546 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 13 03:19:52.965557 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Aug 13 03:19:52.965569 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Aug 13 03:19:52.965580 kernel: random: crng init done Aug 13 03:19:52.965606 kernel: Console: colour VGA+ 80x25 Aug 13 03:19:52.965619 kernel: printk: console [tty0] enabled Aug 13 03:19:52.965630 kernel: printk: console [ttyS0] enabled Aug 13 03:19:52.965642 kernel: ACPI: Core revision 20210730 Aug 13 03:19:52.965654 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 03:19:52.965678 kernel: x2apic enabled Aug 13 03:19:52.965691 kernel: Switched APIC routing to physical x2apic. Aug 13 03:19:52.965703 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Aug 13 03:19:52.965716 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032) Aug 13 03:19:52.965728 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 03:19:52.965744 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 03:19:52.965756 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 03:19:52.965768 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 03:19:52.965780 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 03:19:52.965808 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 03:19:52.965821 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 03:19:52.965844 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 03:19:52.965856 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 03:19:52.965868 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 03:19:52.965880 kernel: MMIO Stale Data: Unknown: No mitigations Aug 13 03:19:52.965892 kernel: SRBDS: Unknown: Dependent on hypervisor status Aug 13 03:19:52.965909 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 03:19:52.965922 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 03:19:52.965934 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 03:19:52.965946 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 03:19:52.965958 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 03:19:52.965970 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 03:19:52.965982 kernel: Freeing SMP alternatives memory: 32K Aug 13 03:19:52.965993 kernel: pid_max: default: 32768 minimum: 301 Aug 13 03:19:52.966005 kernel: LSM: Security Framework initializing Aug 13 03:19:52.966017 kernel: SELinux: Initializing. Aug 13 03:19:52.966029 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 03:19:52.966046 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 03:19:52.966058 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Aug 13 03:19:52.966070 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Aug 13 03:19:52.966082 kernel: signal: max sigframe size: 1776 Aug 13 03:19:52.966094 kernel: rcu: Hierarchical SRCU implementation. Aug 13 03:19:52.966106 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 03:19:52.966118 kernel: smp: Bringing up secondary CPUs ... 
Aug 13 03:19:52.966130 kernel: x86: Booting SMP configuration: Aug 13 03:19:52.966142 kernel: .... node #0, CPUs: #1 Aug 13 03:19:52.966158 kernel: kvm-clock: cpu 1, msr 3e19e041, secondary cpu clock Aug 13 03:19:52.966171 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Aug 13 03:19:52.966182 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0 Aug 13 03:19:52.966194 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 03:19:52.966206 kernel: smpboot: Max logical packages: 16 Aug 13 03:19:52.966218 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS) Aug 13 03:19:52.966230 kernel: devtmpfs: initialized Aug 13 03:19:52.966242 kernel: x86/mm: Memory block size: 128MB Aug 13 03:19:52.966254 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 03:19:52.966266 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Aug 13 03:19:52.966283 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 03:19:52.966295 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 03:19:52.966307 kernel: audit: initializing netlink subsys (disabled) Aug 13 03:19:52.966319 kernel: audit: type=2000 audit(1755055192.310:1): state=initialized audit_enabled=0 res=1 Aug 13 03:19:52.966331 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 03:19:52.966343 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 03:19:52.966355 kernel: cpuidle: using governor menu Aug 13 03:19:52.966367 kernel: ACPI: bus type PCI registered Aug 13 03:19:52.966379 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 03:19:52.966395 kernel: dca service started, version 1.12.1 Aug 13 03:19:52.966407 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Aug 13 03:19:52.966419 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Aug 13 03:19:52.966432 kernel: PCI: Using configuration type 1 for base access Aug 13 03:19:52.966444 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 03:19:52.966456 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 03:19:52.966468 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 03:19:52.966480 kernel: ACPI: Added _OSI(Module Device) Aug 13 03:19:52.966491 kernel: ACPI: Added _OSI(Processor Device) Aug 13 03:19:52.966508 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 03:19:52.966520 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 03:19:52.966532 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 03:19:52.966544 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 03:19:52.966556 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 03:19:52.966568 kernel: ACPI: Interpreter enabled Aug 13 03:19:52.966580 kernel: ACPI: PM: (supports S0 S5) Aug 13 03:19:52.966591 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 03:19:52.966604 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 03:19:52.966620 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 03:19:52.966632 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 03:19:52.966949 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 03:19:52.967111 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 13 03:19:52.967263 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 13 03:19:52.967282 kernel: PCI host bridge to bus 0000:00 Aug 13 03:19:52.967445 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 03:19:52.967593 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 03:19:52.967732 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 03:19:52.967907 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Aug 13 03:19:52.968058 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 03:19:52.968220 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Aug 13 03:19:52.968359 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 03:19:52.968547 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Aug 13 03:19:52.968717 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Aug 13 03:19:52.974278 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Aug 13 03:19:52.974467 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Aug 13 03:19:52.974643 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Aug 13 03:19:52.974797 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 03:19:52.974999 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Aug 13 03:19:52.975177 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Aug 13 03:19:52.975361 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Aug 13 03:19:52.975505 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Aug 13 03:19:52.975675 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Aug 13 03:19:52.975867 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Aug 13 03:19:52.976042 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Aug 13 03:19:52.976226 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Aug 13 03:19:52.976424 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Aug 13 03:19:52.976577 kernel: pci 
0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Aug 13 03:19:52.976740 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Aug 13 03:19:52.976924 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Aug 13 03:19:52.977088 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Aug 13 03:19:52.977255 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Aug 13 03:19:52.977421 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Aug 13 03:19:52.977581 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Aug 13 03:19:52.977757 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 13 03:19:52.977936 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Aug 13 03:19:52.978089 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Aug 13 03:19:52.978254 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Aug 13 03:19:52.978431 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Aug 13 03:19:52.978600 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Aug 13 03:19:52.978776 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 03:19:52.978968 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Aug 13 03:19:52.979126 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Aug 13 03:19:52.979294 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Aug 13 03:19:52.979450 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 03:19:52.979625 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Aug 13 03:19:52.979782 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Aug 13 03:19:52.979963 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Aug 13 03:19:52.980159 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Aug 13 03:19:52.980316 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Aug 13 03:19:52.980488 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Aug 13 03:19:52.980660 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Aug 13 03:19:52.989871 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Aug 13 03:19:52.990054 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Aug 13 03:19:52.990215 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 03:19:52.990392 kernel: pci_bus 0000:02: extended config space not accessible Aug 13 03:19:52.990577 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Aug 13 03:19:52.990755 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Aug 13 03:19:52.990970 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Aug 13 03:19:52.991152 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Aug 13 03:19:52.991366 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Aug 13 03:19:52.991553 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Aug 13 03:19:52.991710 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Aug 13 03:19:52.991892 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Aug 13 03:19:52.992053 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 13 03:19:52.992225 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Aug 13 03:19:52.992385 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Aug 13 03:19:52.992539 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Aug 13 03:19:52.992692 kernel: pci 
0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Aug 13 03:19:52.992879 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 13 03:19:52.993037 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Aug 13 03:19:52.993189 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Aug 13 03:19:52.993347 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 13 03:19:52.993503 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Aug 13 03:19:52.993654 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Aug 13 03:19:52.993839 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 13 03:19:52.993998 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Aug 13 03:19:52.994149 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Aug 13 03:19:52.994298 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 13 03:19:52.994453 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Aug 13 03:19:52.994621 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Aug 13 03:19:52.994789 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 13 03:19:52.994981 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Aug 13 03:19:52.995151 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Aug 13 03:19:52.995313 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 13 03:19:52.995331 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 03:19:52.995345 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 03:19:52.995358 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 03:19:52.995377 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 03:19:52.995390 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 03:19:52.995403 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 03:19:52.995415 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 03:19:52.995441 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 03:19:52.995454 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 03:19:52.995466 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 03:19:52.995478 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 03:19:52.995490 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 03:19:52.995507 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 03:19:52.995519 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 03:19:52.995531 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 03:19:52.995555 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 03:19:52.995567 kernel: iommu: Default domain type: Translated Aug 13 03:19:52.995579 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 03:19:52.995720 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 03:19:52.995903 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 03:19:52.996065 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 03:19:52.996084 kernel: vgaarb: loaded Aug 13 03:19:52.996097 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 03:19:52.996110 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 03:19:52.996123 kernel: PTP clock support registered Aug 13 03:19:52.996135 kernel: PCI: Using ACPI for IRQ routing Aug 13 03:19:52.996148 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 03:19:52.996160 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 03:19:52.996173 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Aug 13 03:19:52.996191 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 03:19:52.996204 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 03:19:52.996217 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 03:19:52.996229 kernel: pnp: PnP ACPI init Aug 13 03:19:52.996433 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 03:19:52.996454 kernel: pnp: PnP ACPI: found 5 devices Aug 13 03:19:52.996467 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 03:19:52.996480 kernel: NET: Registered PF_INET protocol family Aug 13 03:19:52.996499 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 03:19:52.996512 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 03:19:52.996525 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 03:19:52.996538 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 03:19:52.996551 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Aug 13 03:19:52.996563 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 03:19:52.996576 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 03:19:52.996588 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 03:19:52.996601 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 03:19:52.996618 kernel: NET: Registered PF_XDP protocol family Aug 13 03:19:52.996770 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Aug 13 03:19:53.006025 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Aug 13 03:19:53.006206 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Aug 13 03:19:53.006375 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Aug 13 03:19:53.006537 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Aug 13 03:19:53.006690 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Aug 13 03:19:53.006892 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Aug 13 03:19:53.007049 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Aug 13 03:19:53.007202 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Aug 13 03:19:53.007364 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Aug 13 03:19:53.007516 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Aug 13 03:19:53.007668 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Aug 13 03:19:53.007858 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Aug 13 03:19:53.008012 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Aug 13 03:19:53.008163 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Aug 13 03:19:53.008324 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Aug 13 
03:19:53.008484 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Aug 13 03:19:53.008642 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Aug 13 03:19:53.008814 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Aug 13 03:19:53.008979 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Aug 13 03:19:53.009132 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Aug 13 03:19:53.009292 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 03:19:53.009452 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Aug 13 03:19:53.009630 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Aug 13 03:19:53.009784 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Aug 13 03:19:53.009971 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 13 03:19:53.010134 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Aug 13 03:19:53.010297 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Aug 13 03:19:53.010447 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Aug 13 03:19:53.010597 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 13 03:19:53.010755 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Aug 13 03:19:53.010942 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Aug 13 03:19:53.011097 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Aug 13 03:19:53.011261 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 13 03:19:53.011430 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Aug 13 03:19:53.011581 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Aug 13 03:19:53.011743 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Aug 13 03:19:53.011934 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 13 03:19:53.012089 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Aug 13 03:19:53.012255 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Aug 13 03:19:53.012432 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Aug 13 03:19:53.012588 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 13 03:19:53.012740 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Aug 13 03:19:53.020742 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Aug 13 03:19:53.020940 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Aug 13 03:19:53.021098 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 13 03:19:53.021251 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Aug 13 03:19:53.021402 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Aug 13 03:19:53.021553 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Aug 13 03:19:53.021713 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 13 03:19:53.021889 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 03:19:53.022029 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 03:19:53.022175 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 03:19:53.022312 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Aug 13 03:19:53.022449 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 03:19:53.022587 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Aug 13 03:19:53.022750 kernel: pci_bus 0000:01: resource 0 [io 
0x1000-0x1fff] Aug 13 03:19:53.022932 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Aug 13 03:19:53.023078 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 03:19:53.023247 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Aug 13 03:19:53.023417 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Aug 13 03:19:53.023562 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Aug 13 03:19:53.023706 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 13 03:19:53.023889 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Aug 13 03:19:53.024045 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Aug 13 03:19:53.024200 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 13 03:19:53.024380 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Aug 13 03:19:53.024532 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Aug 13 03:19:53.024710 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 13 03:19:53.024902 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Aug 13 03:19:53.025071 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Aug 13 03:19:53.025242 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 13 03:19:53.025460 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Aug 13 03:19:53.025616 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Aug 13 03:19:53.025787 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 13 03:19:53.025992 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Aug 13 03:19:53.026164 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Aug 13 03:19:53.026340 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 13 03:19:53.026511 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Aug 13 03:19:53.026659 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Aug 13 03:19:53.027961 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 13 03:19:53.027985 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 03:19:53.028000 kernel: PCI: CLS 0 bytes, default 64 Aug 13 03:19:53.028013 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 03:19:53.028027 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Aug 13 03:19:53.028047 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 03:19:53.028061 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Aug 13 03:19:53.028074 kernel: Initialise system trusted keyrings Aug 13 03:19:53.028087 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 03:19:53.028101 kernel: Key type asymmetric registered Aug 13 03:19:53.028113 kernel: Asymmetric key parser 'x509' registered Aug 13 03:19:53.028126 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 03:19:53.028139 kernel: io scheduler mq-deadline registered Aug 13 03:19:53.028153 kernel: io scheduler kyber registered Aug 13 03:19:53.028170 kernel: io scheduler bfq registered Aug 13 03:19:53.028328 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Aug 13 03:19:53.028482 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Aug 13 03:19:53.028648 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ 
Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 03:19:53.028813 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Aug 13 03:19:53.028994 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Aug 13 03:19:53.029147 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 03:19:53.029310 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Aug 13 03:19:53.029462 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Aug 13 03:19:53.029612 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 03:19:53.029776 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Aug 13 03:19:53.029963 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Aug 13 03:19:53.030116 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 03:19:53.030289 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Aug 13 03:19:53.030440 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Aug 13 03:19:53.030615 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 03:19:53.030776 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Aug 13 03:19:53.030972 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Aug 13 03:19:53.031126 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 03:19:53.031286 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Aug 13 03:19:53.031437 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Aug 13 03:19:53.031589 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 03:19:53.031740 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Aug 13 03:19:53.031923 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Aug 13 03:19:53.032076 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 03:19:53.032106 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 03:19:53.032120 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 03:19:53.032134 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 13 03:19:53.032147 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 03:19:53.032168 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 03:19:53.032181 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 03:19:53.032194 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 03:19:53.032208 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 03:19:53.032385 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 03:19:53.032407 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 03:19:53.032547 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 03:19:53.032689 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T03:19:52 UTC (1755055192) Aug 13 03:19:53.038895 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 03:19:53.038929 kernel: intel_pstate: CPU model not supported Aug 13 03:19:53.038944 kernel: 
NET: Registered PF_INET6 protocol family Aug 13 03:19:53.038971 kernel: Segment Routing with IPv6 Aug 13 03:19:53.038984 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 03:19:53.038998 kernel: NET: Registered PF_PACKET protocol family Aug 13 03:19:53.039011 kernel: Key type dns_resolver registered Aug 13 03:19:53.039024 kernel: IPI shorthand broadcast: enabled Aug 13 03:19:53.039038 kernel: sched_clock: Marking stable (1003403646, 221998884)->(1515400032, -289997502) Aug 13 03:19:53.039051 kernel: registered taskstats version 1 Aug 13 03:19:53.039064 kernel: Loading compiled-in X.509 certificates Aug 13 03:19:53.039077 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 03:19:53.039090 kernel: Key type .fscrypt registered Aug 13 03:19:53.039107 kernel: Key type fscrypt-provisioning registered Aug 13 03:19:53.039121 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 03:19:53.039134 kernel: ima: Allocated hash algorithm: sha1 Aug 13 03:19:53.039148 kernel: ima: No architecture policies found Aug 13 03:19:53.039161 kernel: clk: Disabling unused clocks Aug 13 03:19:53.039174 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 03:19:53.039187 kernel: Write protecting the kernel read-only data: 28672k Aug 13 03:19:53.039200 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 03:19:53.039218 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 03:19:53.039231 kernel: Run /init as init process Aug 13 03:19:53.039244 kernel: with arguments: Aug 13 03:19:53.039258 kernel: /init Aug 13 03:19:53.039270 kernel: with environment: Aug 13 03:19:53.039283 kernel: HOME=/ Aug 13 03:19:53.039295 kernel: TERM=linux Aug 13 03:19:53.039308 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 03:19:53.039335 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 03:19:53.039359 systemd[1]: Detected virtualization kvm. Aug 13 03:19:53.039374 systemd[1]: Detected architecture x86-64. Aug 13 03:19:53.039388 systemd[1]: Running in initrd. Aug 13 03:19:53.039402 systemd[1]: No hostname configured, using default hostname. Aug 13 03:19:53.039420 systemd[1]: Hostname set to . Aug 13 03:19:53.039435 systemd[1]: Initializing machine ID from VM UUID. Aug 13 03:19:53.039449 systemd[1]: Queued start job for default target initrd.target. Aug 13 03:19:53.039463 systemd[1]: Started systemd-ask-password-console.path. Aug 13 03:19:53.039480 systemd[1]: Reached target cryptsetup.target. Aug 13 03:19:53.039495 systemd[1]: Reached target paths.target. Aug 13 03:19:53.039508 systemd[1]: Reached target slices.target. Aug 13 03:19:53.039522 systemd[1]: Reached target swap.target. Aug 13 03:19:53.039536 systemd[1]: Reached target timers.target. Aug 13 03:19:53.039551 systemd[1]: Listening on iscsid.socket. Aug 13 03:19:53.039564 systemd[1]: Listening on iscsiuio.socket. Aug 13 03:19:53.039595 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 03:19:53.039609 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 03:19:53.039622 systemd[1]: Listening on systemd-journald.socket. Aug 13 03:19:53.039649 systemd[1]: Listening on systemd-networkd.socket. 
Aug 13 03:19:53.039662 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 03:19:53.039677 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 03:19:53.039690 systemd[1]: Reached target sockets.target. Aug 13 03:19:53.039704 systemd[1]: Starting kmod-static-nodes.service... Aug 13 03:19:53.039718 systemd[1]: Finished network-cleanup.service. Aug 13 03:19:53.039736 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 03:19:53.039755 systemd[1]: Starting systemd-journald.service... Aug 13 03:19:53.039769 systemd[1]: Starting systemd-modules-load.service... Aug 13 03:19:53.039783 systemd[1]: Starting systemd-resolved.service... Aug 13 03:19:53.039797 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 03:19:53.039811 systemd[1]: Finished kmod-static-nodes.service. Aug 13 03:19:53.039846 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 03:19:53.039876 systemd-journald[201]: Journal started Aug 13 03:19:53.039966 systemd-journald[201]: Runtime Journal (/run/log/journal/b4f549c48a034f4fbe8c3fe51028d7fb) is 4.7M, max 38.1M, 33.3M free. Aug 13 03:19:52.962935 systemd-modules-load[202]: Inserted module 'overlay' Aug 13 03:19:53.060718 kernel: Bridge firewalling registered Aug 13 03:19:53.015167 systemd-resolved[203]: Positive Trust Anchors: Aug 13 03:19:53.077494 systemd[1]: Started systemd-resolved.service. Aug 13 03:19:53.077538 kernel: audit: type=1130 audit(1755055193.060:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.077559 systemd[1]: Started systemd-journald.service. Aug 13 03:19:53.077593 kernel: audit: type=1130 audit(1755055193.067:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.077612 kernel: SCSI subsystem initialized Aug 13 03:19:53.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.015188 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 03:19:53.015247 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 03:19:53.026418 systemd-resolved[203]: Defaulting to hostname 'linux'. Aug 13 03:19:53.046421 systemd-modules-load[202]: Inserted module 'br_netfilter' Aug 13 03:19:53.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:19:53.094255 kernel: audit: type=1130 audit(1755055193.082:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.094290 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 03:19:53.083320 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 03:19:53.115113 kernel: audit: type=1130 audit(1755055193.083:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.115148 kernel: device-mapper: uevent: version 1.0.3 Aug 13 03:19:53.115167 kernel: audit: type=1130 audit(1755055193.084:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.115184 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 03:19:53.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.084217 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 03:19:53.085024 systemd[1]: Reached target nss-lookup.target. Aug 13 03:19:53.094109 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 03:19:53.112476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 03:19:53.121649 systemd-modules-load[202]: Inserted module 'dm_multipath' Aug 13 03:19:53.123795 systemd[1]: Finished systemd-modules-load.service. Aug 13 03:19:53.126356 systemd[1]: Starting systemd-sysctl.service... Aug 13 03:19:53.132553 kernel: audit: type=1130 audit(1755055193.123:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.134054 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 03:19:53.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.140821 kernel: audit: type=1130 audit(1755055193.134:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.143133 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 03:19:53.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.145304 systemd[1]: Starting dracut-cmdline.service... 
Aug 13 03:19:53.151469 kernel: audit: type=1130 audit(1755055193.143:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.151057 systemd[1]: Finished systemd-sysctl.service. Aug 13 03:19:53.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.157857 kernel: audit: type=1130 audit(1755055193.151:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.161282 dracut-cmdline[223]: dracut-dracut-053 Aug 13 03:19:53.176264 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 03:19:53.269864 kernel: Loading iSCSI transport class v2.0-870. Aug 13 03:19:53.291844 kernel: iscsi: registered transport (tcp) Aug 13 03:19:53.321777 kernel: iscsi: registered transport (qla4xxx) Aug 13 03:19:53.321889 kernel: QLogic iSCSI HBA Driver Aug 13 03:19:53.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.373747 systemd[1]: Finished dracut-cmdline.service. Aug 13 03:19:53.375790 systemd[1]: Starting dracut-pre-udev.service... Aug 13 03:19:53.436866 kernel: raid6: sse2x4 gen() 12596 MB/s Aug 13 03:19:53.454850 kernel: raid6: sse2x4 xor() 7688 MB/s Aug 13 03:19:53.472848 kernel: raid6: sse2x2 gen() 8676 MB/s Aug 13 03:19:53.490852 kernel: raid6: sse2x2 xor() 7721 MB/s Aug 13 03:19:53.508852 kernel: raid6: sse2x1 gen() 8740 MB/s Aug 13 03:19:53.527673 kernel: raid6: sse2x1 xor() 6992 MB/s Aug 13 03:19:53.527756 kernel: raid6: using algorithm sse2x4 gen() 12596 MB/s Aug 13 03:19:53.527783 kernel: raid6: .... xor() 7688 MB/s, rmw enabled Aug 13 03:19:53.529006 kernel: raid6: using ssse3x2 recovery algorithm Aug 13 03:19:53.546851 kernel: xor: automatically using best checksumming function avx Aug 13 03:19:53.668869 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 03:19:53.684564 systemd[1]: Finished dracut-pre-udev.service. Aug 13 03:19:53.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.685000 audit: BPF prog-id=7 op=LOAD Aug 13 03:19:53.685000 audit: BPF prog-id=8 op=LOAD Aug 13 03:19:53.686733 systemd[1]: Starting systemd-udevd.service... Aug 13 03:19:53.705501 systemd-udevd[401]: Using default interface naming scheme 'v252'. Aug 13 03:19:53.713768 systemd[1]: Started systemd-udevd.service. Aug 13 03:19:53.717181 systemd[1]: Starting dracut-pre-trigger.service... 
Aug 13 03:19:53.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.736571 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Aug 13 03:19:53.778510 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 03:19:53.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.781520 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 03:19:53.877109 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 03:19:53.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:53.983940 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 03:19:54.047700 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 03:19:54.047730 kernel: ACPI: bus type USB registered Aug 13 03:19:54.047748 kernel: usbcore: registered new interface driver usbfs Aug 13 03:19:54.047771 kernel: usbcore: registered new interface driver hub Aug 13 03:19:54.047827 kernel: usbcore: registered new device driver usb Aug 13 03:19:54.047849 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 03:19:54.047866 kernel: GPT:17805311 != 125829119 Aug 13 03:19:54.047881 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 03:19:54.047897 kernel: GPT:17805311 != 125829119 Aug 13 03:19:54.047913 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 03:19:54.047929 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 03:19:54.047946 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Aug 13 03:19:54.048166 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Aug 13 03:19:54.048360 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Aug 13 03:19:54.048559 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Aug 13 03:19:54.048739 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Aug 13 03:19:54.048950 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Aug 13 03:19:54.049125 kernel: hub 1-0:1.0: USB hub found Aug 13 03:19:54.049331 kernel: hub 1-0:1.0: 4 ports detected Aug 13 03:19:54.049679 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Aug 13 03:19:54.050031 kernel: hub 2-0:1.0: USB hub found Aug 13 03:19:54.050261 kernel: hub 2-0:1.0: 4 ports detected Aug 13 03:19:54.050529 kernel: AVX version of gcm_enc/dec engaged. Aug 13 03:19:54.050549 kernel: AES CTR mode by8 optimization enabled Aug 13 03:19:54.076850 kernel: libata version 3.00 loaded. 
Aug 13 03:19:54.087844 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 03:19:54.131853 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 03:19:54.131900 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 13 03:19:54.132096 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 03:19:54.132275 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (455) Aug 13 03:19:54.132296 kernel: scsi host0: ahci Aug 13 03:19:54.132510 kernel: scsi host1: ahci Aug 13 03:19:54.132719 kernel: scsi host2: ahci Aug 13 03:19:54.133008 kernel: scsi host3: ahci Aug 13 03:19:54.133208 kernel: scsi host4: ahci Aug 13 03:19:54.133404 kernel: scsi host5: ahci Aug 13 03:19:54.133592 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Aug 13 03:19:54.133613 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Aug 13 03:19:54.133630 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Aug 13 03:19:54.133646 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Aug 13 03:19:54.133663 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Aug 13 03:19:54.133686 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Aug 13 03:19:54.104313 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 03:19:54.112331 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 03:19:54.119709 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 03:19:54.212970 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 03:19:54.218694 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 03:19:54.220682 systemd[1]: Starting disk-uuid.service... Aug 13 03:19:54.228062 disk-uuid[528]: Primary Header is updated. Aug 13 03:19:54.228062 disk-uuid[528]: Secondary Entries is updated. Aug 13 03:19:54.228062 disk-uuid[528]: Secondary Header is updated. Aug 13 03:19:54.231676 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 03:19:54.279834 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Aug 13 03:19:54.427410 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 03:19:54.444194 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 03:19:54.444288 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 03:19:54.445792 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 03:19:54.449300 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 03:19:54.449342 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 03:19:54.450858 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 03:19:54.462190 kernel: usbcore: registered new interface driver usbhid Aug 13 03:19:54.462261 kernel: usbhid: USB HID core driver Aug 13 03:19:54.477816 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Aug 13 03:19:54.477861 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Aug 13 03:19:55.249831 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 03:19:55.251005 disk-uuid[529]: The operation has completed successfully. Aug 13 03:19:55.306767 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 03:19:55.306977 systemd[1]: Finished disk-uuid.service. 
Aug 13 03:19:55.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.313748 systemd[1]: Starting verity-setup.service... Aug 13 03:19:55.332836 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Aug 13 03:19:55.388794 systemd[1]: Found device dev-mapper-usr.device. Aug 13 03:19:55.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.393159 systemd[1]: Mounting sysusr-usr.mount... Aug 13 03:19:55.394046 systemd[1]: Finished verity-setup.service. Aug 13 03:19:55.492830 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 03:19:55.493492 systemd[1]: Mounted sysusr-usr.mount. Aug 13 03:19:55.494386 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 03:19:55.495524 systemd[1]: Starting ignition-setup.service... Aug 13 03:19:55.498538 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 03:19:55.515609 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 03:19:55.515670 kernel: BTRFS info (device vda6): using free space tree Aug 13 03:19:55.515699 kernel: BTRFS info (device vda6): has skinny extents Aug 13 03:19:55.531369 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 03:19:55.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.539715 systemd[1]: Finished ignition-setup.service. Aug 13 03:19:55.541562 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 03:19:55.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.641310 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 03:19:55.642000 audit: BPF prog-id=9 op=LOAD Aug 13 03:19:55.644403 systemd[1]: Starting systemd-networkd.service... Aug 13 03:19:55.694091 systemd-networkd[710]: lo: Link UP Aug 13 03:19:55.694105 systemd-networkd[710]: lo: Gained carrier Aug 13 03:19:55.695582 systemd-networkd[710]: Enumeration completed Aug 13 03:19:55.696341 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 03:19:55.697254 systemd[1]: Started systemd-networkd.service. Aug 13 03:19:55.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.699060 systemd-networkd[710]: eth0: Link UP Aug 13 03:19:55.699066 systemd-networkd[710]: eth0: Gained carrier Aug 13 03:19:55.699331 systemd[1]: Reached target network.target. Aug 13 03:19:55.702096 systemd[1]: Starting iscsiuio.service... Aug 13 03:19:55.712880 systemd[1]: Started iscsiuio.service. 
Aug 13 03:19:55.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.730307 systemd[1]: Starting iscsid.service... Aug 13 03:19:55.734989 systemd-networkd[710]: eth0: DHCPv4 address 10.230.26.254/30, gateway 10.230.26.253 acquired from 10.230.26.253 Aug 13 03:19:55.738293 ignition[633]: Ignition 2.14.0 Aug 13 03:19:55.740885 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 03:19:55.740885 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 03:19:55.740885 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 03:19:55.740885 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 03:19:55.740885 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 03:19:55.740885 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 03:19:55.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.738321 ignition[633]: Stage: fetch-offline Aug 13 03:19:55.741244 systemd[1]: Started iscsid.service. Aug 13 03:19:55.738497 ignition[633]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 03:19:55.744245 systemd[1]: Starting dracut-initqueue.service... Aug 13 03:19:55.738538 ignition[633]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 03:19:55.745147 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 03:19:55.741915 ignition[633]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 03:19:55.750220 systemd[1]: Starting ignition-fetch.service... Aug 13 03:19:55.742090 ignition[633]: parsed url from cmdline: "" Aug 13 03:19:55.742098 ignition[633]: no config URL provided Aug 13 03:19:55.742108 ignition[633]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 03:19:55.742124 ignition[633]: no config at "/usr/lib/ignition/user.ign" Aug 13 03:19:55.742134 ignition[633]: failed to fetch config: resource requires networking Aug 13 03:19:55.742357 ignition[633]: Ignition finished successfully Aug 13 03:19:55.767686 ignition[717]: Ignition 2.14.0 Aug 13 03:19:55.767705 ignition[717]: Stage: fetch Aug 13 03:19:55.767917 ignition[717]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 03:19:55.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Aug 13 03:19:55.769680 systemd[1]: Finished dracut-initqueue.service. Aug 13 03:19:55.767955 ignition[717]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 03:19:55.770530 systemd[1]: Reached target remote-fs-pre.target. Aug 13 03:19:55.769629 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 03:19:55.772266 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 03:19:55.769777 ignition[717]: parsed url from cmdline: "" Aug 13 03:19:55.774626 systemd[1]: Reached target remote-fs.target. Aug 13 03:19:55.769785 ignition[717]: no config URL provided Aug 13 03:19:55.776416 systemd[1]: Starting dracut-pre-mount.service... Aug 13 03:19:55.769795 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 03:19:55.769831 ignition[717]: no config at "/usr/lib/ignition/user.ign" Aug 13 03:19:55.775470 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Aug 13 03:19:55.782382 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Aug 13 03:19:55.783562 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Aug 13 03:19:55.792147 systemd[1]: Finished dracut-pre-mount.service. Aug 13 03:19:55.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.810158 ignition[717]: GET result: OK Aug 13 03:19:55.810589 ignition[717]: parsing config with SHA512: c598d9cc570e08ac7d3317dd8f429fe82bd741d6685e69c55980ad5c758150cee5a16392965d80e7d9db863d0105ce2238c9c51f0dbb02b02d6c756a5e25f076 Aug 13 03:19:55.820038 unknown[717]: fetched base config from "system" Aug 13 03:19:55.820900 unknown[717]: fetched base config from "system" Aug 13 03:19:55.821726 unknown[717]: fetched user config from "openstack" Aug 13 03:19:55.823091 ignition[717]: fetch: fetch complete Aug 13 03:19:55.823869 ignition[717]: fetch: fetch passed Aug 13 03:19:55.824724 ignition[717]: Ignition finished successfully Aug 13 03:19:55.827080 systemd[1]: Finished ignition-fetch.service. Aug 13 03:19:55.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.829196 systemd[1]: Starting ignition-kargs.service... Aug 13 03:19:55.843187 ignition[735]: Ignition 2.14.0 Aug 13 03:19:55.843843 ignition[735]: Stage: kargs Aug 13 03:19:55.844023 ignition[735]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 03:19:55.844070 ignition[735]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 03:19:55.845455 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 03:19:55.847154 ignition[735]: kargs: kargs passed Aug 13 03:19:55.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.847246 ignition[735]: Ignition finished successfully Aug 13 03:19:55.848235 systemd[1]: Finished ignition-kargs.service. Aug 13 03:19:55.850133 systemd[1]: Starting ignition-disks.service... 
Aug 13 03:19:55.859952 ignition[740]: Ignition 2.14.0 Aug 13 03:19:55.859975 ignition[740]: Stage: disks Aug 13 03:19:55.860167 ignition[740]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 03:19:55.860201 ignition[740]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 03:19:55.861489 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 03:19:55.863116 ignition[740]: disks: disks passed Aug 13 03:19:55.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.864070 systemd[1]: Finished ignition-disks.service. Aug 13 03:19:55.863185 ignition[740]: Ignition finished successfully Aug 13 03:19:55.865032 systemd[1]: Reached target initrd-root-device.target. Aug 13 03:19:55.866137 systemd[1]: Reached target local-fs-pre.target. Aug 13 03:19:55.867371 systemd[1]: Reached target local-fs.target. Aug 13 03:19:55.868672 systemd[1]: Reached target sysinit.target. Aug 13 03:19:55.869925 systemd[1]: Reached target basic.target. Aug 13 03:19:55.872409 systemd[1]: Starting systemd-fsck-root.service... Aug 13 03:19:55.893374 systemd-fsck[747]: ROOT: clean, 629/1628000 files, 124064/1617920 blocks Aug 13 03:19:55.896955 systemd[1]: Finished systemd-fsck-root.service. Aug 13 03:19:55.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:55.898713 systemd[1]: Mounting sysroot.mount... Aug 13 03:19:55.911833 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 03:19:55.912159 systemd[1]: Mounted sysroot.mount. Aug 13 03:19:55.912920 systemd[1]: Reached target initrd-root-fs.target. Aug 13 03:19:55.915418 systemd[1]: Mounting sysroot-usr.mount... Aug 13 03:19:55.916594 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 13 03:19:55.917485 systemd[1]: Starting flatcar-openstack-hostname.service... Aug 13 03:19:55.918253 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 03:19:55.918321 systemd[1]: Reached target ignition-diskful.target. Aug 13 03:19:55.924973 systemd[1]: Mounted sysroot-usr.mount. Aug 13 03:19:55.927068 systemd[1]: Starting initrd-setup-root.service... Aug 13 03:19:55.941372 initrd-setup-root[758]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 03:19:55.951700 initrd-setup-root[766]: cut: /sysroot/etc/group: No such file or directory Aug 13 03:19:55.962305 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 03:19:55.970780 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 03:19:56.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:56.034168 systemd[1]: Finished initrd-setup-root.service. Aug 13 03:19:56.036318 systemd[1]: Starting ignition-mount.service... Aug 13 03:19:56.038166 systemd[1]: Starting sysroot-boot.service... 
Aug 13 03:19:56.060858 bash[802]: umount: /sysroot/usr/share/oem: not mounted. Aug 13 03:19:56.075401 ignition[803]: INFO : Ignition 2.14.0 Aug 13 03:19:56.075401 ignition[803]: INFO : Stage: mount Aug 13 03:19:56.077170 ignition[803]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 03:19:56.077170 ignition[803]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 03:19:56.079584 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 03:19:56.080581 ignition[803]: INFO : mount: mount passed Aug 13 03:19:56.080581 ignition[803]: INFO : Ignition finished successfully Aug 13 03:19:56.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:56.082888 systemd[1]: Finished ignition-mount.service. Aug 13 03:19:56.085233 coreos-metadata[753]: Aug 13 03:19:56.085 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Aug 13 03:19:56.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:56.091980 systemd[1]: Finished sysroot-boot.service. Aug 13 03:19:56.105740 coreos-metadata[753]: Aug 13 03:19:56.105 INFO Fetch successful Aug 13 03:19:56.106619 coreos-metadata[753]: Aug 13 03:19:56.106 INFO wrote hostname srv-pghwy.gb1.brightbox.com to /sysroot/etc/hostname Aug 13 03:19:56.109281 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Aug 13 03:19:56.109425 systemd[1]: Finished flatcar-openstack-hostname.service. Aug 13 03:19:56.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:56.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:56.415498 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 03:19:56.427835 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (810) Aug 13 03:19:56.447188 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 03:19:56.447263 kernel: BTRFS info (device vda6): using free space tree Aug 13 03:19:56.447283 kernel: BTRFS info (device vda6): has skinny extents Aug 13 03:19:56.453811 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 03:19:56.455751 systemd[1]: Starting ignition-files.service... 
Aug 13 03:19:56.479809 ignition[830]: INFO : Ignition 2.14.0 Aug 13 03:19:56.479809 ignition[830]: INFO : Stage: files Aug 13 03:19:56.481601 ignition[830]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 03:19:56.481601 ignition[830]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 03:19:56.481601 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 03:19:56.486113 ignition[830]: DEBUG : files: compiled without relabeling support, skipping Aug 13 03:19:56.486113 ignition[830]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 03:19:56.486113 ignition[830]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 03:19:56.489825 ignition[830]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 03:19:56.491375 ignition[830]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 03:19:56.494247 ignition[830]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 03:19:56.494247 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 03:19:56.494247 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 03:19:56.494247 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 03:19:56.494247 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 03:19:56.492350 unknown[830]: wrote ssh authorized keys file for user: core Aug 13 03:19:56.680517 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 03:19:56.959189 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 03:19:56.960693 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 03:19:56.962184 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 03:19:56.963430 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 03:19:56.964833 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 03:19:56.966466 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 03:19:56.966466 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 03:19:56.966466 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 03:19:56.966466 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 03:19:56.971692 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Aug 13 03:19:56.971692 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 03:19:56.971692 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 03:19:56.971692 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 03:19:56.971692 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 03:19:56.971692 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 03:19:56.986975 systemd-networkd[710]: eth0: Gained IPv6LL Aug 13 03:19:57.296197 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 03:19:58.494904 systemd-networkd[710]: eth0: Ignoring DHCPv6 address 2a02:1348:179:86bf:24:19ff:fee6:1afe/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:86bf:24:19ff:fee6:1afe/64 assigned by NDisc. Aug 13 03:19:58.494917 systemd-networkd[710]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Aug 13 03:19:58.731018 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 03:19:58.731018 ignition[830]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Aug 13 03:19:58.731018 ignition[830]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Aug 13 03:19:58.731018 ignition[830]: INFO : files: op(d): [started] processing unit "containerd.service" Aug 13 03:19:58.738201 ignition[830]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 03:19:58.738201 ignition[830]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 03:19:58.738201 ignition[830]: INFO : files: op(d): [finished] processing unit "containerd.service" Aug 13 03:19:58.738201 ignition[830]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Aug 13 03:19:58.738201 ignition[830]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 03:19:58.738201 ignition[830]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 03:19:58.738201 ignition[830]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Aug 13 03:19:58.738201 ignition[830]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 03:19:58.738201 ignition[830]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 03:19:58.738201 ignition[830]: INFO : files: op(12): [started] setting preset to enabled for 
"prepare-helm.service" Aug 13 03:19:58.738201 ignition[830]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 03:19:58.760875 kernel: kauditd_printk_skb: 28 callbacks suppressed Aug 13 03:19:58.760916 kernel: audit: type=1130 audit(1755055198.747:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.745297 systemd[1]: Finished ignition-files.service. Aug 13 03:19:58.763820 ignition[830]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 03:19:58.763820 ignition[830]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 03:19:58.763820 ignition[830]: INFO : files: files passed Aug 13 03:19:58.763820 ignition[830]: INFO : Ignition finished successfully Aug 13 03:19:58.775981 kernel: audit: type=1130 audit(1755055198.769:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.749460 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 03:19:58.786686 kernel: audit: type=1130 audit(1755055198.775:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.786713 kernel: audit: type=1131 audit(1755055198.775:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.761512 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 03:19:58.788467 initrd-setup-root-after-ignition[853]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 03:19:58.764720 systemd[1]: Starting ignition-quench.service... Aug 13 03:19:58.767278 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 03:19:58.770492 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 03:19:58.770638 systemd[1]: Finished ignition-quench.service. Aug 13 03:19:58.776785 systemd[1]: Reached target ignition-complete.target. Aug 13 03:19:58.788513 systemd[1]: Starting initrd-parse-etc.service... Aug 13 03:19:58.809776 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Aug 13 03:19:58.810896 systemd[1]: Finished initrd-parse-etc.service. Aug 13 03:19:58.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.817057 systemd[1]: Reached target initrd-fs.target. Aug 13 03:19:58.824524 kernel: audit: type=1130 audit(1755055198.811:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.824576 kernel: audit: type=1131 audit(1755055198.816:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.822864 systemd[1]: Reached target initrd.target. Aug 13 03:19:58.823529 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 03:19:58.824600 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 03:19:58.841980 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 03:19:58.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.844165 systemd[1]: Starting initrd-cleanup.service... Aug 13 03:19:58.863049 kernel: audit: type=1130 audit(1755055198.841:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.871093 systemd[1]: Stopped target nss-lookup.target. Aug 13 03:19:58.871917 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 03:19:58.873243 systemd[1]: Stopped target timers.target. Aug 13 03:19:58.874490 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 03:19:58.881013 kernel: audit: type=1131 audit(1755055198.874:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.874675 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 03:19:58.875936 systemd[1]: Stopped target initrd.target. Aug 13 03:19:58.881851 systemd[1]: Stopped target basic.target. Aug 13 03:19:58.883103 systemd[1]: Stopped target ignition-complete.target. Aug 13 03:19:58.884317 systemd[1]: Stopped target ignition-diskful.target. Aug 13 03:19:58.885826 systemd[1]: Stopped target initrd-root-device.target. Aug 13 03:19:58.887198 systemd[1]: Stopped target remote-fs.target. Aug 13 03:19:58.888435 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 03:19:58.889833 systemd[1]: Stopped target sysinit.target. Aug 13 03:19:58.891066 systemd[1]: Stopped target local-fs.target. Aug 13 03:19:58.892372 systemd[1]: Stopped target local-fs-pre.target. Aug 13 03:19:58.893782 systemd[1]: Stopped target swap.target. 
Aug 13 03:19:58.901217 kernel: audit: type=1131 audit(1755055198.895:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.894905 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 03:19:58.895184 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 03:19:58.908407 kernel: audit: type=1131 audit(1755055198.902:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.896380 systemd[1]: Stopped target cryptsetup.target. Aug 13 03:19:58.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.901948 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 03:19:58.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.902173 systemd[1]: Stopped dracut-initqueue.service. Aug 13 03:19:58.903391 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 03:19:58.903614 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 03:19:58.909465 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 03:19:58.909693 systemd[1]: Stopped ignition-files.service. Aug 13 03:19:58.912189 systemd[1]: Stopping ignition-mount.service... Aug 13 03:19:58.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.916752 systemd[1]: Stopping iscsiuio.service... Aug 13 03:19:58.928734 ignition[868]: INFO : Ignition 2.14.0 Aug 13 03:19:58.928734 ignition[868]: INFO : Stage: umount Aug 13 03:19:58.928734 ignition[868]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 03:19:58.928734 ignition[868]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 03:19:58.928734 ignition[868]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 03:19:58.928734 ignition[868]: INFO : umount: umount passed Aug 13 03:19:58.928734 ignition[868]: INFO : Ignition finished successfully Aug 13 03:19:58.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:19:58.917343 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 03:19:58.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.917542 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 03:19:58.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.924753 systemd[1]: Stopping sysroot-boot.service... Aug 13 03:19:58.925523 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 03:19:58.925754 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 03:19:58.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.926692 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 03:19:58.926870 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 03:19:58.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.930474 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 03:19:58.930669 systemd[1]: Stopped iscsiuio.service. Aug 13 03:19:58.937902 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 03:19:58.938883 systemd[1]: Stopped ignition-mount.service. Aug 13 03:19:58.941122 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 03:19:58.941241 systemd[1]: Stopped ignition-disks.service. Aug 13 03:19:58.943224 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 03:19:58.943289 systemd[1]: Stopped ignition-kargs.service. Aug 13 03:19:58.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.945233 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 03:19:58.945297 systemd[1]: Stopped ignition-fetch.service. Aug 13 03:19:58.945953 systemd[1]: Stopped target network.target. Aug 13 03:19:58.946519 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 03:19:58.946584 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 03:19:58.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.947327 systemd[1]: Stopped target paths.target. 
Aug 13 03:19:58.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.949392 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 03:19:58.950297 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 03:19:58.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.952328 systemd[1]: Stopped target slices.target. Aug 13 03:19:58.952937 systemd[1]: Stopped target sockets.target. Aug 13 03:19:58.972000 audit: BPF prog-id=6 op=UNLOAD Aug 13 03:19:58.955068 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 03:19:58.955119 systemd[1]: Closed iscsid.socket. Aug 13 03:19:58.956251 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 03:19:58.956314 systemd[1]: Closed iscsiuio.socket. Aug 13 03:19:58.957417 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 03:19:58.957486 systemd[1]: Stopped ignition-setup.service. Aug 13 03:19:58.959161 systemd[1]: Stopping systemd-networkd.service... Aug 13 03:19:58.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.960397 systemd[1]: Stopping systemd-resolved.service... Aug 13 03:19:58.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.963473 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 03:19:58.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.964356 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 03:19:58.964492 systemd[1]: Finished initrd-cleanup.service. Aug 13 03:19:58.965410 systemd-networkd[710]: eth0: DHCPv6 lease lost Aug 13 03:19:58.992000 audit: BPF prog-id=9 op=UNLOAD Aug 13 03:19:58.968264 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 03:19:58.968408 systemd[1]: Stopped systemd-resolved.service. Aug 13 03:19:58.970314 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 03:19:58.970454 systemd[1]: Stopped systemd-networkd.service. Aug 13 03:19:58.973724 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 03:19:58.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.973782 systemd[1]: Closed systemd-networkd.socket. Aug 13 03:19:58.976361 systemd[1]: Stopping network-cleanup.service... Aug 13 03:19:58.977183 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 03:19:59.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.977283 systemd[1]: Stopped parse-ip-for-networkd.service. 
Aug 13 03:19:58.981960 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 03:19:58.982041 systemd[1]: Stopped systemd-sysctl.service. Aug 13 03:19:59.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.983432 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 03:19:59.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.983496 systemd[1]: Stopped systemd-modules-load.service. Aug 13 03:19:59.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:58.984480 systemd[1]: Stopping systemd-udevd.service... Aug 13 03:19:58.993378 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 03:19:58.997870 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 03:19:58.998090 systemd[1]: Stopped systemd-udevd.service. Aug 13 03:19:59.002420 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 03:19:59.002562 systemd[1]: Stopped network-cleanup.service. Aug 13 03:19:59.003827 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 03:19:59.003901 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 03:19:59.004991 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 03:19:59.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:59.005050 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 03:19:59.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:59.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:59.006910 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 03:19:59.006988 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 03:19:59.008264 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 03:19:59.008349 systemd[1]: Stopped dracut-cmdline.service. Aug 13 03:19:59.009563 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 03:19:59.009630 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 03:19:59.012263 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 03:19:59.013092 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 03:19:59.013175 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 03:19:59.024318 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 03:19:59.024460 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 03:19:59.102454 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 03:19:59.102672 systemd[1]: Stopped sysroot-boot.service. 
Aug 13 03:19:59.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:59.104410 systemd[1]: Reached target initrd-switch-root.target. Aug 13 03:19:59.105340 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 03:19:59.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:19:59.105418 systemd[1]: Stopped initrd-setup-root.service. Aug 13 03:19:59.107907 systemd[1]: Starting initrd-switch-root.service... Aug 13 03:19:59.120839 systemd[1]: Switching root. Aug 13 03:19:59.121000 audit: BPF prog-id=5 op=UNLOAD Aug 13 03:19:59.121000 audit: BPF prog-id=4 op=UNLOAD Aug 13 03:19:59.121000 audit: BPF prog-id=3 op=UNLOAD Aug 13 03:19:59.122000 audit: BPF prog-id=8 op=UNLOAD Aug 13 03:19:59.122000 audit: BPF prog-id=7 op=UNLOAD Aug 13 03:19:59.142472 iscsid[715]: iscsid shutting down. Aug 13 03:19:59.143358 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Aug 13 03:19:59.143458 systemd-journald[201]: Journal stopped Aug 13 03:20:03.326683 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 03:20:03.326813 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 03:20:03.326849 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 03:20:03.326871 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 03:20:03.326891 kernel: SELinux: policy capability open_perms=1 Aug 13 03:20:03.326911 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 03:20:03.326947 kernel: SELinux: policy capability always_check_network=0 Aug 13 03:20:03.326969 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 03:20:03.327001 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 03:20:03.327022 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 03:20:03.327047 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 03:20:03.327071 systemd[1]: Successfully loaded SELinux policy in 62.053ms. Aug 13 03:20:03.327105 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.362ms. Aug 13 03:20:03.327129 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 03:20:03.327164 systemd[1]: Detected virtualization kvm. Aug 13 03:20:03.327188 systemd[1]: Detected architecture x86-64. Aug 13 03:20:03.327209 systemd[1]: Detected first boot. Aug 13 03:20:03.327247 systemd[1]: Hostname set to <srv-pghwy.gb1.brightbox.com>. Aug 13 03:20:03.327270 systemd[1]: Initializing machine ID from VM UUID. Aug 13 03:20:03.327304 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 03:20:03.327327 systemd[1]: Populated /etc with preset unit settings. Aug 13 03:20:03.327365 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 03:20:03.327390 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead.
Support for MemoryLimit= will be removed soon. Aug 13 03:20:03.327419 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 03:20:03.327444 systemd[1]: Queued start job for default target multi-user.target. Aug 13 03:20:03.327466 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 13 03:20:03.327488 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 03:20:03.327527 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 03:20:03.327562 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Aug 13 03:20:03.327586 systemd[1]: Created slice system-getty.slice. Aug 13 03:20:03.327607 systemd[1]: Created slice system-modprobe.slice. Aug 13 03:20:03.327628 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 03:20:03.327651 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 03:20:03.327673 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 03:20:03.327694 systemd[1]: Created slice user.slice. Aug 13 03:20:03.327715 systemd[1]: Started systemd-ask-password-console.path. Aug 13 03:20:03.327736 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 03:20:03.327769 systemd[1]: Set up automount boot.automount. Aug 13 03:20:03.327907 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 03:20:03.327936 systemd[1]: Reached target integritysetup.target. Aug 13 03:20:03.327959 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 03:20:03.328036 systemd[1]: Reached target remote-fs.target. Aug 13 03:20:03.328060 systemd[1]: Reached target slices.target. Aug 13 03:20:03.328082 systemd[1]: Reached target swap.target. Aug 13 03:20:03.328128 systemd[1]: Reached target torcx.target. Aug 13 03:20:03.328151 systemd[1]: Reached target veritysetup.target. Aug 13 03:20:03.328173 systemd[1]: Listening on systemd-coredump.socket. Aug 13 03:20:03.328196 systemd[1]: Listening on systemd-initctl.socket. Aug 13 03:20:03.328218 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 03:20:03.328240 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 03:20:03.328262 systemd[1]: Listening on systemd-journald.socket. Aug 13 03:20:03.328283 systemd[1]: Listening on systemd-networkd.socket. Aug 13 03:20:03.328305 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 03:20:03.328338 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 03:20:03.328362 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 03:20:03.328384 systemd[1]: Mounting dev-hugepages.mount... Aug 13 03:20:03.328405 systemd[1]: Mounting dev-mqueue.mount... Aug 13 03:20:03.328434 systemd[1]: Mounting media.mount... Aug 13 03:20:03.328457 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 03:20:03.328486 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 03:20:03.328523 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 03:20:03.328546 systemd[1]: Mounting tmp.mount... Aug 13 03:20:03.328580 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 03:20:03.328604 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 03:20:03.328629 systemd[1]: Starting kmod-static-nodes.service... Aug 13 03:20:03.328652 systemd[1]: Starting modprobe@configfs.service... Aug 13 03:20:03.328674 systemd[1]: Starting modprobe@dm_mod.service... 
Aug 13 03:20:03.328697 systemd[1]: Starting modprobe@drm.service... Aug 13 03:20:03.328719 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 03:20:03.328741 systemd[1]: Starting modprobe@fuse.service... Aug 13 03:20:03.328762 systemd[1]: Starting modprobe@loop.service... Aug 13 03:20:03.328796 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 03:20:03.328836 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 03:20:03.328858 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 03:20:03.328879 systemd[1]: Starting systemd-journald.service... Aug 13 03:20:03.328902 systemd[1]: Starting systemd-modules-load.service... Aug 13 03:20:03.328924 systemd[1]: Starting systemd-network-generator.service... Aug 13 03:20:03.328962 systemd[1]: Starting systemd-remount-fs.service... Aug 13 03:20:03.328985 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 03:20:03.329007 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 03:20:03.329042 systemd[1]: Mounted dev-hugepages.mount. Aug 13 03:20:03.329066 systemd[1]: Mounted dev-mqueue.mount. Aug 13 03:20:03.329088 systemd[1]: Mounted media.mount. Aug 13 03:20:03.329110 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 03:20:03.329132 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 03:20:03.329154 systemd[1]: Mounted tmp.mount. Aug 13 03:20:03.329175 systemd[1]: Finished kmod-static-nodes.service. Aug 13 03:20:03.329198 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 03:20:03.329221 systemd[1]: Finished modprobe@configfs.service. Aug 13 03:20:03.329254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 03:20:03.329278 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 03:20:03.329299 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 03:20:03.329325 systemd-journald[1010]: Journal started Aug 13 03:20:03.329396 systemd-journald[1010]: Runtime Journal (/run/log/journal/b4f549c48a034f4fbe8c3fe51028d7fb) is 4.7M, max 38.1M, 33.3M free. Aug 13 03:20:03.097000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 03:20:03.097000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 03:20:03.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:20:03.323000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 03:20:03.323000 audit[1010]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe02e33970 a2=4000 a3=7ffe02e33a0c items=0 ppid=1 pid=1010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:03.323000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 03:20:03.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.339361 systemd[1]: Finished modprobe@drm.service. Aug 13 03:20:03.339434 kernel: loop: module loaded Aug 13 03:20:03.345224 systemd[1]: Started systemd-journald.service. Aug 13 03:20:03.345285 kernel: fuse: init (API version 7.34) Aug 13 03:20:03.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.339100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 03:20:03.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.344965 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 03:20:03.346331 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 03:20:03.346581 systemd[1]: Finished modprobe@fuse.service. Aug 13 03:20:03.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:20:03.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.348171 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 03:20:03.348430 systemd[1]: Finished modprobe@loop.service. Aug 13 03:20:03.349630 systemd[1]: Finished systemd-modules-load.service. Aug 13 03:20:03.352640 systemd[1]: Finished systemd-network-generator.service. Aug 13 03:20:03.353857 systemd[1]: Finished systemd-remount-fs.service. Aug 13 03:20:03.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.356469 systemd[1]: Reached target network-pre.target. Aug 13 03:20:03.363404 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 03:20:03.366985 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 03:20:03.367719 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 03:20:03.371511 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 03:20:03.378119 systemd[1]: Starting systemd-journal-flush.service... Aug 13 03:20:03.380937 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 03:20:03.384106 systemd[1]: Starting systemd-random-seed.service... Aug 13 03:20:03.384978 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 03:20:03.388633 systemd[1]: Starting systemd-sysctl.service... Aug 13 03:20:03.397996 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 03:20:03.398886 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 03:20:03.403362 systemd-journald[1010]: Time spent on flushing to /var/log/journal/b4f549c48a034f4fbe8c3fe51028d7fb is 76.069ms for 1221 entries. Aug 13 03:20:03.403362 systemd-journald[1010]: System Journal (/var/log/journal/b4f549c48a034f4fbe8c3fe51028d7fb) is 8.0M, max 584.8M, 576.8M free. Aug 13 03:20:03.496256 systemd-journald[1010]: Received client request to flush runtime journal. Aug 13 03:20:03.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:20:03.418743 systemd[1]: Finished systemd-random-seed.service. Aug 13 03:20:03.419654 systemd[1]: Reached target first-boot-complete.target. Aug 13 03:20:03.441565 systemd[1]: Finished systemd-sysctl.service. Aug 13 03:20:03.449437 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 03:20:03.452274 systemd[1]: Starting systemd-sysusers.service... Aug 13 03:20:03.498735 systemd[1]: Finished systemd-sysusers.service. Aug 13 03:20:03.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.504163 systemd[1]: Finished systemd-journal-flush.service. Aug 13 03:20:03.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.509082 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 03:20:03.571160 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 03:20:03.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.583779 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 03:20:03.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:03.586669 systemd[1]: Starting systemd-udev-settle.service... Aug 13 03:20:03.601724 udevadm[1064]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 03:20:04.112421 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 03:20:04.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.116592 kernel: kauditd_printk_skb: 76 callbacks suppressed Aug 13 03:20:04.116684 kernel: audit: type=1130 audit(1755055204.114:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.124487 systemd[1]: Starting systemd-udevd.service... Aug 13 03:20:04.153194 systemd-udevd[1066]: Using default interface naming scheme 'v252'. Aug 13 03:20:04.185119 systemd[1]: Started systemd-udevd.service. Aug 13 03:20:04.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.192153 systemd[1]: Starting systemd-networkd.service... Aug 13 03:20:04.195295 kernel: audit: type=1130 audit(1755055204.188:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.206613 systemd[1]: Starting systemd-userdbd.service... 
Aug 13 03:20:04.263739 systemd[1]: Found device dev-ttyS0.device. Aug 13 03:20:04.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.297646 systemd[1]: Started systemd-userdbd.service. Aug 13 03:20:04.307819 kernel: audit: type=1130 audit(1755055204.297:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.417045 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Aug 13 03:20:04.424886 systemd-networkd[1075]: lo: Link UP Aug 13 03:20:04.425373 systemd-networkd[1075]: lo: Gained carrier Aug 13 03:20:04.426427 systemd-networkd[1075]: Enumeration completed Aug 13 03:20:04.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.426739 systemd[1]: Started systemd-networkd.service. Aug 13 03:20:04.428207 systemd-networkd[1075]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 03:20:04.434759 kernel: audit: type=1130 audit(1755055204.426:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.435283 systemd-networkd[1075]: eth0: Link UP Aug 13 03:20:04.435295 systemd-networkd[1075]: eth0: Gained carrier Aug 13 03:20:04.446581 kernel: ACPI: button: Power Button [PWRF] Aug 13 03:20:04.445807 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Aug 13 03:20:04.450830 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 03:20:04.459009 systemd-networkd[1075]: eth0: DHCPv4 address 10.230.26.254/30, gateway 10.230.26.253 acquired from 10.230.26.253 Aug 13 03:20:04.498000 audit[1077]: AVC avc: denied { confidentiality } for pid=1077 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 03:20:04.520629 kernel: audit: type=1400 audit(1755055204.498:120): avc: denied { confidentiality } for pid=1077 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 03:20:04.498000 audit[1077]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55da6192a420 a1=338ac a2=7fcf5a9d0bc5 a3=5 items=110 ppid=1066 pid=1077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:04.542858 kernel: audit: type=1300 audit(1755055204.498:120): arch=c000003e syscall=175 success=yes exit=0 a0=55da6192a420 a1=338ac a2=7fcf5a9d0bc5 a3=5 items=110 ppid=1066 pid=1077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:04.498000 audit: CWD cwd="/" Aug 13 03:20:04.498000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.551407 kernel: audit: type=1307 audit(1755055204.498:120): cwd="/" Aug 13 03:20:04.551496 kernel: audit: type=1302 audit(1755055204.498:120): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=1 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.559831 kernel: audit: type=1302 audit(1755055204.498:120): item=1 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=2 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.566840 kernel: audit: type=1302 audit(1755055204.498:120): item=2 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=3 name=(null) inode=14281 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=4 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=5 name=(null) inode=14282 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=6 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=7 name=(null) inode=14283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=8 name=(null) inode=14283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=9 name=(null) inode=14284 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=10 name=(null) inode=14283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=11 name=(null) inode=14285 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=12 name=(null) inode=14283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=13 name=(null) inode=14286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=14 name=(null) inode=14283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=15 name=(null) inode=14287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=16 name=(null) inode=14283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=17 name=(null) inode=14288 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=18 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=19 name=(null) inode=14289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=20 name=(null) inode=14289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=21 name=(null) inode=14290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=22 name=(null) inode=14289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=23 name=(null) inode=14291 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=24 name=(null) inode=14289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=25 name=(null) inode=14292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=26 name=(null) inode=14289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=27 name=(null) inode=14293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=28 name=(null) inode=14289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=29 name=(null) inode=14294 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=30 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=31 name=(null) inode=14295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=32 name=(null) inode=14295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=33 name=(null) inode=14296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=34 name=(null) inode=14295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=35 name=(null) inode=14297 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=36 name=(null) inode=14295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=37 name=(null) inode=14298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH 
item=38 name=(null) inode=14295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=39 name=(null) inode=14299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=40 name=(null) inode=14295 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=41 name=(null) inode=14300 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=42 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=43 name=(null) inode=14301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=44 name=(null) inode=14301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=45 name=(null) inode=14302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=46 name=(null) inode=14301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=47 name=(null) inode=14303 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=48 name=(null) inode=14301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=49 name=(null) inode=14304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=50 name=(null) inode=14301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=51 name=(null) inode=14305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=52 name=(null) inode=14301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=53 name=(null) inode=14306 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=55 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=56 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=57 name=(null) inode=14308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=58 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=59 name=(null) inode=14309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=60 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=61 name=(null) inode=14310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=62 name=(null) inode=14310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=63 name=(null) inode=14311 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=64 name=(null) inode=14310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=65 name=(null) inode=14312 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=66 name=(null) inode=14310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=67 name=(null) inode=14313 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=68 name=(null) inode=14310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=69 name=(null) inode=14314 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=70 name=(null) inode=14310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=71 name=(null) inode=14315 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=72 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=73 name=(null) inode=14316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=74 name=(null) inode=14316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=75 name=(null) inode=14317 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=76 name=(null) inode=14316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=77 name=(null) inode=14318 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=78 name=(null) inode=14316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=79 name=(null) inode=14319 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=80 name=(null) inode=14316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=81 name=(null) inode=14320 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=82 name=(null) inode=14316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=83 name=(null) inode=14321 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=84 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=85 name=(null) inode=14322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=86 name=(null) inode=14322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=87 
name=(null) inode=14323 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=88 name=(null) inode=14322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=89 name=(null) inode=14324 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=90 name=(null) inode=14322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=91 name=(null) inode=14325 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=92 name=(null) inode=14322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=93 name=(null) inode=14326 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=94 name=(null) inode=14322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=95 name=(null) inode=14327 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=96 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=97 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=98 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=99 name=(null) inode=14329 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=100 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=101 name=(null) inode=14330 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=102 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=103 name=(null) inode=14331 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=104 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=105 name=(null) inode=14332 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=106 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=107 name=(null) inode=14333 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PATH item=109 name=(null) inode=14334 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:20:04.498000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 03:20:04.594835 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Aug 13 03:20:04.594914 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 03:20:04.609086 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 03:20:04.609439 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 03:20:04.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.749733 systemd[1]: Finished systemd-udev-settle.service. Aug 13 03:20:04.753086 systemd[1]: Starting lvm2-activation-early.service... Aug 13 03:20:04.778261 lvm[1096]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 03:20:04.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.812452 systemd[1]: Finished lvm2-activation-early.service. Aug 13 03:20:04.813381 systemd[1]: Reached target cryptsetup.target. Aug 13 03:20:04.816643 systemd[1]: Starting lvm2-activation.service... Aug 13 03:20:04.823210 lvm[1098]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 03:20:04.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.850313 systemd[1]: Finished lvm2-activation.service. Aug 13 03:20:04.851197 systemd[1]: Reached target local-fs-pre.target. Aug 13 03:20:04.852269 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 03:20:04.852310 systemd[1]: Reached target local-fs.target. 
Aug 13 03:20:04.853230 systemd[1]: Reached target machines.target. Aug 13 03:20:04.856097 systemd[1]: Starting ldconfig.service... Aug 13 03:20:04.857661 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 03:20:04.857720 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 03:20:04.860579 systemd[1]: Starting systemd-boot-update.service... Aug 13 03:20:04.863432 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 03:20:04.866377 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 03:20:04.871398 systemd[1]: Starting systemd-sysext.service... Aug 13 03:20:04.876787 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1101 (bootctl) Aug 13 03:20:04.880173 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 03:20:04.894232 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 03:20:04.905399 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 03:20:04.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:04.908731 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 03:20:04.909083 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 03:20:04.932876 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 03:20:05.067004 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 03:20:05.068031 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 03:20:05.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.094364 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 03:20:05.115867 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 03:20:05.138381 (sd-sysext)[1117]: Using extensions 'kubernetes'. Aug 13 03:20:05.141276 (sd-sysext)[1117]: Merged extensions into '/usr'. Aug 13 03:20:05.160537 systemd-fsck[1113]: fsck.fat 4.2 (2021-01-31) Aug 13 03:20:05.160537 systemd-fsck[1113]: /dev/vda1: 789 files, 119324/258078 clusters Aug 13 03:20:05.177224 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 03:20:05.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.185963 systemd[1]: Mounting boot.mount... Aug 13 03:20:05.187017 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 03:20:05.189558 systemd[1]: Mounting usr-share-oem.mount... Aug 13 03:20:05.190583 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 03:20:05.196361 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 03:20:05.198634 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 03:20:05.201061 systemd[1]: Starting modprobe@loop.service... 
Aug 13 03:20:05.203705 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 03:20:05.203977 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 03:20:05.204247 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 03:20:05.228527 systemd[1]: Mounted boot.mount. Aug 13 03:20:05.230535 systemd[1]: Mounted usr-share-oem.mount. Aug 13 03:20:05.234152 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 03:20:05.234473 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 03:20:05.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.236768 systemd[1]: Finished systemd-sysext.service. Aug 13 03:20:05.239364 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 03:20:05.239602 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 03:20:05.241335 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 03:20:05.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.242519 systemd[1]: Finished modprobe@loop.service. Aug 13 03:20:05.253764 systemd[1]: Starting ensure-sysext.service... Aug 13 03:20:05.255166 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 03:20:05.255681 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 03:20:05.258102 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 03:20:05.270610 systemd[1]: Finished systemd-boot-update.service. Aug 13 03:20:05.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:20:05.271674 systemd[1]: Reloading. Aug 13 03:20:05.292314 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 03:20:05.294253 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 03:20:05.298881 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 03:20:05.457758 ldconfig[1100]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 03:20:05.471923 /usr/lib/systemd/system-generators/torcx-generator[1159]: time="2025-08-13T03:20:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 03:20:05.473598 /usr/lib/systemd/system-generators/torcx-generator[1159]: time="2025-08-13T03:20:05Z" level=info msg="torcx already run" Aug 13 03:20:05.563351 systemd-networkd[1075]: eth0: Gained IPv6LL Aug 13 03:20:05.595317 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 03:20:05.595369 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 03:20:05.624599 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 03:20:05.716390 systemd[1]: Finished ldconfig.service. Aug 13 03:20:05.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.718815 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 03:20:05.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.722979 systemd[1]: Starting audit-rules.service... Aug 13 03:20:05.725933 systemd[1]: Starting clean-ca-certificates.service... Aug 13 03:20:05.729369 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 03:20:05.737053 systemd[1]: Starting systemd-resolved.service... Aug 13 03:20:05.740374 systemd[1]: Starting systemd-timesyncd.service... Aug 13 03:20:05.746765 systemd[1]: Starting systemd-update-utmp.service... Aug 13 03:20:05.754000 audit[1218]: SYSTEM_BOOT pid=1218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.755230 systemd[1]: Finished clean-ca-certificates.service. Aug 13 03:20:05.767785 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Aug 13 03:20:05.770078 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 03:20:05.773959 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 03:20:05.779221 systemd[1]: Starting modprobe@loop.service... Aug 13 03:20:05.780085 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 03:20:05.780357 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 03:20:05.780598 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 03:20:05.782680 systemd[1]: Finished systemd-update-utmp.service. Aug 13 03:20:05.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.788463 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 03:20:05.788699 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 03:20:05.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.794057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 03:20:05.794294 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 03:20:05.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.797278 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 03:20:05.797539 systemd[1]: Finished modprobe@loop.service. Aug 13 03:20:05.800452 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 03:20:05.800601 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 03:20:05.807404 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 03:20:05.813530 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 03:20:05.818002 systemd[1]: Starting modprobe@efi_pstore.service... 
Aug 13 03:20:05.820407 systemd[1]: Starting modprobe@loop.service... Aug 13 03:20:05.821988 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 03:20:05.822233 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 03:20:05.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.823491 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 03:20:05.827182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 03:20:05.827475 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 03:20:05.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.830366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 03:20:05.830636 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 03:20:05.832207 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 03:20:05.837615 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 03:20:05.841663 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 03:20:05.845385 systemd[1]: Starting modprobe@drm.service... Aug 13 03:20:05.849092 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 03:20:05.854110 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 03:20:05.854595 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 03:20:05.858614 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 03:20:05.861971 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 03:20:05.863940 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 03:20:05.864221 systemd[1]: Finished modprobe@loop.service. Aug 13 03:20:05.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 03:20:05.871730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 03:20:05.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.872753 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 03:20:05.874525 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 03:20:05.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.880562 systemd[1]: Finished ensure-sysext.service. Aug 13 03:20:05.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.885025 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 03:20:05.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.897254 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 03:20:05.897577 systemd[1]: Finished modprobe@drm.service. Aug 13 03:20:05.901576 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 03:20:05.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.925870 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 03:20:05.926499 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 03:20:05.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.928371 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 03:20:05.928445 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 03:20:05.931441 systemd[1]: Starting systemd-update-done.service... Aug 13 03:20:05.933964 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
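The repeated "was skipped because of an unmet condition check" entries above come from systemd evaluating each unit's Condition*= directives before starting it. A rough sketch of the same checks follows; the unit names and paths are copied from the log entries, but this is only an illustration of the semantics, not how systemd itself evaluates them.

```python
# Sketch of the condition checks systemd reports as unmet in the entries above.
import os

checks = {
    # ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-...
    "systemd-boot-system-token.service":
        os.path.exists("/sys/firmware/efi/efivars/"
                       "LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f"),
    # ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt ('!' negates)
    "update-ca-certificates.service":
        not os.path.islink("/etc/ssl/certs/ca-certificates.crt"),
    # ConditionDirectoryNotEmpty=/sys/fs/pstore
    "systemd-pstore.service":
        os.path.isdir("/sys/fs/pstore") and bool(os.listdir("/sys/fs/pstore")),
}

for unit, met in checks.items():
    print(f"{unit}: {'start' if met else 'skipped (condition not met)'}")
```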
Aug 13 03:20:05.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:05.946985 systemd[1]: Finished systemd-update-done.service. Aug 13 03:20:05.969000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 03:20:05.969000 audit[1259]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa02737e0 a2=420 a3=0 items=0 ppid=1212 pid=1259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:05.969000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 03:20:05.971503 systemd[1]: Finished audit-rules.service. Aug 13 03:20:05.972752 augenrules[1259]: No rules Aug 13 03:20:05.998166 systemd[1]: Started systemd-timesyncd.service. Aug 13 03:20:05.999045 systemd[1]: Reached target time-set.target. Aug 13 03:20:05.999671 systemd-resolved[1215]: Positive Trust Anchors: Aug 13 03:20:05.999690 systemd-resolved[1215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 03:20:05.999733 systemd-resolved[1215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 03:20:06.009346 systemd-resolved[1215]: Using system hostname 'srv-pghwy.gb1.brightbox.com'. Aug 13 03:20:06.012470 systemd[1]: Started systemd-resolved.service. Aug 13 03:20:06.013335 systemd[1]: Reached target network.target. Aug 13 03:20:06.013977 systemd[1]: Reached target network-online.target. Aug 13 03:20:06.014613 systemd[1]: Reached target nss-lookup.target. Aug 13 03:20:06.015275 systemd[1]: Reached target sysinit.target. Aug 13 03:20:06.016043 systemd[1]: Started motdgen.path. Aug 13 03:20:06.016731 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 03:20:06.017720 systemd[1]: Started logrotate.timer. Aug 13 03:20:06.018488 systemd[1]: Started mdadm.timer. Aug 13 03:20:06.019112 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 03:20:06.019741 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 03:20:06.019816 systemd[1]: Reached target paths.target. Aug 13 03:20:06.020417 systemd[1]: Reached target timers.target. Aug 13 03:20:06.021680 systemd[1]: Listening on dbus.socket. Aug 13 03:20:06.024562 systemd[1]: Starting docker.socket... Aug 13 03:20:06.027575 systemd[1]: Listening on sshd.socket. Aug 13 03:20:06.028617 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 03:20:06.029305 systemd[1]: Listening on docker.socket. Aug 13 03:20:06.030201 systemd[1]: Reached target sockets.target. Aug 13 03:20:06.030995 systemd[1]: Reached target basic.target. 
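The audit records above capture the triggering command line only as a hex-encoded, NUL-separated PROCTITLE field. A small sketch (not part of the log) that decodes the value recorded for the auditctl invocation:

```python
# Decode an audit PROCTITLE field: hex dump of the argv, NUL-separated.
# The value below is copied from the SYSCALL/PROCTITLE record above.
hex_title = ("2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469"
             "742E72756C6573")
argv = [part.decode() for part in bytes.fromhex(hex_title).split(b"\x00")]
print(argv)   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```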
Aug 13 03:20:06.032011 systemd[1]: System is tainted: cgroupsv1 Aug 13 03:20:06.032253 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 03:20:06.032452 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 03:20:06.034702 systemd[1]: Starting containerd.service... Aug 13 03:20:06.037525 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Aug 13 03:20:06.040612 systemd[1]: Starting dbus.service... Aug 13 03:20:06.043661 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 03:20:06.047195 systemd[1]: Starting extend-filesystems.service... Aug 13 03:20:06.050266 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 03:20:06.058178 systemd[1]: Starting kubelet.service... Aug 13 03:20:06.075761 jq[1274]: false Aug 13 03:20:06.061200 systemd[1]: Starting motdgen.service... Aug 13 03:20:06.068734 systemd[1]: Starting prepare-helm.service... Aug 13 03:20:06.074034 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 03:20:06.084179 systemd[1]: Starting sshd-keygen.service... Aug 13 03:20:06.088718 systemd[1]: Starting systemd-logind.service... Aug 13 03:20:06.089488 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 03:20:06.089648 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 03:20:06.093955 systemd[1]: Starting update-engine.service... Aug 13 03:20:06.098219 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 03:20:06.109066 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 03:20:06.109540 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 03:20:06.117883 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 03:20:06.160043 jq[1291]: true Aug 13 03:20:06.118309 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 03:20:06.164980 tar[1296]: linux-amd64/helm Aug 13 03:20:06.174501 jq[1299]: true Aug 13 03:20:07.400063 systemd-resolved[1215]: Clock change detected. Flushing caches. Aug 13 03:20:07.400500 systemd-timesyncd[1216]: Contacted time server 178.215.228.24:123 (0.flatcar.pool.ntp.org). Aug 13 03:20:07.400736 systemd-timesyncd[1216]: Initial clock synchronization to Wed 2025-08-13 03:20:07.399983 UTC. Aug 13 03:20:07.429410 extend-filesystems[1276]: Found loop1 Aug 13 03:20:07.432043 extend-filesystems[1276]: Found vda Aug 13 03:20:07.432992 extend-filesystems[1276]: Found vda1 Aug 13 03:20:07.434127 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 03:20:07.434541 systemd[1]: Finished motdgen.service. Aug 13 03:20:07.437817 extend-filesystems[1276]: Found vda2 Aug 13 03:20:07.438771 extend-filesystems[1276]: Found vda3 Aug 13 03:20:07.440349 extend-filesystems[1276]: Found usr Aug 13 03:20:07.440349 extend-filesystems[1276]: Found vda4 Aug 13 03:20:07.440349 extend-filesystems[1276]: Found vda6 Aug 13 03:20:07.440349 extend-filesystems[1276]: Found vda7 Aug 13 03:20:07.440349 extend-filesystems[1276]: Found vda9 Aug 13 03:20:07.440349 extend-filesystems[1276]: Checking size of /dev/vda9 Aug 13 03:20:07.445360 systemd[1]: Started dbus.service. 
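systemd-timesyncd steps the clock near the top of the block above, and systemd-resolved reacts by flushing its caches; the jump is visible in the journal timestamps themselves. A quick sketch, with the timestamps copied from the surrounding entries (the result is only an upper bound, since some real time also elapsed between the two lines):

```python
from datetime import datetime

# Last timestamp logged before the synchronization and the first one after it.
before = datetime.strptime("03:20:06.174501", "%H:%M:%S.%f")
after  = datetime.strptime("03:20:07.400063", "%H:%M:%S.%f")
print(f"apparent clock jump: {(after - before).total_seconds():.3f} s")   # ~1.226 s
```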
Aug 13 03:20:07.445065 dbus-daemon[1272]: [system] SELinux support is enabled Aug 13 03:20:07.453108 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 03:20:07.456426 dbus-daemon[1272]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1075 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 03:20:07.453147 systemd[1]: Reached target system-config.target. Aug 13 03:20:07.453833 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 03:20:07.453879 systemd[1]: Reached target user-config.target. Aug 13 03:20:07.454574 systemd-networkd[1075]: eth0: Ignoring DHCPv6 address 2a02:1348:179:86bf:24:19ff:fee6:1afe/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:86bf:24:19ff:fee6:1afe/64 assigned by NDisc. Aug 13 03:20:07.454580 systemd-networkd[1075]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Aug 13 03:20:07.468596 dbus-daemon[1272]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 03:20:07.474706 systemd[1]: Starting systemd-hostnamed.service... Aug 13 03:20:07.488141 extend-filesystems[1276]: Resized partition /dev/vda9 Aug 13 03:20:07.506423 extend-filesystems[1332]: resize2fs 1.46.5 (30-Dec-2021) Aug 13 03:20:07.521370 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Aug 13 03:20:07.543277 update_engine[1289]: I0813 03:20:07.542657 1289 main.cc:92] Flatcar Update Engine starting Aug 13 03:20:07.555596 systemd[1]: Started update-engine.service. Aug 13 03:20:07.560633 update_engine[1289]: I0813 03:20:07.557595 1289 update_check_scheduler.cc:74] Next update check in 9m31s Aug 13 03:20:07.559510 systemd[1]: Started locksmithd.service. Aug 13 03:20:07.606861 bash[1336]: Updated "/home/core/.ssh/authorized_keys" Aug 13 03:20:07.605935 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 03:20:07.679834 env[1300]: time="2025-08-13T03:20:07.679182177Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 03:20:07.704433 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 03:20:07.731946 extend-filesystems[1332]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 03:20:07.731946 extend-filesystems[1332]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 03:20:07.731946 extend-filesystems[1332]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 03:20:07.730621 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 03:20:07.735635 extend-filesystems[1276]: Resized filesystem in /dev/vda9 Aug 13 03:20:07.731011 systemd[1]: Finished extend-filesystems.service. Aug 13 03:20:07.741538 systemd-logind[1285]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 03:20:07.745216 systemd-logind[1285]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 03:20:07.746845 systemd-logind[1285]: New seat seat0. Aug 13 03:20:07.753187 systemd[1]: Started systemd-logind.service. Aug 13 03:20:07.760755 env[1300]: time="2025-08-13T03:20:07.760667757Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Aug 13 03:20:07.761223 env[1300]: time="2025-08-13T03:20:07.761190111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 03:20:07.764189 env[1300]: time="2025-08-13T03:20:07.764139828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 03:20:07.764189 env[1300]: time="2025-08-13T03:20:07.764185248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 03:20:07.764594 env[1300]: time="2025-08-13T03:20:07.764552027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 03:20:07.764594 env[1300]: time="2025-08-13T03:20:07.764589377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 03:20:07.764744 env[1300]: time="2025-08-13T03:20:07.764611557Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 03:20:07.764744 env[1300]: time="2025-08-13T03:20:07.764645070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 03:20:07.764846 env[1300]: time="2025-08-13T03:20:07.764813379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 03:20:07.765301 env[1300]: time="2025-08-13T03:20:07.765267939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 03:20:07.765584 env[1300]: time="2025-08-13T03:20:07.765545606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 03:20:07.765584 env[1300]: time="2025-08-13T03:20:07.765579809Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 03:20:07.765717 env[1300]: time="2025-08-13T03:20:07.765665958Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 03:20:07.765717 env[1300]: time="2025-08-13T03:20:07.765689877Z" level=info msg="metadata content store policy set" policy=shared Aug 13 03:20:07.770170 env[1300]: time="2025-08-13T03:20:07.770126911Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 03:20:07.770248 env[1300]: time="2025-08-13T03:20:07.770170115Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 03:20:07.770248 env[1300]: time="2025-08-13T03:20:07.770194663Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 03:20:07.770377 env[1300]: time="2025-08-13T03:20:07.770259707Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Aug 13 03:20:07.770377 env[1300]: time="2025-08-13T03:20:07.770290937Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 03:20:07.770377 env[1300]: time="2025-08-13T03:20:07.770317588Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 03:20:07.770377 env[1300]: time="2025-08-13T03:20:07.770362377Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 03:20:07.770563 env[1300]: time="2025-08-13T03:20:07.770384282Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 03:20:07.770563 env[1300]: time="2025-08-13T03:20:07.770405264Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 03:20:07.770563 env[1300]: time="2025-08-13T03:20:07.770426883Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 03:20:07.770563 env[1300]: time="2025-08-13T03:20:07.770447796Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 03:20:07.770563 env[1300]: time="2025-08-13T03:20:07.770474634Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 03:20:07.770815 env[1300]: time="2025-08-13T03:20:07.770620154Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 03:20:07.770815 env[1300]: time="2025-08-13T03:20:07.770770298Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 03:20:07.771202 env[1300]: time="2025-08-13T03:20:07.771168732Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 03:20:07.771269 env[1300]: time="2025-08-13T03:20:07.771240527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 03:20:07.771333 env[1300]: time="2025-08-13T03:20:07.771268745Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 03:20:07.771398 env[1300]: time="2025-08-13T03:20:07.771371097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 03:20:07.771454 env[1300]: time="2025-08-13T03:20:07.771398225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 03:20:07.771454 env[1300]: time="2025-08-13T03:20:07.771425214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 03:20:07.771601 env[1300]: time="2025-08-13T03:20:07.771456441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 03:20:07.771601 env[1300]: time="2025-08-13T03:20:07.771478831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 03:20:07.771601 env[1300]: time="2025-08-13T03:20:07.771500257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 03:20:07.771601 env[1300]: time="2025-08-13T03:20:07.771519318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Aug 13 03:20:07.771601 env[1300]: time="2025-08-13T03:20:07.771537107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 03:20:07.771601 env[1300]: time="2025-08-13T03:20:07.771559449Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 03:20:07.771857 env[1300]: time="2025-08-13T03:20:07.771761903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 03:20:07.771857 env[1300]: time="2025-08-13T03:20:07.771787748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 03:20:07.771857 env[1300]: time="2025-08-13T03:20:07.771807101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 03:20:07.771857 env[1300]: time="2025-08-13T03:20:07.771824759Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 03:20:07.771857 env[1300]: time="2025-08-13T03:20:07.771845627Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 03:20:07.772098 env[1300]: time="2025-08-13T03:20:07.771862805Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 03:20:07.772098 env[1300]: time="2025-08-13T03:20:07.771907115Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 03:20:07.772098 env[1300]: time="2025-08-13T03:20:07.771979609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 03:20:07.772363 env[1300]: time="2025-08-13T03:20:07.772245041Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 03:20:07.793857 env[1300]: time="2025-08-13T03:20:07.792306634Z" level=info msg="Connect containerd service" Aug 13 03:20:07.793857 env[1300]: time="2025-08-13T03:20:07.792461101Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 03:20:07.794418 env[1300]: time="2025-08-13T03:20:07.794373856Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 03:20:07.796311 env[1300]: time="2025-08-13T03:20:07.796281285Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 03:20:07.796530 env[1300]: time="2025-08-13T03:20:07.796495903Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 03:20:07.796876 systemd[1]: Started containerd.service. 
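In the CRI configuration dumped above, NetworkPluginConfDir is /etc/cni/net.d, and the subsequent error ("no network config found in /etc/cni/net.d: cni plugin not initialized") means that directory simply contains no network definitions yet. A sketch of the same check, not containerd's actual loader (which lives in its CNI library), with the path taken from the config dump:

```python
# Reproduce the gist of the CNI readiness check behind the error above.
import glob

conf_dir = "/etc/cni/net.d"
confs = sorted(glob.glob(f"{conf_dir}/*.conf")
               + glob.glob(f"{conf_dir}/*.conflist")
               + glob.glob(f"{conf_dir}/*.json"))
if confs:
    print("CNI configured:", confs)
else:
    print(f"no network config found in {conf_dir}: cni plugin not initialized")
```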
Aug 13 03:20:07.798679 env[1300]: time="2025-08-13T03:20:07.798649272Z" level=info msg="containerd successfully booted in 0.130327s" Aug 13 03:20:07.804349 env[1300]: time="2025-08-13T03:20:07.803634221Z" level=info msg="Start subscribing containerd event" Aug 13 03:20:07.804349 env[1300]: time="2025-08-13T03:20:07.803725859Z" level=info msg="Start recovering state" Aug 13 03:20:07.804349 env[1300]: time="2025-08-13T03:20:07.803939457Z" level=info msg="Start event monitor" Aug 13 03:20:07.804349 env[1300]: time="2025-08-13T03:20:07.803976577Z" level=info msg="Start snapshots syncer" Aug 13 03:20:07.804349 env[1300]: time="2025-08-13T03:20:07.804000803Z" level=info msg="Start cni network conf syncer for default" Aug 13 03:20:07.804349 env[1300]: time="2025-08-13T03:20:07.804021731Z" level=info msg="Start streaming server" Aug 13 03:20:07.877572 dbus-daemon[1272]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 03:20:07.877748 systemd[1]: Started systemd-hostnamed.service. Aug 13 03:20:07.889547 dbus-daemon[1272]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1329 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 03:20:07.894741 systemd[1]: Starting polkit.service... Aug 13 03:20:07.918956 polkitd[1348]: Started polkitd version 121 Aug 13 03:20:07.942642 polkitd[1348]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 03:20:07.943055 polkitd[1348]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 03:20:07.949298 polkitd[1348]: Finished loading, compiling and executing 2 rules Aug 13 03:20:07.950954 dbus-daemon[1272]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 03:20:07.951179 systemd[1]: Started polkit.service. Aug 13 03:20:07.953170 polkitd[1348]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 03:20:07.973899 systemd-hostnamed[1329]: Hostname set to (static) Aug 13 03:20:08.235771 sshd_keygen[1311]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 03:20:08.304261 systemd[1]: Finished sshd-keygen.service. Aug 13 03:20:08.307608 systemd[1]: Starting issuegen.service... Aug 13 03:20:08.321180 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 03:20:08.321604 systemd[1]: Finished issuegen.service. Aug 13 03:20:08.325363 systemd[1]: Starting systemd-user-sessions.service... Aug 13 03:20:08.341436 systemd[1]: Finished systemd-user-sessions.service. Aug 13 03:20:08.344649 systemd[1]: Started getty@tty1.service. Aug 13 03:20:08.347921 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 03:20:08.351135 systemd[1]: Reached target getty.target. Aug 13 03:20:08.436261 tar[1296]: linux-amd64/LICENSE Aug 13 03:20:08.436261 tar[1296]: linux-amd64/README.md Aug 13 03:20:08.443515 systemd[1]: Finished prepare-helm.service. Aug 13 03:20:08.470461 locksmithd[1337]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 03:20:09.168846 systemd[1]: Started kubelet.service. 
Aug 13 03:20:09.883071 kubelet[1385]: E0813 03:20:09.882933 1385 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 03:20:09.885744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 03:20:09.886064 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 03:20:10.364380 systemd[1]: Created slice system-sshd.slice. Aug 13 03:20:10.376645 systemd[1]: Started sshd@0-10.230.26.254:22-139.178.89.65:59870.service. Aug 13 03:20:11.304647 sshd[1392]: Accepted publickey for core from 139.178.89.65 port 59870 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:20:11.308625 sshd[1392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:20:11.338462 systemd[1]: Created slice user-500.slice. Aug 13 03:20:11.341304 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 03:20:11.347762 systemd-logind[1285]: New session 1 of user core. Aug 13 03:20:11.360366 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 03:20:11.364504 systemd[1]: Starting user@500.service... Aug 13 03:20:11.371868 (systemd)[1398]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:20:11.480458 systemd[1398]: Queued start job for default target default.target. Aug 13 03:20:11.481022 systemd[1398]: Reached target paths.target. Aug 13 03:20:11.481050 systemd[1398]: Reached target sockets.target. Aug 13 03:20:11.481071 systemd[1398]: Reached target timers.target. Aug 13 03:20:11.481091 systemd[1398]: Reached target basic.target. Aug 13 03:20:11.481163 systemd[1398]: Reached target default.target. Aug 13 03:20:11.481213 systemd[1398]: Startup finished in 99ms. Aug 13 03:20:11.482446 systemd[1]: Started user@500.service. Aug 13 03:20:11.486088 systemd[1]: Started session-1.scope. Aug 13 03:20:12.151145 systemd[1]: Started sshd@1-10.230.26.254:22-139.178.89.65:59874.service. Aug 13 03:20:13.102854 sshd[1407]: Accepted publickey for core from 139.178.89.65 port 59874 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:20:13.105753 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:20:13.114286 systemd-logind[1285]: New session 2 of user core. Aug 13 03:20:13.114818 systemd[1]: Started session-2.scope. Aug 13 03:20:13.767168 sshd[1407]: pam_unix(sshd:session): session closed for user core Aug 13 03:20:13.770975 systemd[1]: sshd@1-10.230.26.254:22-139.178.89.65:59874.service: Deactivated successfully. Aug 13 03:20:13.772466 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 03:20:13.772500 systemd-logind[1285]: Session 2 logged out. Waiting for processes to exit. Aug 13 03:20:13.774253 systemd-logind[1285]: Removed session 2. Aug 13 03:20:13.923462 systemd[1]: Started sshd@2-10.230.26.254:22-139.178.89.65:59880.service. 
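The sshd "Accepted publickey ... RSA SHA256:IhAX..." entries above identify the client key by its OpenSSH-style fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing padding stripped. A short sketch of how such a string is produced; the key bytes below are a stand-in, since the actual key material is not in the log:

```python
import base64, hashlib

# Stand-in for the wire-format public key blob (the real key is not logged).
key_blob = b"example-public-key-blob"

digest = hashlib.sha256(key_blob).digest()
fingerprint = "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
print(fingerprint)
```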
Aug 13 03:20:14.493069 coreos-metadata[1271]: Aug 13 03:20:14.492 WARN failed to locate config-drive, using the metadata service API instead Aug 13 03:20:14.548674 coreos-metadata[1271]: Aug 13 03:20:14.548 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Aug 13 03:20:14.581896 coreos-metadata[1271]: Aug 13 03:20:14.581 INFO Fetch successful Aug 13 03:20:14.582193 coreos-metadata[1271]: Aug 13 03:20:14.581 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 13 03:20:14.624823 coreos-metadata[1271]: Aug 13 03:20:14.624 INFO Fetch successful Aug 13 03:20:14.627305 unknown[1271]: wrote ssh authorized keys file for user: core Aug 13 03:20:14.641423 update-ssh-keys[1419]: Updated "/home/core/.ssh/authorized_keys" Aug 13 03:20:14.642037 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Aug 13 03:20:14.642578 systemd[1]: Reached target multi-user.target. Aug 13 03:20:14.645005 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 03:20:14.657848 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 03:20:14.658197 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 03:20:14.658515 systemd[1]: Startup finished in 7.802s (kernel) + 14.123s (userspace) = 21.926s. Aug 13 03:20:14.879120 sshd[1414]: Accepted publickey for core from 139.178.89.65 port 59880 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:20:14.880609 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:20:14.888883 systemd[1]: Started session-3.scope. Aug 13 03:20:14.889195 systemd-logind[1285]: New session 3 of user core. Aug 13 03:20:15.550758 sshd[1414]: pam_unix(sshd:session): session closed for user core Aug 13 03:20:15.556368 systemd[1]: sshd@2-10.230.26.254:22-139.178.89.65:59880.service: Deactivated successfully. Aug 13 03:20:15.558418 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 03:20:15.559358 systemd-logind[1285]: Session 3 logged out. Waiting for processes to exit. Aug 13 03:20:15.561032 systemd-logind[1285]: Removed session 3. Aug 13 03:20:19.945731 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 03:20:19.946109 systemd[1]: Stopped kubelet.service. Aug 13 03:20:19.949623 systemd[1]: Starting kubelet.service... Aug 13 03:20:20.149061 systemd[1]: Started kubelet.service. Aug 13 03:20:20.254032 kubelet[1437]: E0813 03:20:20.253255 1437 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 03:20:20.258143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 03:20:20.258519 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 03:20:25.693604 systemd[1]: Started sshd@3-10.230.26.254:22-139.178.89.65:58884.service. Aug 13 03:20:26.588631 sshd[1444]: Accepted publickey for core from 139.178.89.65 port 58884 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:20:26.591393 sshd[1444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:20:26.602566 systemd[1]: Started session-4.scope. Aug 13 03:20:26.602873 systemd-logind[1285]: New session 4 of user core. 
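The coreos-metadata entries at the top of the block above fall back from the config-drive to the metadata service API, fetch the instance's public key from the link-local endpoint, and write it to /home/core/.ssh/authorized_keys. A rough sketch of the same two fetches; this is only an illustration, not the coreos-metadata tool, and it assumes the EC2/OpenStack-style endpoint is reachable from the host:

```python
import urllib.request

BASE = "http://169.254.169.254/latest/meta-data/public-keys"

# First request lists the keys (e.g. "0=mykey"), the second fetches the key itself,
# mirroring the two "Fetching ..." entries in the log above.
index = urllib.request.urlopen(BASE, timeout=5).read().decode()
key = urllib.request.urlopen(f"{BASE}/0/openssh-key", timeout=5).read().decode()

with open("/home/core/.ssh/authorized_keys", "a") as f:
    f.write(key.rstrip() + "\n")
```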
Aug 13 03:20:27.215769 sshd[1444]: pam_unix(sshd:session): session closed for user core Aug 13 03:20:27.220099 systemd[1]: sshd@3-10.230.26.254:22-139.178.89.65:58884.service: Deactivated successfully. Aug 13 03:20:27.221510 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 03:20:27.223705 systemd-logind[1285]: Session 4 logged out. Waiting for processes to exit. Aug 13 03:20:27.226022 systemd-logind[1285]: Removed session 4. Aug 13 03:20:27.367982 systemd[1]: Started sshd@4-10.230.26.254:22-139.178.89.65:58898.service. Aug 13 03:20:28.266173 sshd[1451]: Accepted publickey for core from 139.178.89.65 port 58898 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:20:28.268393 sshd[1451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:20:28.277179 systemd[1]: Started session-5.scope. Aug 13 03:20:28.277691 systemd-logind[1285]: New session 5 of user core. Aug 13 03:20:28.885311 sshd[1451]: pam_unix(sshd:session): session closed for user core Aug 13 03:20:28.889962 systemd-logind[1285]: Session 5 logged out. Waiting for processes to exit. Aug 13 03:20:28.891084 systemd[1]: sshd@4-10.230.26.254:22-139.178.89.65:58898.service: Deactivated successfully. Aug 13 03:20:28.893043 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 03:20:28.893806 systemd-logind[1285]: Removed session 5. Aug 13 03:20:29.033061 systemd[1]: Started sshd@5-10.230.26.254:22-139.178.89.65:58912.service. Aug 13 03:20:29.930080 sshd[1458]: Accepted publickey for core from 139.178.89.65 port 58912 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:20:29.933072 sshd[1458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:20:29.941256 systemd-logind[1285]: New session 6 of user core. Aug 13 03:20:29.941414 systemd[1]: Started session-6.scope. Aug 13 03:20:30.413844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 03:20:30.414146 systemd[1]: Stopped kubelet.service. Aug 13 03:20:30.417427 systemd[1]: Starting kubelet.service... Aug 13 03:20:30.559472 sshd[1458]: pam_unix(sshd:session): session closed for user core Aug 13 03:20:30.565091 systemd[1]: sshd@5-10.230.26.254:22-139.178.89.65:58912.service: Deactivated successfully. Aug 13 03:20:30.567860 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 03:20:30.571300 systemd-logind[1285]: Session 6 logged out. Waiting for processes to exit. Aug 13 03:20:30.572785 systemd-logind[1285]: Removed session 6. Aug 13 03:20:30.580779 systemd[1]: Started kubelet.service. Aug 13 03:20:30.676822 kubelet[1473]: E0813 03:20:30.675978 1473 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 03:20:30.679416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 03:20:30.679753 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 03:20:30.705523 systemd[1]: Started sshd@6-10.230.26.254:22-139.178.89.65:35702.service. 
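kubelet keeps exiting because /var/lib/kubelet/config.yaml does not exist yet, and systemd schedules one restart after another ("restart counter is at 1", "... at 2"). The spacing of those attempts can be read straight off the journal timestamps; a quick sketch, with the timestamps copied from the entries above:

```python
from datetime import datetime

# First kubelet failure, then the two scheduled restarts logged above.
stamps = ["03:20:09.886064", "03:20:19.945731", "03:20:30.413844"]
ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
gaps = [round((b - a).total_seconds(), 1) for a, b in zip(ts, ts[1:])]
print(gaps)   # [10.1, 10.5] -> roughly a 10 s restart delay between attempts
```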
Aug 13 03:20:31.601580 sshd[1480]: Accepted publickey for core from 139.178.89.65 port 35702 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:20:31.604628 sshd[1480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:20:31.612537 systemd[1]: Started session-7.scope. Aug 13 03:20:31.614426 systemd-logind[1285]: New session 7 of user core. Aug 13 03:20:32.098993 sudo[1484]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 03:20:32.099420 sudo[1484]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 03:20:32.109655 dbus-daemon[1272]: \xd0M \xc28V: received setenforce notice (enforcing=1002256096) Aug 13 03:20:32.112513 sudo[1484]: pam_unix(sudo:session): session closed for user root Aug 13 03:20:32.259076 sshd[1480]: pam_unix(sshd:session): session closed for user core Aug 13 03:20:32.264871 systemd[1]: sshd@6-10.230.26.254:22-139.178.89.65:35702.service: Deactivated successfully. Aug 13 03:20:32.266760 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 03:20:32.267573 systemd-logind[1285]: Session 7 logged out. Waiting for processes to exit. Aug 13 03:20:32.269157 systemd-logind[1285]: Removed session 7. Aug 13 03:20:32.405608 systemd[1]: Started sshd@7-10.230.26.254:22-139.178.89.65:35708.service. Aug 13 03:20:33.301758 sshd[1488]: Accepted publickey for core from 139.178.89.65 port 35708 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:20:33.304737 sshd[1488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:20:33.313001 systemd[1]: Started session-8.scope. Aug 13 03:20:33.313353 systemd-logind[1285]: New session 8 of user core. Aug 13 03:20:33.784968 sudo[1493]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 03:20:33.785440 sudo[1493]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 03:20:33.790613 sudo[1493]: pam_unix(sudo:session): session closed for user root Aug 13 03:20:33.798140 sudo[1492]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 03:20:33.798996 sudo[1492]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 03:20:33.814364 systemd[1]: Stopping audit-rules.service... Aug 13 03:20:33.824736 kernel: kauditd_printk_skb: 152 callbacks suppressed Aug 13 03:20:33.833132 kernel: audit: type=1305 audit(1755055233.814:163): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 03:20:33.814000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 03:20:33.833434 auditctl[1496]: No rules Aug 13 03:20:33.829953 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 03:20:33.830566 systemd[1]: Stopped audit-rules.service. Aug 13 03:20:33.814000 audit[1496]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7dbb0700 a2=420 a3=0 items=0 ppid=1 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:33.835848 systemd[1]: Starting audit-rules.service... 
Aug 13 03:20:33.846668 kernel: audit: type=1300 audit(1755055233.814:163): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7dbb0700 a2=420 a3=0 items=0 ppid=1 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:33.846787 kernel: audit: type=1327 audit(1755055233.814:163): proctitle=2F7362696E2F617564697463746C002D44 Aug 13 03:20:33.814000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Aug 13 03:20:33.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:33.854232 kernel: audit: type=1131 audit(1755055233.829:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:33.871154 augenrules[1514]: No rules Aug 13 03:20:33.879374 kernel: audit: type=1130 audit(1755055233.871:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:33.879500 kernel: audit: type=1106 audit(1755055233.872:166): pid=1492 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 03:20:33.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:33.872000 audit[1492]: USER_END pid=1492 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 03:20:33.874504 sudo[1492]: pam_unix(sudo:session): session closed for user root Aug 13 03:20:33.872696 systemd[1]: Finished audit-rules.service. Aug 13 03:20:33.872000 audit[1492]: CRED_DISP pid=1492 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 03:20:33.891101 kernel: audit: type=1104 audit(1755055233.872:167): pid=1492 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 03:20:34.023809 sshd[1488]: pam_unix(sshd:session): session closed for user core Aug 13 03:20:34.024000 audit[1488]: USER_END pid=1488 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:20:34.028945 systemd-logind[1285]: Session 8 logged out. Waiting for processes to exit. Aug 13 03:20:34.030247 systemd[1]: sshd@7-10.230.26.254:22-139.178.89.65:35708.service: Deactivated successfully. 
Aug 13 03:20:34.031494 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 03:20:34.033291 systemd-logind[1285]: Removed session 8. Aug 13 03:20:34.024000 audit[1488]: CRED_DISP pid=1488 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:20:34.040179 kernel: audit: type=1106 audit(1755055234.024:168): pid=1488 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:20:34.040288 kernel: audit: type=1104 audit(1755055234.024:169): pid=1488 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:20:34.040369 kernel: audit: type=1131 audit(1755055234.024:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.26.254:22-139.178.89.65:35708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:34.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.26.254:22-139.178.89.65:35708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:34.170811 systemd[1]: Started sshd@8-10.230.26.254:22-139.178.89.65:35710.service. Aug 13 03:20:34.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.26.254:22-139.178.89.65:35710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:35.071000 audit[1521]: USER_ACCT pid=1521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:20:35.073124 sshd[1521]: Accepted publickey for core from 139.178.89.65 port 35710 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:20:35.073000 audit[1521]: CRED_ACQ pid=1521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:20:35.073000 audit[1521]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1a8e2ab0 a2=3 a3=0 items=0 ppid=1 pid=1521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.073000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:20:35.075541 sshd[1521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:20:35.083898 systemd[1]: Started session-9.scope. Aug 13 03:20:35.085762 systemd-logind[1285]: New session 9 of user core. 
Aug 13 03:20:35.093000 audit[1521]: USER_START pid=1521 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:20:35.096000 audit[1524]: CRED_ACQ pid=1524 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:20:35.557000 audit[1525]: USER_ACCT pid=1525 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 03:20:35.558982 sudo[1525]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 03:20:35.558000 audit[1525]: CRED_REFR pid=1525 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 03:20:35.560112 sudo[1525]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 03:20:35.562000 audit[1525]: USER_START pid=1525 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 03:20:35.620971 systemd[1]: Starting docker.service... Aug 13 03:20:35.685865 env[1535]: time="2025-08-13T03:20:35.685762240Z" level=info msg="Starting up" Aug 13 03:20:35.689539 env[1535]: time="2025-08-13T03:20:35.689508900Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 03:20:35.689669 env[1535]: time="2025-08-13T03:20:35.689636762Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 03:20:35.689821 env[1535]: time="2025-08-13T03:20:35.689788503Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 03:20:35.689941 env[1535]: time="2025-08-13T03:20:35.689911838Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 03:20:35.695941 env[1535]: time="2025-08-13T03:20:35.695863252Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 03:20:35.695941 env[1535]: time="2025-08-13T03:20:35.695911113Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 03:20:35.695941 env[1535]: time="2025-08-13T03:20:35.695940648Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 03:20:35.696179 env[1535]: time="2025-08-13T03:20:35.695958478Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 03:20:35.706667 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2468091843-merged.mount: Deactivated successfully. 
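Before loading containers, dockerd dials its containerd instance over the unix socket named in the grpc resolver messages above (unix:///var/run/docker/libcontainerd/docker-containerd.sock). A minimal probe of that socket as a sketch; the path is taken from the log, and it would need to run as root on the host itself:

```python
import socket

# Socket path from dockerd's grpc resolver messages above.
path = "/var/run/docker/libcontainerd/docker-containerd.sock"

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.settimeout(2)
    s.connect(path)   # FileNotFoundError / ConnectionRefusedError if containerd is not up
    print("containerd socket is accepting connections")
```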
Aug 13 03:20:35.751210 env[1535]: time="2025-08-13T03:20:35.751149651Z" level=warning msg="Your kernel does not support cgroup blkio weight" Aug 13 03:20:35.751564 env[1535]: time="2025-08-13T03:20:35.751534154Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Aug 13 03:20:35.752029 env[1535]: time="2025-08-13T03:20:35.751988658Z" level=info msg="Loading containers: start." Aug 13 03:20:35.844000 audit[1567]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:35.844000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcb5a50720 a2=0 a3=7ffcb5a5070c items=0 ppid=1535 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.844000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Aug 13 03:20:35.848000 audit[1569]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:35.848000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd781eaf50 a2=0 a3=7ffd781eaf3c items=0 ppid=1535 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.848000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Aug 13 03:20:35.851000 audit[1571]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:35.851000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe88c30e50 a2=0 a3=7ffe88c30e3c items=0 ppid=1535 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.851000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 03:20:35.854000 audit[1573]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:35.854000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffddb5fc3c0 a2=0 a3=7ffddb5fc3ac items=0 ppid=1535 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.854000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 03:20:35.858000 audit[1575]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:35.858000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffecbcee0a0 a2=0 a3=7ffecbcee08c items=0 ppid=1535 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.858000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Aug 13 03:20:35.881000 audit[1580]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:35.881000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc4951b360 a2=0 a3=7ffc4951b34c items=0 ppid=1535 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.881000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Aug 13 03:20:35.890000 audit[1582]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:35.890000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc11271e00 a2=0 a3=7ffc11271dec items=0 ppid=1535 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.890000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Aug 13 03:20:35.893000 audit[1584]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:35.893000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff239618f0 a2=0 a3=7fff239618dc items=0 ppid=1535 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.893000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Aug 13 03:20:35.896000 audit[1586]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:35.896000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffde1d8a6a0 a2=0 a3=7ffde1d8a68c items=0 ppid=1535 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.896000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 03:20:35.906000 audit[1590]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:35.906000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc4e9e24d0 a2=0 a3=7ffc4e9e24bc items=0 ppid=1535 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.906000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 03:20:35.912000 audit[1591]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:35.912000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffee1351720 a2=0 a3=7ffee135170c items=0 ppid=1535 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:35.912000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 03:20:35.929734 kernel: Initializing XFRM netlink socket Aug 13 03:20:35.979637 env[1535]: time="2025-08-13T03:20:35.979522620Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 03:20:36.028000 audit[1599]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.028000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd21509bc0 a2=0 a3=7ffd21509bac items=0 ppid=1535 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.028000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Aug 13 03:20:36.044000 audit[1602]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.044000 audit[1602]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff41a97ab0 a2=0 a3=7fff41a97a9c items=0 ppid=1535 pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.044000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Aug 13 03:20:36.049000 audit[1605]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.049000 audit[1605]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdd8f8d9c0 a2=0 a3=7ffdd8f8d9ac items=0 ppid=1535 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.049000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Aug 13 03:20:36.053000 audit[1607]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.053000 audit[1607]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdca4b9c10 a2=0 a3=7ffdca4b9bfc items=0 ppid=1535 pid=1607 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.053000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Aug 13 03:20:36.057000 audit[1609]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.057000 audit[1609]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd908657d0 a2=0 a3=7ffd908657bc items=0 ppid=1535 pid=1609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.057000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Aug 13 03:20:36.060000 audit[1611]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.060000 audit[1611]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fffa006d5b0 a2=0 a3=7fffa006d59c items=0 ppid=1535 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.060000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Aug 13 03:20:36.063000 audit[1613]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.063000 audit[1613]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffeb3f63c60 a2=0 a3=7ffeb3f63c4c items=0 ppid=1535 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.063000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Aug 13 03:20:36.077000 audit[1616]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.077000 audit[1616]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc9c8aa970 a2=0 a3=7ffc9c8aa95c items=0 ppid=1535 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.077000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Aug 13 03:20:36.081000 audit[1618]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1618 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.081000 audit[1618]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fffcd6074f0 a2=0 a3=7fffcd6074dc items=0 ppid=1535 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.081000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 03:20:36.084000 audit[1620]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1620 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.084000 audit[1620]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffff6fb43d0 a2=0 a3=7ffff6fb43bc items=0 ppid=1535 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.084000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 03:20:36.087000 audit[1622]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1622 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.087000 audit[1622]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc4589d010 a2=0 a3=7ffc4589cffc items=0 ppid=1535 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.087000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Aug 13 03:20:36.089834 systemd-networkd[1075]: docker0: Link UP Aug 13 03:20:36.100000 audit[1626]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.100000 audit[1626]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdafe72000 a2=0 a3=7ffdafe71fec items=0 ppid=1535 pid=1626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.100000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 03:20:36.106000 audit[1627]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:20:36.106000 audit[1627]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff486778e0 a2=0 a3=7fff486778cc items=0 ppid=1535 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:20:36.106000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 03:20:36.108146 env[1535]: time="2025-08-13T03:20:36.108084529Z" level=info msg="Loading containers: done." 
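The audit records above carry the Docker daemon's iptables invocations hex-encoded in their proctitle fields. Decoded, the sequence corresponds roughly to the commands below; this is a reconstruction from those proctitle values only, and the exact argument order and the placement of the "!" negations are partly assumed, since proctitle arguments are NUL-separated and may be truncated.

# Chain creation (decoded from the proctitle fields in the audit records above)
/usr/sbin/iptables --wait -t nat    -N DOCKER
/usr/sbin/iptables --wait -t filter -N DOCKER
/usr/sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1
/usr/sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-2
/usr/sbin/iptables --wait -t filter -N DOCKER-USER
/usr/sbin/iptables --wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN
/usr/sbin/iptables --wait -A DOCKER-ISOLATION-STAGE-2 -j RETURN
/usr/sbin/iptables --wait -A DOCKER-USER -j RETURN
# dockerd deletes and re-inserts this jump twice (audit pids 1586/1590/1591 and 1626/1627 above)
/usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER
# Bridge rules for docker0 (172.17.0.0/16): NAT, forwarding, isolation
/usr/sbin/iptables --wait -t nat -I POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
/usr/sbin/iptables --wait -t nat -I DOCKER -i docker0 -j RETURN
/usr/sbin/iptables --wait -I FORWARD -i docker0 -o docker0 -j ACCEPT
/usr/sbin/iptables --wait -I FORWARD -i docker0 ! -o docker0 -j ACCEPT
/usr/sbin/iptables --wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
/usr/sbin/iptables --wait -t nat -A OUTPUT -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER
/usr/sbin/iptables --wait -I FORWARD -o docker0 -j DOCKER
/usr/sbin/iptables --wait -I FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
/usr/sbin/iptables --wait -I FORWARD -j DOCKER-ISOLATION-STAGE-1
/usr/sbin/iptables --wait -t filter -I DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
/usr/sbin/iptables --wait -t filter -I DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP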
Aug 13 03:20:36.142875 env[1535]: time="2025-08-13T03:20:36.142799795Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 03:20:36.143563 env[1535]: time="2025-08-13T03:20:36.143529668Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 03:20:36.143880 env[1535]: time="2025-08-13T03:20:36.143852523Z" level=info msg="Daemon has completed initialization" Aug 13 03:20:36.173616 systemd[1]: Started docker.service. Aug 13 03:20:36.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:36.182687 env[1535]: time="2025-08-13T03:20:36.182600661Z" level=info msg="API listen on /run/docker.sock" Aug 13 03:20:37.107258 env[1300]: time="2025-08-13T03:20:37.107105095Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Aug 13 03:20:37.977778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3454269627.mount: Deactivated successfully. Aug 13 03:20:37.986920 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 03:20:37.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:40.694531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 03:20:40.703269 kernel: kauditd_printk_skb: 85 callbacks suppressed Aug 13 03:20:40.703425 kernel: audit: type=1130 audit(1755055240.694:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:40.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:40.695064 systemd[1]: Stopped kubelet.service. Aug 13 03:20:40.701142 systemd[1]: Starting kubelet.service... Aug 13 03:20:40.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:40.713359 kernel: audit: type=1131 audit(1755055240.694:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:40.899094 systemd[1]: Started kubelet.service. Aug 13 03:20:40.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:40.906403 kernel: audit: type=1130 audit(1755055240.899:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:20:41.013567 kubelet[1672]: E0813 03:20:41.012565 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 03:20:41.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 03:20:41.014253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 03:20:41.014585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 03:20:41.021607 kernel: audit: type=1131 audit(1755055241.014:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 03:20:41.188763 env[1300]: time="2025-08-13T03:20:41.188678556Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:41.193544 env[1300]: time="2025-08-13T03:20:41.193491892Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:41.196540 env[1300]: time="2025-08-13T03:20:41.196500166Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:41.198859 env[1300]: time="2025-08-13T03:20:41.198818347Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:41.200206 env[1300]: time="2025-08-13T03:20:41.200157584Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Aug 13 03:20:41.211241 env[1300]: time="2025-08-13T03:20:41.211195937Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Aug 13 03:20:45.124201 env[1300]: time="2025-08-13T03:20:45.124059732Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:45.126750 env[1300]: time="2025-08-13T03:20:45.126708450Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:45.130484 env[1300]: time="2025-08-13T03:20:45.130450424Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:45.133845 env[1300]: time="2025-08-13T03:20:45.133809108Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:45.136450 env[1300]: time="2025-08-13T03:20:45.135306119Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Aug 13 03:20:45.137360 env[1300]: time="2025-08-13T03:20:45.137274516Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Aug 13 03:20:47.466837 env[1300]: time="2025-08-13T03:20:47.466701184Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:47.469951 env[1300]: time="2025-08-13T03:20:47.469904507Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:47.471978 env[1300]: time="2025-08-13T03:20:47.471941301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:47.492482 env[1300]: time="2025-08-13T03:20:47.480040352Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:47.492482 env[1300]: time="2025-08-13T03:20:47.481361416Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Aug 13 03:20:47.493261 env[1300]: time="2025-08-13T03:20:47.493213763Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Aug 13 03:20:49.990780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount781966689.mount: Deactivated successfully. 
Aug 13 03:20:51.070494 env[1300]: time="2025-08-13T03:20:51.070363015Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:51.072640 env[1300]: time="2025-08-13T03:20:51.072571902Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:51.074799 env[1300]: time="2025-08-13T03:20:51.074762228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:51.076438 env[1300]: time="2025-08-13T03:20:51.076402913Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:51.077154 env[1300]: time="2025-08-13T03:20:51.077105940Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Aug 13 03:20:51.078629 env[1300]: time="2025-08-13T03:20:51.078563754Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 03:20:51.195127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 03:20:51.216527 kernel: audit: type=1130 audit(1755055251.195:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:51.216718 kernel: audit: type=1131 audit(1755055251.195:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:51.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:51.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:51.195527 systemd[1]: Stopped kubelet.service. Aug 13 03:20:51.212580 systemd[1]: Starting kubelet.service... Aug 13 03:20:51.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:20:51.394459 systemd[1]: Started kubelet.service. Aug 13 03:20:51.400362 kernel: audit: type=1130 audit(1755055251.394:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:20:51.453302 kubelet[1688]: E0813 03:20:51.453213 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 03:20:51.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 03:20:51.465516 kernel: audit: type=1131 audit(1755055251.456:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 03:20:51.456127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 03:20:51.456453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 03:20:52.326765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3513341801.mount: Deactivated successfully. Aug 13 03:20:52.748337 update_engine[1289]: I0813 03:20:52.748203 1289 update_attempter.cc:509] Updating boot flags... Aug 13 03:20:54.139012 env[1300]: time="2025-08-13T03:20:54.138897429Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:54.142849 env[1300]: time="2025-08-13T03:20:54.142807736Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:54.145624 env[1300]: time="2025-08-13T03:20:54.145586490Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:54.147132 env[1300]: time="2025-08-13T03:20:54.147068286Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 03:20:54.148795 env[1300]: time="2025-08-13T03:20:54.148754665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 03:20:54.149542 env[1300]: time="2025-08-13T03:20:54.149478598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:55.416114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount745413646.mount: Deactivated successfully. 
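The kubelet failures at 03:20:41, 03:20:51 and again below at 03:21:02 are all the same condition: the unit starts, the kubelet cannot read the file named by its --config flag, /var/lib/kubelet/config.yaml, and exits with status 1, so systemd schedules the next restart. On a kubeadm-provisioned node that file is normally written during bootstrap (kubeadm init/join), which presumably has not completed yet at this point; the loop apparently ends once the successful kubelet start at 03:21:05 finds its configuration in place. A minimal sketch of the kind of KubeletConfiguration the kubelet expects at that path follows; apart from the cgroup driver and static pod path, which this log reports further down ("CgroupDriver":"cgroupfs", path="/etc/kubernetes/manifests"), the field values are illustrative assumptions, not data recovered from this log.

# Sketch only: the shape of the file the failing --config flag points at.
cat > /var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
cgroupDriver: cgroupfs              # matches "CgroupDriver":"cgroupfs" reported further down
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local        # assumed
clusterDNS:
  - 10.96.0.10                      # assumed cluster DNS service IP
EOF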
Aug 13 03:20:55.422250 env[1300]: time="2025-08-13T03:20:55.422195916Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:55.425082 env[1300]: time="2025-08-13T03:20:55.425036199Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:55.426791 env[1300]: time="2025-08-13T03:20:55.426752328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:55.428614 env[1300]: time="2025-08-13T03:20:55.428576185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:20:55.429682 env[1300]: time="2025-08-13T03:20:55.429635256Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 03:20:55.431395 env[1300]: time="2025-08-13T03:20:55.431362134Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 03:20:56.606686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766311790.mount: Deactivated successfully. Aug 13 03:21:01.158585 env[1300]: time="2025-08-13T03:21:01.158500595Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:01.160763 env[1300]: time="2025-08-13T03:21:01.160722148Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:01.163644 env[1300]: time="2025-08-13T03:21:01.163608452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:01.166393 env[1300]: time="2025-08-13T03:21:01.166353286Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:01.167849 env[1300]: time="2025-08-13T03:21:01.167798518Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 03:21:01.536088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Aug 13 03:21:01.537117 systemd[1]: Stopped kubelet.service. Aug 13 03:21:01.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:01.549560 kernel: audit: type=1130 audit(1755055261.535:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:21:01.549691 kernel: audit: type=1131 audit(1755055261.539:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:01.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:01.545088 systemd[1]: Starting kubelet.service... Aug 13 03:21:02.017495 systemd[1]: Started kubelet.service. Aug 13 03:21:02.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:02.025663 kernel: audit: type=1130 audit(1755055262.016:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:02.270764 kubelet[1733]: E0813 03:21:02.270571 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 03:21:02.273243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 03:21:02.273570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 03:21:02.283272 kernel: audit: type=1131 audit(1755055262.272:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 03:21:02.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 03:21:04.789153 systemd[1]: Stopped kubelet.service. Aug 13 03:21:04.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:04.797822 systemd[1]: Starting kubelet.service... Aug 13 03:21:04.799368 kernel: audit: type=1130 audit(1755055264.788:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:04.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:04.806351 kernel: audit: type=1131 audit(1755055264.788:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:04.852606 systemd[1]: Reloading. 
Aug 13 03:21:05.017852 /usr/lib/systemd/system-generators/torcx-generator[1768]: time="2025-08-13T03:21:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 03:21:05.017924 /usr/lib/systemd/system-generators/torcx-generator[1768]: time="2025-08-13T03:21:05Z" level=info msg="torcx already run" Aug 13 03:21:05.149077 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 03:21:05.149601 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 03:21:05.179959 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 03:21:05.318854 systemd[1]: Started kubelet.service. Aug 13 03:21:05.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:05.327372 kernel: audit: type=1130 audit(1755055265.318:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:05.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:05.328911 systemd[1]: Stopping kubelet.service... Aug 13 03:21:05.329644 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 03:21:05.330010 systemd[1]: Stopped kubelet.service. Aug 13 03:21:05.335374 kernel: audit: type=1131 audit(1755055265.328:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:05.336241 systemd[1]: Starting kubelet.service... Aug 13 03:21:05.541453 systemd[1]: Started kubelet.service. Aug 13 03:21:05.548375 kernel: audit: type=1130 audit(1755055265.540:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:05.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:05.647155 kubelet[1835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 03:21:05.647155 kubelet[1835]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
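The reload also surfaces two housekeeping warnings from systemd: locksmithd.service still sets the deprecated CPUShares= and MemoryLimit= directives, and docker.socket points its ListenStream= at the legacy /var/run path, which systemd rewrites to /run/docker.sock on the fly. Both are advisory. If one wanted to quiet the unit warnings, a drop-in using the replacement directives systemd names would do it; this is a sketch, and the concrete weight and limit values are assumptions, since the shipped unit's values are not shown in this log.

# Sketch: drop-in switching locksmithd.service to the directives systemd asks for.
mkdir -p /etc/systemd/system/locksmithd.service.d
cat > /etc/systemd/system/locksmithd.service.d/10-cgroup-settings.conf <<'EOF'
[Service]
# Empty assignments clear the deprecated settings inherited from the base unit.
CPUShares=
MemoryLimit=
# Replacements; the values here are assumed, not taken from the shipped unit.
CPUWeight=100
MemoryMax=50M
EOF
systemctl daemon-reload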
Aug 13 03:21:05.647155 kubelet[1835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 03:21:05.648214 kubelet[1835]: I0813 03:21:05.647261 1835 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 03:21:06.372190 kubelet[1835]: I0813 03:21:06.372125 1835 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 03:21:06.372542 kubelet[1835]: I0813 03:21:06.372516 1835 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 03:21:06.373030 kubelet[1835]: I0813 03:21:06.373004 1835 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 03:21:06.404218 kubelet[1835]: E0813 03:21:06.404157 1835 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.26.254:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.26.254:6443: connect: connection refused" logger="UnhandledError" Aug 13 03:21:06.405878 kubelet[1835]: I0813 03:21:06.405801 1835 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 03:21:06.415467 kubelet[1835]: E0813 03:21:06.415415 1835 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 03:21:06.415594 kubelet[1835]: I0813 03:21:06.415569 1835 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 03:21:06.424274 kubelet[1835]: I0813 03:21:06.424238 1835 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 03:21:06.425770 kubelet[1835]: I0813 03:21:06.425734 1835 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 03:21:06.426229 kubelet[1835]: I0813 03:21:06.426175 1835 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 03:21:06.426657 kubelet[1835]: I0813 03:21:06.426359 1835 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-pghwy.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 03:21:06.427060 kubelet[1835]: I0813 03:21:06.427032 1835 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 03:21:06.427192 kubelet[1835]: I0813 03:21:06.427170 1835 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 03:21:06.427568 kubelet[1835]: I0813 03:21:06.427544 1835 state_mem.go:36] "Initialized new in-memory state store" Aug 13 03:21:06.437426 kubelet[1835]: I0813 03:21:06.437299 1835 kubelet.go:408] "Attempting to sync node with API server" Aug 13 03:21:06.437426 kubelet[1835]: I0813 03:21:06.437438 1835 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 03:21:06.437725 kubelet[1835]: I0813 03:21:06.437521 1835 kubelet.go:314] "Adding apiserver pod source" Aug 13 03:21:06.437725 kubelet[1835]: I0813 03:21:06.437585 1835 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 03:21:06.445164 kubelet[1835]: W0813 03:21:06.445078 1835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.26.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-pghwy.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.26.254:6443: connect: connection refused Aug 13 03:21:06.445401 kubelet[1835]: E0813 03:21:06.445364 1835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.230.26.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-pghwy.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.26.254:6443: connect: connection refused" logger="UnhandledError" Aug 13 03:21:06.445676 kubelet[1835]: I0813 03:21:06.445630 1835 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 03:21:06.446581 kubelet[1835]: I0813 03:21:06.446556 1835 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 03:21:06.447503 kubelet[1835]: W0813 03:21:06.447477 1835 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 03:21:06.454224 kubelet[1835]: I0813 03:21:06.454194 1835 server.go:1274] "Started kubelet" Aug 13 03:21:06.459748 kubelet[1835]: W0813 03:21:06.459607 1835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.26.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.26.254:6443: connect: connection refused Aug 13 03:21:06.459748 kubelet[1835]: E0813 03:21:06.459688 1835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.26.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.26.254:6443: connect: connection refused" logger="UnhandledError" Aug 13 03:21:06.459951 kubelet[1835]: I0813 03:21:06.459767 1835 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 03:21:06.462623 kubelet[1835]: I0813 03:21:06.462556 1835 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 03:21:06.463548 kubelet[1835]: I0813 03:21:06.463520 1835 server.go:449] "Adding debug handlers to kubelet server" Aug 13 03:21:06.465455 kubelet[1835]: I0813 03:21:06.465422 1835 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 03:21:06.466000 audit[1835]: AVC avc: denied { mac_admin } for pid=1835 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:21:06.473754 kubelet[1835]: I0813 03:21:06.468293 1835 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 03:21:06.473754 kubelet[1835]: I0813 03:21:06.468366 1835 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 03:21:06.473754 kubelet[1835]: I0813 03:21:06.471151 1835 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 03:21:06.476355 kernel: audit: type=1400 audit(1755055266.466:223): avc: denied { mac_admin } for pid=1835 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:21:06.466000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 03:21:06.466000 audit[1835]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 
a0=c000bf4600 a1=c000b65740 a2=c000bf45d0 a3=25 items=0 ppid=1 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.466000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 03:21:06.466000 audit[1835]: AVC avc: denied { mac_admin } for pid=1835 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:21:06.466000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 03:21:06.466000 audit[1835]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b770e0 a1=c000b65758 a2=c000bf4690 a3=25 items=0 ppid=1 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.466000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 03:21:06.484188 kubelet[1835]: I0813 03:21:06.484161 1835 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 03:21:06.484737 kubelet[1835]: E0813 03:21:06.484706 1835 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-pghwy.gb1.brightbox.com\" not found" Aug 13 03:21:06.485940 kubelet[1835]: E0813 03:21:06.483948 1835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.26.254:6443/api/v1/namespaces/default/events\": dial tcp 10.230.26.254:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-pghwy.gb1.brightbox.com.185b3577562d6d41 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-pghwy.gb1.brightbox.com,UID:srv-pghwy.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-pghwy.gb1.brightbox.com,},FirstTimestamp:2025-08-13 03:21:06.454138177 +0000 UTC m=+0.899030740,LastTimestamp:2025-08-13 03:21:06.454138177 +0000 UTC m=+0.899030740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-pghwy.gb1.brightbox.com,}" Aug 13 03:21:06.486378 kubelet[1835]: I0813 03:21:06.486348 1835 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 03:21:06.488000 audit[1846]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1846 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:06.488000 audit[1846]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd52202e80 a2=0 a3=7ffd52202e6c items=0 ppid=1835 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.488000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 03:21:06.491145 kubelet[1835]: E0813 03:21:06.491103 1835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.26.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-pghwy.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.26.254:6443: connect: connection refused" interval="200ms" Aug 13 03:21:06.490000 audit[1847]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1847 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:06.490000 audit[1847]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdee484b00 a2=0 a3=7ffdee484aec items=0 ppid=1835 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 03:21:06.493979 kubelet[1835]: I0813 03:21:06.493939 1835 reconciler.go:26] "Reconciler: start to sync state" Aug 13 03:21:06.494468 kubelet[1835]: I0813 03:21:06.494438 1835 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 03:21:06.495124 kubelet[1835]: I0813 03:21:06.494784 1835 factory.go:221] Registration of the systemd container factory successfully Aug 13 03:21:06.495124 kubelet[1835]: I0813 03:21:06.494954 1835 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 03:21:06.496751 kubelet[1835]: W0813 03:21:06.496661 1835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.26.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.26.254:6443: connect: connection refused Aug 13 03:21:06.496845 kubelet[1835]: E0813 03:21:06.496809 1835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.26.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.26.254:6443: connect: connection refused" logger="UnhandledError" Aug 13 03:21:06.498255 kubelet[1835]: E0813 03:21:06.498218 1835 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 03:21:06.496000 audit[1849]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1849 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:06.496000 audit[1849]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe32ad0e70 a2=0 a3=7ffe32ad0e5c items=0 ppid=1835 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.496000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 03:21:06.500601 kubelet[1835]: I0813 03:21:06.500574 1835 factory.go:221] Registration of the containerd container factory successfully Aug 13 03:21:06.509000 audit[1853]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:06.509000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe2453ddf0 a2=0 a3=7ffe2453dddc items=0 ppid=1835 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 03:21:06.524000 audit[1857]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1857 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:06.524000 audit[1857]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffcae466c0 a2=0 a3=7fffcae466ac items=0 ppid=1835 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.524000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Aug 13 03:21:06.527187 kubelet[1835]: I0813 03:21:06.527114 1835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 03:21:06.530000 audit[1858]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1858 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:06.530000 audit[1858]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffce78a2a70 a2=0 a3=7ffce78a2a5c items=0 ppid=1835 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 03:21:06.535522 kubelet[1835]: I0813 03:21:06.535486 1835 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 03:21:06.535709 kubelet[1835]: I0813 03:21:06.535684 1835 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 03:21:06.535883 kubelet[1835]: I0813 03:21:06.535859 1835 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 03:21:06.536098 kubelet[1835]: E0813 03:21:06.536059 1835 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 03:21:06.538000 audit[1862]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:06.541000 kernel: kauditd_printk_skb: 25 callbacks suppressed Aug 13 03:21:06.541094 kernel: audit: type=1325 audit(1755055266.538:231): table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:06.538000 audit[1862]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0f5f5050 a2=0 a3=7ffe0f5f503c items=0 ppid=1835 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.552291 kernel: audit: type=1300 audit(1755055266.538:231): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0f5f5050 a2=0 a3=7ffe0f5f503c items=0 ppid=1835 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.538000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 03:21:06.555435 kubelet[1835]: W0813 03:21:06.555364 1835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.26.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.26.254:6443: connect: connection refused Aug 13 03:21:06.541000 audit[1863]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:06.557355 kubelet[1835]: E0813 03:21:06.557307 1835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.26.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.26.254:6443: connect: connection refused" logger="UnhandledError" Aug 13 03:21:06.557504 kubelet[1835]: I0813 03:21:06.557163 1835 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 03:21:06.557641 kubelet[1835]: I0813 03:21:06.557619 1835 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 03:21:06.557791 kubelet[1835]: I0813 03:21:06.557768 1835 state_mem.go:36] "Initialized new in-memory state store" Aug 13 03:21:06.560464 kernel: audit: type=1327 audit(1755055266.538:231): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 03:21:06.560576 kernel: audit: type=1325 audit(1755055266.541:232): table=nat:33 family=2 entries=1 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:06.560640 kernel: audit: type=1300 audit(1755055266.541:232): arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffccfe09bc0 a2=0 a3=7ffccfe09bac items=0 ppid=1835 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.541000 audit[1863]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccfe09bc0 a2=0 a3=7ffccfe09bac items=0 ppid=1835 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 03:21:06.569388 kubelet[1835]: I0813 03:21:06.569355 1835 policy_none.go:49] "None policy: Start" Aug 13 03:21:06.571959 kernel: audit: type=1327 audit(1755055266.541:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 03:21:06.542000 audit[1864]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:06.572660 kubelet[1835]: I0813 03:21:06.572635 1835 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 03:21:06.572852 kubelet[1835]: I0813 03:21:06.572811 1835 state_mem.go:35] "Initializing new in-memory state store" Aug 13 03:21:06.575872 kernel: audit: type=1325 audit(1755055266.542:233): table=filter:34 family=2 entries=1 op=nft_register_chain pid=1864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:06.575978 kernel: audit: type=1300 audit(1755055266.542:233): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffcebecb50 a2=0 a3=7fffcebecb3c items=0 ppid=1835 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.542000 audit[1864]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffcebecb50 a2=0 a3=7fffcebecb3c items=0 ppid=1835 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.591406 kernel: audit: type=1327 audit(1755055266.542:233): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 03:21:06.591570 kernel: audit: type=1325 audit(1755055266.544:234): table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:06.542000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 03:21:06.544000 audit[1865]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:06.544000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcff8b9300 a2=0 a3=7ffcff8b92ec items=0 ppid=1835 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.544000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 03:21:06.545000 audit[1866]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1866 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:06.545000 audit[1866]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe2b26bd60 a2=0 a3=7ffe2b26bd4c items=0 ppid=1835 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.545000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 03:21:06.546000 audit[1867]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:06.546000 audit[1867]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff311929a0 a2=0 a3=7fff3119298c items=0 ppid=1835 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.546000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 03:21:06.592716 kubelet[1835]: E0813 03:21:06.592670 1835 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-pghwy.gb1.brightbox.com\" not found" Aug 13 03:21:06.599584 kubelet[1835]: I0813 03:21:06.599550 1835 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 03:21:06.600000 audit[1835]: AVC avc: denied { mac_admin } for pid=1835 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:21:06.600000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 03:21:06.600000 audit[1835]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ec5410 a1=c000ed0af8 a2=c000ec53e0 a3=25 items=0 ppid=1 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:06.600000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 03:21:06.602400 kubelet[1835]: I0813 03:21:06.602097 1835 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 03:21:06.602400 kubelet[1835]: I0813 03:21:06.602381 1835 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 03:21:06.602539 kubelet[1835]: I0813 03:21:06.602407 1835 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 03:21:06.603087 kubelet[1835]: I0813 03:21:06.603058 1835 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 03:21:06.605203 kubelet[1835]: E0813 03:21:06.605174 1835 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-pghwy.gb1.brightbox.com\" not found" Aug 13 03:21:06.696893 kubelet[1835]: E0813 03:21:06.692452 1835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.26.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-pghwy.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.26.254:6443: connect: connection refused" interval="400ms" Aug 13 03:21:06.706664 kubelet[1835]: I0813 03:21:06.706620 1835 kubelet_node_status.go:72] "Attempting to register node" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.707179 kubelet[1835]: E0813 03:21:06.707131 1835 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.26.254:6443/api/v1/nodes\": dial tcp 10.230.26.254:6443: connect: connection refused" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.796732 kubelet[1835]: I0813 03:21:06.796658 1835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42a7a6fe203610cdf621b3138a1ee25b-k8s-certs\") pod \"kube-controller-manager-srv-pghwy.gb1.brightbox.com\" (UID: \"42a7a6fe203610cdf621b3138a1ee25b\") " pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.797095 kubelet[1835]: I0813 03:21:06.797065 1835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84f28c991608e12fa4f0024a7de1234d-kubeconfig\") pod \"kube-scheduler-srv-pghwy.gb1.brightbox.com\" (UID: \"84f28c991608e12fa4f0024a7de1234d\") " pod="kube-system/kube-scheduler-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.797271 kubelet[1835]: I0813 03:21:06.797241 1835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/306e4da29225d7b44536de485b455127-ca-certs\") pod \"kube-apiserver-srv-pghwy.gb1.brightbox.com\" (UID: \"306e4da29225d7b44536de485b455127\") " pod="kube-system/kube-apiserver-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.797495 kubelet[1835]: I0813 03:21:06.797465 1835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/306e4da29225d7b44536de485b455127-usr-share-ca-certificates\") pod \"kube-apiserver-srv-pghwy.gb1.brightbox.com\" (UID: \"306e4da29225d7b44536de485b455127\") " pod="kube-system/kube-apiserver-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.797664 kubelet[1835]: I0813 03:21:06.797636 1835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42a7a6fe203610cdf621b3138a1ee25b-ca-certs\") pod 
\"kube-controller-manager-srv-pghwy.gb1.brightbox.com\" (UID: \"42a7a6fe203610cdf621b3138a1ee25b\") " pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.797848 kubelet[1835]: I0813 03:21:06.797811 1835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42a7a6fe203610cdf621b3138a1ee25b-flexvolume-dir\") pod \"kube-controller-manager-srv-pghwy.gb1.brightbox.com\" (UID: \"42a7a6fe203610cdf621b3138a1ee25b\") " pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.798003 kubelet[1835]: I0813 03:21:06.797975 1835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/306e4da29225d7b44536de485b455127-k8s-certs\") pod \"kube-apiserver-srv-pghwy.gb1.brightbox.com\" (UID: \"306e4da29225d7b44536de485b455127\") " pod="kube-system/kube-apiserver-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.798187 kubelet[1835]: I0813 03:21:06.798159 1835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42a7a6fe203610cdf621b3138a1ee25b-kubeconfig\") pod \"kube-controller-manager-srv-pghwy.gb1.brightbox.com\" (UID: \"42a7a6fe203610cdf621b3138a1ee25b\") " pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.798374 kubelet[1835]: I0813 03:21:06.798337 1835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42a7a6fe203610cdf621b3138a1ee25b-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-pghwy.gb1.brightbox.com\" (UID: \"42a7a6fe203610cdf621b3138a1ee25b\") " pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.910875 kubelet[1835]: I0813 03:21:06.910829 1835 kubelet_node_status.go:72] "Attempting to register node" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.911804 kubelet[1835]: E0813 03:21:06.911742 1835 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.26.254:6443/api/v1/nodes\": dial tcp 10.230.26.254:6443: connect: connection refused" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:06.950984 env[1300]: time="2025-08-13T03:21:06.949857921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-pghwy.gb1.brightbox.com,Uid:306e4da29225d7b44536de485b455127,Namespace:kube-system,Attempt:0,}" Aug 13 03:21:06.965511 env[1300]: time="2025-08-13T03:21:06.965342293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-pghwy.gb1.brightbox.com,Uid:42a7a6fe203610cdf621b3138a1ee25b,Namespace:kube-system,Attempt:0,}" Aug 13 03:21:06.966291 env[1300]: time="2025-08-13T03:21:06.966172316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-pghwy.gb1.brightbox.com,Uid:84f28c991608e12fa4f0024a7de1234d,Namespace:kube-system,Attempt:0,}" Aug 13 03:21:07.097242 kubelet[1835]: E0813 03:21:07.097159 1835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.26.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-pghwy.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.26.254:6443: connect: connection refused" interval="800ms" Aug 13 03:21:07.316758 kubelet[1835]: I0813 03:21:07.316148 1835 
kubelet_node_status.go:72] "Attempting to register node" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:07.317572 kubelet[1835]: E0813 03:21:07.317521 1835 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.26.254:6443/api/v1/nodes\": dial tcp 10.230.26.254:6443: connect: connection refused" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:07.462356 kubelet[1835]: W0813 03:21:07.462224 1835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.26.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-pghwy.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.26.254:6443: connect: connection refused Aug 13 03:21:07.462646 kubelet[1835]: E0813 03:21:07.462373 1835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.26.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-pghwy.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.26.254:6443: connect: connection refused" logger="UnhandledError" Aug 13 03:21:07.467460 kubelet[1835]: W0813 03:21:07.467402 1835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.26.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.26.254:6443: connect: connection refused Aug 13 03:21:07.467573 kubelet[1835]: E0813 03:21:07.467468 1835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.26.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.26.254:6443: connect: connection refused" logger="UnhandledError" Aug 13 03:21:07.671305 kubelet[1835]: W0813 03:21:07.671076 1835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.26.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.26.254:6443: connect: connection refused Aug 13 03:21:07.672377 kubelet[1835]: E0813 03:21:07.672305 1835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.26.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.26.254:6443: connect: connection refused" logger="UnhandledError" Aug 13 03:21:07.899172 kubelet[1835]: E0813 03:21:07.899087 1835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.26.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-pghwy.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.26.254:6443: connect: connection refused" interval="1.6s" Aug 13 03:21:07.916596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687799373.mount: Deactivated successfully. 
Aug 13 03:21:07.924562 env[1300]: time="2025-08-13T03:21:07.923802431Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:07.927831 env[1300]: time="2025-08-13T03:21:07.927765934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:07.929439 env[1300]: time="2025-08-13T03:21:07.929395239Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:07.931913 env[1300]: time="2025-08-13T03:21:07.931868572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:07.935599 env[1300]: time="2025-08-13T03:21:07.935026424Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:07.936686 env[1300]: time="2025-08-13T03:21:07.936635659Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:07.952166 env[1300]: time="2025-08-13T03:21:07.952105269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:07.959353 env[1300]: time="2025-08-13T03:21:07.959245854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:07.976427 env[1300]: time="2025-08-13T03:21:07.976282045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:21:07.976693 env[1300]: time="2025-08-13T03:21:07.976394933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:21:07.976693 env[1300]: time="2025-08-13T03:21:07.976413652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:21:07.976693 env[1300]: time="2025-08-13T03:21:07.976649261Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a9dfc68ea34edc1bae3efa4f40f3420003e7e122e58ffe17ca10e8b14830439 pid=1875 runtime=io.containerd.runc.v2 Aug 13 03:21:07.986371 env[1300]: time="2025-08-13T03:21:07.986278669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:21:07.987050 env[1300]: time="2025-08-13T03:21:07.986993351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:21:07.987248 env[1300]: time="2025-08-13T03:21:07.987193406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:21:07.987806 env[1300]: time="2025-08-13T03:21:07.987738806Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2af1b96ec7aea1f3be4c75819df3922f6e27cec95d344624e972f580d8a93a07 pid=1893 runtime=io.containerd.runc.v2 Aug 13 03:21:08.029071 kubelet[1835]: W0813 03:21:08.024996 1835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.26.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.26.254:6443: connect: connection refused Aug 13 03:21:08.029071 kubelet[1835]: E0813 03:21:08.025105 1835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.26.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.26.254:6443: connect: connection refused" logger="UnhandledError" Aug 13 03:21:08.099547 env[1300]: time="2025-08-13T03:21:08.099468940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-pghwy.gb1.brightbox.com,Uid:84f28c991608e12fa4f0024a7de1234d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2af1b96ec7aea1f3be4c75819df3922f6e27cec95d344624e972f580d8a93a07\"" Aug 13 03:21:08.104612 env[1300]: time="2025-08-13T03:21:08.104543671Z" level=info msg="CreateContainer within sandbox \"2af1b96ec7aea1f3be4c75819df3922f6e27cec95d344624e972f580d8a93a07\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 03:21:08.117999 env[1300]: time="2025-08-13T03:21:08.117928362Z" level=info msg="CreateContainer within sandbox \"2af1b96ec7aea1f3be4c75819df3922f6e27cec95d344624e972f580d8a93a07\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0b735d8ba7161233bbcae3c378e028824adac689900aa0656aadf99358a72e4e\"" Aug 13 03:21:08.120100 env[1300]: time="2025-08-13T03:21:08.120043298Z" level=info msg="StartContainer for \"0b735d8ba7161233bbcae3c378e028824adac689900aa0656aadf99358a72e4e\"" Aug 13 03:21:08.122532 kubelet[1835]: I0813 03:21:08.121938 1835 kubelet_node_status.go:72] "Attempting to register node" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:08.122532 kubelet[1835]: E0813 03:21:08.122477 1835 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.26.254:6443/api/v1/nodes\": dial tcp 10.230.26.254:6443: connect: connection refused" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:08.133129 env[1300]: time="2025-08-13T03:21:08.133045839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-pghwy.gb1.brightbox.com,Uid:306e4da29225d7b44536de485b455127,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a9dfc68ea34edc1bae3efa4f40f3420003e7e122e58ffe17ca10e8b14830439\"" Aug 13 03:21:08.137123 env[1300]: time="2025-08-13T03:21:08.137075275Z" level=info msg="CreateContainer within sandbox \"2a9dfc68ea34edc1bae3efa4f40f3420003e7e122e58ffe17ca10e8b14830439\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 03:21:08.148467 env[1300]: time="2025-08-13T03:21:08.148295557Z" level=info msg="CreateContainer within sandbox \"2a9dfc68ea34edc1bae3efa4f40f3420003e7e122e58ffe17ca10e8b14830439\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ca5576733d9a1f6ad3ab39d88d3b5457d8af967ef7da9b7fbb5edb3b3d2a1b8\"" Aug 13 
03:21:08.149579 env[1300]: time="2025-08-13T03:21:08.149546021Z" level=info msg="StartContainer for \"1ca5576733d9a1f6ad3ab39d88d3b5457d8af967ef7da9b7fbb5edb3b3d2a1b8\"" Aug 13 03:21:08.221757 env[1300]: time="2025-08-13T03:21:08.221696762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:08.225481 env[1300]: time="2025-08-13T03:21:08.225447830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:08.229512 env[1300]: time="2025-08-13T03:21:08.229478210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:08.232940 env[1300]: time="2025-08-13T03:21:08.232907307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:08.280654 env[1300]: time="2025-08-13T03:21:08.280547354Z" level=info msg="StartContainer for \"0b735d8ba7161233bbcae3c378e028824adac689900aa0656aadf99358a72e4e\" returns successfully" Aug 13 03:21:08.304050 env[1300]: time="2025-08-13T03:21:08.303954248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:21:08.304467 env[1300]: time="2025-08-13T03:21:08.304411135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:21:08.304663 env[1300]: time="2025-08-13T03:21:08.304609008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:21:08.305100 env[1300]: time="2025-08-13T03:21:08.305041526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47609144812c2a08292ad4d235aeacad75e1cc4c832bd1abfffed990d48e8cf1 pid=2018 runtime=io.containerd.runc.v2 Aug 13 03:21:08.327952 env[1300]: time="2025-08-13T03:21:08.322592867Z" level=info msg="StartContainer for \"1ca5576733d9a1f6ad3ab39d88d3b5457d8af967ef7da9b7fbb5edb3b3d2a1b8\" returns successfully" Aug 13 03:21:08.465070 env[1300]: time="2025-08-13T03:21:08.456409437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-pghwy.gb1.brightbox.com,Uid:42a7a6fe203610cdf621b3138a1ee25b,Namespace:kube-system,Attempt:0,} returns sandbox id \"47609144812c2a08292ad4d235aeacad75e1cc4c832bd1abfffed990d48e8cf1\"" Aug 13 03:21:08.465070 env[1300]: time="2025-08-13T03:21:08.459585216Z" level=info msg="CreateContainer within sandbox \"47609144812c2a08292ad4d235aeacad75e1cc4c832bd1abfffed990d48e8cf1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 03:21:08.492229 kubelet[1835]: E0813 03:21:08.491259 1835 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.26.254:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.26.254:6443: connect: connection refused" logger="UnhandledError" Aug 13 03:21:08.499688 env[1300]: time="2025-08-13T03:21:08.499623913Z" level=info msg="CreateContainer within sandbox \"47609144812c2a08292ad4d235aeacad75e1cc4c832bd1abfffed990d48e8cf1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"041b63e1f30464e4d8f35ea990203dd2313bcbee07d2e919379b426ef2923e38\"" Aug 13 03:21:08.500641 env[1300]: time="2025-08-13T03:21:08.500583251Z" level=info msg="StartContainer for \"041b63e1f30464e4d8f35ea990203dd2313bcbee07d2e919379b426ef2923e38\"" Aug 13 03:21:08.669364 env[1300]: time="2025-08-13T03:21:08.668738516Z" level=info msg="StartContainer for \"041b63e1f30464e4d8f35ea990203dd2313bcbee07d2e919379b426ef2923e38\" returns successfully" Aug 13 03:21:09.726477 kubelet[1835]: I0813 03:21:09.726427 1835 kubelet_node_status.go:72] "Attempting to register node" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:11.305894 kubelet[1835]: E0813 03:21:11.305806 1835 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-pghwy.gb1.brightbox.com\" not found" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:11.441076 kubelet[1835]: I0813 03:21:11.441020 1835 kubelet_node_status.go:75] "Successfully registered node" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:11.441076 kubelet[1835]: E0813 03:21:11.441088 1835 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"srv-pghwy.gb1.brightbox.com\": node \"srv-pghwy.gb1.brightbox.com\" not found" Aug 13 03:21:11.458197 kubelet[1835]: I0813 03:21:11.458145 1835 apiserver.go:52] "Watching apiserver" Aug 13 03:21:11.494958 kubelet[1835]: I0813 03:21:11.494900 1835 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 03:21:11.595096 kubelet[1835]: E0813 03:21:11.594909 1835 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-pghwy.gb1.brightbox.com\" 
is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:13.565608 systemd[1]: Reloading. Aug 13 03:21:13.651726 /usr/lib/systemd/system-generators/torcx-generator[2124]: time="2025-08-13T03:21:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 03:21:13.652486 /usr/lib/systemd/system-generators/torcx-generator[2124]: time="2025-08-13T03:21:13Z" level=info msg="torcx already run" Aug 13 03:21:13.804162 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 03:21:13.804191 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 03:21:13.834074 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 03:21:13.971559 systemd[1]: Stopping kubelet.service... Aug 13 03:21:13.993168 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 03:21:13.993680 systemd[1]: Stopped kubelet.service. Aug 13 03:21:14.010784 kernel: kauditd_printk_skb: 12 callbacks suppressed Aug 13 03:21:14.011154 kernel: audit: type=1131 audit(1755055273.992:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:13.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:14.015525 systemd[1]: Starting kubelet.service... Aug 13 03:21:15.392083 kernel: audit: type=1130 audit(1755055275.372:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:15.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:15.371515 systemd[1]: Started kubelet.service. Aug 13 03:21:15.571066 kubelet[2186]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 03:21:15.571066 kubelet[2186]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 03:21:15.571066 kubelet[2186]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 03:21:15.571066 kubelet[2186]: I0813 03:21:15.570864 2186 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 03:21:15.588576 kubelet[2186]: I0813 03:21:15.587808 2186 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 03:21:15.588576 kubelet[2186]: I0813 03:21:15.587858 2186 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 03:21:15.588576 kubelet[2186]: I0813 03:21:15.588177 2186 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 03:21:15.590578 kubelet[2186]: I0813 03:21:15.590358 2186 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 03:21:15.600016 kubelet[2186]: I0813 03:21:15.599584 2186 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 03:21:15.608629 kubelet[2186]: E0813 03:21:15.608579 2186 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 03:21:15.609783 kubelet[2186]: I0813 03:21:15.608866 2186 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 03:21:15.619433 kubelet[2186]: I0813 03:21:15.619396 2186 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 03:21:15.620242 kubelet[2186]: I0813 03:21:15.620045 2186 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 03:21:15.620364 kubelet[2186]: I0813 03:21:15.620237 2186 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 03:21:15.621057 kubelet[2186]: I0813 03:21:15.620285 2186 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"srv-pghwy.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 03:21:15.621057 kubelet[2186]: I0813 03:21:15.620607 2186 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 03:21:15.621057 kubelet[2186]: I0813 03:21:15.620630 2186 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 03:21:15.621057 kubelet[2186]: I0813 03:21:15.620719 2186 state_mem.go:36] "Initialized new in-memory state store" Aug 13 03:21:15.621057 kubelet[2186]: I0813 03:21:15.620949 2186 kubelet.go:408] "Attempting to sync node with API server" Aug 13 03:21:15.622437 kubelet[2186]: I0813 03:21:15.622103 2186 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 03:21:15.622437 kubelet[2186]: I0813 03:21:15.622226 2186 kubelet.go:314] "Adding apiserver pod source" Aug 13 03:21:15.646254 kubelet[2186]: I0813 03:21:15.645933 2186 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 03:21:15.649313 kubelet[2186]: I0813 03:21:15.648838 2186 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 03:21:15.649719 kubelet[2186]: I0813 03:21:15.649669 2186 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 03:21:15.650650 kubelet[2186]: I0813 03:21:15.650624 2186 server.go:1274] "Started kubelet" Aug 13 03:21:15.668849 kernel: audit: type=1400 audit(1755055275.657:240): avc: denied { mac_admin } for pid=2186 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:21:15.657000 audit[2186]: AVC avc: denied { mac_admin } for pid=2186 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:21:15.669244 kubelet[2186]: I0813 03:21:15.658801 2186 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" 
path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 03:21:15.669244 kubelet[2186]: I0813 03:21:15.658861 2186 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 03:21:15.669244 kubelet[2186]: I0813 03:21:15.662001 2186 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 03:21:15.669244 kubelet[2186]: I0813 03:21:15.666827 2186 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 03:21:15.669535 kubelet[2186]: I0813 03:21:15.669268 2186 server.go:449] "Adding debug handlers to kubelet server" Aug 13 03:21:15.674055 kernel: audit: type=1401 audit(1755055275.657:240): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 03:21:15.657000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 03:21:15.674245 kubelet[2186]: I0813 03:21:15.670750 2186 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 03:21:15.674245 kubelet[2186]: I0813 03:21:15.671044 2186 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 03:21:15.674245 kubelet[2186]: I0813 03:21:15.671878 2186 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 03:21:15.680712 kubelet[2186]: I0813 03:21:15.680655 2186 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 03:21:15.681889 kubelet[2186]: E0813 03:21:15.681653 2186 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-pghwy.gb1.brightbox.com\" not found" Aug 13 03:21:15.691115 kernel: audit: type=1300 audit(1755055275.657:240): arch=c000003e syscall=188 success=no exit=-22 a0=c000857b30 a1=c00086cdb0 a2=c000857b00 a3=25 items=0 ppid=1 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:15.657000 audit[2186]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000857b30 a1=c00086cdb0 a2=c000857b00 a3=25 items=0 ppid=1 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:15.693111 kubelet[2186]: I0813 03:21:15.682822 2186 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 03:21:15.693111 kubelet[2186]: I0813 03:21:15.683082 2186 reconciler.go:26] "Reconciler: start to sync state" Aug 13 03:21:15.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 03:21:15.700345 kernel: audit: type=1327 audit(1755055275.657:240): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 03:21:15.657000 audit[2186]: 
AVC avc: denied { mac_admin } for pid=2186 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:21:15.718069 kernel: audit: type=1400 audit(1755055275.657:241): avc: denied { mac_admin } for pid=2186 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:21:15.720505 kubelet[2186]: I0813 03:21:15.720090 2186 factory.go:221] Registration of the systemd container factory successfully Aug 13 03:21:15.720505 kubelet[2186]: I0813 03:21:15.720345 2186 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 03:21:15.722187 kubelet[2186]: E0813 03:21:15.721621 2186 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 03:21:15.657000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 03:21:15.727707 kernel: audit: type=1401 audit(1755055275.657:241): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 03:21:15.727867 kernel: audit: type=1300 audit(1755055275.657:241): arch=c000003e syscall=188 success=no exit=-22 a0=c000a14080 a1=c00086cdc8 a2=c000857bc0 a3=25 items=0 ppid=1 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:15.657000 audit[2186]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a14080 a1=c00086cdc8 a2=c000857bc0 a3=25 items=0 ppid=1 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:15.731482 kubelet[2186]: I0813 03:21:15.730589 2186 factory.go:221] Registration of the containerd container factory successfully Aug 13 03:21:15.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 03:21:15.747452 kernel: audit: type=1327 audit(1755055275.657:241): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 03:21:15.804754 kubelet[2186]: I0813 03:21:15.804700 2186 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 03:21:15.806591 kubelet[2186]: I0813 03:21:15.806565 2186 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 03:21:15.806777 kubelet[2186]: I0813 03:21:15.806752 2186 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 03:21:15.806971 kubelet[2186]: I0813 03:21:15.806935 2186 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 03:21:15.807232 kubelet[2186]: E0813 03:21:15.807196 2186 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 03:21:15.884087 kubelet[2186]: I0813 03:21:15.884046 2186 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 03:21:15.884427 kubelet[2186]: I0813 03:21:15.884400 2186 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 03:21:15.884569 kubelet[2186]: I0813 03:21:15.884547 2186 state_mem.go:36] "Initialized new in-memory state store" Aug 13 03:21:15.884922 kubelet[2186]: I0813 03:21:15.884896 2186 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 03:21:15.885066 kubelet[2186]: I0813 03:21:15.885023 2186 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 03:21:15.885210 kubelet[2186]: I0813 03:21:15.885187 2186 policy_none.go:49] "None policy: Start" Aug 13 03:21:15.886213 kubelet[2186]: I0813 03:21:15.886189 2186 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 03:21:15.886384 kubelet[2186]: I0813 03:21:15.886362 2186 state_mem.go:35] "Initializing new in-memory state store" Aug 13 03:21:15.886708 kubelet[2186]: I0813 03:21:15.886684 2186 state_mem.go:75] "Updated machine memory state" Aug 13 03:21:15.889130 kubelet[2186]: I0813 03:21:15.889102 2186 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 03:21:15.887000 audit[2186]: AVC avc: denied { mac_admin } for pid=2186 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:21:15.887000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 03:21:15.887000 audit[2186]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00121c540 a1=c00120ed20 a2=c00121c510 a3=25 items=0 ppid=1 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:15.887000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 03:21:15.889962 kubelet[2186]: I0813 03:21:15.889931 2186 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 03:21:15.890311 kubelet[2186]: I0813 03:21:15.890286 2186 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 03:21:15.890595 kubelet[2186]: I0813 03:21:15.890531 2186 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 03:21:15.894968 kubelet[2186]: I0813 03:21:15.894941 2186 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 03:21:15.924436 kubelet[2186]: W0813 03:21:15.922190 2186 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 03:21:15.941771 kubelet[2186]: W0813 03:21:15.941729 2186 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 03:21:15.953092 kubelet[2186]: W0813 03:21:15.952862 2186 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 03:21:15.986495 kubelet[2186]: I0813 03:21:15.986437 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/306e4da29225d7b44536de485b455127-ca-certs\") pod \"kube-apiserver-srv-pghwy.gb1.brightbox.com\" (UID: \"306e4da29225d7b44536de485b455127\") " pod="kube-system/kube-apiserver-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:15.986828 kubelet[2186]: I0813 03:21:15.986791 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42a7a6fe203610cdf621b3138a1ee25b-k8s-certs\") pod \"kube-controller-manager-srv-pghwy.gb1.brightbox.com\" (UID: \"42a7a6fe203610cdf621b3138a1ee25b\") " pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:15.986982 kubelet[2186]: I0813 03:21:15.986954 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42a7a6fe203610cdf621b3138a1ee25b-ca-certs\") pod \"kube-controller-manager-srv-pghwy.gb1.brightbox.com\" (UID: \"42a7a6fe203610cdf621b3138a1ee25b\") " pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:15.987146 kubelet[2186]: I0813 03:21:15.987118 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42a7a6fe203610cdf621b3138a1ee25b-flexvolume-dir\") pod \"kube-controller-manager-srv-pghwy.gb1.brightbox.com\" (UID: \"42a7a6fe203610cdf621b3138a1ee25b\") " pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:15.987283 kubelet[2186]: I0813 03:21:15.987256 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42a7a6fe203610cdf621b3138a1ee25b-kubeconfig\") pod \"kube-controller-manager-srv-pghwy.gb1.brightbox.com\" (UID: \"42a7a6fe203610cdf621b3138a1ee25b\") " pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:15.987455 kubelet[2186]: I0813 03:21:15.987423 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42a7a6fe203610cdf621b3138a1ee25b-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-pghwy.gb1.brightbox.com\" (UID: \"42a7a6fe203610cdf621b3138a1ee25b\") " pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:15.987600 kubelet[2186]: I0813 03:21:15.987573 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84f28c991608e12fa4f0024a7de1234d-kubeconfig\") pod \"kube-scheduler-srv-pghwy.gb1.brightbox.com\" (UID: \"84f28c991608e12fa4f0024a7de1234d\") " pod="kube-system/kube-scheduler-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:15.987755 kubelet[2186]: I0813 03:21:15.987726 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/306e4da29225d7b44536de485b455127-k8s-certs\") pod \"kube-apiserver-srv-pghwy.gb1.brightbox.com\" (UID: \"306e4da29225d7b44536de485b455127\") " pod="kube-system/kube-apiserver-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:15.987905 kubelet[2186]: I0813 03:21:15.987877 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/306e4da29225d7b44536de485b455127-usr-share-ca-certificates\") pod \"kube-apiserver-srv-pghwy.gb1.brightbox.com\" (UID: \"306e4da29225d7b44536de485b455127\") " pod="kube-system/kube-apiserver-srv-pghwy.gb1.brightbox.com" Aug 13 03:21:16.051342 kubelet[2186]: I0813 03:21:16.051288 2186 kubelet_node_status.go:72] "Attempting to register node" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:16.066748 kubelet[2186]: I0813 03:21:16.066701 2186 kubelet_node_status.go:111] "Node was previously registered" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:16.067141 kubelet[2186]: I0813 03:21:16.067118 2186 kubelet_node_status.go:75] "Successfully registered node" node="srv-pghwy.gb1.brightbox.com" Aug 13 03:21:16.647054 kubelet[2186]: I0813 03:21:16.646996 2186 apiserver.go:52] "Watching apiserver" Aug 13 03:21:16.684074 kubelet[2186]: I0813 03:21:16.684019 2186 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 03:21:16.875548 kubelet[2186]: I0813 03:21:16.875200 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-pghwy.gb1.brightbox.com" podStartSLOduration=1.874527925 podStartE2EDuration="1.874527925s" podCreationTimestamp="2025-08-13 03:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 03:21:16.872958913 +0000 UTC m=+1.446398749" watchObservedRunningTime="2025-08-13 03:21:16.874527925 +0000 UTC m=+1.447967762" Aug 13 03:21:16.896568 kubelet[2186]: I0813 03:21:16.896487 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-pghwy.gb1.brightbox.com" podStartSLOduration=1.8964467200000001 podStartE2EDuration="1.89644672s" podCreationTimestamp="2025-08-13 03:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 03:21:16.885999342 +0000 UTC m=+1.459439185" watchObservedRunningTime="2025-08-13 03:21:16.89644672 +0000 UTC m=+1.469886552" Aug 13 03:21:16.910103 kubelet[2186]: 
I0813 03:21:16.909921 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-pghwy.gb1.brightbox.com" podStartSLOduration=1.90989783 podStartE2EDuration="1.90989783s" podCreationTimestamp="2025-08-13 03:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 03:21:16.897414185 +0000 UTC m=+1.470854020" watchObservedRunningTime="2025-08-13 03:21:16.90989783 +0000 UTC m=+1.483337667" Aug 13 03:21:19.953293 kubelet[2186]: I0813 03:21:19.953241 2186 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 03:21:19.954038 env[1300]: time="2025-08-13T03:21:19.953783262Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 03:21:19.954538 kubelet[2186]: I0813 03:21:19.954118 2186 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 03:21:20.924801 kubelet[2186]: I0813 03:21:20.924557 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3223f607-e255-4759-bea6-3963de3161bb-kube-proxy\") pod \"kube-proxy-jb52z\" (UID: \"3223f607-e255-4759-bea6-3963de3161bb\") " pod="kube-system/kube-proxy-jb52z" Aug 13 03:21:20.925176 kubelet[2186]: I0813 03:21:20.925134 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3223f607-e255-4759-bea6-3963de3161bb-xtables-lock\") pod \"kube-proxy-jb52z\" (UID: \"3223f607-e255-4759-bea6-3963de3161bb\") " pod="kube-system/kube-proxy-jb52z" Aug 13 03:21:20.925480 kubelet[2186]: I0813 03:21:20.925433 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3223f607-e255-4759-bea6-3963de3161bb-lib-modules\") pod \"kube-proxy-jb52z\" (UID: \"3223f607-e255-4759-bea6-3963de3161bb\") " pod="kube-system/kube-proxy-jb52z" Aug 13 03:21:20.925601 kubelet[2186]: I0813 03:21:20.925482 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzdp5\" (UniqueName: \"kubernetes.io/projected/3223f607-e255-4759-bea6-3963de3161bb-kube-api-access-lzdp5\") pod \"kube-proxy-jb52z\" (UID: \"3223f607-e255-4759-bea6-3963de3161bb\") " pod="kube-system/kube-proxy-jb52z" Aug 13 03:21:21.045540 kubelet[2186]: I0813 03:21:21.045469 2186 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 03:21:21.126950 kubelet[2186]: I0813 03:21:21.126875 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/feb44eed-17fe-4b4a-9fde-951b9e220077-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-sb99t\" (UID: \"feb44eed-17fe-4b4a-9fde-951b9e220077\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-sb99t" Aug 13 03:21:21.127310 kubelet[2186]: I0813 03:21:21.127270 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h967t\" (UniqueName: \"kubernetes.io/projected/feb44eed-17fe-4b4a-9fde-951b9e220077-kube-api-access-h967t\") pod \"tigera-operator-5bf8dfcb4-sb99t\" (UID: \"feb44eed-17fe-4b4a-9fde-951b9e220077\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-sb99t" Aug 13 03:21:21.223543 env[1300]: time="2025-08-13T03:21:21.222894236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jb52z,Uid:3223f607-e255-4759-bea6-3963de3161bb,Namespace:kube-system,Attempt:0,}" Aug 13 03:21:21.267316 env[1300]: time="2025-08-13T03:21:21.267184513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:21:21.267583 env[1300]: time="2025-08-13T03:21:21.267284051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:21:21.267583 env[1300]: time="2025-08-13T03:21:21.267302238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:21:21.267805 env[1300]: time="2025-08-13T03:21:21.267631327Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d707c854871933a6b66d9f5e8a1018538c1c777fd45b959092326bec732bb50 pid=2240 runtime=io.containerd.runc.v2 Aug 13 03:21:21.355734 env[1300]: time="2025-08-13T03:21:21.355652537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jb52z,Uid:3223f607-e255-4759-bea6-3963de3161bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d707c854871933a6b66d9f5e8a1018538c1c777fd45b959092326bec732bb50\"" Aug 13 03:21:21.364564 env[1300]: time="2025-08-13T03:21:21.364495371Z" level=info msg="CreateContainer within sandbox \"3d707c854871933a6b66d9f5e8a1018538c1c777fd45b959092326bec732bb50\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 03:21:21.384734 env[1300]: time="2025-08-13T03:21:21.384656028Z" level=info msg="CreateContainer within sandbox \"3d707c854871933a6b66d9f5e8a1018538c1c777fd45b959092326bec732bb50\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"243249e1ae1d1e41be94c3608b214abaf7193072ac3d1b470c9e432e41e25cb5\"" Aug 13 03:21:21.388303 env[1300]: time="2025-08-13T03:21:21.388263550Z" level=info msg="StartContainer for \"243249e1ae1d1e41be94c3608b214abaf7193072ac3d1b470c9e432e41e25cb5\"" Aug 13 03:21:21.421834 env[1300]: time="2025-08-13T03:21:21.421757714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-sb99t,Uid:feb44eed-17fe-4b4a-9fde-951b9e220077,Namespace:tigera-operator,Attempt:0,}" Aug 13 03:21:21.455196 env[1300]: time="2025-08-13T03:21:21.454880379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:21:21.455196 env[1300]: time="2025-08-13T03:21:21.454942075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:21:21.455196 env[1300]: time="2025-08-13T03:21:21.454964523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:21:21.455816 env[1300]: time="2025-08-13T03:21:21.455678897Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5890ca8c4a31a4c5904c8746d70017f619455364943acede356bd22909f3f36b pid=2305 runtime=io.containerd.runc.v2 Aug 13 03:21:21.530842 env[1300]: time="2025-08-13T03:21:21.530683540Z" level=info msg="StartContainer for \"243249e1ae1d1e41be94c3608b214abaf7193072ac3d1b470c9e432e41e25cb5\" returns successfully" Aug 13 03:21:21.588842 env[1300]: time="2025-08-13T03:21:21.588711829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-sb99t,Uid:feb44eed-17fe-4b4a-9fde-951b9e220077,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5890ca8c4a31a4c5904c8746d70017f619455364943acede356bd22909f3f36b\"" Aug 13 03:21:21.592947 env[1300]: time="2025-08-13T03:21:21.592895478Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 03:21:21.991000 audit[2383]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:21.994795 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 13 03:21:21.995976 kernel: audit: type=1325 audit(1755055281.991:243): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:21.991000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffdfea2800 a2=0 a3=7fffdfea27ec items=0 ppid=2291 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.008371 kernel: audit: type=1300 audit(1755055281.991:243): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffdfea2800 a2=0 a3=7fffdfea27ec items=0 ppid=2291 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:21.991000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 03:21:22.015415 kernel: audit: type=1327 audit(1755055281.991:243): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 03:21:21.999000 audit[2384]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.021355 kernel: audit: type=1325 audit(1755055281.999:244): table=nat:39 family=10 entries=1 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:21.999000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda965f1b0 a2=0 a3=7ffda965f19c items=0 ppid=2291 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.029353 kernel: audit: type=1300 audit(1755055281.999:244): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda965f1b0 a2=0 a3=7ffda965f19c items=0 ppid=2291 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:21.999000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 03:21:22.002000 audit[2385]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.037750 kernel: audit: type=1327 audit(1755055281.999:244): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 03:21:22.037846 kernel: audit: type=1325 audit(1755055282.002:245): table=filter:40 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.002000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0be4c670 a2=0 a3=7fff0be4c65c items=0 ppid=2291 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.045777 kernel: audit: type=1300 audit(1755055282.002:245): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0be4c670 a2=0 a3=7fff0be4c65c items=0 ppid=2291 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 03:21:22.050379 kernel: audit: type=1327 audit(1755055282.002:245): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 03:21:22.050491 kernel: audit: type=1325 audit(1755055282.009:246): table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.009000 audit[2386]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.058427 systemd[1]: run-containerd-runc-k8s.io-3d707c854871933a6b66d9f5e8a1018538c1c777fd45b959092326bec732bb50-runc.Q1ucyE.mount: Deactivated successfully. 
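
The PROCTITLE field in each audit record above and below is the invoked command line, hex-encoded by the kernel with NUL bytes between arguments. A minimal Python sketch for reading them (decode_proctitle is a hypothetical helper, not a tool appearing in this log), applied to the KUBE-PROXY-CANARY record logged at 03:21:21.999 above:

    def decode_proctitle(hexstr: str) -> str:
        # audit stores argv as hex; turn the NUL separators into spaces for readability
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode("ascii", "replace")

    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030002D4E00"
        "4B5542452D50524F58592D43414E415259002D74006E6174"
    ))
    # -> ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t nat

Decoded the same way, the later PROCTITLE entries are kube-proxy's iptables/ip6tables invocations that create and wire up the KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL and KUBE-POSTROUTING chains for both address families.
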
Aug 13 03:21:22.009000 audit[2386]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc038a4440 a2=0 a3=7ffc038a442c items=0 ppid=2291 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.009000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 03:21:22.013000 audit[2387]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.013000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6e95dce0 a2=0 a3=7ffd6e95dccc items=0 ppid=2291 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 03:21:22.021000 audit[2388]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.021000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7c018fd0 a2=0 a3=7ffe7c018fbc items=0 ppid=2291 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.021000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 03:21:22.107000 audit[2389]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.107000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff204d5ac0 a2=0 a3=7fff204d5aac items=0 ppid=2291 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.107000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 03:21:22.113000 audit[2391]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.113000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffea1dd9b0 a2=0 a3=7fffea1dd99c items=0 ppid=2291 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.113000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Aug 13 03:21:22.121000 audit[2394]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 
03:21:22.121000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeebf8ea90 a2=0 a3=7ffeebf8ea7c items=0 ppid=2291 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.121000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Aug 13 03:21:22.123000 audit[2395]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.123000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbabb1a20 a2=0 a3=7fffbabb1a0c items=0 ppid=2291 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.123000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 03:21:22.127000 audit[2397]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.127000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff99b52f60 a2=0 a3=7fff99b52f4c items=0 ppid=2291 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 03:21:22.129000 audit[2398]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.129000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc607418e0 a2=0 a3=7ffc607418cc items=0 ppid=2291 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 03:21:22.133000 audit[2400]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.133000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdf96feb00 a2=0 a3=7ffdf96feaec items=0 ppid=2291 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.133000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 03:21:22.138000 audit[2403]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.138000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffda6d0ed50 a2=0 a3=7ffda6d0ed3c items=0 ppid=2291 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.138000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Aug 13 03:21:22.140000 audit[2404]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.140000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee7c73de0 a2=0 a3=7ffee7c73dcc items=0 ppid=2291 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 03:21:22.144000 audit[2406]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.144000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd38d1d8f0 a2=0 a3=7ffd38d1d8dc items=0 ppid=2291 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 03:21:22.145000 audit[2407]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.145000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6df37010 a2=0 a3=7ffd6df36ffc items=0 ppid=2291 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 03:21:22.150000 audit[2409]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.150000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff32499f90 a2=0 a3=7fff32499f7c items=0 ppid=2291 
pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 03:21:22.158000 audit[2412]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.158000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc7f5db820 a2=0 a3=7ffc7f5db80c items=0 ppid=2291 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 03:21:22.165000 audit[2415]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.165000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd1876d900 a2=0 a3=7ffd1876d8ec items=0 ppid=2291 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 03:21:22.168000 audit[2416]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.168000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff847cefb0 a2=0 a3=7fff847cef9c items=0 ppid=2291 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.168000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 03:21:22.175000 audit[2418]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.175000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe8bf93460 a2=0 a3=7ffe8bf9344c items=0 ppid=2291 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.175000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 03:21:22.181000 audit[2421]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.181000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc6453f60 a2=0 a3=7ffdc6453f4c items=0 ppid=2291 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.181000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 03:21:22.183000 audit[2422]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.183000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeef08bf70 a2=0 a3=7ffeef08bf5c items=0 ppid=2291 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 03:21:22.187000 audit[2424]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 03:21:22.187000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd2f653a50 a2=0 a3=7ffd2f653a3c items=0 ppid=2291 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 03:21:22.225000 audit[2430]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:22.225000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd7f66db00 a2=0 a3=7ffd7f66daec items=0 ppid=2291 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:22.237000 audit[2430]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:22.237000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd7f66db00 a2=0 a3=7ffd7f66daec items=0 ppid=2291 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:22.240000 audit[2435]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.240000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffde67fcd10 a2=0 a3=7ffde67fccfc items=0 ppid=2291 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.240000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 03:21:22.244000 audit[2437]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.244000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffed8629bb0 a2=0 a3=7ffed8629b9c items=0 ppid=2291 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.244000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Aug 13 03:21:22.252000 audit[2440]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.252000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc2d059e00 a2=0 a3=7ffc2d059dec items=0 ppid=2291 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.252000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Aug 13 03:21:22.254000 audit[2441]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.254000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf96e3790 a2=0 a3=7ffdf96e377c items=0 ppid=2291 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.254000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 03:21:22.258000 audit[2443]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.258000 audit[2443]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc57c1750 a2=0 a3=7fffc57c173c items=0 ppid=2291 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.258000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 03:21:22.260000 audit[2444]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.260000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3110bc90 a2=0 a3=7ffe3110bc7c items=0 ppid=2291 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.260000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 03:21:22.263000 audit[2446]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.263000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd3b9cfb40 a2=0 a3=7ffd3b9cfb2c items=0 ppid=2291 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Aug 13 03:21:22.269000 audit[2449]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.269000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffef7d5350 a2=0 a3=7fffef7d533c items=0 ppid=2291 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 03:21:22.271000 audit[2450]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.271000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8cfcb6c0 a2=0 a3=7ffd8cfcb6ac items=0 ppid=2291 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.271000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 03:21:22.277000 audit[2452]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.277000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff300fbb80 a2=0 a3=7fff300fbb6c items=0 ppid=2291 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 03:21:22.281000 audit[2453]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.281000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdcb54e2e0 a2=0 a3=7ffdcb54e2cc items=0 ppid=2291 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.281000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 03:21:22.286000 audit[2455]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.286000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc0665650 a2=0 a3=7ffdc066563c items=0 ppid=2291 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 03:21:22.292000 audit[2458]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.292000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2882b7a0 a2=0 a3=7fff2882b78c items=0 ppid=2291 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.292000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 03:21:22.298000 audit[2461]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.298000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd2de7ae0 a2=0 a3=7fffd2de7acc 
items=0 ppid=2291 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.298000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Aug 13 03:21:22.300000 audit[2462]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.300000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffef0477c10 a2=0 a3=7ffef0477bfc items=0 ppid=2291 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.300000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 03:21:22.304000 audit[2464]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.304000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff12316b40 a2=0 a3=7fff12316b2c items=0 ppid=2291 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.304000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 03:21:22.309000 audit[2467]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.309000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd897a9eb0 a2=0 a3=7ffd897a9e9c items=0 ppid=2291 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.309000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 03:21:22.311000 audit[2468]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.311000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd62ffbd0 a2=0 a3=7ffcd62ffbbc items=0 ppid=2291 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.311000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 03:21:22.314000 audit[2470]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2470 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.314000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff5da2a3f0 a2=0 a3=7fff5da2a3dc items=0 ppid=2291 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 03:21:22.316000 audit[2471]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.316000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc27e75550 a2=0 a3=7ffc27e7553c items=0 ppid=2291 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.316000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 03:21:22.320000 audit[2473]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.320000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffea9e84b90 a2=0 a3=7ffea9e84b7c items=0 ppid=2291 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 03:21:22.326000 audit[2476]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 03:21:22.326000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcc2dc2110 a2=0 a3=7ffcc2dc20fc items=0 ppid=2291 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.326000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 03:21:22.331000 audit[2478]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 03:21:22.331000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffe2324bf0 a2=0 a3=7fffe2324bdc items=0 ppid=2291 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.331000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:22.332000 audit[2478]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain 
pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 03:21:22.332000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffe2324bf0 a2=0 a3=7fffe2324bdc items=0 ppid=2291 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:22.332000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:23.448376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482563550.mount: Deactivated successfully. Aug 13 03:21:24.887371 env[1300]: time="2025-08-13T03:21:24.887273733Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:24.889901 env[1300]: time="2025-08-13T03:21:24.889781602Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:24.892182 env[1300]: time="2025-08-13T03:21:24.892121675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:24.894215 env[1300]: time="2025-08-13T03:21:24.894165610Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:24.895451 env[1300]: time="2025-08-13T03:21:24.895412405Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 03:21:24.904265 env[1300]: time="2025-08-13T03:21:24.904209863Z" level=info msg="CreateContainer within sandbox \"5890ca8c4a31a4c5904c8746d70017f619455364943acede356bd22909f3f36b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 03:21:24.920163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount973537249.mount: Deactivated successfully. 
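
The two PullImage lines above bracket the quay.io/tigera/operator:v1.38.3 pull. A minimal sketch of recovering the pull duration from those containerd timestamps (truncated here from nanoseconds to the microsecond precision strptime accepts):

    from datetime import datetime

    FMT = "%Y-%m-%dT%H:%M:%S.%fZ"
    # timestamps copied from the log above, nanoseconds truncated to microseconds
    started  = datetime.strptime("2025-08-13T03:21:21.592895Z", FMT)  # PullImage "quay.io/tigera/operator:v1.38.3"
    finished = datetime.strptime("2025-08-13T03:21:24.895412Z", FMT)  # PullImage ... returns image reference
    print(f"pull took {(finished - started).total_seconds():.2f} s")  # pull took 3.30 s

The roughly 3.3 s pull matches the gap the kubelet reports shortly afterwards between firstStartedPulling (03:21:21.59) and lastFinishedPulling (03:21:24.90) for the tigera-operator pod.
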
Aug 13 03:21:24.928071 env[1300]: time="2025-08-13T03:21:24.927977455Z" level=info msg="CreateContainer within sandbox \"5890ca8c4a31a4c5904c8746d70017f619455364943acede356bd22909f3f36b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f4f2111fc0c58b0bc7956eaf23acc5099a6c40bde60b8e68588bdf27f6e86306\"" Aug 13 03:21:24.931295 env[1300]: time="2025-08-13T03:21:24.931237851Z" level=info msg="StartContainer for \"f4f2111fc0c58b0bc7956eaf23acc5099a6c40bde60b8e68588bdf27f6e86306\"" Aug 13 03:21:25.029296 env[1300]: time="2025-08-13T03:21:25.028547739Z" level=info msg="StartContainer for \"f4f2111fc0c58b0bc7956eaf23acc5099a6c40bde60b8e68588bdf27f6e86306\" returns successfully" Aug 13 03:21:25.829552 kubelet[2186]: I0813 03:21:25.829463 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jb52z" podStartSLOduration=5.829421992 podStartE2EDuration="5.829421992s" podCreationTimestamp="2025-08-13 03:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 03:21:21.874164479 +0000 UTC m=+6.447604337" watchObservedRunningTime="2025-08-13 03:21:25.829421992 +0000 UTC m=+10.402861828" Aug 13 03:21:25.889835 kubelet[2186]: I0813 03:21:25.887625 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-sb99t" podStartSLOduration=1.580953047 podStartE2EDuration="4.887596391s" podCreationTimestamp="2025-08-13 03:21:21 +0000 UTC" firstStartedPulling="2025-08-13 03:21:21.59088667 +0000 UTC m=+6.164326498" lastFinishedPulling="2025-08-13 03:21:24.897530013 +0000 UTC m=+9.470969842" observedRunningTime="2025-08-13 03:21:25.887077276 +0000 UTC m=+10.460517128" watchObservedRunningTime="2025-08-13 03:21:25.887596391 +0000 UTC m=+10.461036226" Aug 13 03:21:25.915947 systemd[1]: run-containerd-runc-k8s.io-f4f2111fc0c58b0bc7956eaf23acc5099a6c40bde60b8e68588bdf27f6e86306-runc.UGezmW.mount: Deactivated successfully. Aug 13 03:21:32.772299 sudo[1525]: pam_unix(sudo:session): session closed for user root Aug 13 03:21:32.773000 audit[1525]: USER_END pid=1525 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 03:21:32.778965 kernel: kauditd_printk_skb: 143 callbacks suppressed Aug 13 03:21:32.779109 kernel: audit: type=1106 audit(1755055292.773:294): pid=1525 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 03:21:32.777000 audit[1525]: CRED_DISP pid=1525 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 03:21:32.826373 kernel: audit: type=1104 audit(1755055292.777:295): pid=1525 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Aug 13 03:21:32.933891 sshd[1521]: pam_unix(sshd:session): session closed for user core Aug 13 03:21:32.947000 audit[1521]: USER_END pid=1521 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:21:32.956110 systemd[1]: sshd@8-10.230.26.254:22-139.178.89.65:35710.service: Deactivated successfully. Aug 13 03:21:32.956700 kernel: audit: type=1106 audit(1755055292.947:296): pid=1521 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:21:32.958153 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 03:21:32.958198 systemd-logind[1285]: Session 9 logged out. Waiting for processes to exit. Aug 13 03:21:32.947000 audit[1521]: CRED_DISP pid=1521 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:21:32.962293 systemd-logind[1285]: Removed session 9. Aug 13 03:21:32.970366 kernel: audit: type=1104 audit(1755055292.947:297): pid=1521 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:21:32.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.26.254:22-139.178.89.65:35710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:21:32.983381 kernel: audit: type=1131 audit(1755055292.956:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.26.254:22-139.178.89.65:35710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:21:33.283000 audit[2565]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:33.291350 kernel: audit: type=1325 audit(1755055293.283:299): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:33.283000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcc8662390 a2=0 a3=7ffcc866237c items=0 ppid=2291 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:33.300374 kernel: audit: type=1300 audit(1755055293.283:299): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcc8662390 a2=0 a3=7ffcc866237c items=0 ppid=2291 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:33.283000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:33.307357 kernel: audit: type=1327 audit(1755055293.283:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:33.303000 audit[2565]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:33.319362 kernel: audit: type=1325 audit(1755055293.303:300): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:33.303000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc8662390 a2=0 a3=0 items=0 ppid=2291 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:33.328347 kernel: audit: type=1300 audit(1755055293.303:300): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc8662390 a2=0 a3=0 items=0 ppid=2291 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:33.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:33.506000 audit[2567]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:33.506000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffddf6fd60 a2=0 a3=7fffddf6fd4c items=0 ppid=2291 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:33.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:33.513000 audit[2567]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2567 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:33.513000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffddf6fd60 a2=0 a3=0 items=0 ppid=2291 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:33.513000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:37.794000 audit[2569]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:37.801508 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 03:21:37.801684 kernel: audit: type=1325 audit(1755055297.794:303): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:37.794000 audit[2569]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffc5184b90 a2=0 a3=7fffc5184b7c items=0 ppid=2291 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:37.827360 kernel: audit: type=1300 audit(1755055297.794:303): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffc5184b90 a2=0 a3=7fffc5184b7c items=0 ppid=2291 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:37.794000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:37.844349 kernel: audit: type=1327 audit(1755055297.794:303): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:37.814000 audit[2569]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:37.854352 kernel: audit: type=1325 audit(1755055297.814:304): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:37.814000 audit[2569]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffc5184b90 a2=0 a3=0 items=0 ppid=2291 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:37.873350 kernel: audit: type=1300 audit(1755055297.814:304): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffc5184b90 a2=0 a3=0 items=0 ppid=2291 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:37.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:37.879439 kernel: audit: type=1327 audit(1755055297.814:304): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 
13 03:21:37.857000 audit[2571]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:37.857000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffec7b7db10 a2=0 a3=7ffec7b7dafc items=0 ppid=2291 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:37.896103 kernel: audit: type=1325 audit(1755055297.857:305): table=filter:95 family=2 entries=18 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:37.896257 kernel: audit: type=1300 audit(1755055297.857:305): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffec7b7db10 a2=0 a3=7ffec7b7dafc items=0 ppid=2291 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:37.857000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:37.906353 kernel: audit: type=1327 audit(1755055297.857:305): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:37.878000 audit[2571]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:37.914345 kernel: audit: type=1325 audit(1755055297.878:306): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:37.878000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec7b7db10 a2=0 a3=0 items=0 ppid=2291 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:37.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:38.155446 kubelet[2186]: I0813 03:21:38.155270 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67eaaa1c-e4fd-4368-b69a-f8c3f790d484-tigera-ca-bundle\") pod \"calico-typha-7567fd6b8b-qcjtx\" (UID: \"67eaaa1c-e4fd-4368-b69a-f8c3f790d484\") " pod="calico-system/calico-typha-7567fd6b8b-qcjtx" Aug 13 03:21:38.157917 kubelet[2186]: I0813 03:21:38.157886 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/67eaaa1c-e4fd-4368-b69a-f8c3f790d484-typha-certs\") pod \"calico-typha-7567fd6b8b-qcjtx\" (UID: \"67eaaa1c-e4fd-4368-b69a-f8c3f790d484\") " pod="calico-system/calico-typha-7567fd6b8b-qcjtx" Aug 13 03:21:38.158595 kubelet[2186]: I0813 03:21:38.158565 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qktn9\" (UniqueName: \"kubernetes.io/projected/67eaaa1c-e4fd-4368-b69a-f8c3f790d484-kube-api-access-qktn9\") pod \"calico-typha-7567fd6b8b-qcjtx\" (UID: \"67eaaa1c-e4fd-4368-b69a-f8c3f790d484\") " 
pod="calico-system/calico-typha-7567fd6b8b-qcjtx" Aug 13 03:21:38.434153 env[1300]: time="2025-08-13T03:21:38.433916388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7567fd6b8b-qcjtx,Uid:67eaaa1c-e4fd-4368-b69a-f8c3f790d484,Namespace:calico-system,Attempt:0,}" Aug 13 03:21:38.460619 kubelet[2186]: I0813 03:21:38.460543 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0b198d1-a49b-4a7d-a50b-8de788c60344-xtables-lock\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.460944 kubelet[2186]: I0813 03:21:38.460912 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a0b198d1-a49b-4a7d-a50b-8de788c60344-flexvol-driver-host\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.461289 kubelet[2186]: I0813 03:21:38.461259 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0b198d1-a49b-4a7d-a50b-8de788c60344-tigera-ca-bundle\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.461494 kubelet[2186]: I0813 03:21:38.461467 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a0b198d1-a49b-4a7d-a50b-8de788c60344-cni-log-dir\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.461635 kubelet[2186]: I0813 03:21:38.461609 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8zw\" (UniqueName: \"kubernetes.io/projected/a0b198d1-a49b-4a7d-a50b-8de788c60344-kube-api-access-rt8zw\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.461774 kubelet[2186]: I0813 03:21:38.461749 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a0b198d1-a49b-4a7d-a50b-8de788c60344-cni-net-dir\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.461945 kubelet[2186]: I0813 03:21:38.461918 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a0b198d1-a49b-4a7d-a50b-8de788c60344-var-lib-calico\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.462147 kubelet[2186]: I0813 03:21:38.462090 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a0b198d1-a49b-4a7d-a50b-8de788c60344-node-certs\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.462281 kubelet[2186]: I0813 03:21:38.462254 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a0b198d1-a49b-4a7d-a50b-8de788c60344-cni-bin-dir\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.462488 kubelet[2186]: I0813 03:21:38.462452 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0b198d1-a49b-4a7d-a50b-8de788c60344-lib-modules\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.462642 kubelet[2186]: I0813 03:21:38.462617 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a0b198d1-a49b-4a7d-a50b-8de788c60344-var-run-calico\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.462828 kubelet[2186]: I0813 03:21:38.462780 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a0b198d1-a49b-4a7d-a50b-8de788c60344-policysync\") pod \"calico-node-4rpwj\" (UID: \"a0b198d1-a49b-4a7d-a50b-8de788c60344\") " pod="calico-system/calico-node-4rpwj" Aug 13 03:21:38.516855 env[1300]: time="2025-08-13T03:21:38.516706789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:21:38.516855 env[1300]: time="2025-08-13T03:21:38.516797855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:21:38.517300 env[1300]: time="2025-08-13T03:21:38.517221208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:21:38.517907 env[1300]: time="2025-08-13T03:21:38.517807288Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44e00a57d58659298c6808c5f9874dfc4e3256499a71a3c43525b32ddadff137 pid=2581 runtime=io.containerd.runc.v2 Aug 13 03:21:38.573871 kubelet[2186]: E0813 03:21:38.573798 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.574200 kubelet[2186]: W0813 03:21:38.574168 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.574409 kubelet[2186]: E0813 03:21:38.574382 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.574763 kubelet[2186]: E0813 03:21:38.574740 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.574891 kubelet[2186]: W0813 03:21:38.574865 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.575052 kubelet[2186]: E0813 03:21:38.575027 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.575456 kubelet[2186]: E0813 03:21:38.575423 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.575601 kubelet[2186]: W0813 03:21:38.575575 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.575748 kubelet[2186]: E0813 03:21:38.575720 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.584539 kubelet[2186]: E0813 03:21:38.584516 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.584675 kubelet[2186]: W0813 03:21:38.584649 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.584805 kubelet[2186]: E0813 03:21:38.584782 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.585232 kubelet[2186]: E0813 03:21:38.585209 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.589411 kubelet[2186]: W0813 03:21:38.589383 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.589558 kubelet[2186]: E0813 03:21:38.589533 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.589939 kubelet[2186]: E0813 03:21:38.589917 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.590082 kubelet[2186]: W0813 03:21:38.590055 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.590233 kubelet[2186]: E0813 03:21:38.590208 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.590653 kubelet[2186]: E0813 03:21:38.590632 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.590778 kubelet[2186]: W0813 03:21:38.590754 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.590943 kubelet[2186]: E0813 03:21:38.590920 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.595404 kubelet[2186]: E0813 03:21:38.595359 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.596392 kubelet[2186]: W0813 03:21:38.596365 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.596552 kubelet[2186]: E0813 03:21:38.596522 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.597325 kubelet[2186]: E0813 03:21:38.597302 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.598688 kubelet[2186]: W0813 03:21:38.598640 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.598873 kubelet[2186]: E0813 03:21:38.598850 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.602488 kubelet[2186]: E0813 03:21:38.602462 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.612016 kubelet[2186]: W0813 03:21:38.602679 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.612190 kubelet[2186]: E0813 03:21:38.612162 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.633602 kubelet[2186]: E0813 03:21:38.633565 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.634611 kubelet[2186]: W0813 03:21:38.634570 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.634801 kubelet[2186]: E0813 03:21:38.634761 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.659397 kubelet[2186]: E0813 03:21:38.659352 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.659668 kubelet[2186]: W0813 03:21:38.659635 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.659860 kubelet[2186]: E0813 03:21:38.659820 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.665391 kubelet[2186]: E0813 03:21:38.665316 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.665631 kubelet[2186]: W0813 03:21:38.665600 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.666665 kubelet[2186]: E0813 03:21:38.666638 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.673793 kubelet[2186]: E0813 03:21:38.673770 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.673938 kubelet[2186]: W0813 03:21:38.673913 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.674080 kubelet[2186]: E0813 03:21:38.674056 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.674270 kubelet[2186]: E0813 03:21:38.674216 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqpff" podUID="a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00" Aug 13 03:21:38.750261 kubelet[2186]: E0813 03:21:38.750218 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.750617 kubelet[2186]: W0813 03:21:38.750586 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.750766 kubelet[2186]: E0813 03:21:38.750738 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
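The repeated driver-call.go / plugins.go triples above and below are the kubelet probing its FlexVolume plugin directory: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and expects a JSON status object on stdout. The binary does not exist yet, so the exec fails, stdout stays empty, and unmarshalling the empty output yields "unexpected end of JSON input"; the csi-node-driver pod is likewise held back with "cni plugin not initialized" until networking is up. On a typical Calico install the nodeagent~uds driver is copied onto the host by calico-node (note the flexvol-driver-host volume above), so these messages usually stop once that pod is running. A minimal sketch of the probe, assuming the conventional FlexVolume init reply shape; this is an illustration, not the kubelet's actual code path:

    import json
    import subprocess

    # Path the kubelet probes, taken from the driver-call.go records above.
    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    # Mimic the probe: exec the driver with "init" and parse stdout as JSON.
    # With the binary missing, stdout stays empty and parsing it fails, which
    # is the Go-side "unexpected end of JSON input" seen in the records above.
    try:
        out = subprocess.run([DRIVER, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""

    try:
        reply = json.loads(out)          # a healthy driver is expected to print
    except json.JSONDecodeError:         # something like {"status": "Success",
        reply = None                     #  "capabilities": {"attach": false}}

    print("driver init reply:", reply)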
Error: unexpected end of JSON input" Aug 13 03:21:38.751265 kubelet[2186]: E0813 03:21:38.751243 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.751423 kubelet[2186]: W0813 03:21:38.751396 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.751568 kubelet[2186]: E0813 03:21:38.751542 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.752458 kubelet[2186]: E0813 03:21:38.752434 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.752600 kubelet[2186]: W0813 03:21:38.752573 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.752767 kubelet[2186]: E0813 03:21:38.752743 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.753468 kubelet[2186]: E0813 03:21:38.753443 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.753633 kubelet[2186]: W0813 03:21:38.753595 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.753773 kubelet[2186]: E0813 03:21:38.753750 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.754443 kubelet[2186]: E0813 03:21:38.754419 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.754581 kubelet[2186]: W0813 03:21:38.754555 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.754712 kubelet[2186]: E0813 03:21:38.754688 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.756436 kubelet[2186]: E0813 03:21:38.756412 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.756589 kubelet[2186]: W0813 03:21:38.756554 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.756741 kubelet[2186]: E0813 03:21:38.756715 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.758778 kubelet[2186]: E0813 03:21:38.758503 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.758778 kubelet[2186]: W0813 03:21:38.758550 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.758778 kubelet[2186]: E0813 03:21:38.758588 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.759026 kubelet[2186]: E0813 03:21:38.758974 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.759026 kubelet[2186]: W0813 03:21:38.758991 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.759026 kubelet[2186]: E0813 03:21:38.759008 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.759376 kubelet[2186]: E0813 03:21:38.759345 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.759376 kubelet[2186]: W0813 03:21:38.759370 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.759519 kubelet[2186]: E0813 03:21:38.759388 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.760077 kubelet[2186]: E0813 03:21:38.759618 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.760077 kubelet[2186]: W0813 03:21:38.759641 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.760077 kubelet[2186]: E0813 03:21:38.759659 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.760592 kubelet[2186]: E0813 03:21:38.760557 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.760592 kubelet[2186]: W0813 03:21:38.760581 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.760762 kubelet[2186]: E0813 03:21:38.760600 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.760935 kubelet[2186]: E0813 03:21:38.760887 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.760935 kubelet[2186]: W0813 03:21:38.760909 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.761138 kubelet[2186]: E0813 03:21:38.760948 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.761273 kubelet[2186]: E0813 03:21:38.761247 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.761273 kubelet[2186]: W0813 03:21:38.761269 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.761454 kubelet[2186]: E0813 03:21:38.761286 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.761550 kubelet[2186]: E0813 03:21:38.761524 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.761550 kubelet[2186]: W0813 03:21:38.761547 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.761677 kubelet[2186]: E0813 03:21:38.761564 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.761869 kubelet[2186]: E0813 03:21:38.761827 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.761869 kubelet[2186]: W0813 03:21:38.761857 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.761999 kubelet[2186]: E0813 03:21:38.761872 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.762145 kubelet[2186]: E0813 03:21:38.762109 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.762145 kubelet[2186]: W0813 03:21:38.762141 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.762290 kubelet[2186]: E0813 03:21:38.762158 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.762472 kubelet[2186]: E0813 03:21:38.762448 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.762472 kubelet[2186]: W0813 03:21:38.762469 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.762623 kubelet[2186]: E0813 03:21:38.762487 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.762759 kubelet[2186]: E0813 03:21:38.762733 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.762759 kubelet[2186]: W0813 03:21:38.762756 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.762954 kubelet[2186]: E0813 03:21:38.762782 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.763132 kubelet[2186]: E0813 03:21:38.763085 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.763132 kubelet[2186]: W0813 03:21:38.763117 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.763273 kubelet[2186]: E0813 03:21:38.763141 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.763501 kubelet[2186]: E0813 03:21:38.763475 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.763501 kubelet[2186]: W0813 03:21:38.763498 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.763649 kubelet[2186]: E0813 03:21:38.763515 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.779891 kubelet[2186]: E0813 03:21:38.778478 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.779891 kubelet[2186]: W0813 03:21:38.778521 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.779891 kubelet[2186]: E0813 03:21:38.778545 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.779891 kubelet[2186]: I0813 03:21:38.778581 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8mn6\" (UniqueName: \"kubernetes.io/projected/a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00-kube-api-access-q8mn6\") pod \"csi-node-driver-rqpff\" (UID: \"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00\") " pod="calico-system/csi-node-driver-rqpff" Aug 13 03:21:38.779891 kubelet[2186]: E0813 03:21:38.778840 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.779891 kubelet[2186]: W0813 03:21:38.778857 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.779891 kubelet[2186]: E0813 03:21:38.778874 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.779891 kubelet[2186]: I0813 03:21:38.778899 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00-socket-dir\") pod \"csi-node-driver-rqpff\" (UID: \"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00\") " pod="calico-system/csi-node-driver-rqpff" Aug 13 03:21:38.779891 kubelet[2186]: E0813 03:21:38.779243 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.781542 kubelet[2186]: W0813 03:21:38.779260 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.781542 kubelet[2186]: E0813 03:21:38.779277 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.781542 kubelet[2186]: I0813 03:21:38.779304 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00-varrun\") pod \"csi-node-driver-rqpff\" (UID: \"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00\") " pod="calico-system/csi-node-driver-rqpff" Aug 13 03:21:38.781542 kubelet[2186]: E0813 03:21:38.779633 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.781542 kubelet[2186]: W0813 03:21:38.779650 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.781542 kubelet[2186]: E0813 03:21:38.779667 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.781542 kubelet[2186]: I0813 03:21:38.779695 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00-kubelet-dir\") pod \"csi-node-driver-rqpff\" (UID: \"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00\") " pod="calico-system/csi-node-driver-rqpff" Aug 13 03:21:38.781542 kubelet[2186]: E0813 03:21:38.779928 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.782109 kubelet[2186]: W0813 03:21:38.779945 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.782109 kubelet[2186]: E0813 03:21:38.779962 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.782109 kubelet[2186]: I0813 03:21:38.779986 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00-registration-dir\") pod \"csi-node-driver-rqpff\" (UID: \"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00\") " pod="calico-system/csi-node-driver-rqpff" Aug 13 03:21:38.782109 kubelet[2186]: E0813 03:21:38.780444 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.782109 kubelet[2186]: W0813 03:21:38.780462 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.782109 kubelet[2186]: E0813 03:21:38.780482 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.782109 kubelet[2186]: E0813 03:21:38.780706 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.782109 kubelet[2186]: W0813 03:21:38.781186 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.782109 kubelet[2186]: E0813 03:21:38.781214 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.782792 kubelet[2186]: E0813 03:21:38.781491 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.782792 kubelet[2186]: W0813 03:21:38.781507 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.782792 kubelet[2186]: E0813 03:21:38.781523 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.782792 kubelet[2186]: E0813 03:21:38.781861 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.782792 kubelet[2186]: W0813 03:21:38.781876 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.782792 kubelet[2186]: E0813 03:21:38.781893 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.782792 kubelet[2186]: E0813 03:21:38.782138 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.782792 kubelet[2186]: W0813 03:21:38.782155 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.782792 kubelet[2186]: E0813 03:21:38.782170 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.782792 kubelet[2186]: E0813 03:21:38.782421 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.783436 kubelet[2186]: W0813 03:21:38.782449 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.783436 kubelet[2186]: E0813 03:21:38.782464 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.783436 kubelet[2186]: E0813 03:21:38.782697 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.783436 kubelet[2186]: W0813 03:21:38.782711 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.783436 kubelet[2186]: E0813 03:21:38.782780 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.783436 kubelet[2186]: E0813 03:21:38.783019 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.783436 kubelet[2186]: W0813 03:21:38.783034 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.783436 kubelet[2186]: E0813 03:21:38.783049 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.783436 kubelet[2186]: E0813 03:21:38.783308 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.783436 kubelet[2186]: W0813 03:21:38.783343 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.784168 kubelet[2186]: E0813 03:21:38.783377 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.784168 kubelet[2186]: E0813 03:21:38.783583 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.784168 kubelet[2186]: W0813 03:21:38.783597 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.784168 kubelet[2186]: E0813 03:21:38.783621 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.790494 env[1300]: time="2025-08-13T03:21:38.790440020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4rpwj,Uid:a0b198d1-a49b-4a7d-a50b-8de788c60344,Namespace:calico-system,Attempt:0,}" Aug 13 03:21:38.841541 env[1300]: time="2025-08-13T03:21:38.841450538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7567fd6b8b-qcjtx,Uid:67eaaa1c-e4fd-4368-b69a-f8c3f790d484,Namespace:calico-system,Attempt:0,} returns sandbox id \"44e00a57d58659298c6808c5f9874dfc4e3256499a71a3c43525b32ddadff137\"" Aug 13 03:21:38.845684 env[1300]: time="2025-08-13T03:21:38.845621352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 03:21:38.858860 env[1300]: time="2025-08-13T03:21:38.858695351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:21:38.860049 env[1300]: time="2025-08-13T03:21:38.859923505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:21:38.860162 env[1300]: time="2025-08-13T03:21:38.860058841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:21:38.861811 env[1300]: time="2025-08-13T03:21:38.861694886Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bf4098207560036ef29f078442ee0408cfe8053d21a59a9c23f213081d88fe7 pid=2683 runtime=io.containerd.runc.v2 Aug 13 03:21:38.883187 kubelet[2186]: E0813 03:21:38.881140 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.883187 kubelet[2186]: W0813 03:21:38.881181 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.883187 kubelet[2186]: E0813 03:21:38.881216 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.883187 kubelet[2186]: E0813 03:21:38.881673 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.883187 kubelet[2186]: W0813 03:21:38.881695 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.883187 kubelet[2186]: E0813 03:21:38.881719 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.883187 kubelet[2186]: E0813 03:21:38.882082 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.883187 kubelet[2186]: W0813 03:21:38.882134 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.883187 kubelet[2186]: E0813 03:21:38.882158 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.883187 kubelet[2186]: E0813 03:21:38.882495 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.883838 kubelet[2186]: W0813 03:21:38.882522 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.883838 kubelet[2186]: E0813 03:21:38.882552 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
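The "starting signal loop" records show containerd launching a runc v2 shim for each new sandbox, with task state kept under /run/containerd/io.containerd.runtime.v2.task/k8s.io/<id>: the typha sandbox id (44e00a57...) appears both in its shim record and in the RunPodSandbox return above, and the shim just started here (9bf40982...) presumably belongs to the calico-node sandbox requested above. On a live node the active task directories can be listed straight from that path; a minimal sketch, assuming the default k8s.io containerd namespace and sufficient privileges:

    from pathlib import Path

    # Task state directories created by the runc v2 shim; the directory name is
    # the sandbox/container id seen in the RunPodSandbox and "starting signal
    # loop" records above.
    TASK_ROOT = Path("/run/containerd/io.containerd.runtime.v2.task/k8s.io")

    if TASK_ROOT.is_dir():
        for task in sorted(TASK_ROOT.iterdir()):
            print(task.name)
    else:
        print(f"{TASK_ROOT} not present (different namespace or runtime?)")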
Error: unexpected end of JSON input" Aug 13 03:21:38.883838 kubelet[2186]: E0813 03:21:38.882911 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.883838 kubelet[2186]: W0813 03:21:38.882926 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.883838 kubelet[2186]: E0813 03:21:38.883050 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.883838 kubelet[2186]: E0813 03:21:38.883316 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.883838 kubelet[2186]: W0813 03:21:38.883348 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.883838 kubelet[2186]: E0813 03:21:38.883509 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.883838 kubelet[2186]: E0813 03:21:38.883727 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.883838 kubelet[2186]: W0813 03:21:38.883790 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.884558 kubelet[2186]: E0813 03:21:38.883923 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.884558 kubelet[2186]: E0813 03:21:38.884149 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.884558 kubelet[2186]: W0813 03:21:38.884165 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.884558 kubelet[2186]: E0813 03:21:38.884337 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.884558 kubelet[2186]: E0813 03:21:38.884548 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.884891 kubelet[2186]: W0813 03:21:38.884581 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.884891 kubelet[2186]: E0813 03:21:38.884764 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.885039 kubelet[2186]: E0813 03:21:38.884983 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.885039 kubelet[2186]: W0813 03:21:38.885015 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.885184 kubelet[2186]: E0813 03:21:38.885159 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.885459 kubelet[2186]: E0813 03:21:38.885402 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.885459 kubelet[2186]: W0813 03:21:38.885444 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.885587 kubelet[2186]: E0813 03:21:38.885563 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.885816 kubelet[2186]: E0813 03:21:38.885765 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.885816 kubelet[2186]: W0813 03:21:38.885809 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.887399 kubelet[2186]: E0813 03:21:38.886558 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.887399 kubelet[2186]: E0813 03:21:38.886846 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.887399 kubelet[2186]: W0813 03:21:38.886865 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.887399 kubelet[2186]: E0813 03:21:38.886997 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.887399 kubelet[2186]: E0813 03:21:38.887201 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.887399 kubelet[2186]: W0813 03:21:38.887227 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.887399 kubelet[2186]: E0813 03:21:38.887356 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.887902 kubelet[2186]: E0813 03:21:38.887558 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.887902 kubelet[2186]: W0813 03:21:38.887585 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.887902 kubelet[2186]: E0813 03:21:38.887769 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.889431 kubelet[2186]: E0813 03:21:38.888070 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.889431 kubelet[2186]: W0813 03:21:38.888110 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.889431 kubelet[2186]: E0813 03:21:38.888971 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.889431 kubelet[2186]: E0813 03:21:38.889251 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.889431 kubelet[2186]: W0813 03:21:38.889267 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.890410 kubelet[2186]: E0813 03:21:38.889610 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.890410 kubelet[2186]: E0813 03:21:38.889819 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.890410 kubelet[2186]: W0813 03:21:38.889835 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.892566 kubelet[2186]: E0813 03:21:38.890904 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.892566 kubelet[2186]: E0813 03:21:38.891150 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.892566 kubelet[2186]: W0813 03:21:38.891165 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.892566 kubelet[2186]: E0813 03:21:38.891342 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.892566 kubelet[2186]: E0813 03:21:38.891564 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.892566 kubelet[2186]: W0813 03:21:38.891579 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.892566 kubelet[2186]: E0813 03:21:38.891753 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.892566 kubelet[2186]: E0813 03:21:38.891927 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.892566 kubelet[2186]: W0813 03:21:38.891942 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.892566 kubelet[2186]: E0813 03:21:38.892082 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.894748 kubelet[2186]: E0813 03:21:38.892350 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.894748 kubelet[2186]: W0813 03:21:38.892367 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.894748 kubelet[2186]: E0813 03:21:38.892497 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.894748 kubelet[2186]: E0813 03:21:38.892748 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.894748 kubelet[2186]: W0813 03:21:38.892764 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.894748 kubelet[2186]: E0813 03:21:38.892910 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.908421 kubelet[2186]: E0813 03:21:38.900467 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.908421 kubelet[2186]: W0813 03:21:38.900498 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.908421 kubelet[2186]: E0813 03:21:38.900964 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.908421 kubelet[2186]: E0813 03:21:38.901243 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.908421 kubelet[2186]: W0813 03:21:38.901260 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.908421 kubelet[2186]: E0813 03:21:38.901285 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:38.915000 audit[2726]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2726 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:38.915000 audit[2726]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffd599b1b0 a2=0 a3=7fffd599b19c items=0 ppid=2291 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:38.915000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:38.925630 kubelet[2186]: E0813 03:21:38.923900 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:38.925630 kubelet[2186]: W0813 03:21:38.923928 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:38.925630 kubelet[2186]: E0813 03:21:38.923980 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:38.926000 audit[2726]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2726 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:38.926000 audit[2726]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd599b1b0 a2=0 a3=0 items=0 ppid=2291 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:38.926000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:39.010149 env[1300]: time="2025-08-13T03:21:39.009969005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4rpwj,Uid:a0b198d1-a49b-4a7d-a50b-8de788c60344,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bf4098207560036ef29f078442ee0408cfe8053d21a59a9c23f213081d88fe7\"" Aug 13 03:21:39.807711 kubelet[2186]: E0813 03:21:39.807655 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqpff" podUID="a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00" Aug 13 03:21:40.641576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293120669.mount: Deactivated successfully. Aug 13 03:21:41.809082 kubelet[2186]: E0813 03:21:41.808652 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqpff" podUID="a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00" Aug 13 03:21:43.060041 env[1300]: time="2025-08-13T03:21:43.059896563Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:43.063082 env[1300]: time="2025-08-13T03:21:43.063026136Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:43.066012 env[1300]: time="2025-08-13T03:21:43.065935374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:43.068673 env[1300]: time="2025-08-13T03:21:43.068622016Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:43.069660 env[1300]: time="2025-08-13T03:21:43.069610071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 03:21:43.074725 env[1300]: time="2025-08-13T03:21:43.073674637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 03:21:43.103852 env[1300]: time="2025-08-13T03:21:43.103791801Z" level=info msg="CreateContainer within sandbox 
\"44e00a57d58659298c6808c5f9874dfc4e3256499a71a3c43525b32ddadff137\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 03:21:43.133675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161604484.mount: Deactivated successfully. Aug 13 03:21:43.141076 env[1300]: time="2025-08-13T03:21:43.141011515Z" level=info msg="CreateContainer within sandbox \"44e00a57d58659298c6808c5f9874dfc4e3256499a71a3c43525b32ddadff137\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"78f7a656c16163644767667a7383d8471e20f39a24329979d9a0e77c6f577992\"" Aug 13 03:21:43.144694 env[1300]: time="2025-08-13T03:21:43.143128621Z" level=info msg="StartContainer for \"78f7a656c16163644767667a7383d8471e20f39a24329979d9a0e77c6f577992\"" Aug 13 03:21:43.296934 env[1300]: time="2025-08-13T03:21:43.296863635Z" level=info msg="StartContainer for \"78f7a656c16163644767667a7383d8471e20f39a24329979d9a0e77c6f577992\" returns successfully" Aug 13 03:21:43.810307 kubelet[2186]: E0813 03:21:43.810213 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqpff" podUID="a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00" Aug 13 03:21:43.949816 kubelet[2186]: I0813 03:21:43.949700 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7567fd6b8b-qcjtx" podStartSLOduration=1.7217884589999999 podStartE2EDuration="5.949674043s" podCreationTimestamp="2025-08-13 03:21:38 +0000 UTC" firstStartedPulling="2025-08-13 03:21:38.844036005 +0000 UTC m=+23.417475834" lastFinishedPulling="2025-08-13 03:21:43.071921595 +0000 UTC m=+27.645361418" observedRunningTime="2025-08-13 03:21:43.949384786 +0000 UTC m=+28.522824641" watchObservedRunningTime="2025-08-13 03:21:43.949674043 +0000 UTC m=+28.523113879" Aug 13 03:21:44.018685 kubelet[2186]: E0813 03:21:44.018622 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.019107 kubelet[2186]: W0813 03:21:44.019033 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.019276 kubelet[2186]: E0813 03:21:44.019245 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.019857 kubelet[2186]: E0813 03:21:44.019818 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.020023 kubelet[2186]: W0813 03:21:44.019995 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.020179 kubelet[2186]: E0813 03:21:44.020154 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:44.020629 kubelet[2186]: E0813 03:21:44.020607 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.020787 kubelet[2186]: W0813 03:21:44.020760 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.020948 kubelet[2186]: E0813 03:21:44.020910 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.021428 kubelet[2186]: E0813 03:21:44.021406 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.021566 kubelet[2186]: W0813 03:21:44.021540 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.021738 kubelet[2186]: E0813 03:21:44.021714 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.022263 kubelet[2186]: E0813 03:21:44.022242 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.022440 kubelet[2186]: W0813 03:21:44.022414 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.022601 kubelet[2186]: E0813 03:21:44.022575 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.023075 kubelet[2186]: E0813 03:21:44.023053 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.023199 kubelet[2186]: W0813 03:21:44.023173 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.023374 kubelet[2186]: E0813 03:21:44.023308 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.023913 kubelet[2186]: E0813 03:21:44.023891 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.024062 kubelet[2186]: W0813 03:21:44.024036 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.024179 kubelet[2186]: E0813 03:21:44.024154 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:44.024656 kubelet[2186]: E0813 03:21:44.024635 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.024788 kubelet[2186]: W0813 03:21:44.024761 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.024919 kubelet[2186]: E0813 03:21:44.024894 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.025373 kubelet[2186]: E0813 03:21:44.025352 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.025519 kubelet[2186]: W0813 03:21:44.025493 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.025663 kubelet[2186]: E0813 03:21:44.025628 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.026133 kubelet[2186]: E0813 03:21:44.026110 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.026255 kubelet[2186]: W0813 03:21:44.026229 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.026440 kubelet[2186]: E0813 03:21:44.026415 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.026902 kubelet[2186]: E0813 03:21:44.026881 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.027041 kubelet[2186]: W0813 03:21:44.027014 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.027164 kubelet[2186]: E0813 03:21:44.027139 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.027552 kubelet[2186]: E0813 03:21:44.027531 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.027667 kubelet[2186]: W0813 03:21:44.027642 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.027818 kubelet[2186]: E0813 03:21:44.027792 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:44.028253 kubelet[2186]: E0813 03:21:44.028232 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.028410 kubelet[2186]: W0813 03:21:44.028384 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.028569 kubelet[2186]: E0813 03:21:44.028544 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.029029 kubelet[2186]: E0813 03:21:44.029008 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.029153 kubelet[2186]: W0813 03:21:44.029127 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.029356 kubelet[2186]: E0813 03:21:44.029244 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.034009 kubelet[2186]: E0813 03:21:44.033985 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.034163 kubelet[2186]: W0813 03:21:44.034135 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.034367 kubelet[2186]: E0813 03:21:44.034301 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.120206 kubelet[2186]: E0813 03:21:44.119960 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.120938 kubelet[2186]: W0813 03:21:44.120505 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.122288 kubelet[2186]: E0813 03:21:44.121697 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.122967 kubelet[2186]: E0813 03:21:44.122919 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.123112 kubelet[2186]: W0813 03:21:44.123085 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.123264 kubelet[2186]: E0813 03:21:44.123239 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:44.123686 kubelet[2186]: E0813 03:21:44.123660 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.123686 kubelet[2186]: W0813 03:21:44.123685 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.123848 kubelet[2186]: E0813 03:21:44.123712 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.124078 kubelet[2186]: E0813 03:21:44.124053 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.124203 kubelet[2186]: W0813 03:21:44.124080 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.124203 kubelet[2186]: E0813 03:21:44.124107 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.124382 kubelet[2186]: E0813 03:21:44.124359 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.124461 kubelet[2186]: W0813 03:21:44.124380 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.124461 kubelet[2186]: E0813 03:21:44.124398 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.125015 kubelet[2186]: E0813 03:21:44.124689 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.125015 kubelet[2186]: W0813 03:21:44.124705 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.125015 kubelet[2186]: E0813 03:21:44.124737 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:44.125015 kubelet[2186]: E0813 03:21:44.124954 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.125015 kubelet[2186]: W0813 03:21:44.124971 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.125302 kubelet[2186]: E0813 03:21:44.125200 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.125302 kubelet[2186]: W0813 03:21:44.125214 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.125302 kubelet[2186]: E0813 03:21:44.125230 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.125618 kubelet[2186]: E0813 03:21:44.125594 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.125618 kubelet[2186]: W0813 03:21:44.125615 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.125782 kubelet[2186]: E0813 03:21:44.125634 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.125883 kubelet[2186]: E0813 03:21:44.125860 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.125883 kubelet[2186]: W0813 03:21:44.125882 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.126055 kubelet[2186]: E0813 03:21:44.125899 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.126218 kubelet[2186]: E0813 03:21:44.126183 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.126218 kubelet[2186]: W0813 03:21:44.126205 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.126383 kubelet[2186]: E0813 03:21:44.126223 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.126756 kubelet[2186]: E0813 03:21:44.126728 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:44.127081 kubelet[2186]: E0813 03:21:44.127058 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.127220 kubelet[2186]: W0813 03:21:44.127193 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.127432 kubelet[2186]: E0813 03:21:44.127407 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.127883 kubelet[2186]: E0813 03:21:44.127860 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.128035 kubelet[2186]: W0813 03:21:44.128008 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.128240 kubelet[2186]: E0813 03:21:44.128205 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.128558 kubelet[2186]: E0813 03:21:44.128511 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.128654 kubelet[2186]: W0813 03:21:44.128565 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.128654 kubelet[2186]: E0813 03:21:44.128587 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.128847 kubelet[2186]: E0813 03:21:44.128822 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.128847 kubelet[2186]: W0813 03:21:44.128844 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.129033 kubelet[2186]: E0813 03:21:44.128862 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.129126 kubelet[2186]: E0813 03:21:44.129102 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.129208 kubelet[2186]: W0813 03:21:44.129125 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.129208 kubelet[2186]: E0813 03:21:44.129142 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:44.129459 kubelet[2186]: E0813 03:21:44.129434 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.129459 kubelet[2186]: W0813 03:21:44.129454 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.129617 kubelet[2186]: E0813 03:21:44.129471 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.130039 kubelet[2186]: E0813 03:21:44.130012 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.130039 kubelet[2186]: W0813 03:21:44.130033 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.130039 kubelet[2186]: E0813 03:21:44.130050 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.851191 env[1300]: time="2025-08-13T03:21:44.851124109Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:44.854352 env[1300]: time="2025-08-13T03:21:44.854289805Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:44.856154 env[1300]: time="2025-08-13T03:21:44.856103003Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:44.858618 env[1300]: time="2025-08-13T03:21:44.858574613Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:44.859674 env[1300]: time="2025-08-13T03:21:44.859627398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 03:21:44.864365 env[1300]: time="2025-08-13T03:21:44.863702602Z" level=info msg="CreateContainer within sandbox \"9bf4098207560036ef29f078442ee0408cfe8053d21a59a9c23f213081d88fe7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 03:21:44.881958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2433236272.mount: Deactivated successfully. 
Aug 13 03:21:44.889589 env[1300]: time="2025-08-13T03:21:44.889540006Z" level=info msg="CreateContainer within sandbox \"9bf4098207560036ef29f078442ee0408cfe8053d21a59a9c23f213081d88fe7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c30dddb9b4b73f3de5382c5036cdd8836c09438fed013742e19816c70427760a\"" Aug 13 03:21:44.892137 env[1300]: time="2025-08-13T03:21:44.892046055Z" level=info msg="StartContainer for \"c30dddb9b4b73f3de5382c5036cdd8836c09438fed013742e19816c70427760a\"" Aug 13 03:21:44.943847 kubelet[2186]: I0813 03:21:44.937532 2186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 03:21:44.943847 kubelet[2186]: E0813 03:21:44.939763 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.943847 kubelet[2186]: W0813 03:21:44.939786 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.943847 kubelet[2186]: E0813 03:21:44.939811 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.943847 kubelet[2186]: E0813 03:21:44.940083 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.943847 kubelet[2186]: W0813 03:21:44.940099 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.943847 kubelet[2186]: E0813 03:21:44.940118 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.943847 kubelet[2186]: E0813 03:21:44.940391 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.943847 kubelet[2186]: W0813 03:21:44.940406 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.943847 kubelet[2186]: E0813 03:21:44.940422 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.943847 kubelet[2186]: E0813 03:21:44.940692 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.945387 kubelet[2186]: W0813 03:21:44.940708 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.945387 kubelet[2186]: E0813 03:21:44.940726 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:44.945387 kubelet[2186]: E0813 03:21:44.941102 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.945387 kubelet[2186]: W0813 03:21:44.941119 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.945387 kubelet[2186]: E0813 03:21:44.941136 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.945387 kubelet[2186]: E0813 03:21:44.941524 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.945387 kubelet[2186]: W0813 03:21:44.941578 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.945387 kubelet[2186]: E0813 03:21:44.941600 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.945387 kubelet[2186]: E0813 03:21:44.942128 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.945387 kubelet[2186]: W0813 03:21:44.942146 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.946812 kubelet[2186]: E0813 03:21:44.942163 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.946812 kubelet[2186]: E0813 03:21:44.942427 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.946812 kubelet[2186]: W0813 03:21:44.942442 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.946812 kubelet[2186]: E0813 03:21:44.942458 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.946812 kubelet[2186]: E0813 03:21:44.942741 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.946812 kubelet[2186]: W0813 03:21:44.942756 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.946812 kubelet[2186]: E0813 03:21:44.942772 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:44.946812 kubelet[2186]: E0813 03:21:44.943048 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.946812 kubelet[2186]: W0813 03:21:44.943066 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.946812 kubelet[2186]: E0813 03:21:44.943082 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.948708 kubelet[2186]: E0813 03:21:44.943350 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.948708 kubelet[2186]: W0813 03:21:44.943372 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.948708 kubelet[2186]: E0813 03:21:44.943387 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.948708 kubelet[2186]: E0813 03:21:44.943621 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.948708 kubelet[2186]: W0813 03:21:44.943636 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.948708 kubelet[2186]: E0813 03:21:44.943650 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.948708 kubelet[2186]: E0813 03:21:44.946169 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.948708 kubelet[2186]: W0813 03:21:44.946185 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.948708 kubelet[2186]: E0813 03:21:44.946203 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:44.948708 kubelet[2186]: E0813 03:21:44.946488 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.949420 kubelet[2186]: W0813 03:21:44.946505 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.949420 kubelet[2186]: E0813 03:21:44.946522 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 03:21:44.949420 kubelet[2186]: E0813 03:21:44.946859 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:44.949420 kubelet[2186]: W0813 03:21:44.946874 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:44.949420 kubelet[2186]: E0813 03:21:44.946898 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:45.018652 env[1300]: time="2025-08-13T03:21:45.014539915Z" level=info msg="StartContainer for \"c30dddb9b4b73f3de5382c5036cdd8836c09438fed013742e19816c70427760a\" returns successfully" Aug 13 03:21:45.033476 kubelet[2186]: E0813 03:21:45.033430 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:45.033476 kubelet[2186]: W0813 03:21:45.033465 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:45.033764 kubelet[2186]: E0813 03:21:45.033497 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:45.036371 kubelet[2186]: E0813 03:21:45.034624 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:45.036371 kubelet[2186]: W0813 03:21:45.034657 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:45.036371 kubelet[2186]: E0813 03:21:45.034683 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:45.036371 kubelet[2186]: E0813 03:21:45.034946 2186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 03:21:45.036371 kubelet[2186]: W0813 03:21:45.034962 2186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 03:21:45.036371 kubelet[2186]: E0813 03:21:45.034993 2186 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 03:21:45.087052 systemd[1]: run-containerd-runc-k8s.io-c30dddb9b4b73f3de5382c5036cdd8836c09438fed013742e19816c70427760a-runc.e2VNCz.mount: Deactivated successfully. Aug 13 03:21:45.087301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c30dddb9b4b73f3de5382c5036cdd8836c09438fed013742e19816c70427760a-rootfs.mount: Deactivated successfully. 
Aug 13 03:21:45.092858 env[1300]: time="2025-08-13T03:21:45.092778915Z" level=info msg="shim disconnected" id=c30dddb9b4b73f3de5382c5036cdd8836c09438fed013742e19816c70427760a Aug 13 03:21:45.093081 env[1300]: time="2025-08-13T03:21:45.092869040Z" level=warning msg="cleaning up after shim disconnected" id=c30dddb9b4b73f3de5382c5036cdd8836c09438fed013742e19816c70427760a namespace=k8s.io Aug 13 03:21:45.093081 env[1300]: time="2025-08-13T03:21:45.092887448Z" level=info msg="cleaning up dead shim" Aug 13 03:21:45.108389 env[1300]: time="2025-08-13T03:21:45.108172876Z" level=warning msg="cleanup warnings time=\"2025-08-13T03:21:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2894 runtime=io.containerd.runc.v2\n" Aug 13 03:21:45.809683 kubelet[2186]: E0813 03:21:45.809602 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqpff" podUID="a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00" Aug 13 03:21:45.932622 kernel: kauditd_printk_skb: 8 callbacks suppressed Aug 13 03:21:45.933240 kernel: audit: type=1325 audit(1755055305.925:309): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:45.925000 audit[2912]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:45.925000 audit[2912]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe4b131f40 a2=0 a3=7ffe4b131f2c items=0 ppid=2291 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:45.945153 kernel: audit: type=1300 audit(1755055305.925:309): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe4b131f40 a2=0 a3=7ffe4b131f2c items=0 ppid=2291 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:45.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:45.951357 kernel: audit: type=1327 audit(1755055305.925:309): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:45.952000 audit[2912]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=2912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:45.952000 audit[2912]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe4b131f40 a2=0 a3=7ffe4b131f2c items=0 ppid=2291 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:45.965983 kernel: audit: type=1325 audit(1755055305.952:310): table=nat:100 family=2 entries=19 op=nft_register_chain pid=2912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:21:45.966096 kernel: audit: type=1300 audit(1755055305.952:310): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe4b131f40 a2=0 
a3=7ffe4b131f2c items=0 ppid=2291 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:21:45.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:45.970967 kernel: audit: type=1327 audit(1755055305.952:310): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:21:45.972241 env[1300]: time="2025-08-13T03:21:45.971729709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 03:21:47.808926 kubelet[2186]: E0813 03:21:47.808075 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqpff" podUID="a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00" Aug 13 03:21:49.808735 kubelet[2186]: E0813 03:21:49.808632 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqpff" podUID="a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00" Aug 13 03:21:51.681191 env[1300]: time="2025-08-13T03:21:51.681114684Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:51.684776 env[1300]: time="2025-08-13T03:21:51.684731743Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:51.687345 env[1300]: time="2025-08-13T03:21:51.687293496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:51.690682 env[1300]: time="2025-08-13T03:21:51.690033842Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:21:51.691308 env[1300]: time="2025-08-13T03:21:51.691255573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 03:21:51.699126 env[1300]: time="2025-08-13T03:21:51.699064488Z" level=info msg="CreateContainer within sandbox \"9bf4098207560036ef29f078442ee0408cfe8053d21a59a9c23f213081d88fe7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 03:21:51.723869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount564238031.mount: Deactivated successfully. 
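The audit records above (NETFILTER_CFG, SYSCALL, PROCTITLE triples) capture iptables-restore runs performed through /usr/sbin/xtables-nft-multi. The PROCTITLE payload is the audited process's command line, hex-encoded with NUL-separated arguments; decoding the value that appears in these records recovers the actual invocation:

    #!/usr/bin/env python3
    # Decode an audit PROCTITLE value: it is the raw argv of the audited process,
    # hex-encoded, with NUL bytes separating the arguments. The value below is
    # copied from the audit records in this log.
    hex_proctitle = ("69707461626C65732D726573746F7265002D770035002D5700"
                     "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters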
Aug 13 03:21:51.729911 env[1300]: time="2025-08-13T03:21:51.729845705Z" level=info msg="CreateContainer within sandbox \"9bf4098207560036ef29f078442ee0408cfe8053d21a59a9c23f213081d88fe7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"897b12770e216b7d7914aef3f6d202b9971e0166018d57a6f0659b4128df993d\"" Aug 13 03:21:51.732277 env[1300]: time="2025-08-13T03:21:51.732219088Z" level=info msg="StartContainer for \"897b12770e216b7d7914aef3f6d202b9971e0166018d57a6f0659b4128df993d\"" Aug 13 03:21:51.809263 kubelet[2186]: E0813 03:21:51.808726 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqpff" podUID="a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00" Aug 13 03:21:51.856625 env[1300]: time="2025-08-13T03:21:51.856536728Z" level=info msg="StartContainer for \"897b12770e216b7d7914aef3f6d202b9971e0166018d57a6f0659b4128df993d\" returns successfully" Aug 13 03:21:52.718271 systemd[1]: run-containerd-runc-k8s.io-897b12770e216b7d7914aef3f6d202b9971e0166018d57a6f0659b4128df993d-runc.CXhwY1.mount: Deactivated successfully. Aug 13 03:21:53.020668 env[1300]: time="2025-08-13T03:21:53.020438868Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 03:21:53.051185 kubelet[2186]: I0813 03:21:53.047993 2186 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 03:21:53.127160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-897b12770e216b7d7914aef3f6d202b9971e0166018d57a6f0659b4128df993d-rootfs.mount: Deactivated successfully. 
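The repeated pod_workers.go errors for calico-system/csi-node-driver-rqpff ("cni plugin not initialized") persist only while no CNI configuration exists on the node. The install-cni container started above is the Calico init container that writes that configuration; the fs change event on /etc/cni/net.d/calico-kubeconfig and the "Fast updating node status as it just became ready" entry above mark the transition. A rough log-analysis sketch for measuring how long the node stayed network-not-ready follows; the file name journal.txt and the exact match strings are illustrative assumptions:

    #!/usr/bin/env python3
    # Rough sketch: given a journal dump like this one, report how long the kubelet
    # logged NetworkPluginNotReady before the node "just became ready". Timestamps
    # are taken from the syslog-style "Aug 13 HH:MM:SS.ffffff" prefixes.
    import re
    from datetime import datetime

    TS = re.compile(r"Aug 13 (\d{2}:\d{2}:\d{2}\.\d{6})")

    def nearest_ts(text: str, needle: str) -> datetime:
        pos = text.index(needle)                  # first occurrence of the message
        stamp = TS.findall(text[:pos])[-1]        # closest timestamp logged before it
        return datetime.strptime(stamp, "%H:%M:%S.%f")

    log = open("journal.txt").read()
    t0 = nearest_ts(log, "NetworkPluginNotReady")
    t1 = nearest_ts(log, "Fast updating node status as it just became ready")
    print(f"network not ready for ~{(t1 - t0).total_seconds():.0f}s")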
Aug 13 03:21:53.147417 env[1300]: time="2025-08-13T03:21:53.147277650Z" level=info msg="shim disconnected" id=897b12770e216b7d7914aef3f6d202b9971e0166018d57a6f0659b4128df993d Aug 13 03:21:53.147831 env[1300]: time="2025-08-13T03:21:53.147796954Z" level=warning msg="cleaning up after shim disconnected" id=897b12770e216b7d7914aef3f6d202b9971e0166018d57a6f0659b4128df993d namespace=k8s.io Aug 13 03:21:53.147971 env[1300]: time="2025-08-13T03:21:53.147940825Z" level=info msg="cleaning up dead shim" Aug 13 03:21:53.168285 env[1300]: time="2025-08-13T03:21:53.167560140Z" level=warning msg="cleanup warnings time=\"2025-08-13T03:21:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2963 runtime=io.containerd.runc.v2\n" Aug 13 03:21:53.223207 kubelet[2186]: I0813 03:21:53.223108 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxcb\" (UniqueName: \"kubernetes.io/projected/68df4933-b5c3-4312-8741-f03d5628c7c8-kube-api-access-lzxcb\") pod \"calico-kube-controllers-78f8b9fb6f-b4rrq\" (UID: \"68df4933-b5c3-4312-8741-f03d5628c7c8\") " pod="calico-system/calico-kube-controllers-78f8b9fb6f-b4rrq" Aug 13 03:21:53.223786 kubelet[2186]: I0813 03:21:53.223505 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68df4933-b5c3-4312-8741-f03d5628c7c8-tigera-ca-bundle\") pod \"calico-kube-controllers-78f8b9fb6f-b4rrq\" (UID: \"68df4933-b5c3-4312-8741-f03d5628c7c8\") " pod="calico-system/calico-kube-controllers-78f8b9fb6f-b4rrq" Aug 13 03:21:53.324367 kubelet[2186]: I0813 03:21:53.324150 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vn2w\" (UniqueName: \"kubernetes.io/projected/5686f5d5-01df-4f76-8a08-487dbaa97ed4-kube-api-access-7vn2w\") pod \"calico-apiserver-75b8b879dd-pv4k9\" (UID: \"5686f5d5-01df-4f76-8a08-487dbaa97ed4\") " pod="calico-apiserver/calico-apiserver-75b8b879dd-pv4k9" Aug 13 03:21:53.324741 kubelet[2186]: I0813 03:21:53.324710 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c-goldmane-key-pair\") pod \"goldmane-58fd7646b9-txzpd\" (UID: \"4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c\") " pod="calico-system/goldmane-58fd7646b9-txzpd" Aug 13 03:21:53.324918 kubelet[2186]: I0813 03:21:53.324888 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b0738f7-1587-4a22-887f-5f8bd64e6743-calico-apiserver-certs\") pod \"calico-apiserver-75b8b879dd-926tb\" (UID: \"1b0738f7-1587-4a22-887f-5f8bd64e6743\") " pod="calico-apiserver/calico-apiserver-75b8b879dd-926tb" Aug 13 03:21:53.325085 kubelet[2186]: I0813 03:21:53.325056 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r77p7\" (UniqueName: \"kubernetes.io/projected/3f742517-09bc-4214-9d75-c0b7d73d3fd4-kube-api-access-r77p7\") pod \"coredns-7c65d6cfc9-t4mmj\" (UID: \"3f742517-09bc-4214-9d75-c0b7d73d3fd4\") " pod="kube-system/coredns-7c65d6cfc9-t4mmj" Aug 13 03:21:53.325775 kubelet[2186]: I0813 03:21:53.325230 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r88k\" (UniqueName: 
\"kubernetes.io/projected/5e80d36b-0ef9-49a4-9b05-2a70df6f56d4-kube-api-access-8r88k\") pod \"coredns-7c65d6cfc9-b5n98\" (UID: \"5e80d36b-0ef9-49a4-9b05-2a70df6f56d4\") " pod="kube-system/coredns-7c65d6cfc9-b5n98" Aug 13 03:21:53.326007 kubelet[2186]: I0813 03:21:53.325976 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a39a6cb3-bac3-4669-b0a7-db896af6f22e-whisker-backend-key-pair\") pod \"whisker-76c6df68db-ldnkj\" (UID: \"a39a6cb3-bac3-4669-b0a7-db896af6f22e\") " pod="calico-system/whisker-76c6df68db-ldnkj" Aug 13 03:21:53.326158 kubelet[2186]: I0813 03:21:53.326127 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f742517-09bc-4214-9d75-c0b7d73d3fd4-config-volume\") pod \"coredns-7c65d6cfc9-t4mmj\" (UID: \"3f742517-09bc-4214-9d75-c0b7d73d3fd4\") " pod="kube-system/coredns-7c65d6cfc9-t4mmj" Aug 13 03:21:53.326368 kubelet[2186]: I0813 03:21:53.326339 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrsc2\" (UniqueName: \"kubernetes.io/projected/4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c-kube-api-access-xrsc2\") pod \"goldmane-58fd7646b9-txzpd\" (UID: \"4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c\") " pod="calico-system/goldmane-58fd7646b9-txzpd" Aug 13 03:21:53.326537 kubelet[2186]: I0813 03:21:53.326506 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a39a6cb3-bac3-4669-b0a7-db896af6f22e-whisker-ca-bundle\") pod \"whisker-76c6df68db-ldnkj\" (UID: \"a39a6cb3-bac3-4669-b0a7-db896af6f22e\") " pod="calico-system/whisker-76c6df68db-ldnkj" Aug 13 03:21:53.326689 kubelet[2186]: I0813 03:21:53.326660 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm4v7\" (UniqueName: \"kubernetes.io/projected/a39a6cb3-bac3-4669-b0a7-db896af6f22e-kube-api-access-nm4v7\") pod \"whisker-76c6df68db-ldnkj\" (UID: \"a39a6cb3-bac3-4669-b0a7-db896af6f22e\") " pod="calico-system/whisker-76c6df68db-ldnkj" Aug 13 03:21:53.326862 kubelet[2186]: I0813 03:21:53.326833 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scl5s\" (UniqueName: \"kubernetes.io/projected/1b0738f7-1587-4a22-887f-5f8bd64e6743-kube-api-access-scl5s\") pod \"calico-apiserver-75b8b879dd-926tb\" (UID: \"1b0738f7-1587-4a22-887f-5f8bd64e6743\") " pod="calico-apiserver/calico-apiserver-75b8b879dd-926tb" Aug 13 03:21:53.327006 kubelet[2186]: I0813 03:21:53.326978 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-txzpd\" (UID: \"4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c\") " pod="calico-system/goldmane-58fd7646b9-txzpd" Aug 13 03:21:53.327159 kubelet[2186]: I0813 03:21:53.327130 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e80d36b-0ef9-49a4-9b05-2a70df6f56d4-config-volume\") pod \"coredns-7c65d6cfc9-b5n98\" (UID: \"5e80d36b-0ef9-49a4-9b05-2a70df6f56d4\") " pod="kube-system/coredns-7c65d6cfc9-b5n98" Aug 13 03:21:53.327338 kubelet[2186]: 
I0813 03:21:53.327290 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5686f5d5-01df-4f76-8a08-487dbaa97ed4-calico-apiserver-certs\") pod \"calico-apiserver-75b8b879dd-pv4k9\" (UID: \"5686f5d5-01df-4f76-8a08-487dbaa97ed4\") " pod="calico-apiserver/calico-apiserver-75b8b879dd-pv4k9" Aug 13 03:21:53.327750 kubelet[2186]: I0813 03:21:53.327491 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c-config\") pod \"goldmane-58fd7646b9-txzpd\" (UID: \"4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c\") " pod="calico-system/goldmane-58fd7646b9-txzpd" Aug 13 03:21:53.499390 env[1300]: time="2025-08-13T03:21:53.498964349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f8b9fb6f-b4rrq,Uid:68df4933-b5c3-4312-8741-f03d5628c7c8,Namespace:calico-system,Attempt:0,}" Aug 13 03:21:53.524906 env[1300]: time="2025-08-13T03:21:53.524832662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b5n98,Uid:5e80d36b-0ef9-49a4-9b05-2a70df6f56d4,Namespace:kube-system,Attempt:0,}" Aug 13 03:21:53.528914 env[1300]: time="2025-08-13T03:21:53.528861209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-txzpd,Uid:4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c,Namespace:calico-system,Attempt:0,}" Aug 13 03:21:53.531824 env[1300]: time="2025-08-13T03:21:53.531741757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8b879dd-pv4k9,Uid:5686f5d5-01df-4f76-8a08-487dbaa97ed4,Namespace:calico-apiserver,Attempt:0,}" Aug 13 03:21:53.535778 env[1300]: time="2025-08-13T03:21:53.535731760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8b879dd-926tb,Uid:1b0738f7-1587-4a22-887f-5f8bd64e6743,Namespace:calico-apiserver,Attempt:0,}" Aug 13 03:21:53.541076 env[1300]: time="2025-08-13T03:21:53.541033945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t4mmj,Uid:3f742517-09bc-4214-9d75-c0b7d73d3fd4,Namespace:kube-system,Attempt:0,}" Aug 13 03:21:53.544309 env[1300]: time="2025-08-13T03:21:53.544255733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76c6df68db-ldnkj,Uid:a39a6cb3-bac3-4669-b0a7-db896af6f22e,Namespace:calico-system,Attempt:0,}" Aug 13 03:21:53.822033 env[1300]: time="2025-08-13T03:21:53.821961108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqpff,Uid:a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00,Namespace:calico-system,Attempt:0,}" Aug 13 03:21:53.960822 env[1300]: time="2025-08-13T03:21:53.960652119Z" level=error msg="Failed to destroy network for sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:53.965279 env[1300]: time="2025-08-13T03:21:53.965228117Z" level=error msg="encountered an error cleaning up failed sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:53.965846 env[1300]: 
time="2025-08-13T03:21:53.965499015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b5n98,Uid:5e80d36b-0ef9-49a4-9b05-2a70df6f56d4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:53.968865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119-shm.mount: Deactivated successfully. Aug 13 03:21:53.970580 kubelet[2186]: E0813 03:21:53.970214 2186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:53.971522 kubelet[2186]: E0813 03:21:53.971233 2186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b5n98" Aug 13 03:21:53.971522 kubelet[2186]: E0813 03:21:53.971302 2186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b5n98" Aug 13 03:21:53.971522 kubelet[2186]: E0813 03:21:53.971435 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-b5n98_kube-system(5e80d36b-0ef9-49a4-9b05-2a70df6f56d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-b5n98_kube-system(5e80d36b-0ef9-49a4-9b05-2a70df6f56d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-b5n98" podUID="5e80d36b-0ef9-49a4-9b05-2a70df6f56d4" Aug 13 03:21:54.009176 env[1300]: time="2025-08-13T03:21:54.009117049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 03:21:54.026312 kubelet[2186]: I0813 03:21:54.026264 2186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:21:54.077218 env[1300]: time="2025-08-13T03:21:54.077038906Z" level=info msg="StopPodSandbox for \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\"" Aug 13 03:21:54.085484 env[1300]: time="2025-08-13T03:21:54.085430455Z" level=error msg="Failed to destroy 
network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.089403 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df-shm.mount: Deactivated successfully. Aug 13 03:21:54.090113 env[1300]: time="2025-08-13T03:21:54.090057291Z" level=error msg="encountered an error cleaning up failed sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.090352 env[1300]: time="2025-08-13T03:21:54.090276578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t4mmj,Uid:3f742517-09bc-4214-9d75-c0b7d73d3fd4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.091651 kubelet[2186]: E0813 03:21:54.090753 2186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.091651 kubelet[2186]: E0813 03:21:54.090868 2186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t4mmj" Aug 13 03:21:54.091651 kubelet[2186]: E0813 03:21:54.090923 2186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t4mmj" Aug 13 03:21:54.092273 kubelet[2186]: E0813 03:21:54.091111 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-t4mmj_kube-system(3f742517-09bc-4214-9d75-c0b7d73d3fd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-t4mmj_kube-system(3f742517-09bc-4214-9d75-c0b7d73d3fd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t4mmj" podUID="3f742517-09bc-4214-9d75-c0b7d73d3fd4" Aug 13 03:21:54.096338 env[1300]: time="2025-08-13T03:21:54.095663524Z" level=error msg="Failed to destroy network for sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.097014 env[1300]: time="2025-08-13T03:21:54.096846301Z" level=error msg="encountered an error cleaning up failed sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.097014 env[1300]: time="2025-08-13T03:21:54.096923597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-txzpd,Uid:4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.097920 kubelet[2186]: E0813 03:21:54.097381 2186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.097920 kubelet[2186]: E0813 03:21:54.097486 2186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-txzpd" Aug 13 03:21:54.097920 kubelet[2186]: E0813 03:21:54.097553 2186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-txzpd" Aug 13 03:21:54.099617 kubelet[2186]: E0813 03:21:54.097622 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-txzpd_calico-system(4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-txzpd_calico-system(4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-txzpd" podUID="4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c" Aug 13 03:21:54.131074 env[1300]: time="2025-08-13T03:21:54.130950305Z" level=error msg="Failed to destroy network for sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.131581 env[1300]: time="2025-08-13T03:21:54.131536706Z" level=error msg="encountered an error cleaning up failed sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.131691 env[1300]: time="2025-08-13T03:21:54.131607236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8b879dd-pv4k9,Uid:5686f5d5-01df-4f76-8a08-487dbaa97ed4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.132851 kubelet[2186]: E0813 03:21:54.132070 2186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.132851 kubelet[2186]: E0813 03:21:54.132241 2186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8b879dd-pv4k9" Aug 13 03:21:54.132851 kubelet[2186]: E0813 03:21:54.132279 2186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8b879dd-pv4k9" Aug 13 03:21:54.134825 kubelet[2186]: E0813 03:21:54.132400 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75b8b879dd-pv4k9_calico-apiserver(5686f5d5-01df-4f76-8a08-487dbaa97ed4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75b8b879dd-pv4k9_calico-apiserver(5686f5d5-01df-4f76-8a08-487dbaa97ed4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75b8b879dd-pv4k9" podUID="5686f5d5-01df-4f76-8a08-487dbaa97ed4" Aug 13 03:21:54.137171 env[1300]: time="2025-08-13T03:21:54.136999934Z" level=error msg="Failed to destroy network for sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.137555 env[1300]: time="2025-08-13T03:21:54.137497596Z" level=error msg="encountered an error cleaning up failed sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.137677 env[1300]: time="2025-08-13T03:21:54.137568166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f8b9fb6f-b4rrq,Uid:68df4933-b5c3-4312-8741-f03d5628c7c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.140130 kubelet[2186]: E0813 03:21:54.138209 2186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.140424 kubelet[2186]: E0813 03:21:54.139921 2186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f8b9fb6f-b4rrq" Aug 13 03:21:54.140758 kubelet[2186]: E0813 03:21:54.140725 2186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f8b9fb6f-b4rrq" Aug 13 03:21:54.140996 kubelet[2186]: E0813 03:21:54.140938 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78f8b9fb6f-b4rrq_calico-system(68df4933-b5c3-4312-8741-f03d5628c7c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78f8b9fb6f-b4rrq_calico-system(68df4933-b5c3-4312-8741-f03d5628c7c8)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f8b9fb6f-b4rrq" podUID="68df4933-b5c3-4312-8741-f03d5628c7c8" Aug 13 03:21:54.172295 env[1300]: time="2025-08-13T03:21:54.172182706Z" level=error msg="Failed to destroy network for sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.173057 env[1300]: time="2025-08-13T03:21:54.173010906Z" level=error msg="encountered an error cleaning up failed sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.173247 env[1300]: time="2025-08-13T03:21:54.173185609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76c6df68db-ldnkj,Uid:a39a6cb3-bac3-4669-b0a7-db896af6f22e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.173839 kubelet[2186]: E0813 03:21:54.173755 2186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.173962 kubelet[2186]: E0813 03:21:54.173871 2186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76c6df68db-ldnkj" Aug 13 03:21:54.173962 kubelet[2186]: E0813 03:21:54.173918 2186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76c6df68db-ldnkj" Aug 13 03:21:54.174120 kubelet[2186]: E0813 03:21:54.173988 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-76c6df68db-ldnkj_calico-system(a39a6cb3-bac3-4669-b0a7-db896af6f22e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-76c6df68db-ldnkj_calico-system(a39a6cb3-bac3-4669-b0a7-db896af6f22e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76c6df68db-ldnkj" podUID="a39a6cb3-bac3-4669-b0a7-db896af6f22e" Aug 13 03:21:54.192595 env[1300]: time="2025-08-13T03:21:54.192515055Z" level=error msg="Failed to destroy network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.193409 env[1300]: time="2025-08-13T03:21:54.193354176Z" level=error msg="encountered an error cleaning up failed sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.193642 env[1300]: time="2025-08-13T03:21:54.193581912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8b879dd-926tb,Uid:1b0738f7-1587-4a22-887f-5f8bd64e6743,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.194202 kubelet[2186]: E0813 03:21:54.194116 2186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.194338 kubelet[2186]: E0813 03:21:54.194230 2186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8b879dd-926tb" Aug 13 03:21:54.194338 kubelet[2186]: E0813 03:21:54.194264 2186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8b879dd-926tb" Aug 13 03:21:54.194459 kubelet[2186]: E0813 03:21:54.194347 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-75b8b879dd-926tb_calico-apiserver(1b0738f7-1587-4a22-887f-5f8bd64e6743)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75b8b879dd-926tb_calico-apiserver(1b0738f7-1587-4a22-887f-5f8bd64e6743)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75b8b879dd-926tb" podUID="1b0738f7-1587-4a22-887f-5f8bd64e6743" Aug 13 03:21:54.214678 env[1300]: time="2025-08-13T03:21:54.214553759Z" level=error msg="Failed to destroy network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.215284 env[1300]: time="2025-08-13T03:21:54.215238559Z" level=error msg="encountered an error cleaning up failed sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.215410 env[1300]: time="2025-08-13T03:21:54.215316550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqpff,Uid:a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.215718 kubelet[2186]: E0813 03:21:54.215658 2186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.216201 kubelet[2186]: E0813 03:21:54.215751 2186 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rqpff" Aug 13 03:21:54.216201 kubelet[2186]: E0813 03:21:54.215786 2186 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rqpff" Aug 13 03:21:54.216201 kubelet[2186]: E0813 03:21:54.215850 2186 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rqpff_calico-system(a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rqpff_calico-system(a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rqpff" podUID="a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00" Aug 13 03:21:54.236756 env[1300]: time="2025-08-13T03:21:54.236671280Z" level=error msg="StopPodSandbox for \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\" failed" error="failed to destroy network for sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:54.239409 kubelet[2186]: E0813 03:21:54.239148 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:21:54.242449 kubelet[2186]: E0813 03:21:54.239441 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119"} Aug 13 03:21:54.242541 kubelet[2186]: E0813 03:21:54.242478 2186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e80d36b-0ef9-49a4-9b05-2a70df6f56d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:21:54.242678 kubelet[2186]: E0813 03:21:54.242529 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e80d36b-0ef9-49a4-9b05-2a70df6f56d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-b5n98" podUID="5e80d36b-0ef9-49a4-9b05-2a70df6f56d4" Aug 13 03:21:54.718614 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510-shm.mount: Deactivated successfully. Aug 13 03:21:54.718865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24-shm.mount: Deactivated successfully. 
Aug 13 03:21:54.719043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46-shm.mount: Deactivated successfully. Aug 13 03:21:54.719194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a-shm.mount: Deactivated successfully. Aug 13 03:21:54.719393 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854-shm.mount: Deactivated successfully. Aug 13 03:21:54.719581 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46-shm.mount: Deactivated successfully. Aug 13 03:21:55.030819 kubelet[2186]: I0813 03:21:55.030631 2186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:21:55.034687 env[1300]: time="2025-08-13T03:21:55.032562396Z" level=info msg="StopPodSandbox for \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\"" Aug 13 03:21:55.042931 kubelet[2186]: I0813 03:21:55.042881 2186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:21:55.044839 env[1300]: time="2025-08-13T03:21:55.044756539Z" level=info msg="StopPodSandbox for \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\"" Aug 13 03:21:55.048531 kubelet[2186]: I0813 03:21:55.047956 2186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:21:55.049346 env[1300]: time="2025-08-13T03:21:55.049264557Z" level=info msg="StopPodSandbox for \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\"" Aug 13 03:21:55.052389 kubelet[2186]: I0813 03:21:55.051834 2186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:21:55.052919 env[1300]: time="2025-08-13T03:21:55.052847042Z" level=info msg="StopPodSandbox for \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\"" Aug 13 03:21:55.057069 kubelet[2186]: I0813 03:21:55.056510 2186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:21:55.057792 env[1300]: time="2025-08-13T03:21:55.057730700Z" level=info msg="StopPodSandbox for \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\"" Aug 13 03:21:55.059648 kubelet[2186]: I0813 03:21:55.059537 2186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:21:55.063196 env[1300]: time="2025-08-13T03:21:55.063161247Z" level=info msg="StopPodSandbox for \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\"" Aug 13 03:21:55.066562 kubelet[2186]: I0813 03:21:55.065920 2186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:21:55.067180 env[1300]: time="2025-08-13T03:21:55.067108715Z" level=info msg="StopPodSandbox for \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\"" Aug 13 03:21:55.183993 env[1300]: 
time="2025-08-13T03:21:55.183890594Z" level=error msg="StopPodSandbox for \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\" failed" error="failed to destroy network for sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:55.185387 kubelet[2186]: E0813 03:21:55.185117 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:21:55.185387 kubelet[2186]: E0813 03:21:55.185196 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854"} Aug 13 03:21:55.185387 kubelet[2186]: E0813 03:21:55.185270 2186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:21:55.185387 kubelet[2186]: E0813 03:21:55.185317 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-txzpd" podUID="4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c" Aug 13 03:21:55.201989 env[1300]: time="2025-08-13T03:21:55.201891410Z" level=error msg="StopPodSandbox for \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\" failed" error="failed to destroy network for sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:55.202461 kubelet[2186]: E0813 03:21:55.202383 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:21:55.202591 kubelet[2186]: E0813 03:21:55.202484 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24"} Aug 13 03:21:55.202591 kubelet[2186]: E0813 03:21:55.202560 2186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a39a6cb3-bac3-4669-b0a7-db896af6f22e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:21:55.202771 kubelet[2186]: E0813 03:21:55.202670 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a39a6cb3-bac3-4669-b0a7-db896af6f22e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76c6df68db-ldnkj" podUID="a39a6cb3-bac3-4669-b0a7-db896af6f22e" Aug 13 03:21:55.224947 env[1300]: time="2025-08-13T03:21:55.224831293Z" level=error msg="StopPodSandbox for \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\" failed" error="failed to destroy network for sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:55.225795 kubelet[2186]: E0813 03:21:55.225571 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:21:55.225795 kubelet[2186]: E0813 03:21:55.225650 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46"} Aug 13 03:21:55.225795 kubelet[2186]: E0813 03:21:55.225709 2186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68df4933-b5c3-4312-8741-f03d5628c7c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:21:55.225795 kubelet[2186]: E0813 03:21:55.225745 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68df4933-b5c3-4312-8741-f03d5628c7c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f8b9fb6f-b4rrq" podUID="68df4933-b5c3-4312-8741-f03d5628c7c8" Aug 13 03:21:55.233508 env[1300]: time="2025-08-13T03:21:55.233385875Z" level=error msg="StopPodSandbox for \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\" failed" error="failed to destroy network for sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:55.234361 kubelet[2186]: E0813 03:21:55.234139 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:21:55.234361 kubelet[2186]: E0813 03:21:55.234227 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a"} Aug 13 03:21:55.234361 kubelet[2186]: E0813 03:21:55.234282 2186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5686f5d5-01df-4f76-8a08-487dbaa97ed4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:21:55.234361 kubelet[2186]: E0813 03:21:55.234319 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5686f5d5-01df-4f76-8a08-487dbaa97ed4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75b8b879dd-pv4k9" podUID="5686f5d5-01df-4f76-8a08-487dbaa97ed4" Aug 13 03:21:55.238909 env[1300]: time="2025-08-13T03:21:55.238828925Z" level=error msg="StopPodSandbox for \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\" failed" error="failed to destroy network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:55.240337 kubelet[2186]: E0813 03:21:55.240154 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:21:55.240337 kubelet[2186]: E0813 03:21:55.240243 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46"} Aug 13 03:21:55.240496 kubelet[2186]: E0813 03:21:55.240363 2186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b0738f7-1587-4a22-887f-5f8bd64e6743\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:21:55.240496 kubelet[2186]: E0813 03:21:55.240405 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b0738f7-1587-4a22-887f-5f8bd64e6743\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75b8b879dd-926tb" podUID="1b0738f7-1587-4a22-887f-5f8bd64e6743" Aug 13 03:21:55.245605 env[1300]: time="2025-08-13T03:21:55.245539898Z" level=error msg="StopPodSandbox for \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\" failed" error="failed to destroy network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:55.245974 kubelet[2186]: E0813 03:21:55.245900 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:21:55.245974 kubelet[2186]: E0813 03:21:55.245966 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510"} Aug 13 03:21:55.246124 kubelet[2186]: E0813 03:21:55.246017 2186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:21:55.246124 kubelet[2186]: E0813 03:21:55.246053 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rqpff" podUID="a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00" Aug 13 03:21:55.257535 env[1300]: time="2025-08-13T03:21:55.257399107Z" level=error msg="StopPodSandbox for \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\" failed" error="failed to destroy network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:21:55.258335 kubelet[2186]: E0813 03:21:55.258044 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:21:55.258335 kubelet[2186]: E0813 03:21:55.258140 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df"} Aug 13 03:21:55.258335 kubelet[2186]: E0813 03:21:55.258221 2186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f742517-09bc-4214-9d75-c0b7d73d3fd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:21:55.258335 kubelet[2186]: E0813 03:21:55.258261 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f742517-09bc-4214-9d75-c0b7d73d3fd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t4mmj" podUID="3f742517-09bc-4214-9d75-c0b7d73d3fd4" Aug 13 03:22:05.811639 env[1300]: time="2025-08-13T03:22:05.810983845Z" level=info msg="StopPodSandbox for \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\"" Aug 13 03:22:05.824000 env[1300]: time="2025-08-13T03:22:05.823930337Z" level=info msg="StopPodSandbox for \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\"" Aug 13 03:22:05.926517 env[1300]: time="2025-08-13T03:22:05.926437368Z" level=error msg="StopPodSandbox for \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\" failed" error="failed to destroy network for sandbox 
\"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:22:05.926748 kubelet[2186]: E0813 03:22:05.926650 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:22:05.927436 kubelet[2186]: E0813 03:22:05.926741 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46"} Aug 13 03:22:05.927436 kubelet[2186]: E0813 03:22:05.926804 2186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68df4933-b5c3-4312-8741-f03d5628c7c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:22:05.927436 kubelet[2186]: E0813 03:22:05.926854 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68df4933-b5c3-4312-8741-f03d5628c7c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f8b9fb6f-b4rrq" podUID="68df4933-b5c3-4312-8741-f03d5628c7c8" Aug 13 03:22:05.927436 kubelet[2186]: E0813 03:22:05.927104 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:22:05.927436 kubelet[2186]: E0813 03:22:05.927153 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510"} Aug 13 03:22:05.930997 env[1300]: time="2025-08-13T03:22:05.926961585Z" level=error msg="StopPodSandbox for \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\" failed" error="failed to destroy network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:22:05.931168 kubelet[2186]: E0813 03:22:05.927202 2186 kuberuntime_manager.go:1079] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:22:05.931168 kubelet[2186]: E0813 03:22:05.927250 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rqpff" podUID="a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00" Aug 13 03:22:05.990461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775116565.mount: Deactivated successfully. Aug 13 03:22:06.046292 env[1300]: time="2025-08-13T03:22:06.046223423Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:06.049209 env[1300]: time="2025-08-13T03:22:06.049174694Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:06.051855 env[1300]: time="2025-08-13T03:22:06.051821365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:06.054509 env[1300]: time="2025-08-13T03:22:06.054475321Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:06.055548 env[1300]: time="2025-08-13T03:22:06.055499564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 03:22:06.106162 env[1300]: time="2025-08-13T03:22:06.105102445Z" level=info msg="CreateContainer within sandbox \"9bf4098207560036ef29f078442ee0408cfe8053d21a59a9c23f213081d88fe7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 03:22:06.129989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076585623.mount: Deactivated successfully. 
Aug 13 03:22:06.145186 env[1300]: time="2025-08-13T03:22:06.145115936Z" level=info msg="CreateContainer within sandbox \"9bf4098207560036ef29f078442ee0408cfe8053d21a59a9c23f213081d88fe7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"31299f475ea61fd2a56bb6474a9a6d532cdd4891da855937f89f6110a32420b0\"" Aug 13 03:22:06.146594 env[1300]: time="2025-08-13T03:22:06.146558539Z" level=info msg="StartContainer for \"31299f475ea61fd2a56bb6474a9a6d532cdd4891da855937f89f6110a32420b0\"" Aug 13 03:22:06.312565 env[1300]: time="2025-08-13T03:22:06.312498862Z" level=info msg="StartContainer for \"31299f475ea61fd2a56bb6474a9a6d532cdd4891da855937f89f6110a32420b0\" returns successfully" Aug 13 03:22:06.809238 env[1300]: time="2025-08-13T03:22:06.809147861Z" level=info msg="StopPodSandbox for \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\"" Aug 13 03:22:06.811364 env[1300]: time="2025-08-13T03:22:06.810628065Z" level=info msg="StopPodSandbox for \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\"" Aug 13 03:22:06.845750 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 03:22:06.846834 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 03:22:06.909622 env[1300]: time="2025-08-13T03:22:06.909535348Z" level=error msg="StopPodSandbox for \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\" failed" error="failed to destroy network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:22:06.910267 kubelet[2186]: E0813 03:22:06.910193 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:22:06.910436 kubelet[2186]: E0813 03:22:06.910289 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df"} Aug 13 03:22:06.910576 kubelet[2186]: E0813 03:22:06.910539 2186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f742517-09bc-4214-9d75-c0b7d73d3fd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:22:06.910740 kubelet[2186]: E0813 03:22:06.910610 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f742517-09bc-4214-9d75-c0b7d73d3fd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t4mmj" podUID="3f742517-09bc-4214-9d75-c0b7d73d3fd4" Aug 13 03:22:06.911090 env[1300]: time="2025-08-13T03:22:06.910790930Z" level=error msg="StopPodSandbox for \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\" failed" error="failed to destroy network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 03:22:06.911378 kubelet[2186]: E0813 03:22:06.911302 2186 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:22:06.911998 kubelet[2186]: E0813 03:22:06.911386 2186 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46"} Aug 13 03:22:06.913243 kubelet[2186]: E0813 03:22:06.913207 2186 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b0738f7-1587-4a22-887f-5f8bd64e6743\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 03:22:06.913411 kubelet[2186]: E0813 03:22:06.913271 2186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b0738f7-1587-4a22-887f-5f8bd64e6743\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75b8b879dd-926tb" podUID="1b0738f7-1587-4a22-887f-5f8bd64e6743" Aug 13 03:22:07.079509 env[1300]: time="2025-08-13T03:22:07.079347289Z" level=info msg="StopPodSandbox for \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\"" Aug 13 03:22:07.155014 kubelet[2186]: I0813 03:22:07.152360 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4rpwj" podStartSLOduration=2.105737511 podStartE2EDuration="29.147611307s" podCreationTimestamp="2025-08-13 03:21:38 +0000 UTC" firstStartedPulling="2025-08-13 03:21:39.015575566 +0000 UTC m=+23.589015393" lastFinishedPulling="2025-08-13 03:22:06.057449365 +0000 UTC m=+50.630889189" observedRunningTime="2025-08-13 03:22:07.147232925 +0000 UTC m=+51.720672763" watchObservedRunningTime="2025-08-13 03:22:07.147611307 +0000 UTC m=+51.721051142" Aug 13 03:22:07.207658 systemd[1]: run-containerd-runc-k8s.io-31299f475ea61fd2a56bb6474a9a6d532cdd4891da855937f89f6110a32420b0-runc.lIWX7G.mount: Deactivated 
successfully. Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.317 [INFO][3446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.320 [INFO][3446] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" iface="eth0" netns="/var/run/netns/cni-6ea00027-3308-be5c-7808-c0afe386b8ac" Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.320 [INFO][3446] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" iface="eth0" netns="/var/run/netns/cni-6ea00027-3308-be5c-7808-c0afe386b8ac" Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.322 [INFO][3446] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" iface="eth0" netns="/var/run/netns/cni-6ea00027-3308-be5c-7808-c0afe386b8ac" Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.322 [INFO][3446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.322 [INFO][3446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.532 [INFO][3478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" HandleID="k8s-pod-network.1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--76c6df68db--ldnkj-eth0" Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.534 [INFO][3478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.535 [INFO][3478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.552 [WARNING][3478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" HandleID="k8s-pod-network.1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--76c6df68db--ldnkj-eth0" Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.553 [INFO][3478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" HandleID="k8s-pod-network.1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--76c6df68db--ldnkj-eth0" Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.559 [INFO][3478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:07.564779 env[1300]: 2025-08-13 03:22:07.562 [INFO][3446] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:07.573988 env[1300]: time="2025-08-13T03:22:07.568812780Z" level=info msg="TearDown network for sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\" successfully" Aug 13 03:22:07.573988 env[1300]: time="2025-08-13T03:22:07.568871364Z" level=info msg="StopPodSandbox for \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\" returns successfully" Aug 13 03:22:07.568944 systemd[1]: run-netns-cni\x2d6ea00027\x2d3308\x2dbe5c\x2d7808\x2dc0afe386b8ac.mount: Deactivated successfully. Aug 13 03:22:07.745735 kubelet[2186]: I0813 03:22:07.745437 2186 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a39a6cb3-bac3-4669-b0a7-db896af6f22e-whisker-ca-bundle\") pod \"a39a6cb3-bac3-4669-b0a7-db896af6f22e\" (UID: \"a39a6cb3-bac3-4669-b0a7-db896af6f22e\") " Aug 13 03:22:07.745735 kubelet[2186]: I0813 03:22:07.745505 2186 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm4v7\" (UniqueName: \"kubernetes.io/projected/a39a6cb3-bac3-4669-b0a7-db896af6f22e-kube-api-access-nm4v7\") pod \"a39a6cb3-bac3-4669-b0a7-db896af6f22e\" (UID: \"a39a6cb3-bac3-4669-b0a7-db896af6f22e\") " Aug 13 03:22:07.745735 kubelet[2186]: I0813 03:22:07.745551 2186 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a39a6cb3-bac3-4669-b0a7-db896af6f22e-whisker-backend-key-pair\") pod \"a39a6cb3-bac3-4669-b0a7-db896af6f22e\" (UID: \"a39a6cb3-bac3-4669-b0a7-db896af6f22e\") " Aug 13 03:22:07.749692 kubelet[2186]: I0813 03:22:07.747207 2186 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a39a6cb3-bac3-4669-b0a7-db896af6f22e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a39a6cb3-bac3-4669-b0a7-db896af6f22e" (UID: "a39a6cb3-bac3-4669-b0a7-db896af6f22e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 03:22:07.753042 kubelet[2186]: I0813 03:22:07.752997 2186 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39a6cb3-bac3-4669-b0a7-db896af6f22e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a39a6cb3-bac3-4669-b0a7-db896af6f22e" (UID: "a39a6cb3-bac3-4669-b0a7-db896af6f22e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 03:22:07.753457 kubelet[2186]: I0813 03:22:07.753407 2186 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a39a6cb3-bac3-4669-b0a7-db896af6f22e-kube-api-access-nm4v7" (OuterVolumeSpecName: "kube-api-access-nm4v7") pod "a39a6cb3-bac3-4669-b0a7-db896af6f22e" (UID: "a39a6cb3-bac3-4669-b0a7-db896af6f22e"). InnerVolumeSpecName "kube-api-access-nm4v7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 03:22:07.820064 env[1300]: time="2025-08-13T03:22:07.819101877Z" level=info msg="StopPodSandbox for \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\"" Aug 13 03:22:07.821308 env[1300]: time="2025-08-13T03:22:07.821177827Z" level=info msg="StopPodSandbox for \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\"" Aug 13 03:22:07.846459 kubelet[2186]: I0813 03:22:07.846391 2186 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a39a6cb3-bac3-4669-b0a7-db896af6f22e-whisker-backend-key-pair\") on node \"srv-pghwy.gb1.brightbox.com\" DevicePath \"\"" Aug 13 03:22:07.846459 kubelet[2186]: I0813 03:22:07.846455 2186 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a39a6cb3-bac3-4669-b0a7-db896af6f22e-whisker-ca-bundle\") on node \"srv-pghwy.gb1.brightbox.com\" DevicePath \"\"" Aug 13 03:22:07.847625 kubelet[2186]: I0813 03:22:07.846479 2186 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm4v7\" (UniqueName: \"kubernetes.io/projected/a39a6cb3-bac3-4669-b0a7-db896af6f22e-kube-api-access-nm4v7\") on node \"srv-pghwy.gb1.brightbox.com\" DevicePath \"\"" Aug 13 03:22:07.990800 systemd[1]: var-lib-kubelet-pods-a39a6cb3\x2dbac3\x2d4669\x2db0a7\x2ddb896af6f22e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnm4v7.mount: Deactivated successfully. Aug 13 03:22:07.991035 systemd[1]: var-lib-kubelet-pods-a39a6cb3\x2dbac3\x2d4669\x2db0a7\x2ddb896af6f22e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:07.920 [INFO][3517] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:07.920 [INFO][3517] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" iface="eth0" netns="/var/run/netns/cni-a3e9e389-a93a-fd6d-f9f7-666068d6eb66" Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:07.920 [INFO][3517] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" iface="eth0" netns="/var/run/netns/cni-a3e9e389-a93a-fd6d-f9f7-666068d6eb66" Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:07.921 [INFO][3517] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" iface="eth0" netns="/var/run/netns/cni-a3e9e389-a93a-fd6d-f9f7-666068d6eb66" Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:07.921 [INFO][3517] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:07.921 [INFO][3517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:08.004 [INFO][3531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" HandleID="k8s-pod-network.7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:08.004 [INFO][3531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:08.004 [INFO][3531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:08.014 [WARNING][3531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" HandleID="k8s-pod-network.7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:08.014 [INFO][3531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" HandleID="k8s-pod-network.7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:08.016 [INFO][3531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:08.028530 env[1300]: 2025-08-13 03:22:08.025 [INFO][3517] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:08.036077 env[1300]: time="2025-08-13T03:22:08.032723819Z" level=info msg="TearDown network for sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\" successfully" Aug 13 03:22:08.036077 env[1300]: time="2025-08-13T03:22:08.032784448Z" level=info msg="StopPodSandbox for \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\" returns successfully" Aug 13 03:22:08.032743 systemd[1]: run-netns-cni\x2da3e9e389\x2da93a\x2dfd6d\x2df9f7\x2d666068d6eb66.mount: Deactivated successfully. Aug 13 03:22:08.043701 env[1300]: time="2025-08-13T03:22:08.043624123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b5n98,Uid:5e80d36b-0ef9-49a4-9b05-2a70df6f56d4,Namespace:kube-system,Attempt:1,}" Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:07.915 [INFO][3516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:07.916 [INFO][3516] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" iface="eth0" netns="/var/run/netns/cni-ffddcdf8-cbeb-3db4-1144-2aadde95d7bc" Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:07.916 [INFO][3516] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" iface="eth0" netns="/var/run/netns/cni-ffddcdf8-cbeb-3db4-1144-2aadde95d7bc" Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:07.919 [INFO][3516] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" iface="eth0" netns="/var/run/netns/cni-ffddcdf8-cbeb-3db4-1144-2aadde95d7bc" Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:07.919 [INFO][3516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:07.919 [INFO][3516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:08.004 [INFO][3530] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" HandleID="k8s-pod-network.942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:08.005 [INFO][3530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:08.017 [INFO][3530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:08.026 [WARNING][3530] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" HandleID="k8s-pod-network.942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:08.026 [INFO][3530] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" HandleID="k8s-pod-network.942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:08.036 [INFO][3530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:08.048055 env[1300]: 2025-08-13 03:22:08.045 [INFO][3516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:08.054444 env[1300]: time="2025-08-13T03:22:08.051903432Z" level=info msg="TearDown network for sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\" successfully" Aug 13 03:22:08.054444 env[1300]: time="2025-08-13T03:22:08.051966507Z" level=info msg="StopPodSandbox for \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\" returns successfully" Aug 13 03:22:08.051794 systemd[1]: run-netns-cni\x2dffddcdf8\x2dcbeb\x2d3db4\x2d1144\x2d2aadde95d7bc.mount: Deactivated successfully. 
Aug 13 03:22:08.055300 env[1300]: time="2025-08-13T03:22:08.055263441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8b879dd-pv4k9,Uid:5686f5d5-01df-4f76-8a08-487dbaa97ed4,Namespace:calico-apiserver,Attempt:1,}" Aug 13 03:22:08.366548 kubelet[2186]: I0813 03:22:08.365026 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xvpx\" (UniqueName: \"kubernetes.io/projected/4492c2e3-78f4-4a10-8c6d-10bfd0197b31-kube-api-access-5xvpx\") pod \"whisker-5fdb56b4b-lpvhq\" (UID: \"4492c2e3-78f4-4a10-8c6d-10bfd0197b31\") " pod="calico-system/whisker-5fdb56b4b-lpvhq" Aug 13 03:22:08.366548 kubelet[2186]: I0813 03:22:08.365105 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4492c2e3-78f4-4a10-8c6d-10bfd0197b31-whisker-ca-bundle\") pod \"whisker-5fdb56b4b-lpvhq\" (UID: \"4492c2e3-78f4-4a10-8c6d-10bfd0197b31\") " pod="calico-system/whisker-5fdb56b4b-lpvhq" Aug 13 03:22:08.366548 kubelet[2186]: I0813 03:22:08.365168 2186 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4492c2e3-78f4-4a10-8c6d-10bfd0197b31-whisker-backend-key-pair\") pod \"whisker-5fdb56b4b-lpvhq\" (UID: \"4492c2e3-78f4-4a10-8c6d-10bfd0197b31\") " pod="calico-system/whisker-5fdb56b4b-lpvhq" Aug 13 03:22:08.454819 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 03:22:08.459510 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali172b8ee5cad: link becomes ready Aug 13 03:22:08.467832 systemd-networkd[1075]: cali172b8ee5cad: Link UP Aug 13 03:22:08.468186 systemd-networkd[1075]: cali172b8ee5cad: Gained carrier Aug 13 03:22:08.557771 env[1300]: time="2025-08-13T03:22:08.557700575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fdb56b4b-lpvhq,Uid:4492c2e3-78f4-4a10-8c6d-10bfd0197b31,Namespace:calico-system,Attempt:0,}" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.248 [INFO][3553] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.278 [INFO][3553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0 calico-apiserver-75b8b879dd- calico-apiserver 5686f5d5-01df-4f76-8a08-487dbaa97ed4 908 0 2025-08-13 03:21:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75b8b879dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-pghwy.gb1.brightbox.com calico-apiserver-75b8b879dd-pv4k9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali172b8ee5cad [] [] }} ContainerID="be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-pv4k9" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.278 [INFO][3553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-pv4k9" 
WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.331 [INFO][3582] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" HandleID="k8s-pod-network.be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.333 [INFO][3582] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" HandleID="k8s-pod-network.be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-pghwy.gb1.brightbox.com", "pod":"calico-apiserver-75b8b879dd-pv4k9", "timestamp":"2025-08-13 03:22:08.331719266 +0000 UTC"}, Hostname:"srv-pghwy.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.333 [INFO][3582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.333 [INFO][3582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.333 [INFO][3582] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-pghwy.gb1.brightbox.com' Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.360 [INFO][3582] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.376 [INFO][3582] ipam/ipam.go 394: Looking up existing affinities for host host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.381 [INFO][3582] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.384 [INFO][3582] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.387 [INFO][3582] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.387 [INFO][3582] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.389 [INFO][3582] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9 Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.402 [INFO][3582] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.413 
[INFO][3582] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.129/26] block=192.168.30.128/26 handle="k8s-pod-network.be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.414 [INFO][3582] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.129/26] handle="k8s-pod-network.be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.414 [INFO][3582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:08.568677 env[1300]: 2025-08-13 03:22:08.415 [INFO][3582] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.129/26] IPv6=[] ContainerID="be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" HandleID="k8s-pod-network.be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:08.570694 env[1300]: 2025-08-13 03:22:08.420 [INFO][3553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-pv4k9" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0", GenerateName:"calico-apiserver-75b8b879dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5686f5d5-01df-4f76-8a08-487dbaa97ed4", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8b879dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-75b8b879dd-pv4k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali172b8ee5cad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:08.570694 env[1300]: 2025-08-13 03:22:08.420 [INFO][3553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.129/32] ContainerID="be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-pv4k9" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:08.570694 env[1300]: 2025-08-13 03:22:08.420 [INFO][3553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali172b8ee5cad ContainerID="be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" 
Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-pv4k9" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:08.570694 env[1300]: 2025-08-13 03:22:08.507 [INFO][3553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-pv4k9" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:08.570694 env[1300]: 2025-08-13 03:22:08.509 [INFO][3553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-pv4k9" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0", GenerateName:"calico-apiserver-75b8b879dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5686f5d5-01df-4f76-8a08-487dbaa97ed4", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8b879dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9", Pod:"calico-apiserver-75b8b879dd-pv4k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali172b8ee5cad", MAC:"4a:af:ee:cc:3a:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:08.570694 env[1300]: 2025-08-13 03:22:08.529 [INFO][3553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-pv4k9" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:08.648000 audit[3657]: AVC avc: denied { write } for pid=3657 comm="tee" name="fd" dev="proc" ino=30534 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 03:22:08.655393 kernel: audit: type=1400 audit(1755055328.648:311): avc: denied { write } for pid=3657 comm="tee" name="fd" dev="proc" ino=30534 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 03:22:08.648000 audit[3657]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd2dd907c6 a2=241 a3=1b6 items=1 ppid=3619 pid=3657 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:08.663989 systemd-networkd[1075]: cali06a03a31dc1: Link UP Aug 13 03:22:08.666865 kernel: audit: type=1300 audit(1755055328.648:311): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd2dd907c6 a2=241 a3=1b6 items=1 ppid=3619 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:08.666961 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali06a03a31dc1: link becomes ready Aug 13 03:22:08.665937 systemd-networkd[1075]: cali06a03a31dc1: Gained carrier Aug 13 03:22:08.648000 audit: CWD cwd="/etc/service/enabled/bird/log" Aug 13 03:22:08.648000 audit: PATH item=0 name="/dev/fd/63" inode=29624 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:22:08.683987 kernel: audit: type=1307 audit(1755055328.648:311): cwd="/etc/service/enabled/bird/log" Aug 13 03:22:08.684121 kernel: audit: type=1302 audit(1755055328.648:311): item=0 name="/dev/fd/63" inode=29624 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:22:08.698641 kernel: audit: type=1327 audit(1755055328.648:311): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 03:22:08.698783 kernel: audit: type=1400 audit(1755055328.687:312): avc: denied { write } for pid=3678 comm="tee" name="fd" dev="proc" ino=29646 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 03:22:08.648000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 03:22:08.687000 audit[3678]: AVC avc: denied { write } for pid=3678 comm="tee" name="fd" dev="proc" ino=29646 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 03:22:08.687000 audit[3678]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffde9dbc7c7 a2=241 a3=1b6 items=1 ppid=3616 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.161 [INFO][3543] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.206 [INFO][3543] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0 coredns-7c65d6cfc9- kube-system 5e80d36b-0ef9-49a4-9b05-2a70df6f56d4 909 0 2025-08-13 03:21:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-pghwy.gb1.brightbox.com coredns-7c65d6cfc9-b5n98 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali06a03a31dc1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] 
[] }} ContainerID="36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5n98" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.207 [INFO][3543] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5n98" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.342 [INFO][3575] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" HandleID="k8s-pod-network.36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.343 [INFO][3575] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" HandleID="k8s-pod-network.36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd600), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-pghwy.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-b5n98", "timestamp":"2025-08-13 03:22:08.342864847 +0000 UTC"}, Hostname:"srv-pghwy.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.343 [INFO][3575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.414 [INFO][3575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.414 [INFO][3575] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-pghwy.gb1.brightbox.com' Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.464 [INFO][3575] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.555 [INFO][3575] ipam/ipam.go 394: Looking up existing affinities for host host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.581 [INFO][3575] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.587 [INFO][3575] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.591 [INFO][3575] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.591 [INFO][3575] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.593 [INFO][3575] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537 Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.601 [INFO][3575] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.615 [INFO][3575] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.130/26] block=192.168.30.128/26 handle="k8s-pod-network.36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.615 [INFO][3575] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.130/26] handle="k8s-pod-network.36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.615 [INFO][3575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
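The IPAM trace above shows the standard Calico allocation path: the plugin takes the host-wide IPAM lock, confirms the node's affinity for the block 192.168.30.128/26, claims the next free addresses (.129 for the calico-apiserver pod, .130 for coredns), and releases the lock. The short Go example below, using only the standard library, checks those addresses against the block and computes its size; the CIDR and IPs are taken from the log entries, everything else is illustrative.

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Block for which this node holds an affinity, per the IPAM trace above.
        _, block, err := net.ParseCIDR("192.168.30.128/26")
        if err != nil {
            panic(err)
        }
        for _, s := range []string{"192.168.30.129", "192.168.30.130"} {
            fmt.Printf("%s in %s: %v\n", s, block, block.Contains(net.ParseIP(s)))
        }
        ones, bits := block.Mask.Size()
        fmt.Printf("addresses in block: %d\n", 1<<(bits-ones)) // 64 for a /26
    }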
Aug 13 03:22:08.710807 env[1300]: 2025-08-13 03:22:08.615 [INFO][3575] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.130/26] IPv6=[] ContainerID="36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" HandleID="k8s-pod-network.36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:08.712018 kernel: audit: type=1300 audit(1755055328.687:312): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffde9dbc7c7 a2=241 a3=1b6 items=1 ppid=3616 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:08.712095 env[1300]: 2025-08-13 03:22:08.629 [INFO][3543] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5n98" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5e80d36b-0ef9-49a4-9b05-2a70df6f56d4", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-b5n98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06a03a31dc1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:08.712095 env[1300]: 2025-08-13 03:22:08.629 [INFO][3543] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.130/32] ContainerID="36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5n98" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:08.712095 env[1300]: 2025-08-13 03:22:08.630 [INFO][3543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06a03a31dc1 ContainerID="36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5n98" 
WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:08.712095 env[1300]: 2025-08-13 03:22:08.668 [INFO][3543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5n98" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:08.712095 env[1300]: 2025-08-13 03:22:08.670 [INFO][3543] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5n98" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5e80d36b-0ef9-49a4-9b05-2a70df6f56d4", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537", Pod:"coredns-7c65d6cfc9-b5n98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06a03a31dc1", MAC:"36:11:63:c7:60:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:08.712095 env[1300]: 2025-08-13 03:22:08.695 [INFO][3543] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b5n98" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:08.687000 audit: CWD cwd="/etc/service/enabled/cni/log" Aug 13 03:22:08.687000 audit: PATH item=0 name="/dev/fd/63" inode=29643 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:22:08.720047 kernel: audit: type=1307 audit(1755055328.687:312): cwd="/etc/service/enabled/cni/log" Aug 13 03:22:08.720140 kernel: audit: type=1302 audit(1755055328.687:312): item=0 name="/dev/fd/63" inode=29643 
dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:22:08.726487 env[1300]: time="2025-08-13T03:22:08.726154431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:22:08.726487 env[1300]: time="2025-08-13T03:22:08.726249700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:22:08.726487 env[1300]: time="2025-08-13T03:22:08.726277393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:22:08.687000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 03:22:08.737343 kernel: audit: type=1327 audit(1755055328.687:312): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 03:22:08.694000 audit[3672]: AVC avc: denied { write } for pid=3672 comm="tee" name="fd" dev="proc" ino=30555 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 03:22:08.694000 audit[3672]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd1a6567c5 a2=241 a3=1b6 items=1 ppid=3625 pid=3672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:08.694000 audit: CWD cwd="/etc/service/enabled/felix/log" Aug 13 03:22:08.694000 audit: PATH item=0 name="/dev/fd/63" inode=30524 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:22:08.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 03:22:08.694000 audit[3677]: AVC avc: denied { write } for pid=3677 comm="tee" name="fd" dev="proc" ino=30559 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 03:22:08.694000 audit[3677]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcf859b7b5 a2=241 a3=1b6 items=1 ppid=3631 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:08.694000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Aug 13 03:22:08.694000 audit: PATH item=0 name="/dev/fd/63" inode=30523 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:22:08.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 03:22:08.722000 audit[3694]: AVC avc: denied { write } for pid=3694 comm="tee" name="fd" dev="proc" ino=29660 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 03:22:08.722000 audit[3694]: SYSCALL arch=c000003e 
syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdae4617b6 a2=241 a3=1b6 items=1 ppid=3621 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:08.722000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Aug 13 03:22:08.722000 audit: PATH item=0 name="/dev/fd/63" inode=29657 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:22:08.722000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 03:22:08.727000 audit[3675]: AVC avc: denied { write } for pid=3675 comm="tee" name="fd" dev="proc" ino=29664 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 03:22:08.727000 audit[3675]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff601a87c5 a2=241 a3=1b6 items=1 ppid=3614 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:08.727000 audit: CWD cwd="/etc/service/enabled/bird6/log" Aug 13 03:22:08.727000 audit: PATH item=0 name="/dev/fd/63" inode=30529 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:22:08.727000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 03:22:08.740417 env[1300]: time="2025-08-13T03:22:08.738313436Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9 pid=3660 runtime=io.containerd.runc.v2 Aug 13 03:22:08.742000 audit[3704]: AVC avc: denied { write } for pid=3704 comm="tee" name="fd" dev="proc" ino=30582 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 03:22:08.742000 audit[3704]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff504ab7c5 a2=241 a3=1b6 items=1 ppid=3627 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:08.742000 audit: CWD cwd="/etc/service/enabled/confd/log" Aug 13 03:22:08.742000 audit: PATH item=0 name="/dev/fd/63" inode=29668 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 03:22:08.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 03:22:08.801248 env[1300]: time="2025-08-13T03:22:08.801001225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:22:08.832448 env[1300]: time="2025-08-13T03:22:08.827619968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:22:08.832448 env[1300]: time="2025-08-13T03:22:08.827784427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:22:08.838047 env[1300]: time="2025-08-13T03:22:08.837911065Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537 pid=3743 runtime=io.containerd.runc.v2 Aug 13 03:22:09.092350 env[1300]: time="2025-08-13T03:22:09.092243595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8b879dd-pv4k9,Uid:5686f5d5-01df-4f76-8a08-487dbaa97ed4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9\"" Aug 13 03:22:09.121857 env[1300]: time="2025-08-13T03:22:09.121783356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 03:22:09.187347 env[1300]: time="2025-08-13T03:22:09.187250573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b5n98,Uid:5e80d36b-0ef9-49a4-9b05-2a70df6f56d4,Namespace:kube-system,Attempt:1,} returns sandbox id \"36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537\"" Aug 13 03:22:09.194062 env[1300]: time="2025-08-13T03:22:09.192625348Z" level=info msg="CreateContainer within sandbox \"36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 03:22:09.225302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount854584829.mount: Deactivated successfully. Aug 13 03:22:09.239119 env[1300]: time="2025-08-13T03:22:09.239056908Z" level=info msg="CreateContainer within sandbox \"36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"670fdefdef8267b0f6a83dce5b4d8d4b882949f40634c73c220e12c7b7293ea8\"" Aug 13 03:22:09.240279 env[1300]: time="2025-08-13T03:22:09.240241942Z" level=info msg="StartContainer for \"670fdefdef8267b0f6a83dce5b4d8d4b882949f40634c73c220e12c7b7293ea8\"" Aug 13 03:22:09.353278 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali28b434fb563: link becomes ready Aug 13 03:22:09.352508 systemd-networkd[1075]: cali28b434fb563: Link UP Aug 13 03:22:09.357406 systemd-networkd[1075]: cali28b434fb563: Gained carrier Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:08.948 [INFO][3647] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.052 [INFO][3647] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0 whisker-5fdb56b4b- calico-system 4492c2e3-78f4-4a10-8c6d-10bfd0197b31 925 0 2025-08-13 03:22:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5fdb56b4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-pghwy.gb1.brightbox.com whisker-5fdb56b4b-lpvhq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali28b434fb563 [] [] }} ContainerID="ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" Namespace="calico-system" Pod="whisker-5fdb56b4b-lpvhq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-" Aug 13 03:22:09.416651 
env[1300]: 2025-08-13 03:22:09.053 [INFO][3647] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" Namespace="calico-system" Pod="whisker-5fdb56b4b-lpvhq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0" Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.205 [INFO][3789] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" HandleID="k8s-pod-network.ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0" Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.210 [INFO][3789] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" HandleID="k8s-pod-network.ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9e90), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-pghwy.gb1.brightbox.com", "pod":"whisker-5fdb56b4b-lpvhq", "timestamp":"2025-08-13 03:22:09.205935229 +0000 UTC"}, Hostname:"srv-pghwy.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.210 [INFO][3789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.210 [INFO][3789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
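The ipam_plugin records above show the CNI plugin handing Calico IPAM an auto-assign request for one IPv4 address and zero IPv6 addresses, keyed by a handle of the form "k8s-pod-network.<ContainerID>" and annotated with the namespace, node, and pod before the host-wide IPAM lock is taken. The following is a minimal, self-contained Go sketch that mirrors only the request fields visible in that log record; the autoAssignArgs struct here is a local stand-in for illustration, not the real libcalico-go type, and the handle value is copied from the log entry.

    package main

    import "fmt"

    // autoAssignArgs is a local stand-in mirroring the fields printed in the
    // "ipam/ipam_plugin.go 265: Auto assigning IP" record above; it is not the
    // actual libcalico-go IPAM type.
    type autoAssignArgs struct {
        Num4, Num6  int
        HandleID    *string
        Attrs       map[string]string
        Hostname    string
        IntendedUse string
    }

    func main() {
        // Handle naming convention as it appears in the log:
        // "k8s-pod-network." + ContainerID.
        containerID := "ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c"
        handle := "k8s-pod-network." + containerID

        req := autoAssignArgs{
            Num4:     1, // one IPv4 requested, no IPv6 (Num4:1, Num6:0 in the log)
            Num6:     0,
            HandleID: &handle,
            Attrs: map[string]string{
                "namespace": "calico-system",
                "node":      "srv-pghwy.gb1.brightbox.com",
                "pod":       "whisker-5fdb56b4b-lpvhq",
            },
            Hostname:    "srv-pghwy.gb1.brightbox.com",
            IntendedUse: "Workload",
        }
        fmt.Printf("auto-assign %d IPv4 / %d IPv6 for %s (handle %s)\n",
            req.Num4, req.Num6, req.Attrs["pod"], *req.HandleID)
    }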
Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.210 [INFO][3789] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-pghwy.gb1.brightbox.com' Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.254 [INFO][3789] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.270 [INFO][3789] ipam/ipam.go 394: Looking up existing affinities for host host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.288 [INFO][3789] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.294 [INFO][3789] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.302 [INFO][3789] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.302 [INFO][3789] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.307 [INFO][3789] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.313 [INFO][3789] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.324 [INFO][3789] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.131/26] block=192.168.30.128/26 handle="k8s-pod-network.ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.324 [INFO][3789] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.131/26] handle="k8s-pod-network.ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.325 [INFO][3789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
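The block-assignment records above confirm the node's affinity for the IPAM block 192.168.30.128/26 and claim 192.168.30.131 from it, just as 192.168.30.130 was claimed from the same block for the CoreDNS pod earlier in this log. Below is a minimal Go sketch, using only the standard library, that checks the arithmetic implied by those records: a /26 block spans 64 addresses, and both assigned IPs fall inside 192.168.30.128/26. It is an illustration of the addressing shown in the log, not part of Calico's code.

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The IPAM block the node holds an affinity for, as logged above.
        block := netip.MustParsePrefix("192.168.30.128/26")

        // Addresses assigned from that block in this log:
        // .130 (coredns-7c65d6cfc9-b5n98) and .131 (whisker-5fdb56b4b-lpvhq).
        assigned := []netip.Addr{
            netip.MustParseAddr("192.168.30.130"),
            netip.MustParseAddr("192.168.30.131"),
        }

        // A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses per block.
        size := 1 << (32 - block.Bits())
        fmt.Printf("block %s holds %d addresses\n", block, size)

        for _, ip := range assigned {
            fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
        }
    }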
Aug 13 03:22:09.416651 env[1300]: 2025-08-13 03:22:09.325 [INFO][3789] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.131/26] IPv6=[] ContainerID="ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" HandleID="k8s-pod-network.ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0" Aug 13 03:22:09.418303 env[1300]: 2025-08-13 03:22:09.334 [INFO][3647] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" Namespace="calico-system" Pod="whisker-5fdb56b4b-lpvhq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0", GenerateName:"whisker-5fdb56b4b-", Namespace:"calico-system", SelfLink:"", UID:"4492c2e3-78f4-4a10-8c6d-10bfd0197b31", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 22, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fdb56b4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"", Pod:"whisker-5fdb56b4b-lpvhq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28b434fb563", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:09.418303 env[1300]: 2025-08-13 03:22:09.334 [INFO][3647] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.131/32] ContainerID="ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" Namespace="calico-system" Pod="whisker-5fdb56b4b-lpvhq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0" Aug 13 03:22:09.418303 env[1300]: 2025-08-13 03:22:09.334 [INFO][3647] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28b434fb563 ContainerID="ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" Namespace="calico-system" Pod="whisker-5fdb56b4b-lpvhq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0" Aug 13 03:22:09.418303 env[1300]: 2025-08-13 03:22:09.357 [INFO][3647] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" Namespace="calico-system" Pod="whisker-5fdb56b4b-lpvhq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0" Aug 13 03:22:09.418303 env[1300]: 2025-08-13 03:22:09.358 [INFO][3647] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" Namespace="calico-system" Pod="whisker-5fdb56b4b-lpvhq" 
WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0", GenerateName:"whisker-5fdb56b4b-", Namespace:"calico-system", SelfLink:"", UID:"4492c2e3-78f4-4a10-8c6d-10bfd0197b31", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 22, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fdb56b4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c", Pod:"whisker-5fdb56b4b-lpvhq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28b434fb563", MAC:"4e:b9:23:9c:d6:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:09.418303 env[1300]: 2025-08-13 03:22:09.393 [INFO][3647] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c" Namespace="calico-system" Pod="whisker-5fdb56b4b-lpvhq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-whisker--5fdb56b4b--lpvhq-eth0" Aug 13 03:22:09.491684 env[1300]: time="2025-08-13T03:22:09.491606186Z" level=info msg="StartContainer for \"670fdefdef8267b0f6a83dce5b4d8d4b882949f40634c73c220e12c7b7293ea8\" returns successfully" Aug 13 03:22:09.498727 env[1300]: time="2025-08-13T03:22:09.498608005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:22:09.498865 env[1300]: time="2025-08-13T03:22:09.498750020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:22:09.498953 env[1300]: time="2025-08-13T03:22:09.498891551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:22:09.499701 env[1300]: time="2025-08-13T03:22:09.499602300Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c pid=3844 runtime=io.containerd.runc.v2 Aug 13 03:22:09.717110 env[1300]: time="2025-08-13T03:22:09.717032335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fdb56b4b-lpvhq,Uid:4492c2e3-78f4-4a10-8c6d-10bfd0197b31,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c\"" Aug 13 03:22:09.814100 kubelet[2186]: I0813 03:22:09.814030 2186 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a39a6cb3-bac3-4669-b0a7-db896af6f22e" path="/var/lib/kubelet/pods/a39a6cb3-bac3-4669-b0a7-db896af6f22e/volumes" Aug 13 03:22:09.829000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.829000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.829000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.829000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.829000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.829000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.829000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.829000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.829000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.829000 audit: BPF prog-id=10 op=LOAD Aug 13 03:22:09.829000 audit[3904]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd36aac930 a2=98 a3=1fffffffffffffff items=0 ppid=3626 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:09.829000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 
03:22:09.830000 audit: BPF prog-id=10 op=UNLOAD Aug 13 03:22:09.830000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.830000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.830000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.830000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.830000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.830000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.830000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.830000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.830000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.830000 audit: BPF prog-id=11 op=LOAD Aug 13 03:22:09.830000 audit[3904]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd36aac810 a2=94 a3=3 items=0 ppid=3626 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:09.830000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 03:22:09.831000 audit: BPF prog-id=11 op=UNLOAD Aug 13 03:22:09.831000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.831000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.831000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.831000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 03:22:09.831000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.831000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.831000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.831000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.831000 audit[3904]: AVC avc: denied { bpf } for pid=3904 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.831000 audit: BPF prog-id=12 op=LOAD Aug 13 03:22:09.831000 audit[3904]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd36aac850 a2=94 a3=7ffd36aaca30 items=0 ppid=3626 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:09.831000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 03:22:09.831000 audit: BPF prog-id=12 op=UNLOAD Aug 13 03:22:09.831000 audit[3904]: AVC avc: denied { perfmon } for pid=3904 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.831000 audit[3904]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffd36aac920 a2=50 a3=a000000085 items=0 ppid=3626 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:09.831000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 03:22:09.835000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.835000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.835000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.835000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 03:22:09.835000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.835000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.835000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.835000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.835000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.835000 audit: BPF prog-id=13 op=LOAD Aug 13 03:22:09.835000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd3d0e7de0 a2=98 a3=3 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:09.835000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:09.836000 audit: BPF prog-id=13 op=UNLOAD Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 13 03:22:09.836000 audit: BPF prog-id=14 op=LOAD Aug 13 03:22:09.836000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3d0e7bd0 a2=94 a3=54428f items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:09.836000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:09.836000 audit: BPF prog-id=14 op=UNLOAD Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:09.836000 audit: BPF prog-id=15 op=LOAD Aug 13 03:22:09.836000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3d0e7c00 a2=94 a3=2 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:09.836000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:09.837000 audit: BPF prog-id=15 op=UNLOAD Aug 13 03:22:10.034000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.034000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.034000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.034000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.034000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.034000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.034000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.034000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.034000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.034000 audit: BPF prog-id=16 op=LOAD Aug 13 03:22:10.034000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3d0e7ac0 a2=94 a3=1 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.034000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.034000 audit: BPF prog-id=16 op=UNLOAD Aug 13 03:22:10.034000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.034000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd3d0e7b90 a2=50 a3=7ffd3d0e7c70 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.034000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.049000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.049000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3d0e7ad0 a2=28 a3=0 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.049000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.049000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3d0e7b00 
a2=28 a3=0 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.049000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.049000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3d0e7a10 a2=28 a3=0 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.049000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.049000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3d0e7b20 a2=28 a3=0 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.049000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.049000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3d0e7b00 a2=28 a3=0 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.049000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.049000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3d0e7af0 a2=28 a3=0 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3d0e7b20 a2=28 a3=0 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.050000 audit[3909]: 
AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3d0e7b00 a2=28 a3=0 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3d0e7b20 a2=28 a3=0 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3d0e7af0 a2=28 a3=0 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3d0e7b60 a2=28 a3=0 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd3d0e7910 a2=50 a3=1 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit: BPF prog-id=17 op=LOAD Aug 13 03:22:10.050000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd3d0e7910 a2=94 a3=5 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.050000 audit: BPF prog-id=17 op=UNLOAD Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd3d0e79c0 a2=50 a3=1 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd3d0e7ae0 a2=4 a3=38 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.050000 audit[3909]: AVC avc: denied { confidentiality } for pid=3909 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 03:22:10.050000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd3d0e7b30 a2=94 a3=6 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.050000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { confidentiality } for pid=3909 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 03:22:10.051000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd3d0e72e0 a2=94 a3=88 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.051000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { perfmon } for pid=3909 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { bpf } for pid=3909 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.051000 audit[3909]: AVC avc: denied { confidentiality } for pid=3909 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 03:22:10.051000 audit[3909]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd3d0e72e0 a2=94 a3=88 items=0 ppid=3626 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.051000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 03:22:10.065000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.065000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.065000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.065000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.065000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.065000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.065000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.065000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.065000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.065000 audit: BPF prog-id=18 op=LOAD Aug 13 03:22:10.065000 audit[3921]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffcbb5d700 a2=98 a3=1999999999999999 items=0 ppid=3626 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.065000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 03:22:10.066000 audit: BPF prog-id=18 op=UNLOAD Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit: BPF prog-id=19 op=LOAD Aug 13 03:22:10.066000 audit[3921]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffcbb5d5e0 a2=94 a3=ffff items=0 ppid=3626 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.066000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 03:22:10.066000 audit: BPF prog-id=19 op=UNLOAD Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: 
denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { perfmon } for pid=3921 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit[3921]: AVC avc: denied { bpf } for pid=3921 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.066000 audit: BPF prog-id=20 op=LOAD Aug 13 03:22:10.066000 audit[3921]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffcbb5d620 a2=94 a3=7fffcbb5d800 items=0 ppid=3626 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.066000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 03:22:10.066000 audit: BPF prog-id=20 op=UNLOAD Aug 13 03:22:10.112806 systemd-networkd[1075]: cali172b8ee5cad: Gained IPv6LL Aug 13 03:22:10.233563 kubelet[2186]: I0813 03:22:10.233455 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-b5n98" podStartSLOduration=49.233415628 podStartE2EDuration="49.233415628s" podCreationTimestamp="2025-08-13 03:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 03:22:10.212079024 +0000 UTC m=+54.785518865" watchObservedRunningTime="2025-08-13 03:22:10.233415628 +0000 UTC m=+54.806855463" Aug 13 03:22:10.240954 systemd-networkd[1075]: cali06a03a31dc1: Gained IPv6LL Aug 13 03:22:10.411794 systemd-networkd[1075]: vxlan.calico: Link UP Aug 13 03:22:10.411812 systemd-networkd[1075]: vxlan.calico: Gained carrier Aug 13 03:22:10.458000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.458000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.458000 audit[3946]: AVC avc: denied 
{ perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.458000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.458000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.458000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.458000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.458000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.458000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.458000 audit: BPF prog-id=21 op=LOAD Aug 13 03:22:10.458000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe47f1ae10 a2=98 a3=20 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.458000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.458000 audit: BPF prog-id=21 op=UNLOAD Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit: BPF prog-id=22 op=LOAD Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe47f1ac20 a2=94 a3=54428f items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit: BPF prog-id=22 op=UNLOAD Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit: BPF prog-id=23 op=LOAD Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe47f1ac50 a2=94 a3=2 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 
03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit: BPF prog-id=23 op=UNLOAD Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe47f1ab20 a2=28 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe47f1ab50 a2=28 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe47f1aa60 a2=28 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe47f1ab70 a2=28 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit[3946]: 
AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe47f1ab50 a2=28 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe47f1ab40 a2=28 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe47f1ab70 a2=28 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe47f1ab50 a2=28 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe47f1ab70 a2=28 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe47f1ab40 a2=28 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.459000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.459000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe47f1abb0 a2=28 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.459000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.460000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.460000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.460000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.460000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.460000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.460000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.460000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
03:22:10.460000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.460000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.460000 audit: BPF prog-id=24 op=LOAD Aug 13 03:22:10.460000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe47f1aa20 a2=94 a3=0 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.460000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.460000 audit: BPF prog-id=24 op=UNLOAD Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe47f1aa10 a2=50 a3=2800 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.461000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe47f1aa10 a2=50 a3=2800 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.461000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit: BPF prog-id=25 op=LOAD Aug 13 03:22:10.461000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe47f1a230 a2=94 a3=2 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.461000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.461000 audit: BPF prog-id=25 op=UNLOAD Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: 
AVC avc: denied { perfmon } for pid=3946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit[3946]: AVC avc: denied { bpf } for pid=3946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.461000 audit: BPF prog-id=26 op=LOAD Aug 13 03:22:10.461000 audit[3946]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe47f1a330 a2=94 a3=30 items=0 ppid=3626 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.461000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit: BPF prog-id=27 op=LOAD Aug 13 03:22:10.470000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc55d80ed0 a2=98 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 03:22:10.470000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.470000 audit: BPF prog-id=27 op=UNLOAD Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.470000 audit: BPF prog-id=28 op=LOAD Aug 13 03:22:10.470000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc55d80cc0 a2=94 a3=54428f items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.470000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.471000 audit: BPF prog-id=28 op=UNLOAD Aug 13 03:22:10.471000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.471000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.471000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
03:22:10.471000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.471000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.471000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.471000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.471000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.471000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.471000 audit: BPF prog-id=29 op=LOAD Aug 13 03:22:10.471000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc55d80cf0 a2=94 a3=2 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.471000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.471000 audit: BPF prog-id=29 op=UNLOAD Aug 13 03:22:10.489000 audit[3949]: NETFILTER_CFG table=filter:101 family=2 entries=17 op=nft_register_rule pid=3949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:10.489000 audit[3949]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffe421b160 a2=0 a3=7fffe421b14c items=0 ppid=2291 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:10.504000 audit[3949]: NETFILTER_CFG table=nat:102 family=2 entries=35 op=nft_register_chain pid=3949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:10.504000 audit[3949]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffe421b160 a2=0 a3=7fffe421b14c items=0 ppid=2291 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.504000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:10.564000 audit[3966]: NETFILTER_CFG table=filter:103 family=2 entries=14 op=nft_register_rule pid=3966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:10.564000 audit[3966]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffddf537a70 a2=0 a3=7ffddf537a5c items=0 ppid=2291 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:10.574000 audit[3966]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=3966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:10.574000 audit[3966]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffddf537a70 a2=0 a3=7ffddf537a5c items=0 ppid=2291 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.574000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:10.689524 systemd-networkd[1075]: cali28b434fb563: Gained IPv6LL Aug 13 03:22:10.704000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.704000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.704000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.704000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.704000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.704000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.704000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.704000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.704000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.704000 audit: BPF prog-id=30 op=LOAD Aug 13 03:22:10.704000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc55d80bb0 a2=94 a3=1 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 
13 03:22:10.704000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.705000 audit: BPF prog-id=30 op=UNLOAD Aug 13 03:22:10.705000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.705000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc55d80c80 a2=50 a3=7ffc55d80d60 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.705000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.719000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.719000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55d80bc0 a2=28 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.719000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.719000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.719000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc55d80bf0 a2=28 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.719000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.719000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.719000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc55d80b00 a2=28 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.719000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55d80c10 a2=28 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55d80bf0 a2=28 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55d80be0 a2=28 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55d80c10 a2=28 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc55d80bf0 a2=28 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc55d80c10 a2=28 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc55d80be0 a2=28 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55d80c50 a2=28 a3=0 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc55d80a00 a2=50 a3=1 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } 
for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit: BPF prog-id=31 op=LOAD Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc55d80a00 a2=94 a3=5 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit: BPF prog-id=31 op=UNLOAD Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc55d80ab0 a2=50 a3=1 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.720000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.720000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc55d80bd0 a2=4 a3=38 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { confidentiality } for pid=3951 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 03:22:10.721000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc55d80c20 a2=94 a3=6 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.721000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { 
bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { confidentiality } for pid=3951 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 03:22:10.721000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc55d803d0 a2=94 a3=88 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.721000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { perfmon } for pid=3951 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.721000 audit[3951]: AVC avc: denied { confidentiality } for pid=3951 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 03:22:10.721000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc55d803d0 a2=94 a3=88 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.721000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.722000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.722000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc55d81e00 a2=10 a3=f8f00800 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.722000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.722000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.722000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc55d81ca0 a2=10 a3=3 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.722000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.722000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.722000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc55d81c40 a2=10 a3=3 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.722000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.722000 audit[3951]: AVC avc: denied { bpf } for pid=3951 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 03:22:10.722000 audit[3951]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc55d81c40 a2=10 a3=7 items=0 ppid=3626 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.722000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 03:22:10.733000 audit: BPF prog-id=26 op=UNLOAD Aug 13 03:22:10.733000 audit[1300]: SYSCALL arch=c000003e syscall=0 success=yes exit=4284 a0=3a a1=c001007fc2 a2=8b32 a3=e8 items=0 ppid=1 pid=1300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="containerd" exe="/run/torcx/unpack/docker/bin/containerd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.733000 audit: PROCTITLE proctitle=2F72756E2F746F7263782F62696E2F636F6E7461696E657264002D2D636F6E666967002F72756E2F746F7263782F756E7061636B2F646F636B65722F7573722F73686172652F636F6E7461696E6572642F636F6E6669672D6367726F757066732E746F6D6C Aug 13 03:22:10.811422 env[1300]: time="2025-08-13T03:22:10.811311123Z" level=info msg="StopPodSandbox for \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\"" Aug 13 03:22:10.944000 audit[4008]: NETFILTER_CFG table=mangle:105 family=2 entries=16 op=nft_register_chain pid=4008 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 03:22:10.944000 audit[4008]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffea7427c60 a2=0 a3=7ffea7427c4c items=0 ppid=3626 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.944000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 03:22:10.958000 audit[4006]: NETFILTER_CFG table=raw:106 family=2 entries=21 op=nft_register_chain pid=4006 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 03:22:10.958000 audit[4006]: SYSCALL arch=c000003e syscall=46 
success=yes exit=8452 a0=3 a1=7fff59396150 a2=0 a3=7fff5939613c items=0 ppid=3626 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.958000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 03:22:10.970000 audit[4007]: NETFILTER_CFG table=nat:107 family=2 entries=15 op=nft_register_chain pid=4007 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 03:22:10.970000 audit[4007]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc5e22c1d0 a2=0 a3=7ffc5e22c1bc items=0 ppid=3626 pid=4007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.970000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 03:22:10.977000 audit[4012]: NETFILTER_CFG table=filter:108 family=2 entries=170 op=nft_register_chain pid=4012 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 03:22:10.977000 audit[4012]: SYSCALL arch=c000003e syscall=46 success=yes exit=98076 a0=3 a1=7ffee8a0d530 a2=0 a3=7ffee8a0d51c items=0 ppid=3626 pid=4012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:10.977000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.014 [INFO][3994] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.016 [INFO][3994] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" iface="eth0" netns="/var/run/netns/cni-3333629a-7e70-19bf-bc91-6854410aa465" Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.016 [INFO][3994] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" iface="eth0" netns="/var/run/netns/cni-3333629a-7e70-19bf-bc91-6854410aa465" Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.019 [INFO][3994] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" iface="eth0" netns="/var/run/netns/cni-3333629a-7e70-19bf-bc91-6854410aa465" Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.019 [INFO][3994] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.019 [INFO][3994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.081 [INFO][4024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" HandleID="k8s-pod-network.1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.081 [INFO][4024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.081 [INFO][4024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.090 [WARNING][4024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" HandleID="k8s-pod-network.1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.090 [INFO][4024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" HandleID="k8s-pod-network.1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.092 [INFO][4024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:11.096642 env[1300]: 2025-08-13 03:22:11.094 [INFO][3994] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:11.101538 systemd[1]: run-netns-cni\x2d3333629a\x2d7e70\x2d19bf\x2dbc91\x2d6854410aa465.mount: Deactivated successfully. 
Aug 13 03:22:11.103476 env[1300]: time="2025-08-13T03:22:11.103417696Z" level=info msg="TearDown network for sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\" successfully" Aug 13 03:22:11.103625 env[1300]: time="2025-08-13T03:22:11.103582394Z" level=info msg="StopPodSandbox for \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\" returns successfully" Aug 13 03:22:11.104728 env[1300]: time="2025-08-13T03:22:11.104691091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-txzpd,Uid:4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c,Namespace:calico-system,Attempt:1,}" Aug 13 03:22:11.361273 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califdc420158f3: link becomes ready Aug 13 03:22:11.356751 systemd-networkd[1075]: califdc420158f3: Link UP Aug 13 03:22:11.360159 systemd-networkd[1075]: califdc420158f3: Gained carrier Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.228 [INFO][4032] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0 goldmane-58fd7646b9- calico-system 4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c 954 0 2025-08-13 03:21:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-pghwy.gb1.brightbox.com goldmane-58fd7646b9-txzpd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califdc420158f3 [] [] }} ContainerID="9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-txzpd" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.229 [INFO][4032] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-txzpd" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.293 [INFO][4043] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" HandleID="k8s-pod-network.9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.293 [INFO][4043] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" HandleID="k8s-pod-network.9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5610), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-pghwy.gb1.brightbox.com", "pod":"goldmane-58fd7646b9-txzpd", "timestamp":"2025-08-13 03:22:11.2936436 +0000 UTC"}, Hostname:"srv-pghwy.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.294 [INFO][4043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.295 [INFO][4043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.295 [INFO][4043] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-pghwy.gb1.brightbox.com' Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.307 [INFO][4043] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.315 [INFO][4043] ipam/ipam.go 394: Looking up existing affinities for host host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.321 [INFO][4043] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.324 [INFO][4043] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.327 [INFO][4043] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.327 [INFO][4043] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.329 [INFO][4043] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8 Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.336 [INFO][4043] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.349 [INFO][4043] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.132/26] block=192.168.30.128/26 handle="k8s-pod-network.9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.350 [INFO][4043] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.132/26] handle="k8s-pod-network.9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.350 [INFO][4043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 03:22:11.389284 env[1300]: 2025-08-13 03:22:11.350 [INFO][4043] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.132/26] IPv6=[] ContainerID="9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" HandleID="k8s-pod-network.9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:11.390991 env[1300]: 2025-08-13 03:22:11.352 [INFO][4032] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-txzpd" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-58fd7646b9-txzpd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califdc420158f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:11.390991 env[1300]: 2025-08-13 03:22:11.353 [INFO][4032] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.132/32] ContainerID="9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-txzpd" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:11.390991 env[1300]: 2025-08-13 03:22:11.353 [INFO][4032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califdc420158f3 ContainerID="9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-txzpd" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:11.390991 env[1300]: 2025-08-13 03:22:11.361 [INFO][4032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-txzpd" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:11.390991 env[1300]: 2025-08-13 03:22:11.362 [INFO][4032] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" Namespace="calico-system" 
Pod="goldmane-58fd7646b9-txzpd" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8", Pod:"goldmane-58fd7646b9-txzpd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califdc420158f3", MAC:"6e:0e:85:92:28:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:11.390991 env[1300]: 2025-08-13 03:22:11.385 [INFO][4032] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-txzpd" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:11.466000 audit[4060]: NETFILTER_CFG table=filter:109 family=2 entries=52 op=nft_register_chain pid=4060 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 03:22:11.466000 audit[4060]: SYSCALL arch=c000003e syscall=46 success=yes exit=27556 a0=3 a1=7ffd0a43dd90 a2=0 a3=7ffd0a43dd7c items=0 ppid=3626 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:11.466000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 03:22:11.495658 env[1300]: time="2025-08-13T03:22:11.495539332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:22:11.495658 env[1300]: time="2025-08-13T03:22:11.495620138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:22:11.496198 env[1300]: time="2025-08-13T03:22:11.496085010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:22:11.497227 env[1300]: time="2025-08-13T03:22:11.497104469Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8 pid=4069 runtime=io.containerd.runc.v2 Aug 13 03:22:11.659727 env[1300]: time="2025-08-13T03:22:11.659552218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-txzpd,Uid:4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c,Namespace:calico-system,Attempt:1,} returns sandbox id \"9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8\"" Aug 13 03:22:12.096572 systemd-networkd[1075]: vxlan.calico: Gained IPv6LL Aug 13 03:22:12.737743 systemd-networkd[1075]: califdc420158f3: Gained IPv6LL Aug 13 03:22:13.629934 env[1300]: time="2025-08-13T03:22:13.629838620Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:13.632468 env[1300]: time="2025-08-13T03:22:13.632428078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:13.634392 env[1300]: time="2025-08-13T03:22:13.634353524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:13.636937 env[1300]: time="2025-08-13T03:22:13.636901833Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:13.639935 env[1300]: time="2025-08-13T03:22:13.639884544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 03:22:13.645073 env[1300]: time="2025-08-13T03:22:13.645024799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 03:22:13.656290 env[1300]: time="2025-08-13T03:22:13.656233804Z" level=info msg="CreateContainer within sandbox \"be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 03:22:13.680252 env[1300]: time="2025-08-13T03:22:13.680175176Z" level=info msg="CreateContainer within sandbox \"be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3f7c87181959d5f3416252b0f20f16a882ca317ea39d407ad72b6e98b0786317\"" Aug 13 03:22:13.681494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300265604.mount: Deactivated successfully. Aug 13 03:22:13.687802 env[1300]: time="2025-08-13T03:22:13.687742066Z" level=info msg="StartContainer for \"3f7c87181959d5f3416252b0f20f16a882ca317ea39d407ad72b6e98b0786317\"" Aug 13 03:22:13.738413 systemd[1]: run-containerd-runc-k8s.io-3f7c87181959d5f3416252b0f20f16a882ca317ea39d407ad72b6e98b0786317-runc.WKQUrw.mount: Deactivated successfully. 
Aug 13 03:22:13.844724 env[1300]: time="2025-08-13T03:22:13.844648090Z" level=info msg="StartContainer for \"3f7c87181959d5f3416252b0f20f16a882ca317ea39d407ad72b6e98b0786317\" returns successfully" Aug 13 03:22:14.251073 kubelet[2186]: I0813 03:22:14.250925 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75b8b879dd-pv4k9" podStartSLOduration=36.728020577 podStartE2EDuration="41.250841624s" podCreationTimestamp="2025-08-13 03:21:33 +0000 UTC" firstStartedPulling="2025-08-13 03:22:09.121090225 +0000 UTC m=+53.694530052" lastFinishedPulling="2025-08-13 03:22:13.643911272 +0000 UTC m=+58.217351099" observedRunningTime="2025-08-13 03:22:14.247545476 +0000 UTC m=+58.820985312" watchObservedRunningTime="2025-08-13 03:22:14.250841624 +0000 UTC m=+58.824281457" Aug 13 03:22:14.278000 audit[4142]: NETFILTER_CFG table=filter:110 family=2 entries=14 op=nft_register_rule pid=4142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:14.281926 kernel: kauditd_printk_skb: 564 callbacks suppressed Aug 13 03:22:14.282393 kernel: audit: type=1325 audit(1755055334.278:425): table=filter:110 family=2 entries=14 op=nft_register_rule pid=4142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:14.278000 audit[4142]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5c8874b0 a2=0 a3=7fff5c88749c items=0 ppid=2291 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:14.293735 kernel: audit: type=1300 audit(1755055334.278:425): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5c8874b0 a2=0 a3=7fff5c88749c items=0 ppid=2291 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:14.293836 kernel: audit: type=1327 audit(1755055334.278:425): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:14.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:14.299000 audit[4142]: NETFILTER_CFG table=nat:111 family=2 entries=20 op=nft_register_rule pid=4142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:14.299000 audit[4142]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff5c8874b0 a2=0 a3=7fff5c88749c items=0 ppid=2291 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:14.313073 kernel: audit: type=1325 audit(1755055334.299:426): table=nat:111 family=2 entries=20 op=nft_register_rule pid=4142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:14.313155 kernel: audit: type=1300 audit(1755055334.299:426): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff5c8874b0 a2=0 a3=7fff5c88749c items=0 ppid=2291 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:14.299000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:14.317167 kernel: audit: type=1327 audit(1755055334.299:426): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:15.235451 kubelet[2186]: I0813 03:22:15.235395 2186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 03:22:15.462048 env[1300]: time="2025-08-13T03:22:15.461979630Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:15.465714 env[1300]: time="2025-08-13T03:22:15.465678680Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:15.468670 env[1300]: time="2025-08-13T03:22:15.468635962Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:15.472249 env[1300]: time="2025-08-13T03:22:15.472211721Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:15.473539 env[1300]: time="2025-08-13T03:22:15.473487494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 03:22:15.477467 env[1300]: time="2025-08-13T03:22:15.477431469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 03:22:15.481114 env[1300]: time="2025-08-13T03:22:15.481071298Z" level=info msg="CreateContainer within sandbox \"ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 03:22:15.506450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2784165702.mount: Deactivated successfully. Aug 13 03:22:15.510398 env[1300]: time="2025-08-13T03:22:15.510301470Z" level=info msg="CreateContainer within sandbox \"ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"96449c786cfcf3e7bb4113c825355abe62e16b6ea400d69602a6911c7969af2d\"" Aug 13 03:22:15.512711 env[1300]: time="2025-08-13T03:22:15.512674455Z" level=info msg="StartContainer for \"96449c786cfcf3e7bb4113c825355abe62e16b6ea400d69602a6911c7969af2d\"" Aug 13 03:22:15.678383 env[1300]: time="2025-08-13T03:22:15.678266526Z" level=info msg="StartContainer for \"96449c786cfcf3e7bb4113c825355abe62e16b6ea400d69602a6911c7969af2d\" returns successfully" Aug 13 03:22:15.743946 env[1300]: time="2025-08-13T03:22:15.743874112Z" level=info msg="StopPodSandbox for \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\"" Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:15.904 [WARNING][4186] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5e80d36b-0ef9-49a4-9b05-2a70df6f56d4", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537", Pod:"coredns-7c65d6cfc9-b5n98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06a03a31dc1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:15.907 [INFO][4186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:15.907 [INFO][4186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" iface="eth0" netns="" Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:15.907 [INFO][4186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:15.907 [INFO][4186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:15.978 [INFO][4195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" HandleID="k8s-pod-network.7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:15.978 [INFO][4195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:15.978 [INFO][4195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:15.989 [WARNING][4195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" HandleID="k8s-pod-network.7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:15.989 [INFO][4195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" HandleID="k8s-pod-network.7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:15.995 [INFO][4195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:16.007029 env[1300]: 2025-08-13 03:22:16.003 [INFO][4186] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:16.008485 env[1300]: time="2025-08-13T03:22:16.007163496Z" level=info msg="TearDown network for sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\" successfully" Aug 13 03:22:16.008485 env[1300]: time="2025-08-13T03:22:16.007211657Z" level=info msg="StopPodSandbox for \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\" returns successfully" Aug 13 03:22:16.010369 env[1300]: time="2025-08-13T03:22:16.010288705Z" level=info msg="RemovePodSandbox for \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\"" Aug 13 03:22:16.010611 env[1300]: time="2025-08-13T03:22:16.010514023Z" level=info msg="Forcibly stopping sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\"" Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.094 [WARNING][4216] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5e80d36b-0ef9-49a4-9b05-2a70df6f56d4", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"36901489d8358c3d06a4286248cf38f463c668a2cdf9fd6e3e394c97b0dd2537", Pod:"coredns-7c65d6cfc9-b5n98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06a03a31dc1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.095 [INFO][4216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.095 [INFO][4216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" iface="eth0" netns="" Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.095 [INFO][4216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.095 [INFO][4216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.127 [INFO][4223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" HandleID="k8s-pod-network.7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.128 [INFO][4223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.128 [INFO][4223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.138 [WARNING][4223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" HandleID="k8s-pod-network.7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.138 [INFO][4223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" HandleID="k8s-pod-network.7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--b5n98-eth0" Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.141 [INFO][4223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:16.146575 env[1300]: 2025-08-13 03:22:16.143 [INFO][4216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119" Aug 13 03:22:16.148834 env[1300]: time="2025-08-13T03:22:16.146622139Z" level=info msg="TearDown network for sandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\" successfully" Aug 13 03:22:16.151602 env[1300]: time="2025-08-13T03:22:16.151559465Z" level=info msg="RemovePodSandbox \"7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119\" returns successfully" Aug 13 03:22:16.153358 env[1300]: time="2025-08-13T03:22:16.152501959Z" level=info msg="StopPodSandbox for \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\"" Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.205 [WARNING][4237] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-whisker--76c6df68db--ldnkj-eth0" Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.205 [INFO][4237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.206 [INFO][4237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" iface="eth0" netns="" Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.206 [INFO][4237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.206 [INFO][4237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.262 [INFO][4244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" HandleID="k8s-pod-network.1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--76c6df68db--ldnkj-eth0" Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.262 [INFO][4244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.263 [INFO][4244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
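The Calico teardown entries above repeat the same quoted fields (ContainerID, HandleID, Workload) at every step, which makes the sequence hard to scan. A throwaway helper along the lines of the sketch below — nothing more than a hand-written regular expression, not anything shipped with Calico or containerd — is usually enough to pull those fields out of a line copied from this log.

    import re

    # Hand-written pattern for the Calico cni-plugin/ipam lines quoted in this log;
    # it only collects the key="value" fields those components print.
    FIELD = re.compile(r'(ContainerID|HandleID|Workload|iface|netns)="([^"]*)"')

    def summarize(line: str) -> dict:
        """Return the key="value" pairs found in one Calico log line."""
        return {key: value for key, value in FIELD.findall(line)}

    sample = ('2025-08-13 03:22:16.095 [INFO][4216] cni-plugin/k8s.go 647: Releasing IP address(es) '
              'ContainerID="7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119"')
    print(summarize(sample))
    # {'ContainerID': '7b6169d1fa512b05f4cd2f49a23f12b60138f8f28fbfc804e10f496530d9d119'}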
Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.271 [WARNING][4244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" HandleID="k8s-pod-network.1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--76c6df68db--ldnkj-eth0" Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.271 [INFO][4244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" HandleID="k8s-pod-network.1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--76c6df68db--ldnkj-eth0" Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.274 [INFO][4244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:16.279687 env[1300]: 2025-08-13 03:22:16.276 [INFO][4237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:16.280753 env[1300]: time="2025-08-13T03:22:16.280698195Z" level=info msg="TearDown network for sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\" successfully" Aug 13 03:22:16.280924 env[1300]: time="2025-08-13T03:22:16.280868994Z" level=info msg="StopPodSandbox for \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\" returns successfully" Aug 13 03:22:16.281901 env[1300]: time="2025-08-13T03:22:16.281812591Z" level=info msg="RemovePodSandbox for \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\"" Aug 13 03:22:16.281997 env[1300]: time="2025-08-13T03:22:16.281920005Z" level=info msg="Forcibly stopping sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\"" Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.358 [WARNING][4260] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-whisker--76c6df68db--ldnkj-eth0" Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.359 [INFO][4260] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.359 [INFO][4260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" iface="eth0" netns="" Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.359 [INFO][4260] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.359 [INFO][4260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.424 [INFO][4268] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" HandleID="k8s-pod-network.1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--76c6df68db--ldnkj-eth0" Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.424 [INFO][4268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.424 [INFO][4268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.434 [WARNING][4268] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" HandleID="k8s-pod-network.1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--76c6df68db--ldnkj-eth0" Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.434 [INFO][4268] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" HandleID="k8s-pod-network.1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Workload="srv--pghwy.gb1.brightbox.com-k8s-whisker--76c6df68db--ldnkj-eth0" Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.437 [INFO][4268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:16.440950 env[1300]: 2025-08-13 03:22:16.439 [INFO][4260] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24" Aug 13 03:22:16.442388 env[1300]: time="2025-08-13T03:22:16.441414134Z" level=info msg="TearDown network for sandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\" successfully" Aug 13 03:22:16.445985 env[1300]: time="2025-08-13T03:22:16.445943091Z" level=info msg="RemovePodSandbox \"1715e53c5ab375df33e45c312cf760b7aadbfd34f2d0b3648bac3a097f17aa24\" returns successfully" Aug 13 03:22:16.446965 env[1300]: time="2025-08-13T03:22:16.446924805Z" level=info msg="StopPodSandbox for \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\"" Aug 13 03:22:16.497504 systemd[1]: run-containerd-runc-k8s.io-96449c786cfcf3e7bb4113c825355abe62e16b6ea400d69602a6911c7969af2d-runc.63THNn.mount: Deactivated successfully. Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.512 [WARNING][4282] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0", GenerateName:"calico-apiserver-75b8b879dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5686f5d5-01df-4f76-8a08-487dbaa97ed4", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8b879dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9", Pod:"calico-apiserver-75b8b879dd-pv4k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali172b8ee5cad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.513 [INFO][4282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.513 [INFO][4282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" iface="eth0" netns="" Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.513 [INFO][4282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.513 [INFO][4282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.554 [INFO][4289] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" HandleID="k8s-pod-network.942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.554 [INFO][4289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.554 [INFO][4289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.563 [WARNING][4289] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" HandleID="k8s-pod-network.942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.563 [INFO][4289] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" HandleID="k8s-pod-network.942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.566 [INFO][4289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:16.571859 env[1300]: 2025-08-13 03:22:16.568 [INFO][4282] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:16.571859 env[1300]: time="2025-08-13T03:22:16.570467029Z" level=info msg="TearDown network for sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\" successfully" Aug 13 03:22:16.571859 env[1300]: time="2025-08-13T03:22:16.570516567Z" level=info msg="StopPodSandbox for \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\" returns successfully" Aug 13 03:22:16.573797 env[1300]: time="2025-08-13T03:22:16.573760353Z" level=info msg="RemovePodSandbox for \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\"" Aug 13 03:22:16.574355 env[1300]: time="2025-08-13T03:22:16.573945545Z" level=info msg="Forcibly stopping sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\"" Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.636 [WARNING][4305] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0", GenerateName:"calico-apiserver-75b8b879dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5686f5d5-01df-4f76-8a08-487dbaa97ed4", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8b879dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"be9fd994c4d355f8a3ea4ebf8b84c329d99353c69c3534ed1c1fc9f8acab3dc9", Pod:"calico-apiserver-75b8b879dd-pv4k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali172b8ee5cad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.637 [INFO][4305] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.637 [INFO][4305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" iface="eth0" netns="" Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.637 [INFO][4305] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.637 [INFO][4305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.672 [INFO][4312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" HandleID="k8s-pod-network.942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.673 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.673 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.683 [WARNING][4312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" HandleID="k8s-pod-network.942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.683 [INFO][4312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" HandleID="k8s-pod-network.942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--pv4k9-eth0" Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.685 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:16.689837 env[1300]: 2025-08-13 03:22:16.687 [INFO][4305] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a" Aug 13 03:22:16.691606 env[1300]: time="2025-08-13T03:22:16.690274733Z" level=info msg="TearDown network for sandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\" successfully" Aug 13 03:22:16.694712 env[1300]: time="2025-08-13T03:22:16.694676253Z" level=info msg="RemovePodSandbox \"942cf25e10dfc56a6b5b4d64cc745b3cd8eb6ce99fe98011c05ce3fa64c8227a\" returns successfully" Aug 13 03:22:16.695622 env[1300]: time="2025-08-13T03:22:16.695573988Z" level=info msg="StopPodSandbox for \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\"" Aug 13 03:22:16.812672 env[1300]: 2025-08-13 03:22:16.752 [WARNING][4327] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8", Pod:"goldmane-58fd7646b9-txzpd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califdc420158f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:16.812672 env[1300]: 2025-08-13 03:22:16.756 [INFO][4327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:16.812672 env[1300]: 2025-08-13 
03:22:16.756 [INFO][4327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" iface="eth0" netns="" Aug 13 03:22:16.812672 env[1300]: 2025-08-13 03:22:16.757 [INFO][4327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:16.812672 env[1300]: 2025-08-13 03:22:16.757 [INFO][4327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:16.812672 env[1300]: 2025-08-13 03:22:16.789 [INFO][4334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" HandleID="k8s-pod-network.1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:16.812672 env[1300]: 2025-08-13 03:22:16.789 [INFO][4334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:16.812672 env[1300]: 2025-08-13 03:22:16.789 [INFO][4334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:16.812672 env[1300]: 2025-08-13 03:22:16.802 [WARNING][4334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" HandleID="k8s-pod-network.1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:16.812672 env[1300]: 2025-08-13 03:22:16.803 [INFO][4334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" HandleID="k8s-pod-network.1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:16.812672 env[1300]: 2025-08-13 03:22:16.806 [INFO][4334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:16.812672 env[1300]: 2025-08-13 03:22:16.809 [INFO][4327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:16.814062 env[1300]: time="2025-08-13T03:22:16.812693611Z" level=info msg="TearDown network for sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\" successfully" Aug 13 03:22:16.814062 env[1300]: time="2025-08-13T03:22:16.812735313Z" level=info msg="StopPodSandbox for \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\" returns successfully" Aug 13 03:22:16.814062 env[1300]: time="2025-08-13T03:22:16.813604545Z" level=info msg="RemovePodSandbox for \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\"" Aug 13 03:22:16.814062 env[1300]: time="2025-08-13T03:22:16.813705306Z" level=info msg="Forcibly stopping sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\"" Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.887 [WARNING][4348] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4b2d60ed-1c14-4294-bc1a-5b84b78c6f2c", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8", Pod:"goldmane-58fd7646b9-txzpd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califdc420158f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.891 [INFO][4348] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.891 [INFO][4348] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" iface="eth0" netns="" Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.891 [INFO][4348] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.892 [INFO][4348] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.921 [INFO][4355] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" HandleID="k8s-pod-network.1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.921 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.921 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.931 [WARNING][4355] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" HandleID="k8s-pod-network.1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.931 [INFO][4355] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" HandleID="k8s-pod-network.1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Workload="srv--pghwy.gb1.brightbox.com-k8s-goldmane--58fd7646b9--txzpd-eth0" Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.933 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:16.939378 env[1300]: 2025-08-13 03:22:16.935 [INFO][4348] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854" Aug 13 03:22:16.939378 env[1300]: time="2025-08-13T03:22:16.937814648Z" level=info msg="TearDown network for sandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\" successfully" Aug 13 03:22:16.942785 env[1300]: time="2025-08-13T03:22:16.942724613Z" level=info msg="RemovePodSandbox \"1576c297615b7143455b45055a35b1a2c472b1feed3dde9eac5948301c27d854\" returns successfully" Aug 13 03:22:17.700859 systemd[1]: run-containerd-runc-k8s.io-31299f475ea61fd2a56bb6474a9a6d532cdd4891da855937f89f6110a32420b0-runc.4fEyMv.mount: Deactivated successfully. Aug 13 03:22:17.994458 env[1300]: time="2025-08-13T03:22:17.993236297Z" level=info msg="StopPodSandbox for \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\"" Aug 13 03:22:17.995508 env[1300]: time="2025-08-13T03:22:17.995453964Z" level=info msg="StopPodSandbox for \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\"" Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.213 [INFO][4408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.213 [INFO][4408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" iface="eth0" netns="/var/run/netns/cni-971f0196-a0c4-943a-b84b-68e8d4be0eba" Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.215 [INFO][4408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" iface="eth0" netns="/var/run/netns/cni-971f0196-a0c4-943a-b84b-68e8d4be0eba" Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.216 [INFO][4408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" iface="eth0" netns="/var/run/netns/cni-971f0196-a0c4-943a-b84b-68e8d4be0eba" Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.216 [INFO][4408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.216 [INFO][4408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.363 [INFO][4422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" HandleID="k8s-pod-network.13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.364 [INFO][4422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.364 [INFO][4422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.381 [WARNING][4422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" HandleID="k8s-pod-network.13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.381 [INFO][4422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" HandleID="k8s-pod-network.13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.384 [INFO][4422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:18.403182 env[1300]: 2025-08-13 03:22:18.394 [INFO][4408] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:22:18.408506 systemd[1]: run-netns-cni\x2d971f0196\x2da0c4\x2d943a\x2db84b\x2d68e8d4be0eba.mount: Deactivated successfully. Aug 13 03:22:18.415630 env[1300]: time="2025-08-13T03:22:18.415526382Z" level=info msg="TearDown network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\" successfully" Aug 13 03:22:18.415855 env[1300]: time="2025-08-13T03:22:18.415819236Z" level=info msg="StopPodSandbox for \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\" returns successfully" Aug 13 03:22:18.418747 env[1300]: time="2025-08-13T03:22:18.418691556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8b879dd-926tb,Uid:1b0738f7-1587-4a22-887f-5f8bd64e6743,Namespace:calico-apiserver,Attempt:1,}" Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.270 [INFO][4409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.271 [INFO][4409] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" iface="eth0" netns="/var/run/netns/cni-41b4d7c6-3ca6-47df-535c-6a5ed1670743" Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.271 [INFO][4409] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" iface="eth0" netns="/var/run/netns/cni-41b4d7c6-3ca6-47df-535c-6a5ed1670743" Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.271 [INFO][4409] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" iface="eth0" netns="/var/run/netns/cni-41b4d7c6-3ca6-47df-535c-6a5ed1670743" Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.271 [INFO][4409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.271 [INFO][4409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.362 [INFO][4427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" HandleID="k8s-pod-network.9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.365 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.384 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.409 [WARNING][4427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" HandleID="k8s-pod-network.9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.409 [INFO][4427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" HandleID="k8s-pod-network.9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.413 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:18.422542 env[1300]: 2025-08-13 03:22:18.417 [INFO][4409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:22:18.428532 env[1300]: time="2025-08-13T03:22:18.427194870Z" level=info msg="TearDown network for sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\" successfully" Aug 13 03:22:18.428532 env[1300]: time="2025-08-13T03:22:18.427226134Z" level=info msg="StopPodSandbox for \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\" returns successfully" Aug 13 03:22:18.427037 systemd[1]: run-netns-cni\x2d41b4d7c6\x2d3ca6\x2d47df\x2d535c\x2d6a5ed1670743.mount: Deactivated successfully. 
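The systemd messages above ("run-netns-cni\x2d...mount: Deactivated successfully") are the mount units backing the CNI network namespaces named in the Calico entries; on systemd-based images /var/run is a symlink to /run, so /var/run/netns/cni-971f0196-... corresponds to the unit run-netns-cni\x2d971f0196-....mount. The sketch below is a minimal re-implementation of systemd's path escaping, sufficient only for plain ASCII paths like the ones in this log, just to make that mapping explicit.

    def systemd_escape_path(path: str) -> str:
        # Minimal stand-in for `systemd-escape --path`, adequate for ASCII netns
        # paths such as /run/netns/cni-971f0196-a0c4-943a-b84b-68e8d4be0eba.
        trimmed = path.strip("/")
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")                      # path separators become dashes
            elif ch.isalnum() or ch == "_" or (ch == "." and i != 0):
                out.append(ch)                       # kept as-is
            else:
                out.append("\\x%02x" % ord(ch))      # everything else is hex-escaped
        return "".join(out)

    print(systemd_escape_path("/run/netns/cni-971f0196-a0c4-943a-b84b-68e8d4be0eba") + ".mount")
    # run-netns-cni\x2d971f0196\x2da0c4\x2d943a\x2db84b\x2d68e8d4be0eba.mount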
Aug 13 03:22:18.429059 env[1300]: time="2025-08-13T03:22:18.428845185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f8b9fb6f-b4rrq,Uid:68df4933-b5c3-4312-8741-f03d5628c7c8,Namespace:calico-system,Attempt:1,}" Aug 13 03:22:18.781350 systemd-networkd[1075]: cali337789dcb10: Link UP Aug 13 03:22:18.790082 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 03:22:18.798471 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali337789dcb10: link becomes ready Aug 13 03:22:18.793767 systemd-networkd[1075]: cali337789dcb10: Gained carrier Aug 13 03:22:18.856996 env[1300]: time="2025-08-13T03:22:18.856923502Z" level=info msg="StopPodSandbox for \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\"" Aug 13 03:22:18.889601 env[1300]: time="2025-08-13T03:22:18.889523739Z" level=info msg="StopPodSandbox for \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\"" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.523 [INFO][4446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0 calico-apiserver-75b8b879dd- calico-apiserver 1b0738f7-1587-4a22-887f-5f8bd64e6743 988 0 2025-08-13 03:21:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75b8b879dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-pghwy.gb1.brightbox.com calico-apiserver-75b8b879dd-926tb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali337789dcb10 [] [] }} ContainerID="7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-926tb" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.524 [INFO][4446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-926tb" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.618 [INFO][4459] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" HandleID="k8s-pod-network.7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.622 [INFO][4459] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" HandleID="k8s-pod-network.7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-pghwy.gb1.brightbox.com", "pod":"calico-apiserver-75b8b879dd-926tb", "timestamp":"2025-08-13 03:22:18.618553099 +0000 UTC"}, Hostname:"srv-pghwy.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.622 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.622 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.622 [INFO][4459] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-pghwy.gb1.brightbox.com' Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.637 [INFO][4459] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.706 [INFO][4459] ipam/ipam.go 394: Looking up existing affinities for host host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.716 [INFO][4459] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.720 [INFO][4459] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.724 [INFO][4459] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.724 [INFO][4459] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.727 [INFO][4459] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5 Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.736 [INFO][4459] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.752 [INFO][4459] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.133/26] block=192.168.30.128/26 handle="k8s-pod-network.7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.752 [INFO][4459] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.133/26] handle="k8s-pod-network.7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.753 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
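The ipam.go entries above follow Calico's usual block-affinity flow: look up affinities for the host, confirm the 192.168.30.128/26 block, then claim 192.168.30.133 from it for calico-apiserver-75b8b879dd-926tb. The snippet below is only a quick consistency check with Python's ipaddress module that the claimed address (and the earlier coredns address 192.168.30.130) falls inside that block; it is not a model of Calico's IPAM datastore.

    import ipaddress

    block = ipaddress.ip_network("192.168.30.128/26")    # host-affine block from the log
    claimed = ipaddress.ip_address("192.168.30.133")     # assigned to calico-apiserver-75b8b879dd-926tb

    print(block.num_addresses)                            # 64 addresses per /26 block
    print(claimed in block)                               # True: the claim matches the affinity
    print(ipaddress.ip_address("192.168.30.130") in block)  # True: coredns's address is in the same block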
Aug 13 03:22:18.937310 env[1300]: 2025-08-13 03:22:18.753 [INFO][4459] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.133/26] IPv6=[] ContainerID="7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" HandleID="k8s-pod-network.7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:22:18.939154 env[1300]: 2025-08-13 03:22:18.757 [INFO][4446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-926tb" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0", GenerateName:"calico-apiserver-75b8b879dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b0738f7-1587-4a22-887f-5f8bd64e6743", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8b879dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-75b8b879dd-926tb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali337789dcb10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:18.939154 env[1300]: 2025-08-13 03:22:18.757 [INFO][4446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.133/32] ContainerID="7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-926tb" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:22:18.939154 env[1300]: 2025-08-13 03:22:18.757 [INFO][4446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali337789dcb10 ContainerID="7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-926tb" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:22:18.939154 env[1300]: 2025-08-13 03:22:18.793 [INFO][4446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-926tb" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:22:18.939154 env[1300]: 2025-08-13 03:22:18.795 [INFO][4446] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-926tb" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0", GenerateName:"calico-apiserver-75b8b879dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b0738f7-1587-4a22-887f-5f8bd64e6743", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8b879dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5", Pod:"calico-apiserver-75b8b879dd-926tb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali337789dcb10", MAC:"e2:f6:ed:17:2d:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:18.939154 env[1300]: 2025-08-13 03:22:18.835 [INFO][4446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5" Namespace="calico-apiserver" Pod="calico-apiserver-75b8b879dd-926tb" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:22:18.978875 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2cbc29b8a68: link becomes ready Aug 13 03:22:18.978463 systemd-networkd[1075]: cali2cbc29b8a68: Link UP Aug 13 03:22:18.978809 systemd-networkd[1075]: cali2cbc29b8a68: Gained carrier Aug 13 03:22:19.007704 kernel: audit: type=1325 audit(1755055338.989:427): table=filter:112 family=2 entries=49 op=nft_register_chain pid=4497 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 03:22:19.007989 kernel: audit: type=1300 audit(1755055338.989:427): arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7ffc9638a250 a2=0 a3=7ffc9638a23c items=0 ppid=3626 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:18.989000 audit[4497]: NETFILTER_CFG table=filter:112 family=2 entries=49 op=nft_register_chain pid=4497 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 03:22:18.989000 audit[4497]: SYSCALL arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7ffc9638a250 a2=0 a3=7ffc9638a23c items=0 ppid=3626 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:19.024620 kernel: audit: type=1327 audit(1755055338.989:427): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 03:22:18.989000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.571 [INFO][4435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0 calico-kube-controllers-78f8b9fb6f- calico-system 68df4933-b5c3-4312-8741-f03d5628c7c8 989 0 2025-08-13 03:21:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78f8b9fb6f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-pghwy.gb1.brightbox.com calico-kube-controllers-78f8b9fb6f-b4rrq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2cbc29b8a68 [] [] }} ContainerID="19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" Namespace="calico-system" Pod="calico-kube-controllers-78f8b9fb6f-b4rrq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.571 [INFO][4435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" Namespace="calico-system" Pod="calico-kube-controllers-78f8b9fb6f-b4rrq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.663 [INFO][4464] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" HandleID="k8s-pod-network.19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.664 [INFO][4464] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" HandleID="k8s-pod-network.19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-pghwy.gb1.brightbox.com", "pod":"calico-kube-controllers-78f8b9fb6f-b4rrq", "timestamp":"2025-08-13 03:22:18.66392418 +0000 UTC"}, Hostname:"srv-pghwy.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.664 [INFO][4464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
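The audit PROCTITLE record above encodes the restored command line as NUL-separated hex. Decoding it takes only the standard library, as in the snippet below, which is applied to the hex string copied verbatim from this log and recovers the iptables-nft-restore invocation that produced the NETFILTER_CFG events.

    # Hex copied verbatim from the PROCTITLE audit record above; arguments are NUL-separated.
    hexstr = ("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
              "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030")
    print(bytes.fromhex(hexstr).decode().split("\x00"))
    # ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10', '--wait-interval', '50000']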
Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.753 [INFO][4464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.755 [INFO][4464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-pghwy.gb1.brightbox.com' Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.806 [INFO][4464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.850 [INFO][4464] ipam/ipam.go 394: Looking up existing affinities for host host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.914 [INFO][4464] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.921 [INFO][4464] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.928 [INFO][4464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.928 [INFO][4464] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.931 [INFO][4464] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.936 [INFO][4464] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.959 [INFO][4464] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.134/26] block=192.168.30.128/26 handle="k8s-pod-network.19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.960 [INFO][4464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.134/26] handle="k8s-pod-network.19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.960 [INFO][4464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 03:22:19.089694 env[1300]: 2025-08-13 03:22:18.960 [INFO][4464] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.134/26] IPv6=[] ContainerID="19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" HandleID="k8s-pod-network.19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:22:19.092764 env[1300]: 2025-08-13 03:22:18.967 [INFO][4435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" Namespace="calico-system" Pod="calico-kube-controllers-78f8b9fb6f-b4rrq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0", GenerateName:"calico-kube-controllers-78f8b9fb6f-", Namespace:"calico-system", SelfLink:"", UID:"68df4933-b5c3-4312-8741-f03d5628c7c8", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f8b9fb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-78f8b9fb6f-b4rrq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cbc29b8a68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:19.092764 env[1300]: 2025-08-13 03:22:18.968 [INFO][4435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.134/32] ContainerID="19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" Namespace="calico-system" Pod="calico-kube-controllers-78f8b9fb6f-b4rrq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:22:19.092764 env[1300]: 2025-08-13 03:22:18.968 [INFO][4435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cbc29b8a68 ContainerID="19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" Namespace="calico-system" Pod="calico-kube-controllers-78f8b9fb6f-b4rrq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:22:19.092764 env[1300]: 2025-08-13 03:22:18.979 [INFO][4435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" Namespace="calico-system" Pod="calico-kube-controllers-78f8b9fb6f-b4rrq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 
03:22:19.092764 env[1300]: 2025-08-13 03:22:18.982 [INFO][4435] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" Namespace="calico-system" Pod="calico-kube-controllers-78f8b9fb6f-b4rrq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0", GenerateName:"calico-kube-controllers-78f8b9fb6f-", Namespace:"calico-system", SelfLink:"", UID:"68df4933-b5c3-4312-8741-f03d5628c7c8", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f8b9fb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f", Pod:"calico-kube-controllers-78f8b9fb6f-b4rrq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cbc29b8a68", MAC:"46:09:45:af:7c:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:19.092764 env[1300]: 2025-08-13 03:22:19.026 [INFO][4435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f" Namespace="calico-system" Pod="calico-kube-controllers-78f8b9fb6f-b4rrq" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:22:19.105227 env[1300]: time="2025-08-13T03:22:19.084548730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:22:19.105227 env[1300]: time="2025-08-13T03:22:19.084679817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:22:19.105227 env[1300]: time="2025-08-13T03:22:19.084709066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:22:19.105227 env[1300]: time="2025-08-13T03:22:19.085145872Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5 pid=4508 runtime=io.containerd.runc.v2 Aug 13 03:22:19.142701 kernel: audit: type=1325 audit(1755055339.116:428): table=filter:113 family=2 entries=58 op=nft_register_chain pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 03:22:19.116000 audit[4525]: NETFILTER_CFG table=filter:113 family=2 entries=58 op=nft_register_chain pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 03:22:19.116000 audit[4525]: SYSCALL arch=c000003e syscall=46 success=yes exit=27180 a0=3 a1=7ffe78afccc0 a2=0 a3=7ffe78afccac items=0 ppid=3626 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:19.116000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 03:22:19.307780 systemd[1]: run-containerd-runc-k8s.io-7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5-runc.H1HzmU.mount: Deactivated successfully. Aug 13 03:22:19.348943 env[1300]: time="2025-08-13T03:22:19.348743097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:22:19.349214 env[1300]: time="2025-08-13T03:22:19.349168563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:22:19.349427 env[1300]: time="2025-08-13T03:22:19.349371673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:22:19.368742 env[1300]: time="2025-08-13T03:22:19.368624254Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f pid=4557 runtime=io.containerd.runc.v2 Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.509 [INFO][4526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.509 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" iface="eth0" netns="/var/run/netns/cni-df72e7f4-50d7-d328-57b2-1971c615dcc1" Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.510 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" iface="eth0" netns="/var/run/netns/cni-df72e7f4-50d7-d328-57b2-1971c615dcc1" Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.511 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" iface="eth0" netns="/var/run/netns/cni-df72e7f4-50d7-d328-57b2-1971c615dcc1" Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.511 [INFO][4526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.511 [INFO][4526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.603 [INFO][4597] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" HandleID="k8s-pod-network.d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.603 [INFO][4597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.603 [INFO][4597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.621 [WARNING][4597] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" HandleID="k8s-pod-network.d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.621 [INFO][4597] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" HandleID="k8s-pod-network.d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.624 [INFO][4597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:19.648099 env[1300]: 2025-08-13 03:22:19.629 [INFO][4526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:22:19.649717 env[1300]: time="2025-08-13T03:22:19.649587496Z" level=info msg="TearDown network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\" successfully" Aug 13 03:22:19.649869 env[1300]: time="2025-08-13T03:22:19.649833632Z" level=info msg="StopPodSandbox for \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\" returns successfully" Aug 13 03:22:19.682195 env[1300]: time="2025-08-13T03:22:19.682132110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqpff,Uid:a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00,Namespace:calico-system,Attempt:1,}" Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.437 [INFO][4531] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.437 [INFO][4531] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" iface="eth0" netns="/var/run/netns/cni-388011c1-eb42-50d2-0bc7-7069c683a4a8" Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.438 [INFO][4531] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" iface="eth0" netns="/var/run/netns/cni-388011c1-eb42-50d2-0bc7-7069c683a4a8" Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.438 [INFO][4531] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" iface="eth0" netns="/var/run/netns/cni-388011c1-eb42-50d2-0bc7-7069c683a4a8" Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.439 [INFO][4531] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.439 [INFO][4531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.623 [INFO][4588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" HandleID="k8s-pod-network.2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.627 [INFO][4588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.627 [INFO][4588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.658 [WARNING][4588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" HandleID="k8s-pod-network.2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.658 [INFO][4588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" HandleID="k8s-pod-network.2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.669 [INFO][4588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:22:19.688209 env[1300]: 2025-08-13 03:22:19.686 [INFO][4531] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:22:19.694116 systemd[1]: run-netns-cni\x2ddf72e7f4\x2d50d7\x2dd328\x2d57b2\x2d1971c615dcc1.mount: Deactivated successfully. Aug 13 03:22:19.694382 systemd[1]: run-netns-cni\x2d388011c1\x2deb42\x2d50d2\x2d0bc7\x2d7069c683a4a8.mount: Deactivated successfully. 
Aug 13 03:22:19.699272 env[1300]: time="2025-08-13T03:22:19.699217690Z" level=info msg="TearDown network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\" successfully" Aug 13 03:22:19.699468 env[1300]: time="2025-08-13T03:22:19.699432205Z" level=info msg="StopPodSandbox for \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\" returns successfully" Aug 13 03:22:19.708283 env[1300]: time="2025-08-13T03:22:19.708239305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t4mmj,Uid:3f742517-09bc-4214-9d75-c0b7d73d3fd4,Namespace:kube-system,Attempt:1,}" Aug 13 03:22:19.984352 env[1300]: time="2025-08-13T03:22:19.984238968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f8b9fb6f-b4rrq,Uid:68df4933-b5c3-4312-8741-f03d5628c7c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f\"" Aug 13 03:22:20.041833 env[1300]: time="2025-08-13T03:22:20.041262142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8b879dd-926tb,Uid:1b0738f7-1587-4a22-887f-5f8bd64e6743,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5\"" Aug 13 03:22:20.086376 env[1300]: time="2025-08-13T03:22:20.086198570Z" level=info msg="CreateContainer within sandbox \"7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 03:22:20.135354 env[1300]: time="2025-08-13T03:22:20.127591181Z" level=info msg="CreateContainer within sandbox \"7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b6c520b2656ef7ef2d7776dc4be349fac14c8e17912ced42b5e411b1012ba6f4\"" Aug 13 03:22:20.136800 env[1300]: time="2025-08-13T03:22:20.136734807Z" level=info msg="StartContainer for \"b6c520b2656ef7ef2d7776dc4be349fac14c8e17912ced42b5e411b1012ba6f4\"" Aug 13 03:22:20.233158 systemd-networkd[1075]: cali2cbc29b8a68: Gained IPv6LL Aug 13 03:22:20.546531 systemd-networkd[1075]: cali337789dcb10: Gained IPv6LL Aug 13 03:22:20.591365 env[1300]: time="2025-08-13T03:22:20.590670779Z" level=info msg="StartContainer for \"b6c520b2656ef7ef2d7776dc4be349fac14c8e17912ced42b5e411b1012ba6f4\" returns successfully" Aug 13 03:22:20.667381 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 03:22:20.667576 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0dd6445cccc: link becomes ready Aug 13 03:22:20.662842 systemd-networkd[1075]: cali0dd6445cccc: Link UP Aug 13 03:22:20.663132 systemd-networkd[1075]: cali0dd6445cccc: Gained carrier Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.152 [INFO][4623] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0 coredns-7c65d6cfc9- kube-system 3f742517-09bc-4214-9d75-c0b7d73d3fd4 1000 0 2025-08-13 03:21:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-pghwy.gb1.brightbox.com coredns-7c65d6cfc9-t4mmj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0dd6445cccc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4mmj" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.153 [INFO][4623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4mmj" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.453 [INFO][4657] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" HandleID="k8s-pod-network.dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.454 [INFO][4657] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" HandleID="k8s-pod-network.dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cdcb0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-pghwy.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-t4mmj", "timestamp":"2025-08-13 03:22:20.453963889 +0000 UTC"}, Hostname:"srv-pghwy.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.454 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.455 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.455 [INFO][4657] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-pghwy.gb1.brightbox.com' Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.542 [INFO][4657] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.564 [INFO][4657] ipam/ipam.go 394: Looking up existing affinities for host host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.571 [INFO][4657] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.581 [INFO][4657] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.589 [INFO][4657] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.590 [INFO][4657] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.596 [INFO][4657] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.608 [INFO][4657] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.625 [INFO][4657] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.135/26] block=192.168.30.128/26 handle="k8s-pod-network.dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.626 [INFO][4657] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.135/26] handle="k8s-pod-network.dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.626 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 03:22:20.729368 env[1300]: 2025-08-13 03:22:20.626 [INFO][4657] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.135/26] IPv6=[] ContainerID="dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" HandleID="k8s-pod-network.dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:22:20.740903 env[1300]: 2025-08-13 03:22:20.639 [INFO][4623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4mmj" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3f742517-09bc-4214-9d75-c0b7d73d3fd4", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-t4mmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dd6445cccc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:20.740903 env[1300]: 2025-08-13 03:22:20.639 [INFO][4623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.135/32] ContainerID="dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4mmj" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:22:20.740903 env[1300]: 2025-08-13 03:22:20.639 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dd6445cccc ContainerID="dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4mmj" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:22:20.740903 env[1300]: 2025-08-13 03:22:20.664 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4mmj" 
WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:22:20.740903 env[1300]: 2025-08-13 03:22:20.681 [INFO][4623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4mmj" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3f742517-09bc-4214-9d75-c0b7d73d3fd4", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d", Pod:"coredns-7c65d6cfc9-t4mmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dd6445cccc", MAC:"22:96:85:d6:f0:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:20.740903 env[1300]: 2025-08-13 03:22:20.724 [INFO][4623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4mmj" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:22:20.771374 env[1300]: time="2025-08-13T03:22:20.763717150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:22:20.771374 env[1300]: time="2025-08-13T03:22:20.763884230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:22:20.771374 env[1300]: time="2025-08-13T03:22:20.763966464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:22:20.771374 env[1300]: time="2025-08-13T03:22:20.764499915Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d pid=4702 runtime=io.containerd.runc.v2 Aug 13 03:22:20.844222 systemd-networkd[1075]: cali49051ba0fbe: Link UP Aug 13 03:22:20.873898 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali49051ba0fbe: link becomes ready Aug 13 03:22:20.873573 systemd-networkd[1075]: cali49051ba0fbe: Gained carrier Aug 13 03:22:20.917534 systemd[1]: run-containerd-runc-k8s.io-dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d-runc.CXsPCJ.mount: Deactivated successfully. Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.167 [INFO][4612] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0 csi-node-driver- calico-system a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00 1001 0 2025-08-13 03:21:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-pghwy.gb1.brightbox.com csi-node-driver-rqpff eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali49051ba0fbe [] [] }} ContainerID="6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" Namespace="calico-system" Pod="csi-node-driver-rqpff" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.168 [INFO][4612] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" Namespace="calico-system" Pod="csi-node-driver-rqpff" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.523 [INFO][4658] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" HandleID="k8s-pod-network.6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.524 [INFO][4658] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" HandleID="k8s-pod-network.6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-pghwy.gb1.brightbox.com", "pod":"csi-node-driver-rqpff", "timestamp":"2025-08-13 03:22:20.523705836 +0000 UTC"}, Hostname:"srv-pghwy.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.524 [INFO][4658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.627 [INFO][4658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.627 [INFO][4658] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-pghwy.gb1.brightbox.com' Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.645 [INFO][4658] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.678 [INFO][4658] ipam/ipam.go 394: Looking up existing affinities for host host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.724 [INFO][4658] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.728 [INFO][4658] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.735 [INFO][4658] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.735 [INFO][4658] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.740 [INFO][4658] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.762 [INFO][4658] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.830 [INFO][4658] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.136/26] block=192.168.30.128/26 handle="k8s-pod-network.6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.830 [INFO][4658] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.136/26] handle="k8s-pod-network.6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" host="srv-pghwy.gb1.brightbox.com" Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.830 [INFO][4658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
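The WorkloadEndpointPort values in the coredns endpoint dumps a few entries back are printed in hex (`Port:0x35`, `Port:0x23c1`); converting them shows the conventional CoreDNS DNS and Prometheus metrics ports:

```python
# Minimal sketch: the hex ports from the coredns WorkloadEndpoint dumps above.
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23C1}
print({name: int(value) for name, value in ports.items()})
# -> {'dns': 53, 'dns-tcp': 53, 'metrics': 9153}
```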
Aug 13 03:22:20.965148 env[1300]: 2025-08-13 03:22:20.830 [INFO][4658] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.136/26] IPv6=[] ContainerID="6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" HandleID="k8s-pod-network.6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:22:20.975630 env[1300]: 2025-08-13 03:22:20.832 [INFO][4612] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" Namespace="calico-system" Pod="csi-node-driver-rqpff" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-rqpff", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49051ba0fbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:20.975630 env[1300]: 2025-08-13 03:22:20.833 [INFO][4612] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.136/32] ContainerID="6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" Namespace="calico-system" Pod="csi-node-driver-rqpff" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:22:20.975630 env[1300]: 2025-08-13 03:22:20.833 [INFO][4612] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49051ba0fbe ContainerID="6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" Namespace="calico-system" Pod="csi-node-driver-rqpff" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:22:20.975630 env[1300]: 2025-08-13 03:22:20.882 [INFO][4612] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" Namespace="calico-system" Pod="csi-node-driver-rqpff" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:22:20.975630 env[1300]: 2025-08-13 03:22:20.882 [INFO][4612] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" Namespace="calico-system" Pod="csi-node-driver-rqpff" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a", Pod:"csi-node-driver-rqpff", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49051ba0fbe", MAC:"1e:5a:c9:13:a9:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:22:20.975630 env[1300]: 2025-08-13 03:22:20.956 [INFO][4612] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a" Namespace="calico-system" Pod="csi-node-driver-rqpff" WorkloadEndpoint="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:22:21.106056 env[1300]: time="2025-08-13T03:22:21.105876825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 03:22:21.110451 env[1300]: time="2025-08-13T03:22:21.110391513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 03:22:21.110653 env[1300]: time="2025-08-13T03:22:21.110597152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 03:22:21.111095 env[1300]: time="2025-08-13T03:22:21.111037066Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a pid=4743 runtime=io.containerd.runc.v2 Aug 13 03:22:21.214374 env[1300]: time="2025-08-13T03:22:21.214257511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t4mmj,Uid:3f742517-09bc-4214-9d75-c0b7d73d3fd4,Namespace:kube-system,Attempt:1,} returns sandbox id \"dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d\"" Aug 13 03:22:21.226413 env[1300]: time="2025-08-13T03:22:21.226354697Z" level=info msg="CreateContainer within sandbox \"dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 03:22:21.281351 env[1300]: time="2025-08-13T03:22:21.278592010Z" level=info msg="CreateContainer within sandbox \"dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e275418d77a3cb8fdeca2eb9965f1a579e435a74cf1aff26c5b9f9bf9ef353e\"" Aug 13 03:22:21.282924 env[1300]: time="2025-08-13T03:22:21.282877362Z" level=info msg="StartContainer for \"1e275418d77a3cb8fdeca2eb9965f1a579e435a74cf1aff26c5b9f9bf9ef353e\"" Aug 13 03:22:21.419637 kernel: kauditd_printk_skb: 2 callbacks suppressed Aug 13 03:22:21.425097 kernel: audit: type=1325 audit(1755055341.404:429): table=filter:114 family=2 entries=82 op=nft_register_chain pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 03:22:21.425178 kernel: audit: type=1300 audit(1755055341.404:429): arch=c000003e syscall=46 success=yes exit=42232 a0=3 a1=7ffece9a8af0 a2=0 a3=7ffece9a8adc items=0 ppid=3626 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:21.425272 kernel: audit: type=1327 audit(1755055341.404:429): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 03:22:21.404000 audit[4800]: NETFILTER_CFG table=filter:114 family=2 entries=82 op=nft_register_chain pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 03:22:21.404000 audit[4800]: SYSCALL arch=c000003e syscall=46 success=yes exit=42232 a0=3 a1=7ffece9a8af0 a2=0 a3=7ffece9a8adc items=0 ppid=3626 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:21.404000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 03:22:21.515691 env[1300]: time="2025-08-13T03:22:21.515628961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqpff,Uid:a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00,Namespace:calico-system,Attempt:1,} returns sandbox id \"6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a\"" Aug 13 03:22:21.553353 env[1300]: time="2025-08-13T03:22:21.545553300Z" level=info msg="StartContainer for \"1e275418d77a3cb8fdeca2eb9965f1a579e435a74cf1aff26c5b9f9bf9ef353e\" returns 
successfully" Aug 13 03:22:21.975788 kernel: audit: type=1325 audit(1755055341.965:430): table=filter:115 family=2 entries=14 op=nft_register_rule pid=4832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:21.965000 audit[4832]: NETFILTER_CFG table=filter:115 family=2 entries=14 op=nft_register_rule pid=4832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:21.965000 audit[4832]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4e4aa6a0 a2=0 a3=7ffc4e4aa68c items=0 ppid=2291 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:22.013387 kernel: audit: type=1300 audit(1755055341.965:430): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4e4aa6a0 a2=0 a3=7ffc4e4aa68c items=0 ppid=2291 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:21.965000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:21.987000 audit[4832]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:22.053300 kernel: audit: type=1327 audit(1755055341.965:430): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:22.053410 kernel: audit: type=1325 audit(1755055341.987:431): table=nat:116 family=2 entries=20 op=nft_register_rule pid=4832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:21.987000 audit[4832]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc4e4aa6a0 a2=0 a3=7ffc4e4aa68c items=0 ppid=2291 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:22.071367 kernel: audit: type=1300 audit(1755055341.987:431): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc4e4aa6a0 a2=0 a3=7ffc4e4aa68c items=0 ppid=2291 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:21.987000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:22.088360 kernel: audit: type=1327 audit(1755055341.987:431): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:22.208813 systemd-networkd[1075]: cali0dd6445cccc: Gained IPv6LL Aug 13 03:22:22.209535 systemd-networkd[1075]: cali49051ba0fbe: Gained IPv6LL Aug 13 03:22:22.846674 kubelet[2186]: I0813 03:22:22.844192 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-t4mmj" podStartSLOduration=61.822222386 podStartE2EDuration="1m1.822222386s" podCreationTimestamp="2025-08-13 03:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 
03:22:22.817707147 +0000 UTC m=+67.391146988" watchObservedRunningTime="2025-08-13 03:22:22.822222386 +0000 UTC m=+67.395662221" Aug 13 03:22:22.857764 kubelet[2186]: I0813 03:22:22.847357 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75b8b879dd-926tb" podStartSLOduration=49.847319236 podStartE2EDuration="49.847319236s" podCreationTimestamp="2025-08-13 03:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 03:22:21.775165039 +0000 UTC m=+66.348604870" watchObservedRunningTime="2025-08-13 03:22:22.847319236 +0000 UTC m=+67.420759072" Aug 13 03:22:23.160000 audit[4841]: NETFILTER_CFG table=filter:117 family=2 entries=14 op=nft_register_rule pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:23.170386 kernel: audit: type=1325 audit(1755055343.160:432): table=filter:117 family=2 entries=14 op=nft_register_rule pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:23.160000 audit[4841]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca5eff800 a2=0 a3=7ffca5eff7ec items=0 ppid=2291 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:23.160000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:23.175000 audit[4841]: NETFILTER_CFG table=nat:118 family=2 entries=44 op=nft_register_rule pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:23.175000 audit[4841]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffca5eff800 a2=0 a3=7ffca5eff7ec items=0 ppid=2291 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:23.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:23.302136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3710064442.mount: Deactivated successfully. 
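The kubelet pod_startup_latency_tracker lines above derive podStartSLOduration directly from the pod's creation timestamp and the observed-running timestamp; a quick check with the two values printed for coredns-7c65d6cfc9-t4mmj (microsecond precision, so the nanoseconds are truncated) reproduces the ~61.8 s figure:

```python
# Minimal sketch: reproduce podStartSLOduration for coredns-7c65d6cfc9-t4mmj from the log.
from datetime import datetime, timezone

created = datetime(2025, 8, 13, 3, 21, 21, tzinfo=timezone.utc)         # podCreationTimestamp
running = datetime(2025, 8, 13, 3, 22, 22, 822222, tzinfo=timezone.utc) # observedRunningTime
print((running - created).total_seconds())   # 61.822222, matching the reported 61.822222386s
```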
Aug 13 03:22:23.775000 audit[4843]: NETFILTER_CFG table=filter:119 family=2 entries=14 op=nft_register_rule pid=4843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:23.775000 audit[4843]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffddd8e56e0 a2=0 a3=7ffddd8e56cc items=0 ppid=2291 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:23.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:23.823000 audit[4843]: NETFILTER_CFG table=nat:120 family=2 entries=56 op=nft_register_chain pid=4843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:23.823000 audit[4843]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffddd8e56e0 a2=0 a3=7ffddd8e56cc items=0 ppid=2291 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:23.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:25.353364 env[1300]: time="2025-08-13T03:22:25.353264206Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:25.358342 env[1300]: time="2025-08-13T03:22:25.358297013Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:25.362072 env[1300]: time="2025-08-13T03:22:25.362033617Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:25.366119 env[1300]: time="2025-08-13T03:22:25.366074882Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:25.373374 env[1300]: time="2025-08-13T03:22:25.367268852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 03:22:25.443974 env[1300]: time="2025-08-13T03:22:25.443902635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 03:22:25.589120 env[1300]: time="2025-08-13T03:22:25.588850588Z" level=info msg="CreateContainer within sandbox \"9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 03:22:25.645317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547033275.mount: Deactivated successfully. 
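In the NETFILTER_CFG audit records just above (and throughout this log), family=2 is the AF_INET address family, i.e. these nft chains and rules are being registered for IPv4; a one-liner confirms the constant (values shown are the Linux ones):

```python
# Minimal sketch: family=2 in the NETFILTER_CFG audit records is AF_INET (IPv4).
import socket
print(int(socket.AF_INET), int(socket.AF_INET6))   # 2 10 on Linux
```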
Aug 13 03:22:25.658043 env[1300]: time="2025-08-13T03:22:25.652895249Z" level=info msg="CreateContainer within sandbox \"9c66403267b1f9001a595f46261aeebb0052c077d137ddb6b211f505ef07e1f8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"27ccb63bef659184eca1a64825fd054097700cb32b77685d8856411e3e5a63c7\"" Aug 13 03:22:25.666296 env[1300]: time="2025-08-13T03:22:25.664472503Z" level=info msg="StartContainer for \"27ccb63bef659184eca1a64825fd054097700cb32b77685d8856411e3e5a63c7\"" Aug 13 03:22:25.871373 systemd[1]: run-containerd-runc-k8s.io-27ccb63bef659184eca1a64825fd054097700cb32b77685d8856411e3e5a63c7-runc.vpAn5B.mount: Deactivated successfully. Aug 13 03:22:26.087708 env[1300]: time="2025-08-13T03:22:26.087638407Z" level=info msg="StartContainer for \"27ccb63bef659184eca1a64825fd054097700cb32b77685d8856411e3e5a63c7\" returns successfully" Aug 13 03:22:26.953380 systemd[1]: run-containerd-runc-k8s.io-27ccb63bef659184eca1a64825fd054097700cb32b77685d8856411e3e5a63c7-runc.J6IO8Z.mount: Deactivated successfully. Aug 13 03:22:27.082000 audit[4898]: NETFILTER_CFG table=filter:121 family=2 entries=14 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:27.089973 kernel: kauditd_printk_skb: 11 callbacks suppressed Aug 13 03:22:27.092035 kernel: audit: type=1325 audit(1755055347.082:436): table=filter:121 family=2 entries=14 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:27.082000 audit[4898]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9c5384b0 a2=0 a3=7ffd9c53849c items=0 ppid=2291 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:27.119443 kernel: audit: type=1300 audit(1755055347.082:436): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9c5384b0 a2=0 a3=7ffd9c53849c items=0 ppid=2291 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:27.082000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:27.130419 kernel: audit: type=1327 audit(1755055347.082:436): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:27.105000 audit[4898]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:27.149518 kernel: audit: type=1325 audit(1755055347.105:437): table=nat:122 family=2 entries=20 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:27.149662 kernel: audit: type=1300 audit(1755055347.105:437): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd9c5384b0 a2=0 a3=7ffd9c53849c items=0 ppid=2291 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:27.105000 audit[4898]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd9c5384b0 a2=0 a3=7ffd9c53849c items=0 ppid=2291 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:27.105000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:27.157721 kernel: audit: type=1327 audit(1755055347.105:437): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:28.221430 systemd[1]: run-containerd-runc-k8s.io-27ccb63bef659184eca1a64825fd054097700cb32b77685d8856411e3e5a63c7-runc.6V7005.mount: Deactivated successfully. Aug 13 03:22:28.950423 systemd[1]: run-containerd-runc-k8s.io-27ccb63bef659184eca1a64825fd054097700cb32b77685d8856411e3e5a63c7-runc.2ehPAz.mount: Deactivated successfully. Aug 13 03:22:29.991038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142760543.mount: Deactivated successfully. Aug 13 03:22:30.014482 env[1300]: time="2025-08-13T03:22:30.014414188Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:30.018665 env[1300]: time="2025-08-13T03:22:30.018630281Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:30.021747 env[1300]: time="2025-08-13T03:22:30.021713741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:30.024902 env[1300]: time="2025-08-13T03:22:30.024866878Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:30.026213 env[1300]: time="2025-08-13T03:22:30.026155233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 03:22:30.047466 env[1300]: time="2025-08-13T03:22:30.047379923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 03:22:30.060689 env[1300]: time="2025-08-13T03:22:30.060118327Z" level=info msg="CreateContainer within sandbox \"ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 03:22:30.076481 env[1300]: time="2025-08-13T03:22:30.076430770Z" level=info msg="CreateContainer within sandbox \"ea6ff386ca41fbd5ac517bd2c6228534150979053688632df784d73c9b2ca06c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4fb08362d35d727dc9cada8aa5c94efad230a2cb9df629ca2721edee2cd43f0b\"" Aug 13 03:22:30.077513 env[1300]: time="2025-08-13T03:22:30.077474875Z" level=info msg="StartContainer for \"4fb08362d35d727dc9cada8aa5c94efad230a2cb9df629ca2721edee2cd43f0b\"" Aug 13 03:22:30.279866 env[1300]: time="2025-08-13T03:22:30.279702660Z" level=info msg="StartContainer for \"4fb08362d35d727dc9cada8aa5c94efad230a2cb9df629ca2721edee2cd43f0b\" returns successfully" Aug 13 03:22:30.942462 kubelet[2186]: I0813 03:22:30.933365 2186 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-txzpd" podStartSLOduration=39.171478006 podStartE2EDuration="52.928046617s" podCreationTimestamp="2025-08-13 03:21:38 +0000 UTC" firstStartedPulling="2025-08-13 03:22:11.663968985 +0000 UTC m=+56.237408807" lastFinishedPulling="2025-08-13 03:22:25.420537578 +0000 UTC m=+69.993977418" observedRunningTime="2025-08-13 03:22:26.890699983 +0000 UTC m=+71.464139824" watchObservedRunningTime="2025-08-13 03:22:30.928046617 +0000 UTC m=+75.501486448" Aug 13 03:22:30.953917 kubelet[2186]: I0813 03:22:30.943427 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5fdb56b4b-lpvhq" podStartSLOduration=2.631540764 podStartE2EDuration="22.943405715s" podCreationTimestamp="2025-08-13 03:22:08 +0000 UTC" firstStartedPulling="2025-08-13 03:22:09.719145978 +0000 UTC m=+54.292585807" lastFinishedPulling="2025-08-13 03:22:30.031010925 +0000 UTC m=+74.604450758" observedRunningTime="2025-08-13 03:22:30.924336228 +0000 UTC m=+75.497776088" watchObservedRunningTime="2025-08-13 03:22:30.943405715 +0000 UTC m=+75.516845546" Aug 13 03:22:30.991154 systemd[1]: run-containerd-runc-k8s.io-4fb08362d35d727dc9cada8aa5c94efad230a2cb9df629ca2721edee2cd43f0b-runc.MM9B6S.mount: Deactivated successfully. Aug 13 03:22:31.048000 audit[4977]: NETFILTER_CFG table=filter:123 family=2 entries=13 op=nft_register_rule pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:31.061455 kernel: audit: type=1325 audit(1755055351.048:438): table=filter:123 family=2 entries=13 op=nft_register_rule pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:31.061590 kernel: audit: type=1300 audit(1755055351.048:438): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe0ff9f720 a2=0 a3=7ffe0ff9f70c items=0 ppid=2291 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:31.048000 audit[4977]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe0ff9f720 a2=0 a3=7ffe0ff9f70c items=0 ppid=2291 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:31.048000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:31.075595 kernel: audit: type=1327 audit(1755055351.048:438): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:31.081631 kernel: audit: type=1325 audit(1755055351.067:439): table=nat:124 family=2 entries=27 op=nft_register_chain pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:31.067000 audit[4977]: NETFILTER_CFG table=nat:124 family=2 entries=27 op=nft_register_chain pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:31.067000 audit[4977]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe0ff9f720 a2=0 a3=7ffe0ff9f70c items=0 ppid=2291 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:31.067000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:35.751877 env[1300]: time="2025-08-13T03:22:35.751559311Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:35.775821 env[1300]: time="2025-08-13T03:22:35.760477112Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:35.775821 env[1300]: time="2025-08-13T03:22:35.768040945Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:35.776306 env[1300]: time="2025-08-13T03:22:35.776080313Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:35.778029 env[1300]: time="2025-08-13T03:22:35.777983022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 03:22:35.920108 env[1300]: time="2025-08-13T03:22:35.919721908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 03:22:36.259833 env[1300]: time="2025-08-13T03:22:36.259767476Z" level=info msg="CreateContainer within sandbox \"19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 03:22:36.307847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount542693449.mount: Deactivated successfully. Aug 13 03:22:36.319554 env[1300]: time="2025-08-13T03:22:36.319496043Z" level=info msg="CreateContainer within sandbox \"19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a4c9a4fbe87f8ec601f9e4e7be8abc85dce33987334ecbbdd0ac258ef867c965\"" Aug 13 03:22:36.324925 env[1300]: time="2025-08-13T03:22:36.324880250Z" level=info msg="StartContainer for \"a4c9a4fbe87f8ec601f9e4e7be8abc85dce33987334ecbbdd0ac258ef867c965\"" Aug 13 03:22:36.547742 env[1300]: time="2025-08-13T03:22:36.547611379Z" level=info msg="StartContainer for \"a4c9a4fbe87f8ec601f9e4e7be8abc85dce33987334ecbbdd0ac258ef867c965\" returns successfully" Aug 13 03:22:37.272767 systemd[1]: run-containerd-runc-k8s.io-a4c9a4fbe87f8ec601f9e4e7be8abc85dce33987334ecbbdd0ac258ef867c965-runc.PBKeHD.mount: Deactivated successfully. 
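Each image pull above follows the same containerd event pattern: an ImageCreate for the tag, an ImageCreate for the bare sha256:… name (the image ID), an ImageUpdate for the tag, an ImageCreate for the repo digest, and finally a PullImage line that returns the sha256:… name as the image reference. A rough parsing sketch (hypothetical helper, standard library only) that extracts the tag-to-reference mapping from one such line:

```python
import re

# A shortened PullImage line in the logfmt layout used by the env[...] entries above.
line = ('time="2025-08-13T03:22:35.777983022Z" level=info '
        'msg="PullImage \\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\" returns '
        'image reference \\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\\""')

m = re.search(r'PullImage \\"(?P<tag>[^"\\]+)\\" returns image reference \\"(?P<ref>[^"\\]+)\\"', line)
if m:
    print(f"{m.group('tag')} -> {m.group('ref')}")
```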
Aug 13 03:22:37.361344 kubelet[2186]: I0813 03:22:37.350549 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78f8b9fb6f-b4rrq" podStartSLOduration=43.525370711 podStartE2EDuration="59.317253017s" podCreationTimestamp="2025-08-13 03:21:38 +0000 UTC" firstStartedPulling="2025-08-13 03:22:20.017566989 +0000 UTC m=+64.591006817" lastFinishedPulling="2025-08-13 03:22:35.809449278 +0000 UTC m=+80.382889123" observedRunningTime="2025-08-13 03:22:37.213907081 +0000 UTC m=+81.787346909" watchObservedRunningTime="2025-08-13 03:22:37.317253017 +0000 UTC m=+81.890692852" Aug 13 03:22:38.067464 env[1300]: time="2025-08-13T03:22:38.067371013Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:38.069863 env[1300]: time="2025-08-13T03:22:38.069813709Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:38.072754 env[1300]: time="2025-08-13T03:22:38.072701670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:38.075395 env[1300]: time="2025-08-13T03:22:38.075353858Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:38.077107 env[1300]: time="2025-08-13T03:22:38.076268128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 03:22:38.091164 env[1300]: time="2025-08-13T03:22:38.091106400Z" level=info msg="CreateContainer within sandbox \"6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 03:22:38.120848 env[1300]: time="2025-08-13T03:22:38.120776781Z" level=info msg="CreateContainer within sandbox \"6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f2f0c80e9f485d9bcc5875f6d21b7ccd2c609d9b5907167064a7929b292f250b\"" Aug 13 03:22:38.122139 env[1300]: time="2025-08-13T03:22:38.122081963Z" level=info msg="StartContainer for \"f2f0c80e9f485d9bcc5875f6d21b7ccd2c609d9b5907167064a7929b292f250b\"" Aug 13 03:22:38.223821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471957525.mount: Deactivated successfully. 
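The pod_startup_latency_tracker entries appear internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A worked check against the calico-kube-controllers entry above (numbers copied from the log; the arithmetic itself is illustrative):

```python
# Monotonic m=+ offsets copied from the calico-kube-controllers-78f8b9fb6f-b4rrq entry.
first_started_pulling = 64.591006817
last_finished_pulling = 80.382889123
e2e_duration          = 59.317253017   # podStartE2EDuration in seconds

pull_window  = last_finished_pulling - first_started_pulling
slo_duration = e2e_duration - pull_window
print(f"image-pull window = {pull_window:.9f}s")    # 15.791882306s
print(f"SLO duration      = {slo_duration:.9f}s")   # 43.525370711s == podStartSLOduration
```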
Aug 13 03:22:38.366710 env[1300]: time="2025-08-13T03:22:38.366520151Z" level=info msg="StartContainer for \"f2f0c80e9f485d9bcc5875f6d21b7ccd2c609d9b5907167064a7929b292f250b\" returns successfully" Aug 13 03:22:38.378262 env[1300]: time="2025-08-13T03:22:38.378207460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 03:22:40.617778 env[1300]: time="2025-08-13T03:22:40.617677327Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:40.623807 env[1300]: time="2025-08-13T03:22:40.623764485Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:40.626015 env[1300]: time="2025-08-13T03:22:40.625978531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:40.629454 env[1300]: time="2025-08-13T03:22:40.629403844Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 03:22:40.630175 env[1300]: time="2025-08-13T03:22:40.630134614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 03:22:40.635925 env[1300]: time="2025-08-13T03:22:40.635832818Z" level=info msg="CreateContainer within sandbox \"6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 03:22:40.658670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459772140.mount: Deactivated successfully. 
Aug 13 03:22:40.671031 env[1300]: time="2025-08-13T03:22:40.670967047Z" level=info msg="CreateContainer within sandbox \"6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8f2dad1ff5cabc979e8edc8efedf67af04bf3912e786668cb3e5257b47257191\"" Aug 13 03:22:40.672088 env[1300]: time="2025-08-13T03:22:40.672037672Z" level=info msg="StartContainer for \"8f2dad1ff5cabc979e8edc8efedf67af04bf3912e786668cb3e5257b47257191\"" Aug 13 03:22:40.859812 env[1300]: time="2025-08-13T03:22:40.859751852Z" level=info msg="StartContainer for \"8f2dad1ff5cabc979e8edc8efedf67af04bf3912e786668cb3e5257b47257191\" returns successfully" Aug 13 03:22:41.219543 kubelet[2186]: I0813 03:22:41.194795 2186 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 03:22:41.223732 kubelet[2186]: I0813 03:22:41.223692 2186 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 03:22:41.446841 kubelet[2186]: I0813 03:22:41.423315 2186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rqpff" podStartSLOduration=44.296016957 podStartE2EDuration="1m3.407353354s" podCreationTimestamp="2025-08-13 03:21:38 +0000 UTC" firstStartedPulling="2025-08-13 03:22:21.520726079 +0000 UTC m=+66.094165903" lastFinishedPulling="2025-08-13 03:22:40.632062474 +0000 UTC m=+85.205502300" observedRunningTime="2025-08-13 03:22:41.367292256 +0000 UTC m=+85.940732112" watchObservedRunningTime="2025-08-13 03:22:41.407353354 +0000 UTC m=+85.980793188" Aug 13 03:22:41.653280 systemd[1]: run-containerd-runc-k8s.io-8f2dad1ff5cabc979e8edc8efedf67af04bf3912e786668cb3e5257b47257191-runc.AJKQ8h.mount: Deactivated successfully. 
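The csi_plugin.go lines above show the kubelet validating and then registering the csi.tigera.io driver against its plugin socket, and the startup-latency entry that follows confirms csi-node-driver-rqpff came up once node-driver-registrar started. A minimal node-side sketch (socket path taken from the log; the check itself is an assumption, not something the log performs) to confirm that registration socket accepts connections:

```python
import socket

# Endpoint reported by kubelet's csi_plugin.go lines above.
CSI_SOCK = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"

def socket_is_listening(path: str) -> bool:
    """Return True if a UNIX stream socket at `path` accepts a connection."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

print(socket_is_listening(CSI_SOCK))
```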
Aug 13 03:22:45.501825 kubelet[2186]: I0813 03:22:45.501775 2186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 03:22:46.097123 kernel: kauditd_printk_skb: 2 callbacks suppressed Aug 13 03:22:46.132551 kernel: audit: type=1325 audit(1755055366.086:440): table=filter:125 family=2 entries=11 op=nft_register_rule pid=5138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:46.134405 kernel: audit: type=1300 audit(1755055366.086:440): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe66026fc0 a2=0 a3=7ffe66026fac items=0 ppid=2291 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:46.134490 kernel: audit: type=1327 audit(1755055366.086:440): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:46.134906 kernel: audit: type=1325 audit(1755055366.114:441): table=nat:126 family=2 entries=29 op=nft_register_chain pid=5138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:46.134994 kernel: audit: type=1300 audit(1755055366.114:441): arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffe66026fc0 a2=0 a3=7ffe66026fac items=0 ppid=2291 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:46.135406 kernel: audit: type=1327 audit(1755055366.114:441): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:46.086000 audit[5138]: NETFILTER_CFG table=filter:125 family=2 entries=11 op=nft_register_rule pid=5138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:46.086000 audit[5138]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe66026fc0 a2=0 a3=7ffe66026fac items=0 ppid=2291 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:46.086000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:46.114000 audit[5138]: NETFILTER_CFG table=nat:126 family=2 entries=29 op=nft_register_chain pid=5138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:46.114000 audit[5138]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffe66026fc0 a2=0 a3=7ffe66026fac items=0 ppid=2291 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:46.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:47.687587 systemd[1]: run-containerd-runc-k8s.io-31299f475ea61fd2a56bb6474a9a6d532cdd4891da855937f89f6110a32420b0-runc.8HqNjS.mount: Deactivated successfully. Aug 13 03:22:53.598771 systemd[1]: run-containerd-runc-k8s.io-27ccb63bef659184eca1a64825fd054097700cb32b77685d8856411e3e5a63c7-runc.YYrt30.mount: Deactivated successfully. 
Aug 13 03:22:54.749000 audit[5209]: NETFILTER_CFG table=filter:127 family=2 entries=9 op=nft_register_rule pid=5209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:54.749000 audit[5209]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff3bb77760 a2=0 a3=7fff3bb7774c items=0 ppid=2291 pid=5209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:54.766984 kernel: audit: type=1325 audit(1755055374.749:442): table=filter:127 family=2 entries=9 op=nft_register_rule pid=5209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:54.768745 kernel: audit: type=1300 audit(1755055374.749:442): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff3bb77760 a2=0 a3=7fff3bb7774c items=0 ppid=2291 pid=5209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:54.768822 kernel: audit: type=1327 audit(1755055374.749:442): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:54.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:54.776000 audit[5209]: NETFILTER_CFG table=nat:128 family=2 entries=31 op=nft_register_chain pid=5209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:54.776000 audit[5209]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fff3bb77760 a2=0 a3=7fff3bb7774c items=0 ppid=2291 pid=5209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:54.789489 kernel: audit: type=1325 audit(1755055374.776:443): table=nat:128 family=2 entries=31 op=nft_register_chain pid=5209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:22:54.789646 kernel: audit: type=1300 audit(1755055374.776:443): arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fff3bb77760 a2=0 a3=7fff3bb7774c items=0 ppid=2291 pid=5209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:22:54.776000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:22:54.793518 kernel: audit: type=1327 audit(1755055374.776:443): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:23:03.644251 systemd[1]: Started sshd@9-10.230.26.254:22-139.178.89.65:43710.service. Aug 13 03:23:03.679572 kernel: audit: type=1130 audit(1755055383.648:444): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.26.254:22-139.178.89.65:43710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:23:03.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.26.254:22-139.178.89.65:43710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:04.692000 audit[5233]: USER_ACCT pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:04.711614 kernel: audit: type=1101 audit(1755055384.692:445): pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:04.716250 kernel: audit: type=1103 audit(1755055384.703:446): pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:04.716370 kernel: audit: type=1006 audit(1755055384.703:447): pid=5233 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Aug 13 03:23:04.703000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:04.717202 sshd[5233]: Accepted publickey for core from 139.178.89.65 port 43710 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:23:04.729238 kernel: audit: type=1300 audit(1755055384.703:447): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc54da0ed0 a2=3 a3=0 items=0 ppid=1 pid=5233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:04.729346 kernel: audit: type=1327 audit(1755055384.703:447): proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:04.703000 audit[5233]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc54da0ed0 a2=3 a3=0 items=0 ppid=1 pid=5233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:04.703000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:04.714977 sshd[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:23:04.762569 systemd-logind[1285]: New session 10 of user core. Aug 13 03:23:04.770850 systemd[1]: Started session-10.scope. 
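In the SSH login sequence above, the audit(…) stamps are epoch seconds plus a per-record serial, and they line up with the syslog wall-clock prefix: 1755055384.692 is 2025-08-13 03:23:04.692 UTC, the USER_ACCT record for the session-10 login. A small conversion sketch (standard library only, illustrative):

```python
from datetime import datetime, timezone

def audit_stamp(field: str):
    """Split an 'audit(1755055384.692:445)' field into (UTC datetime, serial)."""
    inner = field[field.index("(") + 1 : field.index(")")]
    ts, serial = inner.split(":")
    return datetime.fromtimestamp(float(ts), tz=timezone.utc), int(serial)

when, serial = audit_stamp("audit(1755055384.692:445)")
print(when.isoformat(), serial)
# -> 2025-08-13T03:23:04.692000+00:00 445
```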
Aug 13 03:23:04.799000 audit[5233]: USER_START pid=5233 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:04.809357 kernel: audit: type=1105 audit(1755055384.799:448): pid=5233 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:04.809000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:04.818476 kernel: audit: type=1103 audit(1755055384.809:449): pid=5236 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:06.503081 sshd[5233]: pam_unix(sshd:session): session closed for user core Aug 13 03:23:06.505000 audit[5233]: USER_END pid=5233 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:06.517368 kernel: audit: type=1106 audit(1755055386.505:450): pid=5233 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:06.509000 audit[5233]: CRED_DISP pid=5233 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:06.525813 kernel: audit: type=1104 audit(1755055386.509:451): pid=5233 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:06.525710 systemd[1]: sshd@9-10.230.26.254:22-139.178.89.65:43710.service: Deactivated successfully. Aug 13 03:23:06.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.26.254:22-139.178.89.65:43710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:06.533706 systemd-logind[1285]: Session 10 logged out. Waiting for processes to exit. Aug 13 03:23:06.535891 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 03:23:06.538750 systemd-logind[1285]: Removed session 10. Aug 13 03:23:11.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.26.254:22-139.178.89.65:55936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:23:11.708728 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 03:23:11.709437 kernel: audit: type=1130 audit(1755055391.690:453): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.26.254:22-139.178.89.65:55936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:11.691942 systemd[1]: Started sshd@10-10.230.26.254:22-139.178.89.65:55936.service. Aug 13 03:23:12.683000 audit[5250]: NETFILTER_CFG table=filter:129 family=2 entries=8 op=nft_register_rule pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:23:12.696674 kernel: audit: type=1325 audit(1755055392.683:454): table=filter:129 family=2 entries=8 op=nft_register_rule pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:23:12.697174 kernel: audit: type=1300 audit(1755055392.683:454): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff256c47a0 a2=0 a3=7fff256c478c items=0 ppid=2291 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:12.683000 audit[5250]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff256c47a0 a2=0 a3=7fff256c478c items=0 ppid=2291 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:12.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:23:12.706040 kernel: audit: type=1327 audit(1755055392.683:454): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:23:12.707000 audit[5250]: NETFILTER_CFG table=nat:130 family=2 entries=38 op=nft_register_chain pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:23:12.707000 audit[5250]: SYSCALL arch=c000003e syscall=46 success=yes exit=12772 a0=3 a1=7fff256c47a0 a2=0 a3=7fff256c478c items=0 ppid=2291 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:12.721226 kernel: audit: type=1325 audit(1755055392.707:455): table=nat:130 family=2 entries=38 op=nft_register_chain pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:23:12.721342 kernel: audit: type=1300 audit(1755055392.707:455): arch=c000003e syscall=46 success=yes exit=12772 a0=3 a1=7fff256c47a0 a2=0 a3=7fff256c478c items=0 ppid=2291 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:12.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:23:12.727382 kernel: audit: type=1327 audit(1755055392.707:455): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:23:12.766000 audit[5247]: USER_ACCT pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:12.776307 kernel: audit: type=1101 audit(1755055392.766:456): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:12.777468 sshd[5247]: Accepted publickey for core from 139.178.89.65 port 55936 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:23:12.791120 kernel: audit: type=1103 audit(1755055392.776:457): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:12.791208 kernel: audit: type=1006 audit(1755055392.776:458): pid=5247 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Aug 13 03:23:12.776000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:12.776000 audit[5247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe038d3860 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:12.776000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:12.786252 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:23:12.826388 systemd-logind[1285]: New session 11 of user core. Aug 13 03:23:12.829478 systemd[1]: Started session-11.scope. Aug 13 03:23:12.848000 audit[5247]: USER_START pid=5247 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:12.852000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:15.006304 sshd[5247]: pam_unix(sshd:session): session closed for user core Aug 13 03:23:15.038000 audit[5247]: USER_END pid=5247 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:15.038000 audit[5247]: CRED_DISP pid=5247 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:15.052685 systemd[1]: sshd@10-10.230.26.254:22-139.178.89.65:55936.service: Deactivated successfully. 
Aug 13 03:23:15.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.26.254:22-139.178.89.65:55936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:15.059561 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 03:23:15.060299 systemd-logind[1285]: Session 11 logged out. Waiting for processes to exit. Aug 13 03:23:15.072611 systemd-logind[1285]: Removed session 11. Aug 13 03:23:16.972662 env[1300]: time="2025-08-13T03:23:16.972496348Z" level=info msg="StopPodSandbox for \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\"" Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.305 [WARNING][5274] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0", GenerateName:"calico-kube-controllers-78f8b9fb6f-", Namespace:"calico-system", SelfLink:"", UID:"68df4933-b5c3-4312-8741-f03d5628c7c8", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f8b9fb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f", Pod:"calico-kube-controllers-78f8b9fb6f-b4rrq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cbc29b8a68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.310 [INFO][5274] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.310 [INFO][5274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" iface="eth0" netns="" Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.310 [INFO][5274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.310 [INFO][5274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.791 [INFO][5281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" HandleID="k8s-pod-network.9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.796 [INFO][5281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.800 [INFO][5281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.887 [WARNING][5281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" HandleID="k8s-pod-network.9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.887 [INFO][5281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" HandleID="k8s-pod-network.9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.915 [INFO][5281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:23:17.936975 env[1300]: 2025-08-13 03:23:17.921 [INFO][5274] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:23:17.965748 env[1300]: time="2025-08-13T03:23:17.937040228Z" level=info msg="TearDown network for sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\" successfully" Aug 13 03:23:17.965748 env[1300]: time="2025-08-13T03:23:17.939671792Z" level=info msg="StopPodSandbox for \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\" returns successfully" Aug 13 03:23:17.965748 env[1300]: time="2025-08-13T03:23:17.950729959Z" level=info msg="RemovePodSandbox for \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\"" Aug 13 03:23:17.965748 env[1300]: time="2025-08-13T03:23:17.950817635Z" level=info msg="Forcibly stopping sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\"" Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.488 [WARNING][5313] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0", GenerateName:"calico-kube-controllers-78f8b9fb6f-", Namespace:"calico-system", SelfLink:"", UID:"68df4933-b5c3-4312-8741-f03d5628c7c8", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f8b9fb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"19de74d0109eb46067fd9389bf3643df5a4e5ae8ef0be6ee50d9e7a452c7b57f", Pod:"calico-kube-controllers-78f8b9fb6f-b4rrq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cbc29b8a68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.501 [INFO][5313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.501 [INFO][5313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" iface="eth0" netns="" Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.501 [INFO][5313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.501 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.920 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" HandleID="k8s-pod-network.9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.925 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.926 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.956 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" HandleID="k8s-pod-network.9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.956 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" HandleID="k8s-pod-network.9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--kube--controllers--78f8b9fb6f--b4rrq-eth0" Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.962 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:23:18.982618 env[1300]: 2025-08-13 03:23:18.969 [INFO][5313] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46" Aug 13 03:23:18.982618 env[1300]: time="2025-08-13T03:23:18.979286000Z" level=info msg="TearDown network for sandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\" successfully" Aug 13 03:23:19.046833 env[1300]: time="2025-08-13T03:23:19.000846638Z" level=info msg="RemovePodSandbox \"9c596bf2437e9e1df7cc8ff1dc6cb8421e911ab2d8adcf7c7a26f7a2475daf46\" returns successfully" Aug 13 03:23:19.387978 env[1300]: time="2025-08-13T03:23:19.386854499Z" level=info msg="StopPodSandbox for \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\"" Aug 13 03:23:20.169157 systemd[1]: Started sshd@11-10.230.26.254:22-139.178.89.65:58676.service. Aug 13 03:23:20.241527 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 03:23:20.244233 kernel: audit: type=1130 audit(1755055400.171:464): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.26.254:22-139.178.89.65:58676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:20.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.26.254:22-139.178.89.65:58676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:19.802 [WARNING][5341] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a", Pod:"csi-node-driver-rqpff", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49051ba0fbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:19.806 [INFO][5341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:19.806 [INFO][5341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" iface="eth0" netns="" Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:19.806 [INFO][5341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:19.806 [INFO][5341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:20.165 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" HandleID="k8s-pod-network.d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:20.188 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:20.189 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:20.316 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" HandleID="k8s-pod-network.d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:20.316 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" HandleID="k8s-pod-network.d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:20.320 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:23:20.369617 env[1300]: 2025-08-13 03:23:20.341 [INFO][5341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:23:20.394588 env[1300]: time="2025-08-13T03:23:20.377468227Z" level=info msg="TearDown network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\" successfully" Aug 13 03:23:20.394588 env[1300]: time="2025-08-13T03:23:20.377540770Z" level=info msg="StopPodSandbox for \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\" returns successfully" Aug 13 03:23:20.545601 env[1300]: time="2025-08-13T03:23:20.545540800Z" level=info msg="RemovePodSandbox for \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\"" Aug 13 03:23:20.553107 env[1300]: time="2025-08-13T03:23:20.553039550Z" level=info msg="Forcibly stopping sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\"" Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:20.905 [WARNING][5366] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a40dec4e-22d9-4de3-ac84-8bf0f5fb9f00", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"6d6ece560171ca05e76da9b8e50324c74f8e4724bb8acbc672b7f6b49953f95a", Pod:"csi-node-driver-rqpff", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49051ba0fbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:20.907 [INFO][5366] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:20.907 [INFO][5366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" iface="eth0" netns="" Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:20.907 [INFO][5366] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:20.907 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:21.191 [INFO][5373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" HandleID="k8s-pod-network.d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:21.192 [INFO][5373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:21.192 [INFO][5373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:21.207 [WARNING][5373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" HandleID="k8s-pod-network.d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:21.207 [INFO][5373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" HandleID="k8s-pod-network.d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Workload="srv--pghwy.gb1.brightbox.com-k8s-csi--node--driver--rqpff-eth0" Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:21.209 [INFO][5373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:23:21.215219 env[1300]: 2025-08-13 03:23:21.212 [INFO][5366] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510" Aug 13 03:23:21.222471 env[1300]: time="2025-08-13T03:23:21.215709582Z" level=info msg="TearDown network for sandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\" successfully" Aug 13 03:23:21.225802 env[1300]: time="2025-08-13T03:23:21.225734934Z" level=info msg="RemovePodSandbox \"d3e75c89e5054acb92a9547366807014c9b0dffad2034bc4a6c178786329b510\" returns successfully" Aug 13 03:23:21.241143 env[1300]: time="2025-08-13T03:23:21.241100739Z" level=info msg="StopPodSandbox for \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\"" Aug 13 03:23:21.246000 audit[5353]: USER_ACCT pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:21.269153 kernel: audit: type=1101 audit(1755055401.246:465): pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:21.282973 kernel: audit: type=1103 audit(1755055401.269:466): pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:21.283078 kernel: audit: type=1006 audit(1755055401.269:467): pid=5353 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Aug 13 03:23:21.269000 audit[5353]: CRED_ACQ pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:21.283650 sshd[5353]: Accepted publickey for core from 139.178.89.65 port 58676 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:23:21.306945 kernel: audit: type=1300 audit(1755055401.269:467): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfc6393b0 a2=3 a3=0 items=0 ppid=1 pid=5353 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:21.307026 kernel: audit: type=1327 audit(1755055401.269:467): 
proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:21.269000 audit[5353]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfc6393b0 a2=3 a3=0 items=0 ppid=1 pid=5353 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:21.269000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:21.306769 sshd[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:23:21.413350 systemd[1]: Started session-12.scope. Aug 13 03:23:21.414119 systemd-logind[1285]: New session 12 of user core. Aug 13 03:23:21.456000 audit[5353]: USER_START pid=5353 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:21.467995 kernel: audit: type=1105 audit(1755055401.456:468): pid=5353 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:21.469000 audit[5394]: CRED_ACQ pid=5394 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:21.478382 kernel: audit: type=1103 audit(1755055401.469:469): pid=5394 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.607 [WARNING][5389] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3f742517-09bc-4214-9d75-c0b7d73d3fd4", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d", Pod:"coredns-7c65d6cfc9-t4mmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dd6445cccc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.608 [INFO][5389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.608 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" iface="eth0" netns="" Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.608 [INFO][5389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.608 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.828 [INFO][5398] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" HandleID="k8s-pod-network.2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.845 [INFO][5398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.849 [INFO][5398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.871 [WARNING][5398] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" HandleID="k8s-pod-network.2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.871 [INFO][5398] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" HandleID="k8s-pod-network.2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.875 [INFO][5398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:23:21.882277 env[1300]: 2025-08-13 03:23:21.877 [INFO][5389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:23:21.886075 env[1300]: time="2025-08-13T03:23:21.883600582Z" level=info msg="TearDown network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\" successfully" Aug 13 03:23:21.886075 env[1300]: time="2025-08-13T03:23:21.883737489Z" level=info msg="StopPodSandbox for \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\" returns successfully" Aug 13 03:23:22.091274 env[1300]: time="2025-08-13T03:23:22.091190353Z" level=info msg="RemovePodSandbox for \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\"" Aug 13 03:23:22.091840 env[1300]: time="2025-08-13T03:23:22.091511438Z" level=info msg="Forcibly stopping sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\"" Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.336 [WARNING][5421] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3f742517-09bc-4214-9d75-c0b7d73d3fd4", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"dc92a695ae5167ab86d55d9706e140c32d3cfe5429ede2ab65159de03447d99d", Pod:"coredns-7c65d6cfc9-t4mmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dd6445cccc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.337 [INFO][5421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.337 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" iface="eth0" netns="" Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.337 [INFO][5421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.337 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.518 [INFO][5428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" HandleID="k8s-pod-network.2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.520 [INFO][5428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.521 [INFO][5428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.555 [WARNING][5428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" HandleID="k8s-pod-network.2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.555 [INFO][5428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" HandleID="k8s-pod-network.2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Workload="srv--pghwy.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--t4mmj-eth0" Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.574 [INFO][5428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:23:22.580927 env[1300]: 2025-08-13 03:23:22.577 [INFO][5421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df" Aug 13 03:23:22.586423 env[1300]: time="2025-08-13T03:23:22.581276865Z" level=info msg="TearDown network for sandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\" successfully" Aug 13 03:23:22.603864 env[1300]: time="2025-08-13T03:23:22.603804206Z" level=info msg="RemovePodSandbox \"2efba4ce0b77ecb6f53a3d41558459822d8f7f173a41e574d6e0ccd2195500df\" returns successfully" Aug 13 03:23:22.694773 env[1300]: time="2025-08-13T03:23:22.694709233Z" level=info msg="StopPodSandbox for \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\"" Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:22.931 [WARNING][5442] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0", GenerateName:"calico-apiserver-75b8b879dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b0738f7-1587-4a22-887f-5f8bd64e6743", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8b879dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5", Pod:"calico-apiserver-75b8b879dd-926tb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali337789dcb10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:22.932 [INFO][5442] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:22.933 [INFO][5442] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" iface="eth0" netns="" Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:22.933 [INFO][5442] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:22.933 [INFO][5442] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:23.080 [INFO][5449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" HandleID="k8s-pod-network.13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:23.081 [INFO][5449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:23.081 [INFO][5449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:23.121 [WARNING][5449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" HandleID="k8s-pod-network.13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:23.121 [INFO][5449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" HandleID="k8s-pod-network.13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:23.124 [INFO][5449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:23:23.137575 env[1300]: 2025-08-13 03:23:23.126 [INFO][5442] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:23:23.146878 env[1300]: time="2025-08-13T03:23:23.138815518Z" level=info msg="TearDown network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\" successfully" Aug 13 03:23:23.146878 env[1300]: time="2025-08-13T03:23:23.139024460Z" level=info msg="StopPodSandbox for \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\" returns successfully" Aug 13 03:23:23.296297 env[1300]: time="2025-08-13T03:23:23.295276508Z" level=info msg="RemovePodSandbox for \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\"" Aug 13 03:23:23.296297 env[1300]: time="2025-08-13T03:23:23.295377510Z" level=info msg="Forcibly stopping sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\"" Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.543 [WARNING][5465] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0", GenerateName:"calico-apiserver-75b8b879dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b0738f7-1587-4a22-887f-5f8bd64e6743", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 3, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8b879dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-pghwy.gb1.brightbox.com", ContainerID:"7f648274c09a0191954b58476433171d01de9b930fbd417c91d4326d50a334d5", Pod:"calico-apiserver-75b8b879dd-926tb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali337789dcb10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.547 [INFO][5465] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.547 [INFO][5465] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" iface="eth0" netns="" Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.547 [INFO][5465] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.547 [INFO][5465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.802 [INFO][5472] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" HandleID="k8s-pod-network.13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.810 [INFO][5472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.813 [INFO][5472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.842 [WARNING][5472] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" HandleID="k8s-pod-network.13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.842 [INFO][5472] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" HandleID="k8s-pod-network.13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Workload="srv--pghwy.gb1.brightbox.com-k8s-calico--apiserver--75b8b879dd--926tb-eth0" Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.848 [INFO][5472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 03:23:23.857969 env[1300]: 2025-08-13 03:23:23.853 [INFO][5465] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46" Aug 13 03:23:23.860854 env[1300]: time="2025-08-13T03:23:23.858597494Z" level=info msg="TearDown network for sandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\" successfully" Aug 13 03:23:23.873485 env[1300]: time="2025-08-13T03:23:23.873377789Z" level=info msg="RemovePodSandbox \"13f47fd26cb2d949b79974404358b84bfb6f6bb5fff4e6a5c0c53eee4840ef46\" returns successfully" Aug 13 03:23:23.995781 sshd[5353]: pam_unix(sshd:session): session closed for user core Aug 13 03:23:24.050000 audit[5353]: USER_END pid=5353 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:24.070735 kernel: audit: type=1106 audit(1755055404.050:470): pid=5353 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:24.081987 kernel: audit: type=1104 audit(1755055404.063:471): pid=5353 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:24.063000 audit[5353]: CRED_DISP pid=5353 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:24.078536 systemd[1]: sshd@11-10.230.26.254:22-139.178.89.65:58676.service: Deactivated successfully. Aug 13 03:23:24.086252 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 03:23:24.088659 systemd-logind[1285]: Session 12 logged out. Waiting for processes to exit. Aug 13 03:23:24.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.26.254:22-139.178.89.65:58676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:24.108022 systemd-logind[1285]: Removed session 12. 
Aug 13 03:23:24.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.26.254:22-139.178.89.65:58684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:24.136262 systemd[1]: Started sshd@12-10.230.26.254:22-139.178.89.65:58684.service. Aug 13 03:23:24.247912 systemd[1]: run-containerd-runc-k8s.io-a4c9a4fbe87f8ec601f9e4e7be8abc85dce33987334ecbbdd0ac258ef867c965-runc.5M3GxF.mount: Deactivated successfully. Aug 13 03:23:24.326925 systemd[1]: run-containerd-runc-k8s.io-27ccb63bef659184eca1a64825fd054097700cb32b77685d8856411e3e5a63c7-runc.CjLGwm.mount: Deactivated successfully. Aug 13 03:23:25.147893 sshd[5493]: Accepted publickey for core from 139.178.89.65 port 58684 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:23:25.144000 audit[5493]: USER_ACCT pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:25.148000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:25.148000 audit[5493]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb26b5ba0 a2=3 a3=0 items=0 ppid=1 pid=5493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:25.148000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:25.152469 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:23:25.174422 systemd-logind[1285]: New session 13 of user core. Aug 13 03:23:25.175713 systemd[1]: Started session-13.scope. 
Aug 13 03:23:25.212362 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 03:23:25.213698 kernel: audit: type=1105 audit(1755055405.194:477): pid=5493 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:25.194000 audit[5493]: USER_START pid=5493 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:25.205000 audit[5524]: CRED_ACQ pid=5524 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:25.221402 kernel: audit: type=1103 audit(1755055405.205:478): pid=5524 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:26.661345 sshd[5493]: pam_unix(sshd:session): session closed for user core Aug 13 03:23:26.662000 audit[5493]: USER_END pid=5493 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:26.675607 kernel: audit: type=1106 audit(1755055406.662:479): pid=5493 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:26.662000 audit[5493]: CRED_DISP pid=5493 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:26.683685 kernel: audit: type=1104 audit(1755055406.662:480): pid=5493 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:26.684431 systemd[1]: sshd@12-10.230.26.254:22-139.178.89.65:58684.service: Deactivated successfully. Aug 13 03:23:26.686579 systemd-logind[1285]: Session 13 logged out. Waiting for processes to exit. Aug 13 03:23:26.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.26.254:22-139.178.89.65:58684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:26.686581 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 03:23:26.695059 kernel: audit: type=1131 audit(1755055406.683:481): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.26.254:22-139.178.89.65:58684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Aug 13 03:23:26.694540 systemd-logind[1285]: Removed session 13. Aug 13 03:23:26.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.26.254:22-139.178.89.65:58694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:26.811000 systemd[1]: Started sshd@13-10.230.26.254:22-139.178.89.65:58694.service. Aug 13 03:23:26.818358 kernel: audit: type=1130 audit(1755055406.810:482): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.26.254:22-139.178.89.65:58694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:27.752000 audit[5533]: USER_ACCT pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:27.757523 sshd[5533]: Accepted publickey for core from 139.178.89.65 port 58694 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:23:27.767524 kernel: audit: type=1101 audit(1755055407.752:483): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:27.767000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:27.777073 kernel: audit: type=1103 audit(1755055407.767:484): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:27.778481 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:23:27.788307 kernel: audit: type=1006 audit(1755055407.767:485): pid=5533 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Aug 13 03:23:27.767000 audit[5533]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7fb89230 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:27.796347 kernel: audit: type=1300 audit(1755055407.767:485): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7fb89230 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:27.767000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:27.818900 systemd-logind[1285]: New session 14 of user core. Aug 13 03:23:27.819496 systemd[1]: Started session-14.scope. 
Aug 13 03:23:27.855000 audit[5533]: USER_START pid=5533 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:27.858000 audit[5536]: CRED_ACQ pid=5536 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:29.280995 sshd[5533]: pam_unix(sshd:session): session closed for user core Aug 13 03:23:29.286000 audit[5533]: USER_END pid=5533 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:29.287000 audit[5533]: CRED_DISP pid=5533 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:29.290886 systemd[1]: sshd@13-10.230.26.254:22-139.178.89.65:58694.service: Deactivated successfully. Aug 13 03:23:29.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.26.254:22-139.178.89.65:58694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:29.292478 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 03:23:29.292564 systemd-logind[1285]: Session 14 logged out. Waiting for processes to exit. Aug 13 03:23:29.294565 systemd-logind[1285]: Removed session 14. Aug 13 03:23:34.432982 systemd[1]: Started sshd@14-10.230.26.254:22-139.178.89.65:50076.service. Aug 13 03:23:34.446287 kernel: kauditd_printk_skb: 6 callbacks suppressed Aug 13 03:23:34.448803 kernel: audit: type=1130 audit(1755055414.432:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.26.254:22-139.178.89.65:50076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:34.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.26.254:22-139.178.89.65:50076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:23:35.402000 audit[5556]: USER_ACCT pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:35.413745 sshd[5556]: Accepted publickey for core from 139.178.89.65 port 50076 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:23:35.437373 kernel: audit: type=1101 audit(1755055415.402:492): pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:35.437531 kernel: audit: type=1103 audit(1755055415.416:493): pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:35.440056 kernel: audit: type=1006 audit(1755055415.416:494): pid=5556 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Aug 13 03:23:35.440530 kernel: audit: type=1300 audit(1755055415.416:494): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0ef24bc0 a2=3 a3=0 items=0 ppid=1 pid=5556 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:35.440602 kernel: audit: type=1327 audit(1755055415.416:494): proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:35.416000 audit[5556]: CRED_ACQ pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:35.416000 audit[5556]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0ef24bc0 a2=3 a3=0 items=0 ppid=1 pid=5556 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:35.416000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:35.430182 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:23:35.458441 systemd-logind[1285]: New session 15 of user core. Aug 13 03:23:35.458576 systemd[1]: Started session-15.scope. 
Aug 13 03:23:35.480581 kernel: audit: type=1105 audit(1755055415.470:495): pid=5556 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:35.470000 audit[5556]: USER_START pid=5556 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:35.478000 audit[5579]: CRED_ACQ pid=5579 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:35.488824 kernel: audit: type=1103 audit(1755055415.478:496): pid=5579 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:36.795996 sshd[5556]: pam_unix(sshd:session): session closed for user core Aug 13 03:23:36.803000 audit[5556]: USER_END pid=5556 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:36.820410 kernel: audit: type=1106 audit(1755055416.803:497): pid=5556 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:36.814044 systemd[1]: sshd@14-10.230.26.254:22-139.178.89.65:50076.service: Deactivated successfully. Aug 13 03:23:36.815962 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 03:23:36.820250 systemd-logind[1285]: Session 15 logged out. Waiting for processes to exit. Aug 13 03:23:36.803000 audit[5556]: CRED_DISP pid=5556 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:36.833859 kernel: audit: type=1104 audit(1755055416.803:498): pid=5556 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:36.833054 systemd-logind[1285]: Removed session 15. Aug 13 03:23:36.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.26.254:22-139.178.89.65:50076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:23:41.984432 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 03:23:41.988085 kernel: audit: type=1130 audit(1755055421.971:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.26.254:22-139.178.89.65:42022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:41.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.26.254:22-139.178.89.65:42022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:41.971672 systemd[1]: Started sshd@15-10.230.26.254:22-139.178.89.65:42022.service. Aug 13 03:23:43.007000 audit[5590]: USER_ACCT pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:43.022769 kernel: audit: type=1101 audit(1755055423.007:501): pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:43.022895 sshd[5590]: Accepted publickey for core from 139.178.89.65 port 42022 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:23:43.023000 audit[5590]: CRED_ACQ pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:43.035069 kernel: audit: type=1103 audit(1755055423.023:502): pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:43.040388 kernel: audit: type=1006 audit(1755055423.023:503): pid=5590 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Aug 13 03:23:43.041250 sshd[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:23:43.023000 audit[5590]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd330e3e50 a2=3 a3=0 items=0 ppid=1 pid=5590 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:43.054368 kernel: audit: type=1300 audit(1755055423.023:503): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd330e3e50 a2=3 a3=0 items=0 ppid=1 pid=5590 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:43.056629 kernel: audit: type=1327 audit(1755055423.023:503): proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:43.023000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:43.072241 systemd-logind[1285]: New session 16 of user core. Aug 13 03:23:43.073799 systemd[1]: Started session-16.scope. 
Aug 13 03:23:43.089000 audit[5590]: USER_START pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:43.101646 kernel: audit: type=1105 audit(1755055423.089:504): pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:43.099000 audit[5593]: CRED_ACQ pid=5593 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:43.108352 kernel: audit: type=1103 audit(1755055423.099:505): pid=5593 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:44.579503 sshd[5590]: pam_unix(sshd:session): session closed for user core Aug 13 03:23:44.580000 audit[5590]: USER_END pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:44.599205 kernel: audit: type=1106 audit(1755055424.580:506): pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:44.599892 kernel: audit: type=1104 audit(1755055424.590:507): pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:44.590000 audit[5590]: CRED_DISP pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:44.595852 systemd[1]: sshd@15-10.230.26.254:22-139.178.89.65:42022.service: Deactivated successfully. Aug 13 03:23:44.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.26.254:22-139.178.89.65:42022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:44.597430 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 03:23:44.598396 systemd-logind[1285]: Session 16 logged out. Waiting for processes to exit. Aug 13 03:23:44.600954 systemd-logind[1285]: Removed session 16. Aug 13 03:23:49.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.26.254:22-139.178.89.65:35430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:23:49.752416 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 03:23:49.753910 kernel: audit: type=1130 audit(1755055429.746:509): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.26.254:22-139.178.89.65:35430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:49.747644 systemd[1]: Started sshd@16-10.230.26.254:22-139.178.89.65:35430.service. Aug 13 03:23:50.804000 audit[5643]: USER_ACCT pid=5643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:50.809126 sshd[5643]: Accepted publickey for core from 139.178.89.65 port 35430 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:23:50.813631 kernel: audit: type=1101 audit(1755055430.804:510): pid=5643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:50.814892 sshd[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:23:50.811000 audit[5643]: CRED_ACQ pid=5643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:50.824348 kernel: audit: type=1103 audit(1755055430.811:511): pid=5643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:50.837432 systemd[1]: Started session-17.scope. Aug 13 03:23:50.839413 systemd-logind[1285]: New session 17 of user core. 
Aug 13 03:23:50.859041 kernel: audit: type=1006 audit(1755055430.811:512): pid=5643 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Aug 13 03:23:50.859168 kernel: audit: type=1300 audit(1755055430.811:512): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf7722790 a2=3 a3=0 items=0 ppid=1 pid=5643 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:50.811000 audit[5643]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf7722790 a2=3 a3=0 items=0 ppid=1 pid=5643 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:50.811000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:50.882676 kernel: audit: type=1327 audit(1755055430.811:512): proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:50.866000 audit[5643]: USER_START pid=5643 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:50.897360 kernel: audit: type=1105 audit(1755055430.866:513): pid=5643 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:50.873000 audit[5646]: CRED_ACQ pid=5646 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:50.904424 kernel: audit: type=1103 audit(1755055430.873:514): pid=5646 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:52.326464 sshd[5643]: pam_unix(sshd:session): session closed for user core Aug 13 03:23:52.328000 audit[5643]: USER_END pid=5643 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:52.340550 kernel: audit: type=1106 audit(1755055432.328:515): pid=5643 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:52.341504 systemd[1]: sshd@16-10.230.26.254:22-139.178.89.65:35430.service: Deactivated successfully. Aug 13 03:23:52.343484 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 03:23:52.343681 systemd-logind[1285]: Session 17 logged out. Waiting for processes to exit. Aug 13 03:23:52.349214 systemd-logind[1285]: Removed session 17. 
Aug 13 03:23:52.328000 audit[5643]: CRED_DISP pid=5643 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:52.370476 kernel: audit: type=1104 audit(1755055432.328:516): pid=5643 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:52.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.26.254:22-139.178.89.65:35430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:52.484048 systemd[1]: Started sshd@17-10.230.26.254:22-139.178.89.65:35440.service. Aug 13 03:23:52.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.26.254:22-139.178.89.65:35440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:53.439000 audit[5659]: USER_ACCT pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:53.441513 sshd[5659]: Accepted publickey for core from 139.178.89.65 port 35440 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:23:53.441000 audit[5659]: CRED_ACQ pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:53.441000 audit[5659]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd46a13840 a2=3 a3=0 items=0 ppid=1 pid=5659 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:53.441000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:53.444158 sshd[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:23:53.455197 systemd-logind[1285]: New session 18 of user core. Aug 13 03:23:53.456575 systemd[1]: Started session-18.scope. 
Aug 13 03:23:53.467000 audit[5659]: USER_START pid=5659 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:53.470000 audit[5662]: CRED_ACQ pid=5662 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:54.739685 sshd[5659]: pam_unix(sshd:session): session closed for user core Aug 13 03:23:54.745000 audit[5659]: USER_END pid=5659 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:54.747000 audit[5659]: CRED_DISP pid=5659 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:54.755687 systemd[1]: sshd@17-10.230.26.254:22-139.178.89.65:35440.service: Deactivated successfully. Aug 13 03:23:54.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.26.254:22-139.178.89.65:35440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:54.758623 kernel: kauditd_printk_skb: 11 callbacks suppressed Aug 13 03:23:54.759803 kernel: audit: type=1131 audit(1755055434.755:526): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.26.254:22-139.178.89.65:35440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:54.765648 systemd-logind[1285]: Session 18 logged out. Waiting for processes to exit. Aug 13 03:23:54.767218 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 03:23:54.769695 systemd-logind[1285]: Removed session 18. Aug 13 03:23:54.884144 systemd[1]: Started sshd@18-10.230.26.254:22-139.178.89.65:35442.service. Aug 13 03:23:54.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.26.254:22-139.178.89.65:35442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:23:54.891352 kernel: audit: type=1130 audit(1755055434.884:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.26.254:22-139.178.89.65:35442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:23:55.884000 audit[5709]: USER_ACCT pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:55.889660 sshd[5709]: Accepted publickey for core from 139.178.89.65 port 35442 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:23:55.896363 kernel: audit: type=1101 audit(1755055435.884:528): pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:55.899000 audit[5709]: CRED_ACQ pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:55.910944 kernel: audit: type=1103 audit(1755055435.899:529): pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:55.912251 kernel: audit: type=1006 audit(1755055435.899:530): pid=5709 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Aug 13 03:23:55.911228 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:23:55.899000 audit[5709]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe653e8ae0 a2=3 a3=0 items=0 ppid=1 pid=5709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:55.920385 kernel: audit: type=1300 audit(1755055435.899:530): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe653e8ae0 a2=3 a3=0 items=0 ppid=1 pid=5709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:23:55.920704 kernel: audit: type=1327 audit(1755055435.899:530): proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:55.899000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:23:55.934242 systemd-logind[1285]: New session 19 of user core. Aug 13 03:23:55.935503 systemd[1]: Started session-19.scope. 
Aug 13 03:23:55.957000 audit[5709]: USER_START pid=5709 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:55.966357 kernel: audit: type=1105 audit(1755055435.957:531): pid=5709 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:55.968000 audit[5712]: CRED_ACQ pid=5712 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:23:55.981725 kernel: audit: type=1103 audit(1755055435.968:532): pid=5712 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:00.261000 audit[5722]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:24:00.355542 kernel: audit: type=1325 audit(1755055440.261:533): table=filter:131 family=2 entries=20 op=nft_register_rule pid=5722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:24:00.357929 kernel: audit: type=1300 audit(1755055440.261:533): arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffc84fd1db0 a2=0 a3=7ffc84fd1d9c items=0 ppid=2291 pid=5722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:00.359281 kernel: audit: type=1327 audit(1755055440.261:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:24:00.359387 kernel: audit: type=1325 audit(1755055440.292:534): table=nat:132 family=2 entries=26 op=nft_register_rule pid=5722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:24:00.360746 kernel: audit: type=1300 audit(1755055440.292:534): arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffc84fd1db0 a2=0 a3=0 items=0 ppid=2291 pid=5722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:00.360820 kernel: audit: type=1327 audit(1755055440.292:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:24:00.261000 audit[5722]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffc84fd1db0 a2=0 a3=7ffc84fd1d9c items=0 ppid=2291 pid=5722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:00.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:24:00.292000 
audit[5722]: NETFILTER_CFG table=nat:132 family=2 entries=26 op=nft_register_rule pid=5722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:24:00.292000 audit[5722]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffc84fd1db0 a2=0 a3=0 items=0 ppid=2291 pid=5722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:00.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:24:00.328648 sshd[5709]: pam_unix(sshd:session): session closed for user core Aug 13 03:24:00.408000 audit[5709]: USER_END pid=5709 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:00.418346 kernel: audit: type=1106 audit(1755055440.408:535): pid=5709 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:00.420904 systemd[1]: sshd@18-10.230.26.254:22-139.178.89.65:35442.service: Deactivated successfully. Aug 13 03:24:00.410000 audit[5709]: CRED_DISP pid=5709 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:00.433873 kernel: audit: type=1104 audit(1755055440.410:536): pid=5709 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:00.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.26.254:22-139.178.89.65:35442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:00.442111 kernel: audit: type=1131 audit(1755055440.424:537): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.26.254:22-139.178.89.65:35442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:00.441078 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 03:24:00.441985 systemd-logind[1285]: Session 19 logged out. Waiting for processes to exit. Aug 13 03:24:00.458493 systemd-logind[1285]: Removed session 19. Aug 13 03:24:00.459864 systemd[1]: Started sshd@19-10.230.26.254:22-139.178.89.65:36566.service. Aug 13 03:24:00.470170 kernel: audit: type=1130 audit(1755055440.459:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.26.254:22-139.178.89.65:36566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:24:00.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.26.254:22-139.178.89.65:36566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:00.574000 audit[5727]: NETFILTER_CFG table=filter:133 family=2 entries=32 op=nft_register_rule pid=5727 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:24:00.574000 audit[5727]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffc42172230 a2=0 a3=7ffc4217221c items=0 ppid=2291 pid=5727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:00.574000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:24:00.579000 audit[5727]: NETFILTER_CFG table=nat:134 family=2 entries=26 op=nft_register_rule pid=5727 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:24:00.579000 audit[5727]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffc42172230 a2=0 a3=0 items=0 ppid=2291 pid=5727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:00.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:24:01.236957 kubelet[2186]: E0813 03:24:01.230458 2186 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.374s" Aug 13 03:24:01.502000 audit[5726]: USER_ACCT pid=5726 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:01.515184 sshd[5726]: Accepted publickey for core from 139.178.89.65 port 36566 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:24:01.528000 audit[5726]: CRED_ACQ pid=5726 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:01.530000 audit[5726]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff664d7b20 a2=3 a3=0 items=0 ppid=1 pid=5726 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:01.530000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:24:01.534164 sshd[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:24:01.577667 systemd-logind[1285]: New session 20 of user core. Aug 13 03:24:01.579625 systemd[1]: Started session-20.scope. 
Aug 13 03:24:01.605000 audit[5726]: USER_START pid=5726 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:01.611000 audit[5749]: CRED_ACQ pid=5749 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:03.095862 sshd[5726]: pam_unix(sshd:session): session closed for user core Aug 13 03:24:03.122000 audit[5726]: USER_END pid=5726 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:03.123000 audit[5726]: CRED_DISP pid=5726 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:03.134683 systemd[1]: sshd@19-10.230.26.254:22-139.178.89.65:36566.service: Deactivated successfully. Aug 13 03:24:03.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.26.254:22-139.178.89.65:36566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:03.141200 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 03:24:03.141891 systemd-logind[1285]: Session 20 logged out. Waiting for processes to exit. Aug 13 03:24:03.149041 systemd-logind[1285]: Removed session 20. Aug 13 03:24:03.236490 systemd[1]: Started sshd@20-10.230.26.254:22-139.178.89.65:36580.service. Aug 13 03:24:03.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.26.254:22-139.178.89.65:36580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:24:04.171000 audit[5758]: USER_ACCT pid=5758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:04.173992 sshd[5758]: Accepted publickey for core from 139.178.89.65 port 36580 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:24:04.174000 audit[5758]: CRED_ACQ pid=5758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:04.174000 audit[5758]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff819f02c0 a2=3 a3=0 items=0 ppid=1 pid=5758 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:04.174000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:24:04.177539 sshd[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:24:04.191557 systemd[1]: Started session-21.scope. Aug 13 03:24:04.192127 systemd-logind[1285]: New session 21 of user core. Aug 13 03:24:04.200000 audit[5758]: USER_START pid=5758 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:04.203000 audit[5761]: CRED_ACQ pid=5761 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:05.900743 sshd[5758]: pam_unix(sshd:session): session closed for user core Aug 13 03:24:05.922603 kernel: kauditd_printk_skb: 24 callbacks suppressed Aug 13 03:24:05.927406 kernel: audit: type=1106 audit(1755055445.907:555): pid=5758 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:05.928598 kernel: audit: type=1104 audit(1755055445.909:556): pid=5758 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:05.907000 audit[5758]: USER_END pid=5758 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:05.909000 audit[5758]: CRED_DISP pid=5758 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:05.913700 systemd[1]: sshd@20-10.230.26.254:22-139.178.89.65:36580.service: Deactivated 
successfully. Aug 13 03:24:05.933850 kernel: audit: type=1131 audit(1755055445.919:557): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.26.254:22-139.178.89.65:36580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:05.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.26.254:22-139.178.89.65:36580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:05.923026 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 03:24:05.924106 systemd-logind[1285]: Session 21 logged out. Waiting for processes to exit. Aug 13 03:24:05.932873 systemd-logind[1285]: Removed session 21. Aug 13 03:24:11.074128 systemd[1]: Started sshd@21-10.230.26.254:22-139.178.89.65:47948.service. Aug 13 03:24:11.101501 kernel: audit: type=1130 audit(1755055451.074:558): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.26.254:22-139.178.89.65:47948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:11.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.26.254:22-139.178.89.65:47948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:11.602000 audit[5774]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:24:11.622869 kernel: audit: type=1325 audit(1755055451.602:559): table=filter:135 family=2 entries=20 op=nft_register_rule pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:24:11.624237 kernel: audit: type=1300 audit(1755055451.602:559): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff52849540 a2=0 a3=7fff5284952c items=0 ppid=2291 pid=5774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:11.602000 audit[5774]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff52849540 a2=0 a3=7fff5284952c items=0 ppid=2291 pid=5774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:11.602000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:24:11.636817 kernel: audit: type=1327 audit(1755055451.602:559): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:24:11.627000 audit[5774]: NETFILTER_CFG table=nat:136 family=2 entries=110 op=nft_register_chain pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:24:11.627000 audit[5774]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fff52849540 a2=0 a3=7fff5284952c items=0 ppid=2291 pid=5774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:11.650130 kernel: audit: type=1325 
audit(1755055451.627:560): table=nat:136 family=2 entries=110 op=nft_register_chain pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 03:24:11.650602 kernel: audit: type=1300 audit(1755055451.627:560): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fff52849540 a2=0 a3=7fff5284952c items=0 ppid=2291 pid=5774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:11.650666 kernel: audit: type=1327 audit(1755055451.627:560): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:24:11.627000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 03:24:12.080000 audit[5771]: USER_ACCT pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:12.084442 sshd[5771]: Accepted publickey for core from 139.178.89.65 port 47948 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:24:12.096580 kernel: audit: type=1101 audit(1755055452.080:561): pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:12.107000 audit[5771]: CRED_ACQ pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:12.114364 kernel: audit: type=1103 audit(1755055452.107:562): pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:12.109565 sshd[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:24:12.123426 kernel: audit: type=1006 audit(1755055452.107:563): pid=5771 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Aug 13 03:24:12.107000 audit[5771]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc22e704e0 a2=3 a3=0 items=0 ppid=1 pid=5771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:12.107000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:24:12.189544 systemd-logind[1285]: New session 22 of user core. Aug 13 03:24:12.194854 systemd[1]: Started session-22.scope. 
Aug 13 03:24:12.210000 audit[5771]: USER_START pid=5771 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:12.218000 audit[5777]: CRED_ACQ pid=5777 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:13.902008 sshd[5771]: pam_unix(sshd:session): session closed for user core Aug 13 03:24:13.920000 audit[5771]: USER_END pid=5771 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:13.921000 audit[5771]: CRED_DISP pid=5771 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:13.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.26.254:22-139.178.89.65:47948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:13.928392 systemd[1]: sshd@21-10.230.26.254:22-139.178.89.65:47948.service: Deactivated successfully. Aug 13 03:24:13.932767 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 03:24:13.932820 systemd-logind[1285]: Session 22 logged out. Waiting for processes to exit. Aug 13 03:24:13.939337 systemd-logind[1285]: Removed session 22. Aug 13 03:24:19.070927 systemd[1]: Started sshd@22-10.230.26.254:22-139.178.89.65:47950.service. Aug 13 03:24:19.121188 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 03:24:19.123730 kernel: audit: type=1130 audit(1755055459.071:569): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.26.254:22-139.178.89.65:47950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:19.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.26.254:22-139.178.89.65:47950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 03:24:20.064000 audit[5810]: USER_ACCT pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:20.075280 sshd[5810]: Accepted publickey for core from 139.178.89.65 port 47950 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:24:20.077937 kernel: audit: type=1101 audit(1755055460.064:570): pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:20.086000 audit[5810]: CRED_ACQ pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:20.098279 kernel: audit: type=1103 audit(1755055460.086:571): pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:20.098416 kernel: audit: type=1006 audit(1755055460.086:572): pid=5810 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Aug 13 03:24:20.086000 audit[5810]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4e1398e0 a2=3 a3=0 items=0 ppid=1 pid=5810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:20.086000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:24:20.106615 sshd[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:24:20.110523 kernel: audit: type=1300 audit(1755055460.086:572): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4e1398e0 a2=3 a3=0 items=0 ppid=1 pid=5810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:20.110686 kernel: audit: type=1327 audit(1755055460.086:572): proctitle=737368643A20636F7265205B707269765D Aug 13 03:24:20.154764 systemd-logind[1285]: New session 23 of user core. Aug 13 03:24:20.157756 systemd[1]: Started session-23.scope. 
Aug 13 03:24:20.173000 audit[5810]: USER_START pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:20.183394 kernel: audit: type=1105 audit(1755055460.173:573): pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:20.183000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:20.191667 kernel: audit: type=1103 audit(1755055460.183:574): pid=5813 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:22.476369 sshd[5810]: pam_unix(sshd:session): session closed for user core Aug 13 03:24:22.536223 kernel: audit: type=1106 audit(1755055462.510:575): pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:22.536534 kernel: audit: type=1104 audit(1755055462.521:576): pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:22.510000 audit[5810]: USER_END pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:22.521000 audit[5810]: CRED_DISP pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:22.551780 systemd[1]: sshd@22-10.230.26.254:22-139.178.89.65:47950.service: Deactivated successfully. Aug 13 03:24:22.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.26.254:22-139.178.89.65:47950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:22.559927 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 03:24:22.559969 systemd-logind[1285]: Session 23 logged out. Waiting for processes to exit. Aug 13 03:24:22.569142 systemd-logind[1285]: Removed session 23. Aug 13 03:24:27.643624 systemd[1]: Started sshd@23-10.230.26.254:22-139.178.89.65:54706.service. 
Aug 13 03:24:27.674432 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 03:24:27.675718 kernel: audit: type=1130 audit(1755055467.650:578): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.26.254:22-139.178.89.65:54706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:27.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.26.254:22-139.178.89.65:54706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:28.622019 sshd[5863]: Accepted publickey for core from 139.178.89.65 port 54706 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 03:24:28.656163 kernel: audit: type=1101 audit(1755055468.620:579): pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:28.657081 kernel: audit: type=1103 audit(1755055468.628:580): pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:28.657616 kernel: audit: type=1006 audit(1755055468.629:581): pid=5863 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Aug 13 03:24:28.620000 audit[5863]: USER_ACCT pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:28.628000 audit[5863]: CRED_ACQ pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:28.629000 audit[5863]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe97519800 a2=3 a3=0 items=0 ppid=1 pid=5863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:28.658193 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 03:24:28.629000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 03:24:28.669677 kernel: audit: type=1300 audit(1755055468.629:581): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe97519800 a2=3 a3=0 items=0 ppid=1 pid=5863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 03:24:28.669780 kernel: audit: type=1327 audit(1755055468.629:581): proctitle=737368643A20636F7265205B707269765D Aug 13 03:24:28.709113 systemd-logind[1285]: New session 24 of user core. Aug 13 03:24:28.712567 systemd[1]: Started session-24.scope. 
Aug 13 03:24:28.732000 audit[5863]: USER_START pid=5863 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:28.742986 kernel: audit: type=1105 audit(1755055468.732:582): pid=5863 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:28.743400 kernel: audit: type=1103 audit(1755055468.740:583): pid=5866 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:28.740000 audit[5866]: CRED_ACQ pid=5866 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:30.501028 sshd[5863]: pam_unix(sshd:session): session closed for user core Aug 13 03:24:30.522666 kernel: audit: type=1106 audit(1755055470.506:584): pid=5863 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:30.523236 kernel: audit: type=1104 audit(1755055470.510:585): pid=5863 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:30.506000 audit[5863]: USER_END pid=5863 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:30.510000 audit[5863]: CRED_DISP pid=5863 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Aug 13 03:24:30.528140 systemd[1]: sshd@23-10.230.26.254:22-139.178.89.65:54706.service: Deactivated successfully. Aug 13 03:24:30.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.26.254:22-139.178.89.65:54706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 03:24:30.538302 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 03:24:30.538826 systemd-logind[1285]: Session 24 logged out. Waiting for processes to exit. Aug 13 03:24:30.542872 systemd-logind[1285]: Removed session 24.