Aug 13 00:45:24.977483 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025 Aug 13 00:45:24.977514 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:45:24.977527 kernel: BIOS-provided physical RAM map: Aug 13 00:45:24.977535 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 13 00:45:24.977542 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Aug 13 00:45:24.977549 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Aug 13 00:45:24.977559 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Aug 13 00:45:24.977566 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Aug 13 00:45:24.977573 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Aug 13 00:45:24.977584 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Aug 13 00:45:24.977591 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Aug 13 00:45:24.977599 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Aug 13 00:45:24.977606 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Aug 13 00:45:24.977614 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Aug 13 00:45:24.977624 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Aug 13 00:45:24.977633 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Aug 13 00:45:24.977641 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Aug 13 00:45:24.977648 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 00:45:24.977660 kernel: NX (Execute Disable) protection: active Aug 13 00:45:24.977669 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Aug 13 00:45:24.977677 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Aug 13 00:45:24.977685 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Aug 13 00:45:24.977693 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Aug 13 00:45:24.977700 kernel: extended physical RAM map: Aug 13 00:45:24.977708 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 13 00:45:24.977718 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Aug 13 00:45:24.977727 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Aug 13 00:45:24.977735 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Aug 13 00:45:24.977743 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Aug 13 00:45:24.977750 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Aug 13 00:45:24.977757 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Aug 13 00:45:24.977765 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Aug 13 00:45:24.977773 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Aug 13 00:45:24.977781 kernel: reserve setup_data: [mem 
0x000000009b474e58-0x000000009b475017] usable Aug 13 00:45:24.977789 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Aug 13 00:45:24.977796 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Aug 13 00:45:24.977807 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Aug 13 00:45:24.977815 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Aug 13 00:45:24.977823 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Aug 13 00:45:24.977831 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Aug 13 00:45:24.977843 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Aug 13 00:45:24.977852 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Aug 13 00:45:24.977860 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 00:45:24.977871 kernel: efi: EFI v2.70 by EDK II Aug 13 00:45:24.977878 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Aug 13 00:45:24.977886 kernel: random: crng init done Aug 13 00:45:24.977895 kernel: SMBIOS 2.8 present. Aug 13 00:45:24.977904 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Aug 13 00:45:24.977913 kernel: Hypervisor detected: KVM Aug 13 00:45:24.977921 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 00:45:24.977930 kernel: kvm-clock: cpu 0, msr 5319e001, primary cpu clock Aug 13 00:45:24.977939 kernel: kvm-clock: using sched offset of 5203126087 cycles Aug 13 00:45:24.977955 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 00:45:24.977963 kernel: tsc: Detected 2794.750 MHz processor Aug 13 00:45:24.977972 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:45:24.977981 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:45:24.977990 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Aug 13 00:45:24.978000 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:45:24.978009 kernel: Using GB pages for direct mapping Aug 13 00:45:24.978019 kernel: Secure boot disabled Aug 13 00:45:24.978029 kernel: ACPI: Early table checksum verification disabled Aug 13 00:45:24.978058 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Aug 13 00:45:24.978068 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Aug 13 00:45:24.978078 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:24.978087 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:24.978101 kernel: ACPI: FACS 0x000000009CBDD000 000040 Aug 13 00:45:24.978111 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:24.978120 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:24.978133 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:24.978143 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:24.978155 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Aug 13 00:45:24.978164 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Aug 13 00:45:24.978173 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cb7a000-0x9cb7c1b9] Aug 13 00:45:24.978183 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Aug 13 00:45:24.978192 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Aug 13 00:45:24.978202 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Aug 13 00:45:24.978211 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Aug 13 00:45:24.978220 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Aug 13 00:45:24.978229 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Aug 13 00:45:24.978253 kernel: No NUMA configuration found Aug 13 00:45:24.978262 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Aug 13 00:45:24.978271 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Aug 13 00:45:24.978280 kernel: Zone ranges: Aug 13 00:45:24.978288 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:45:24.978297 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Aug 13 00:45:24.978306 kernel: Normal empty Aug 13 00:45:24.978316 kernel: Movable zone start for each node Aug 13 00:45:24.978325 kernel: Early memory node ranges Aug 13 00:45:24.978337 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Aug 13 00:45:24.978347 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Aug 13 00:45:24.978357 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Aug 13 00:45:24.978366 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Aug 13 00:45:24.978376 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Aug 13 00:45:24.978386 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Aug 13 00:45:24.978395 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Aug 13 00:45:24.978405 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:45:24.978415 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Aug 13 00:45:24.978424 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Aug 13 00:45:24.978436 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:45:24.978445 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Aug 13 00:45:24.978455 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Aug 13 00:45:24.978465 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Aug 13 00:45:24.978475 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 00:45:24.978485 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 00:45:24.978494 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 00:45:24.978503 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 00:45:24.978512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 00:45:24.978523 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:45:24.978531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 00:45:24.978541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 00:45:24.978554 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:45:24.978565 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 00:45:24.978574 kernel: TSC deadline timer available Aug 13 00:45:24.978583 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Aug 13 00:45:24.978591 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 00:45:24.978600 kernel: kvm-guest: setup PV sched yield Aug 13 00:45:24.978611 kernel: [mem 
0xc0000000-0xffffffff] available for PCI devices Aug 13 00:45:24.978620 kernel: Booting paravirtualized kernel on KVM Aug 13 00:45:24.978635 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:45:24.978646 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Aug 13 00:45:24.978656 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Aug 13 00:45:24.978665 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Aug 13 00:45:24.978673 kernel: pcpu-alloc: [0] 0 1 2 3 Aug 13 00:45:24.978682 kernel: kvm-guest: setup async PF for cpu 0 Aug 13 00:45:24.978691 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Aug 13 00:45:24.978701 kernel: kvm-guest: PV spinlocks enabled Aug 13 00:45:24.978710 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 00:45:24.978718 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Aug 13 00:45:24.978729 kernel: Policy zone: DMA32 Aug 13 00:45:24.978740 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:45:24.978750 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:45:24.978759 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:45:24.978769 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:45:24.978779 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:45:24.978789 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 169308K reserved, 0K cma-reserved) Aug 13 00:45:24.978798 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 13 00:45:24.978806 kernel: ftrace: allocating 34608 entries in 136 pages Aug 13 00:45:24.978814 kernel: ftrace: allocated 136 pages with 2 groups Aug 13 00:45:24.978823 kernel: rcu: Hierarchical RCU implementation. Aug 13 00:45:24.978833 kernel: rcu: RCU event tracing is enabled. Aug 13 00:45:24.978845 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 13 00:45:24.978854 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:45:24.978862 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:45:24.978872 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 00:45:24.978881 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 13 00:45:24.978890 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Aug 13 00:45:24.978899 kernel: Console: colour dummy device 80x25 Aug 13 00:45:24.978908 kernel: printk: console [ttyS0] enabled Aug 13 00:45:24.978917 kernel: ACPI: Core revision 20210730 Aug 13 00:45:24.978926 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 00:45:24.978937 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:45:24.978946 kernel: x2apic enabled Aug 13 00:45:24.978955 kernel: Switched APIC routing to physical x2apic. 
Aug 13 00:45:24.978964 kernel: kvm-guest: setup PV IPIs Aug 13 00:45:24.978973 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 00:45:24.978982 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Aug 13 00:45:24.978991 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Aug 13 00:45:24.979000 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 00:45:24.979014 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 13 00:45:24.979026 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 13 00:45:24.979050 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:45:24.979059 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 00:45:24.979069 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 00:45:24.979079 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Aug 13 00:45:24.979088 kernel: RETBleed: Mitigation: untrained return thunk Aug 13 00:45:24.979098 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 00:45:24.979111 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 00:45:24.979123 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:45:24.979131 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:45:24.979140 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:45:24.979150 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 00:45:24.979160 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 00:45:24.979170 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:45:24.979180 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:45:24.979190 kernel: LSM: Security Framework initializing Aug 13 00:45:24.979200 kernel: SELinux: Initializing. Aug 13 00:45:24.979212 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:45:24.979223 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:45:24.979244 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Aug 13 00:45:24.979254 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 13 00:45:24.979264 kernel: ... version: 0 Aug 13 00:45:24.979274 kernel: ... bit width: 48 Aug 13 00:45:24.979284 kernel: ... generic registers: 6 Aug 13 00:45:24.979294 kernel: ... value mask: 0000ffffffffffff Aug 13 00:45:24.979304 kernel: ... max period: 00007fffffffffff Aug 13 00:45:24.979316 kernel: ... fixed-purpose events: 0 Aug 13 00:45:24.979326 kernel: ... event mask: 000000000000003f Aug 13 00:45:24.979336 kernel: signal: max sigframe size: 1776 Aug 13 00:45:24.979346 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:45:24.979355 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:45:24.979364 kernel: x86: Booting SMP configuration: Aug 13 00:45:24.979373 kernel: .... 
node #0, CPUs: #1 Aug 13 00:45:24.979383 kernel: kvm-clock: cpu 1, msr 5319e041, secondary cpu clock Aug 13 00:45:24.979391 kernel: kvm-guest: setup async PF for cpu 1 Aug 13 00:45:24.979399 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Aug 13 00:45:24.979410 kernel: #2 Aug 13 00:45:24.979419 kernel: kvm-clock: cpu 2, msr 5319e081, secondary cpu clock Aug 13 00:45:24.979429 kernel: kvm-guest: setup async PF for cpu 2 Aug 13 00:45:24.979439 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Aug 13 00:45:24.979448 kernel: #3 Aug 13 00:45:24.979457 kernel: kvm-clock: cpu 3, msr 5319e0c1, secondary cpu clock Aug 13 00:45:24.979465 kernel: kvm-guest: setup async PF for cpu 3 Aug 13 00:45:24.979474 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Aug 13 00:45:24.979482 kernel: smp: Brought up 1 node, 4 CPUs Aug 13 00:45:24.979499 kernel: smpboot: Max logical packages: 1 Aug 13 00:45:24.979508 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Aug 13 00:45:24.979516 kernel: devtmpfs: initialized Aug 13 00:45:24.979525 kernel: x86/mm: Memory block size: 128MB Aug 13 00:45:24.979535 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Aug 13 00:45:24.979544 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Aug 13 00:45:24.979552 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Aug 13 00:45:24.979561 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Aug 13 00:45:24.979569 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Aug 13 00:45:24.979581 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:45:24.979590 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 13 00:45:24.979600 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:45:24.979608 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:45:24.979617 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:45:24.979626 kernel: audit: type=2000 audit(1755045923.916:1): state=initialized audit_enabled=0 res=1 Aug 13 00:45:24.979635 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:45:24.979644 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:45:24.979653 kernel: cpuidle: using governor menu Aug 13 00:45:24.979664 kernel: ACPI: bus type PCI registered Aug 13 00:45:24.979674 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:45:24.979683 kernel: dca service started, version 1.12.1 Aug 13 00:45:24.979692 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Aug 13 00:45:24.979701 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Aug 13 00:45:24.979709 kernel: PCI: Using configuration type 1 for base access Aug 13 00:45:24.979718 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 00:45:24.979728 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:45:24.979737 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:45:24.979748 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:45:24.979757 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:45:24.979766 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:45:24.979775 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 00:45:24.979784 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 00:45:24.979793 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 00:45:24.979802 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:45:24.979811 kernel: ACPI: Interpreter enabled Aug 13 00:45:24.979821 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 00:45:24.979832 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:45:24.979841 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:45:24.979849 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 00:45:24.979859 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 00:45:24.980066 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:45:24.980185 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 13 00:45:24.980313 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 13 00:45:24.980331 kernel: PCI host bridge to bus 0000:00 Aug 13 00:45:24.980457 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 00:45:24.980556 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 00:45:24.980653 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 00:45:24.980881 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Aug 13 00:45:24.981062 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 00:45:24.981172 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Aug 13 00:45:24.981305 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 00:45:24.981459 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Aug 13 00:45:24.981588 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Aug 13 00:45:24.981694 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Aug 13 00:45:24.981808 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Aug 13 00:45:24.981913 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Aug 13 00:45:24.982015 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Aug 13 00:45:24.982150 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 00:45:24.982289 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Aug 13 00:45:24.982399 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Aug 13 00:45:24.982506 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Aug 13 00:45:24.982608 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Aug 13 00:45:24.982729 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Aug 13 00:45:24.982834 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Aug 13 00:45:24.982934 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Aug 13 00:45:24.983053 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Aug 13 00:45:24.983176 
kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 00:45:24.983291 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Aug 13 00:45:24.983392 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Aug 13 00:45:24.983491 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Aug 13 00:45:24.983595 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Aug 13 00:45:24.983722 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Aug 13 00:45:24.986212 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 00:45:24.986363 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Aug 13 00:45:24.986471 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Aug 13 00:45:24.986574 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Aug 13 00:45:24.986699 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Aug 13 00:45:24.986865 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Aug 13 00:45:24.986881 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 00:45:24.986891 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 00:45:24.986902 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 00:45:24.986911 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 00:45:24.986922 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 00:45:24.986932 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 00:45:24.986942 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 00:45:24.986955 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 00:45:24.986965 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 00:45:24.986975 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 00:45:24.986985 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 00:45:24.986995 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 00:45:24.987005 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 00:45:24.987015 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 00:45:24.987024 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 00:45:24.987048 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 00:45:24.987062 kernel: iommu: Default domain type: Translated Aug 13 00:45:24.987071 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:45:24.987763 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 00:45:24.987908 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 00:45:24.988044 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 00:45:24.988077 kernel: vgaarb: loaded Aug 13 00:45:24.988088 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 00:45:24.988098 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 00:45:24.988108 kernel: PTP clock support registered Aug 13 00:45:24.988124 kernel: Registered efivars operations Aug 13 00:45:24.988134 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:45:24.988144 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 00:45:24.988154 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Aug 13 00:45:24.988164 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Aug 13 00:45:24.988173 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Aug 13 00:45:24.988182 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Aug 13 00:45:24.988192 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Aug 13 00:45:24.988201 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Aug 13 00:45:24.988213 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 00:45:24.988223 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 00:45:24.988250 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 00:45:24.988261 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:45:24.988272 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:45:24.988282 kernel: pnp: PnP ACPI init Aug 13 00:45:24.988443 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 00:45:24.988460 kernel: pnp: PnP ACPI: found 6 devices Aug 13 00:45:24.988474 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:45:24.988484 kernel: NET: Registered PF_INET protocol family Aug 13 00:45:24.988493 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:45:24.988503 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 00:45:24.988513 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:45:24.988522 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:45:24.988532 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Aug 13 00:45:24.988542 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 00:45:24.988552 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:45:24.988563 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:45:24.988572 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:45:24.988582 kernel: NET: Registered PF_XDP protocol family Aug 13 00:45:24.988700 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Aug 13 00:45:24.988810 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Aug 13 00:45:24.988919 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 00:45:24.989017 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 00:45:24.989385 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 00:45:24.995844 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Aug 13 00:45:24.995945 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 00:45:24.996083 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Aug 13 00:45:24.996117 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:45:24.996128 kernel: Initialise system trusted keyrings Aug 13 00:45:24.996138 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 00:45:24.996148 
kernel: Key type asymmetric registered Aug 13 00:45:24.996157 kernel: Asymmetric key parser 'x509' registered Aug 13 00:45:24.996172 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 00:45:24.996194 kernel: io scheduler mq-deadline registered Aug 13 00:45:24.996212 kernel: io scheduler kyber registered Aug 13 00:45:24.996246 kernel: io scheduler bfq registered Aug 13 00:45:24.996259 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:45:24.996270 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 00:45:24.996279 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 00:45:24.996306 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 13 00:45:24.996316 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:45:24.996329 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:45:24.996340 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 00:45:24.996350 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 00:45:24.996360 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 00:45:24.996555 kernel: rtc_cmos 00:04: RTC can wake from S4 Aug 13 00:45:24.996574 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 00:45:24.996671 kernel: rtc_cmos 00:04: registered as rtc0 Aug 13 00:45:24.996793 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T00:45:24 UTC (1755045924) Aug 13 00:45:24.996917 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Aug 13 00:45:24.996933 kernel: efifb: probing for efifb Aug 13 00:45:24.996944 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Aug 13 00:45:24.996972 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Aug 13 00:45:24.996983 kernel: efifb: scrolling: redraw Aug 13 00:45:24.996993 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 13 00:45:24.997002 kernel: Console: switching to colour frame buffer device 160x50 Aug 13 00:45:24.997011 kernel: fb0: EFI VGA frame buffer device Aug 13 00:45:24.997022 kernel: pstore: Registered efi as persistent store backend Aug 13 00:45:24.997116 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:45:24.997127 kernel: Segment Routing with IPv6 Aug 13 00:45:24.997137 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:45:24.997165 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:45:24.997179 kernel: Key type dns_resolver registered Aug 13 00:45:24.997189 kernel: IPI shorthand broadcast: enabled Aug 13 00:45:24.997202 kernel: sched_clock: Marking stable (626112406, 147522616)->(852267818, -78632796) Aug 13 00:45:24.997211 kernel: registered taskstats version 1 Aug 13 00:45:24.997221 kernel: Loading compiled-in X.509 certificates Aug 13 00:45:24.997245 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 00:45:24.997256 kernel: Key type .fscrypt registered Aug 13 00:45:24.997266 kernel: Key type fscrypt-provisioning registered Aug 13 00:45:24.997293 kernel: pstore: Using crash dump compression: deflate Aug 13 00:45:24.997304 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 13 00:45:24.997317 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:45:24.997327 kernel: ima: No architecture policies found Aug 13 00:45:24.997336 kernel: clk: Disabling unused clocks Aug 13 00:45:24.997347 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 00:45:24.997357 kernel: Write protecting the kernel read-only data: 28672k Aug 13 00:45:24.997366 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 00:45:24.997377 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 00:45:24.997387 kernel: Run /init as init process Aug 13 00:45:24.997415 kernel: with arguments: Aug 13 00:45:24.997425 kernel: /init Aug 13 00:45:24.997439 kernel: with environment: Aug 13 00:45:24.997449 kernel: HOME=/ Aug 13 00:45:24.997458 kernel: TERM=linux Aug 13 00:45:24.997467 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:45:24.997611 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:45:24.997633 systemd[1]: Detected virtualization kvm. Aug 13 00:45:24.997646 systemd[1]: Detected architecture x86-64. Aug 13 00:45:24.997677 systemd[1]: Running in initrd. Aug 13 00:45:24.997691 systemd[1]: No hostname configured, using default hostname. Aug 13 00:45:24.997700 systemd[1]: Hostname set to . Aug 13 00:45:24.997711 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:45:24.997722 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:45:24.997733 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:45:24.997761 systemd[1]: Reached target cryptsetup.target. Aug 13 00:45:24.997773 systemd[1]: Reached target paths.target. Aug 13 00:45:24.997783 systemd[1]: Reached target slices.target. Aug 13 00:45:24.997798 systemd[1]: Reached target swap.target. Aug 13 00:45:24.997808 systemd[1]: Reached target timers.target. Aug 13 00:45:24.997835 systemd[1]: Listening on iscsid.socket. Aug 13 00:45:24.997847 systemd[1]: Listening on iscsiuio.socket. Aug 13 00:45:24.997858 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:45:24.997869 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:45:24.997879 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:45:24.997911 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:45:24.997923 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:45:24.998012 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:45:24.998024 systemd[1]: Reached target sockets.target. Aug 13 00:45:24.998049 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:45:24.998060 systemd[1]: Finished network-cleanup.service. Aug 13 00:45:24.998088 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:45:24.998099 systemd[1]: Starting systemd-journald.service... Aug 13 00:45:24.998110 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:45:24.998251 systemd[1]: Starting systemd-resolved.service... Aug 13 00:45:24.998395 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 00:45:24.998410 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:45:24.998421 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:45:24.998442 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Aug 13 00:45:24.998462 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 00:45:24.998473 kernel: audit: type=1130 audit(1755045924.980:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:24.998484 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 00:45:24.998495 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:45:24.998511 kernel: audit: type=1130 audit(1755045924.991:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:24.998542 systemd-journald[197]: Journal started Aug 13 00:45:24.998626 systemd-journald[197]: Runtime Journal (/run/log/journal/3e5ea8801134437a8be3fc4d285a3da2) is 6.0M, max 48.4M, 42.4M free. Aug 13 00:45:24.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:24.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:24.984317 systemd-modules-load[198]: Inserted module 'overlay' Aug 13 00:45:25.006230 systemd[1]: Started systemd-journald.service. Aug 13 00:45:25.006291 kernel: audit: type=1130 audit(1755045925.001:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.009373 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 00:45:25.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.015087 kernel: audit: type=1130 audit(1755045925.011:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.012642 systemd[1]: Starting dracut-cmdline.service... Aug 13 00:45:25.021852 systemd-resolved[199]: Positive Trust Anchors: Aug 13 00:45:25.021868 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:45:25.021896 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:45:25.027113 systemd-resolved[199]: Defaulting to hostname 'linux'. 
Aug 13 00:45:25.039111 kernel: audit: type=1130 audit(1755045925.031:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.039157 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:45:25.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.028113 systemd[1]: Started systemd-resolved.service. Aug 13 00:45:25.031444 systemd[1]: Reached target nss-lookup.target. Aug 13 00:45:25.043637 dracut-cmdline[216]: dracut-dracut-053 Aug 13 00:45:25.046330 systemd-modules-load[198]: Inserted module 'br_netfilter' Aug 13 00:45:25.047453 kernel: Bridge firewalling registered Aug 13 00:45:25.048820 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:45:25.068094 kernel: SCSI subsystem initialized Aug 13 00:45:25.084092 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:45:25.084191 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:45:25.084210 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:45:25.089733 systemd-modules-load[198]: Inserted module 'dm_multipath' Aug 13 00:45:25.090706 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:45:25.096449 kernel: audit: type=1130 audit(1755045925.091:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.095653 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:45:25.104768 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:45:25.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.111075 kernel: audit: type=1130 audit(1755045925.106:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.136073 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:45:25.155066 kernel: iscsi: registered transport (tcp) Aug 13 00:45:25.178070 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:45:25.178132 kernel: QLogic iSCSI HBA Driver Aug 13 00:45:25.218881 systemd[1]: Finished dracut-cmdline.service. Aug 13 00:45:25.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:45:25.224521 systemd[1]: Starting dracut-pre-udev.service... Aug 13 00:45:25.227810 kernel: audit: type=1130 audit(1755045925.223:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.270057 kernel: raid6: avx2x4 gen() 30672 MB/s Aug 13 00:45:25.302050 kernel: raid6: avx2x4 xor() 8223 MB/s Aug 13 00:45:25.319053 kernel: raid6: avx2x2 gen() 32415 MB/s Aug 13 00:45:25.336064 kernel: raid6: avx2x2 xor() 17827 MB/s Aug 13 00:45:25.353055 kernel: raid6: avx2x1 gen() 23588 MB/s Aug 13 00:45:25.370050 kernel: raid6: avx2x1 xor() 13185 MB/s Aug 13 00:45:25.387074 kernel: raid6: sse2x4 gen() 11860 MB/s Aug 13 00:45:25.404065 kernel: raid6: sse2x4 xor() 6073 MB/s Aug 13 00:45:25.421052 kernel: raid6: sse2x2 gen() 11947 MB/s Aug 13 00:45:25.438054 kernel: raid6: sse2x2 xor() 8768 MB/s Aug 13 00:45:25.455054 kernel: raid6: sse2x1 gen() 11084 MB/s Aug 13 00:45:25.472410 kernel: raid6: sse2x1 xor() 6089 MB/s Aug 13 00:45:25.472440 kernel: raid6: using algorithm avx2x2 gen() 32415 MB/s Aug 13 00:45:25.472450 kernel: raid6: .... xor() 17827 MB/s, rmw enabled Aug 13 00:45:25.473092 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:45:25.485057 kernel: xor: automatically using best checksumming function avx Aug 13 00:45:25.577066 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 00:45:25.584602 systemd[1]: Finished dracut-pre-udev.service. Aug 13 00:45:25.588796 kernel: audit: type=1130 audit(1755045925.584:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.588000 audit: BPF prog-id=7 op=LOAD Aug 13 00:45:25.588000 audit: BPF prog-id=8 op=LOAD Aug 13 00:45:25.589233 systemd[1]: Starting systemd-udevd.service... Aug 13 00:45:25.601240 systemd-udevd[400]: Using default interface naming scheme 'v252'. Aug 13 00:45:25.605198 systemd[1]: Started systemd-udevd.service. Aug 13 00:45:25.605914 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:45:25.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.616113 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Aug 13 00:45:25.638207 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 00:45:25.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.638974 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:45:25.673529 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:45:25.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:25.701058 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 13 00:45:25.706870 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Aug 13 00:45:25.706884 kernel: GPT:9289727 != 19775487 Aug 13 00:45:25.706893 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:45:25.706902 kernel: GPT:9289727 != 19775487 Aug 13 00:45:25.706910 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:45:25.706920 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:45:25.709050 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:45:25.727722 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 00:45:25.727801 kernel: AES CTR mode by8 optimization enabled Aug 13 00:45:25.734047 kernel: libata version 3.00 loaded. Aug 13 00:45:25.742050 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 00:45:25.759816 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 00:45:25.759831 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 13 00:45:25.759926 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (464) Aug 13 00:45:25.759937 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 00:45:25.760017 kernel: scsi host0: ahci Aug 13 00:45:25.760152 kernel: scsi host1: ahci Aug 13 00:45:25.760259 kernel: scsi host2: ahci Aug 13 00:45:25.760346 kernel: scsi host3: ahci Aug 13 00:45:25.760431 kernel: scsi host4: ahci Aug 13 00:45:25.760531 kernel: scsi host5: ahci Aug 13 00:45:25.760618 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Aug 13 00:45:25.760629 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Aug 13 00:45:25.760639 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Aug 13 00:45:25.760649 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Aug 13 00:45:25.760658 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Aug 13 00:45:25.760666 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Aug 13 00:45:25.745107 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:45:25.751855 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:45:25.764172 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:45:25.766711 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:45:25.776434 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:45:25.778942 systemd[1]: Starting disk-uuid.service... Aug 13 00:45:25.784961 disk-uuid[547]: Primary Header is updated. Aug 13 00:45:25.784961 disk-uuid[547]: Secondary Entries is updated. Aug 13 00:45:25.784961 disk-uuid[547]: Secondary Header is updated. 
Aug 13 00:45:25.789056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:45:25.792058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:45:26.069072 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Aug 13 00:45:26.069149 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 00:45:26.070069 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 00:45:26.071054 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Aug 13 00:45:26.072526 kernel: ata3.00: applying bridge limits Aug 13 00:45:26.072548 kernel: ata3.00: configured for UDMA/100 Aug 13 00:45:26.073745 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 13 00:45:26.079045 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 00:45:26.079067 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 00:45:26.080052 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 00:45:26.107326 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Aug 13 00:45:26.124637 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:45:26.124655 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 13 00:45:26.799079 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:45:26.800512 disk-uuid[548]: The operation has completed successfully. Aug 13 00:45:26.853506 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:45:26.854375 systemd[1]: Finished disk-uuid.service. Aug 13 00:45:26.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:26.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:26.881674 systemd[1]: Starting verity-setup.service... Aug 13 00:45:26.909576 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Aug 13 00:45:27.003579 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:45:27.005563 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:45:27.008882 systemd[1]: Finished verity-setup.service. Aug 13 00:45:27.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.147130 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:45:27.147087 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:45:27.149244 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:45:27.152301 systemd[1]: Starting ignition-setup.service... Aug 13 00:45:27.154824 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:45:27.170274 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:45:27.170348 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:45:27.170369 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:45:27.181895 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:45:27.192290 systemd[1]: Finished ignition-setup.service. Aug 13 00:45:27.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:45:27.194923 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:45:27.238612 ignition[659]: Ignition 2.14.0 Aug 13 00:45:27.238628 ignition[659]: Stage: fetch-offline Aug 13 00:45:27.240058 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:45:27.238692 ignition[659]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:27.238704 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:45:27.238816 ignition[659]: parsed url from cmdline: "" Aug 13 00:45:27.238819 ignition[659]: no config URL provided Aug 13 00:45:27.238824 ignition[659]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:45:27.238832 ignition[659]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:45:27.238851 ignition[659]: op(1): [started] loading QEMU firmware config module Aug 13 00:45:27.238856 ignition[659]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 13 00:45:27.247832 ignition[659]: op(1): [finished] loading QEMU firmware config module Aug 13 00:45:27.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.250000 audit: BPF prog-id=9 op=LOAD Aug 13 00:45:27.250955 systemd[1]: Starting systemd-networkd.service... Aug 13 00:45:27.283622 ignition[659]: parsing config with SHA512: eaf9a31b50347e3574d4ea55f25ed5d38c07c57b422208a7c53778d0f560b9928b55bc49c77febdf25f79c9ab92289f62a5ea4762ce2ea9c11b9c82d09320148 Aug 13 00:45:27.290609 unknown[659]: fetched base config from "system" Aug 13 00:45:27.290623 unknown[659]: fetched user config from "qemu" Aug 13 00:45:27.291201 ignition[659]: fetch-offline: fetch-offline passed Aug 13 00:45:27.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.292217 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:45:27.291262 ignition[659]: Ignition finished successfully Aug 13 00:45:27.308778 systemd-networkd[730]: lo: Link UP Aug 13 00:45:27.308787 systemd-networkd[730]: lo: Gained carrier Aug 13 00:45:27.310689 systemd-networkd[730]: Enumeration completed Aug 13 00:45:27.310776 systemd[1]: Started systemd-networkd.service. Aug 13 00:45:27.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.312401 systemd[1]: Reached target network.target. Aug 13 00:45:27.313915 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 00:45:27.314726 systemd[1]: Starting ignition-kargs.service... Aug 13 00:45:27.317058 systemd[1]: Starting iscsiuio.service... Aug 13 00:45:27.318732 systemd-networkd[730]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:45:27.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.320018 systemd-networkd[730]: eth0: Link UP Aug 13 00:45:27.320022 systemd-networkd[730]: eth0: Gained carrier Aug 13 00:45:27.322074 systemd[1]: Started iscsiuio.service. 
Aug 13 00:45:27.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.326748 ignition[732]: Ignition 2.14.0 Aug 13 00:45:27.324370 systemd[1]: Starting iscsid.service... Aug 13 00:45:27.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.333847 iscsid[741]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:45:27.333847 iscsid[741]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Aug 13 00:45:27.333847 iscsid[741]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 00:45:27.333847 iscsid[741]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:45:27.333847 iscsid[741]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:45:27.333847 iscsid[741]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:45:27.333847 iscsid[741]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:45:27.326753 ignition[732]: Stage: kargs Aug 13 00:45:27.330137 systemd[1]: Started iscsid.service. Aug 13 00:45:27.326849 ignition[732]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:27.331713 systemd[1]: Finished ignition-kargs.service. Aug 13 00:45:27.326859 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:45:27.345821 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:45:27.328024 ignition[732]: kargs: kargs passed Aug 13 00:45:27.328089 ignition[732]: Ignition finished successfully Aug 13 00:45:27.352283 systemd[1]: Starting ignition-disks.service... Aug 13 00:45:27.354402 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:45:27.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.356415 systemd-networkd[730]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:45:27.356500 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:45:27.359962 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:45:27.361995 ignition[748]: Ignition 2.14.0 Aug 13 00:45:27.362002 ignition[748]: Stage: disks Aug 13 00:45:27.362220 systemd[1]: Reached target remote-fs.target. Aug 13 00:45:27.362152 ignition[748]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:27.362177 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:45:27.363593 ignition[748]: disks: disks passed Aug 13 00:45:27.363643 ignition[748]: Ignition finished successfully Aug 13 00:45:27.368324 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:45:27.370785 systemd[1]: Finished ignition-disks.service. Aug 13 00:45:27.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
terminal=? res=success' Aug 13 00:45:27.372449 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:45:27.374213 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:45:27.375759 systemd[1]: Reached target local-fs.target. Aug 13 00:45:27.377255 systemd[1]: Reached target sysinit.target. Aug 13 00:45:27.378709 systemd[1]: Reached target basic.target. Aug 13 00:45:27.380367 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:45:27.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.382574 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:45:27.391524 systemd-resolved[199]: Detected conflict on linux IN A 10.0.0.21 Aug 13 00:45:27.391536 systemd-resolved[199]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Aug 13 00:45:27.392450 systemd-fsck[763]: ROOT: clean, 629/553520 files, 56027/553472 blocks Aug 13 00:45:27.398123 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:45:27.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.400859 systemd[1]: Mounting sysroot.mount... Aug 13 00:45:27.408069 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:45:27.408907 systemd[1]: Mounted sysroot.mount. Aug 13 00:45:27.410254 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:45:27.411458 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:45:27.412727 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 13 00:45:27.412758 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:45:27.412777 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:45:27.414685 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:45:27.416558 systemd[1]: Starting initrd-setup-root.service... Aug 13 00:45:27.423362 initrd-setup-root[773]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:45:27.427311 initrd-setup-root[781]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:45:27.430938 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:45:27.434783 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:45:27.457373 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:45:27.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.459824 systemd[1]: Starting ignition-mount.service... Aug 13 00:45:27.461872 systemd[1]: Starting sysroot-boot.service... Aug 13 00:45:27.464241 bash[814]: umount: /sysroot/usr/share/oem: not mounted. 
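A little earlier in this stretch of the journal, iscsid warns that /etc/iscsi/initiatorname.iscsi is missing and spells out the expected InitiatorName layout. Below is a small sketch of composing a line in that shape; the domain, date and identifier are made-up illustration values, and writing the result into /etc/iscsi/initiatorname.iscsi is left out.

    # Sketch: build an iSCSI initiator name of the form the iscsid warning
    # above describes: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
    # The domain, date and identifier below are made-up illustration values.
    def initiator_name(domain: str, year: int, month: int, identifier: str = "") -> str:
        reversed_domain = ".".join(reversed(domain.split(".")))
        name = f"iqn.{year:04d}-{month:02d}.{reversed_domain}"
        if identifier:
            name += f":{identifier}"
        return f"InitiatorName={name}"

    if __name__ == "__main__":
        # Mirrors the shape of the log's own example (iqn.2001-04.com.redhat:fc6).
        print(initiator_name("example.com", 2001, 4, "fc6"))
        # -> InitiatorName=iqn.2001-04.com.example:fc6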
Aug 13 00:45:27.473489 ignition[815]: INFO : Ignition 2.14.0 Aug 13 00:45:27.473489 ignition[815]: INFO : Stage: mount Aug 13 00:45:27.475297 ignition[815]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:27.475297 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:45:27.477615 ignition[815]: INFO : mount: mount passed Aug 13 00:45:27.477615 ignition[815]: INFO : Ignition finished successfully Aug 13 00:45:27.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:27.478496 systemd[1]: Finished ignition-mount.service. Aug 13 00:45:27.481621 systemd[1]: Finished sysroot-boot.service. Aug 13 00:45:27.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:28.021844 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:45:28.031165 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (825) Aug 13 00:45:28.033398 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:45:28.033428 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:45:28.033443 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:45:28.038006 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:45:28.039792 systemd[1]: Starting ignition-files.service... Aug 13 00:45:28.055058 ignition[845]: INFO : Ignition 2.14.0 Aug 13 00:45:28.055058 ignition[845]: INFO : Stage: files Aug 13 00:45:28.056917 ignition[845]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:28.056917 ignition[845]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:45:28.056917 ignition[845]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:45:28.061465 ignition[845]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:45:28.061465 ignition[845]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:45:28.061465 ignition[845]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:45:28.061465 ignition[845]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:45:28.061465 ignition[845]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:45:28.060858 unknown[845]: wrote ssh authorized keys file for user: core Aug 13 00:45:28.072195 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:45:28.072195 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:45:28.072195 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:45:28.072195 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 00:45:28.117495 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:45:28.470639 systemd-networkd[730]: eth0: Gained IPv6LL Aug 13 00:45:28.578195 ignition[845]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:45:28.580753 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:45:28.582724 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:45:28.584619 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:45:28.587946 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:45:28.587946 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:45:28.587946 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:45:28.587946 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:45:28.587946 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:45:28.599188 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:45:28.599188 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:45:28.599188 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:45:28.599188 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:45:28.599188 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:45:28.599188 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 00:45:29.057647 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:45:29.781722 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:45:29.781722 ignition[845]: INFO : files: op(c): [started] processing unit "containerd.service" Aug 13 00:45:29.785613 ignition[845]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:45:29.785613 ignition[845]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:45:29.785613 ignition[845]: INFO : files: op(c): [finished] processing unit "containerd.service" Aug 13 00:45:29.785613 ignition[845]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Aug 13 00:45:29.785613 ignition[845]: INFO : files: op(e): op(f): [started] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:45:29.785613 ignition[845]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:45:29.785613 ignition[845]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Aug 13 00:45:29.797289 ignition[845]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Aug 13 00:45:29.797289 ignition[845]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 00:45:29.797289 ignition[845]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 00:45:29.797289 ignition[845]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Aug 13 00:45:29.797289 ignition[845]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:45:29.797289 ignition[845]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:45:29.797289 ignition[845]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 00:45:29.797289 ignition[845]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 00:45:30.082383 ignition[845]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 00:45:30.084646 ignition[845]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 00:45:30.089248 ignition[845]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:45:30.089248 ignition[845]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:45:30.089248 ignition[845]: INFO : files: files passed Aug 13 00:45:30.089248 ignition[845]: INFO : Ignition finished successfully Aug 13 00:45:30.105532 kernel: kauditd_printk_skb: 23 callbacks suppressed Aug 13 00:45:30.106468 kernel: audit: type=1130 audit(1755045930.094:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.092585 systemd[1]: Finished ignition-files.service. Aug 13 00:45:30.101358 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:45:30.106734 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:45:30.111618 initrd-setup-root-after-ignition[869]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Aug 13 00:45:30.114408 initrd-setup-root-after-ignition[872]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:45:30.113459 systemd[1]: Starting ignition-quench.service... Aug 13 00:45:30.118125 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Aug 13 00:45:30.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.120317 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:45:30.125826 kernel: audit: type=1130 audit(1755045930.120:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.120417 systemd[1]: Finished ignition-quench.service. Aug 13 00:45:30.134638 kernel: audit: type=1130 audit(1755045930.125:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.134671 kernel: audit: type=1131 audit(1755045930.128:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.128554 systemd[1]: Reached target ignition-complete.target. Aug 13 00:45:30.137642 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:45:30.157127 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:45:30.157251 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:45:30.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.160445 systemd[1]: Reached target initrd-fs.target. Aug 13 00:45:30.168590 kernel: audit: type=1130 audit(1755045930.160:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.168618 kernel: audit: type=1131 audit(1755045930.160:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.168551 systemd[1]: Reached target initrd.target. Aug 13 00:45:30.170402 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:45:30.173405 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:45:30.183823 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:45:30.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.186931 systemd[1]: Starting initrd-cleanup.service... 
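The files stage just above reports enabling prepare-helm.service and disabling coreos-metadata.service by adding and removing enablement symlinks under the new root. The sketch below shows that symlink mechanic in its simplest form; the multi-user.target.wants location is an assumption, since the log names the unit files but not the wants directory, and real preset handling does more bookkeeping than this.

    # Sketch: systemd-style enablement symlinks, as in the "setting preset to
    # enabled/disabled" and "removing enablement symlink(s)" entries above.
    # The multi-user.target.wants directory is an assumed install location.
    from pathlib import Path

    SYSROOT = Path("/sysroot")
    UNIT_DIR = SYSROOT / "etc/systemd/system"
    WANTS_DIR = UNIT_DIR / "multi-user.target.wants"

    def enable(unit: str) -> None:
        """Create the wants/ symlink that marks a unit as enabled."""
        WANTS_DIR.mkdir(parents=True, exist_ok=True)
        link = WANTS_DIR / unit
        if not link.is_symlink():
            link.symlink_to(UNIT_DIR / unit)

    def disable(unit: str) -> None:
        """Remove the wants/ symlink, leaving the unit file itself in place."""
        link = WANTS_DIR / unit
        if link.is_symlink():
            link.unlink()

Calling enable("prepare-helm.service") and disable("coreos-metadata.service") against a sysroot laid out like the one above would approximate the effect the log describes.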
Aug 13 00:45:30.190688 kernel: audit: type=1130 audit(1755045930.185:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.197109 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:45:30.198953 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:45:30.201024 systemd[1]: Stopped target timers.target. Aug 13 00:45:30.202917 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:45:30.204075 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:45:30.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.206112 systemd[1]: Stopped target initrd.target. Aug 13 00:45:30.210393 kernel: audit: type=1131 audit(1755045930.205:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.210530 systemd[1]: Stopped target basic.target. Aug 13 00:45:30.212307 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:45:30.214384 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:45:30.216428 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:45:30.218537 systemd[1]: Stopped target remote-fs.target. Aug 13 00:45:30.220432 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:45:30.222470 systemd[1]: Stopped target sysinit.target. Aug 13 00:45:30.224312 systemd[1]: Stopped target local-fs.target. Aug 13 00:45:30.226113 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:45:30.228025 systemd[1]: Stopped target swap.target. Aug 13 00:45:30.229862 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:45:30.231107 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:45:30.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.280596 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:45:30.285001 kernel: audit: type=1131 audit(1755045930.280:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.285100 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:45:30.286307 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:45:30.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.288297 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:45:30.292105 kernel: audit: type=1131 audit(1755045930.288:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.288422 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:45:30.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:45:30.294189 systemd[1]: Stopped target paths.target. Aug 13 00:45:30.295905 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:45:30.301103 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:45:30.303237 systemd[1]: Stopped target slices.target. Aug 13 00:45:30.304972 systemd[1]: Stopped target sockets.target. Aug 13 00:45:30.306775 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:45:30.307785 systemd[1]: Closed iscsid.socket. Aug 13 00:45:30.309408 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:45:30.310362 systemd[1]: Closed iscsiuio.socket. Aug 13 00:45:30.312052 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:45:30.313410 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:45:30.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.315723 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:45:30.316838 systemd[1]: Stopped ignition-files.service. Aug 13 00:45:30.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.319854 systemd[1]: Stopping ignition-mount.service... Aug 13 00:45:30.322007 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:45:30.323421 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:45:30.324474 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:45:30.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.326187 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:45:30.327232 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:45:30.328841 ignition[887]: INFO : Ignition 2.14.0 Aug 13 00:45:30.328841 ignition[887]: INFO : Stage: umount Aug 13 00:45:30.328841 ignition[887]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:30.328841 ignition[887]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:45:30.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.332833 ignition[887]: INFO : umount: umount passed Aug 13 00:45:30.332833 ignition[887]: INFO : Ignition finished successfully Aug 13 00:45:30.335549 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:45:30.336505 systemd[1]: Stopped ignition-mount.service. Aug 13 00:45:30.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.338328 systemd[1]: Stopped target network.target. Aug 13 00:45:30.339820 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:45:30.339859 systemd[1]: Stopped ignition-disks.service. 
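The SERVICE_START and SERVICE_STOP records running through this section are flat key=value audit events with a quoted msg='...' payload. The following parsing sketch assumes only the space-separated layout visible in these lines; the bare record-type token (for example SERVICE_STOP) carries no '=' and is simply skipped.

    # Sketch: split an audit record like the SERVICE_START/SERVICE_STOP lines
    # above into key=value fields. Assumes the flat space-separated layout
    # shown in this log; the quoted msg='...' payload is parsed recursively,
    # and tokens without '=' (timestamp words, the record type) are skipped.
    import shlex

    def parse_audit(record: str) -> dict:
        fields = {}
        for token in shlex.split(record):      # shlex keeps msg='...' as one token
            if "=" not in token:
                continue
            key, _, value = token.partition("=")
            if key == "msg" and "=" in value:
                fields[key] = parse_audit(value)
            else:
                fields[key] = value
        return fields

    if __name__ == "__main__":
        sample = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
                  "subj=kernel msg='unit=ignition-files comm=\"systemd\" "
                  "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")
        print(parse_audit(sample))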
Aug 13 00:45:30.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.342241 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:45:30.342280 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:45:30.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.344656 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:45:30.345556 systemd[1]: Stopped ignition-setup.service. Aug 13 00:45:30.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.347169 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:45:30.348870 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:45:30.350670 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:45:30.351607 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:45:30.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.358132 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:45:30.359494 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:45:30.360445 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:45:30.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.363165 systemd-networkd[730]: eth0: DHCPv6 lease lost Aug 13 00:45:30.364526 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:45:30.365000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:45:30.365636 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:45:30.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.367712 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:45:30.367745 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:45:30.370775 systemd[1]: Stopping network-cleanup.service... Aug 13 00:45:30.371000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:45:30.372442 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:45:30.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.372495 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:45:30.374374 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:45:30.374411 systemd[1]: Stopped systemd-sysctl.service. 
Aug 13 00:45:30.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.377697 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:45:30.378635 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:45:30.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.380420 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:45:30.382875 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:45:30.386319 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:45:30.387289 systemd[1]: Stopped network-cleanup.service. Aug 13 00:45:30.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.390806 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:45:30.391798 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:45:30.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.393646 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:45:30.393684 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:45:30.396151 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:45:30.396182 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:45:30.398622 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:45:30.398659 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:45:30.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.400942 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:45:30.400973 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:45:30.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.420973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:45:30.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.421694 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:45:30.425105 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:45:30.427017 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:45:30.427087 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Aug 13 00:45:30.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.429925 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Aug 13 00:45:30.430941 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:45:30.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.432560 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:45:30.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.432596 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:45:30.435997 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:45:30.437787 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:45:30.438908 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:45:30.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.485801 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:45:30.485913 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:45:30.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.488430 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:45:30.490199 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:45:30.490242 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:45:30.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:30.493477 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:45:30.499995 systemd[1]: Switching root. Aug 13 00:45:30.501000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:45:30.501000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:45:30.503000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:45:30.503000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:45:30.503000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:45:30.521328 iscsid[741]: iscsid shutting down. Aug 13 00:45:30.522091 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Aug 13 00:45:30.522151 systemd-journald[197]: Journal stopped Aug 13 00:45:35.541278 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:45:35.541366 kernel: SELinux: Class anon_inode not defined in policy. 
Aug 13 00:45:35.541384 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:45:35.541396 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:45:35.541408 kernel: SELinux: policy capability open_perms=1 Aug 13 00:45:35.541420 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:45:35.541437 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:45:35.541449 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:45:35.541461 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:45:35.541474 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:45:35.541498 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:45:35.541512 systemd[1]: Successfully loaded SELinux policy in 40.624ms. Aug 13 00:45:35.541538 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.064ms. Aug 13 00:45:35.541561 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:45:35.541575 systemd[1]: Detected virtualization kvm. Aug 13 00:45:35.541588 systemd[1]: Detected architecture x86-64. Aug 13 00:45:35.541608 systemd[1]: Detected first boot. Aug 13 00:45:35.541621 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:45:35.541633 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 00:45:35.541647 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:45:35.541661 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:45:35.541686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:45:35.541704 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:45:35.541727 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:45:35.541741 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 13 00:45:35.541755 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:45:35.541768 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:45:35.541788 systemd[1]: Created slice system-getty.slice. Aug 13 00:45:35.541803 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:45:35.541817 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:45:35.541830 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:45:35.541851 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:45:35.541865 systemd[1]: Created slice user.slice. Aug 13 00:45:35.541880 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:45:35.541894 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:45:35.541908 systemd[1]: Set up automount boot.automount. Aug 13 00:45:35.541941 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:45:35.541955 systemd[1]: Reached target integritysetup.target. 
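After the switch to the real root, the journal above notes that the machine ID is initialized from the VM UUID on first boot and that an SELinux policy was loaded in roughly 40 ms. Below is a read-only sketch of checking both facts on a running system; /sys/fs/selinux as the selinuxfs mount point is an assumption, not something the log states.

    # Sketch: read-only checks for two facts the journal above reports, the
    # machine ID (seeded from the VM UUID on first boot) and whether an
    # SELinux policy is loaded. /sys/fs/selinux is the assumed selinuxfs mount.
    from pathlib import Path

    def machine_id() -> str:
        """Return the machine ID exactly as systemd stores it."""
        return Path("/etc/machine-id").read_text().strip()

    def selinux_enforcing():
        """True/False when a policy is loaded, None when selinuxfs is absent."""
        enforce = Path("/sys/fs/selinux/enforce")
        if not enforce.exists():
            return None
        return enforce.read_text().strip() == "1"

    if __name__ == "__main__":
        print("machine-id:", machine_id())
        print("enforcing:", selinux_enforcing())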
Aug 13 00:45:35.541967 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:45:35.541984 systemd[1]: Reached target remote-fs.target. Aug 13 00:45:35.542005 systemd[1]: Reached target slices.target. Aug 13 00:45:35.542018 systemd[1]: Reached target swap.target. Aug 13 00:45:35.542057 systemd[1]: Reached target torcx.target. Aug 13 00:45:35.542072 systemd[1]: Reached target veritysetup.target. Aug 13 00:45:35.542091 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:45:35.542104 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:45:35.542118 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:45:35.542134 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:45:35.542149 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:45:35.542170 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:45:35.542187 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:45:35.542200 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:45:35.542213 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:45:35.542230 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:45:35.542243 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:45:35.542257 systemd[1]: Mounting media.mount... Aug 13 00:45:35.542277 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:35.542291 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:45:35.542312 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:45:35.542324 systemd[1]: Mounting tmp.mount... Aug 13 00:45:35.542337 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:45:35.542350 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:45:35.542363 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:45:35.542386 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:45:35.542399 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:45:35.542413 systemd[1]: Starting modprobe@drm.service... Aug 13 00:45:35.542427 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:45:35.542448 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:45:35.542464 systemd[1]: Starting modprobe@loop.service... Aug 13 00:45:35.542477 kernel: fuse: init (API version 7.34) Aug 13 00:45:35.542491 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:45:35.542512 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:45:35.542526 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 00:45:35.542539 systemd[1]: Starting systemd-journald.service... Aug 13 00:45:35.542552 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:45:35.542565 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:45:35.542583 kernel: loop: module loaded Aug 13 00:45:35.542597 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:45:35.542610 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:45:35.542632 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:35.542649 systemd[1]: Mounted dev-hugepages.mount. Aug 13 00:45:35.542668 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:45:35.542681 systemd[1]: Mounted media.mount. 
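The kernel lines above ("fuse: init (API version 7.34)", "loop: module loaded") appear as the modprobe@ template units pull their modules in. As a small illustration, the sketch below checks which of those names show up in /proc/modules, whose first whitespace-separated field on each line is the module name; drivers built into the kernel never appear there, so absence is not proof that a driver is missing.

    # Sketch: list which of the modules touched by the modprobe@ units above
    # are visible in /proc/modules (first field per line is the module name).
    # Built-in drivers do not appear there, so "not listed" is not "missing".
    def loaded_modules() -> set:
        with open("/proc/modules") as f:
            return {line.split()[0] for line in f if line.strip()}

    if __name__ == "__main__":
        present = loaded_modules()
        for name in ("configfs", "dm_mod", "fuse", "loop", "efi_pstore"):
            state = "loaded" if name in present else "not listed (absent or built in)"
            print(f"{name}: {state}")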
Aug 13 00:45:35.542695 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:45:35.542718 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:45:35.542740 systemd[1]: Mounted tmp.mount. Aug 13 00:45:35.542752 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:45:35.542764 kernel: kauditd_printk_skb: 48 callbacks suppressed Aug 13 00:45:35.542779 kernel: audit: type=1130 audit(1755045935.529:85): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.542793 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:45:35.542810 kernel: audit: type=1305 audit(1755045935.539:86): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:45:35.542825 kernel: audit: type=1300 audit(1755045935.539:86): arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc321b2c30 a2=4000 a3=7ffc321b2ccc items=0 ppid=1 pid=1050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:35.542848 systemd-journald[1050]: Journal started Aug 13 00:45:35.542905 systemd-journald[1050]: Runtime Journal (/run/log/journal/3e5ea8801134437a8be3fc4d285a3da2) is 6.0M, max 48.4M, 42.4M free. Aug 13 00:45:35.062000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:45:35.062000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 00:45:35.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.539000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:45:35.539000 audit[1050]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc321b2c30 a2=4000 a3=7ffc321b2ccc items=0 ppid=1 pid=1050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:35.539000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:45:35.549314 kernel: audit: type=1327 audit(1755045935.539:86): proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:45:35.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.556063 kernel: audit: type=1130 audit(1755045935.550:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.556107 systemd[1]: Started systemd-journald.service. 
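With systemd-journald.service started, entries like the ones in this transcript become queryable through the journal API. The sketch below uses the python-systemd bindings, which is purely an assumption about the environment (nothing in this log shows them installed); it prints the messages one unit logged during the current boot.

    # Sketch: read one unit's messages from the current boot via the journal
    # API. The python-systemd bindings are an assumed dependency; the unit
    # name below is simply the example closest to hand in this log.
    from systemd import journal

    def unit_messages(unit: str):
        """Yield MESSAGE fields recorded for the given unit in this boot."""
        reader = journal.Reader()
        reader.this_boot()
        reader.add_match(_SYSTEMD_UNIT=unit)
        for entry in reader:
            yield entry.get("MESSAGE", "")

    if __name__ == "__main__":
        for line in unit_messages("systemd-journald.service"):
            print(line)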
Aug 13 00:45:35.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.569761 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:45:35.570440 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:45:35.575138 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:45:35.575389 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:45:35.576664 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:45:35.576840 systemd[1]: Finished modprobe@drm.service. Aug 13 00:45:35.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.586203 kernel: audit: type=1130 audit(1755045935.568:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.586269 kernel: audit: type=1130 audit(1755045935.574:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.586286 kernel: audit: type=1131 audit(1755045935.574:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.586306 kernel: audit: type=1130 audit(1755045935.576:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.586323 kernel: audit: type=1131 audit(1755045935.576:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:45:35.584438 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:45:35.584631 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:45:35.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.600315 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:45:35.600526 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:45:35.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.603665 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:45:35.604584 systemd[1]: Finished modprobe@loop.service. Aug 13 00:45:35.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.606397 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:45:35.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.607963 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:45:35.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.609480 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:45:35.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.610980 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:45:35.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.612754 systemd[1]: Reached target network-pre.target. Aug 13 00:45:35.615513 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:45:35.618220 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:45:35.619265 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Aug 13 00:45:35.624424 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:45:35.627626 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:45:35.629047 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:45:35.630382 systemd[1]: Starting systemd-random-seed.service... Aug 13 00:45:35.631805 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:45:35.635384 systemd-journald[1050]: Time spent on flushing to /var/log/journal/3e5ea8801134437a8be3fc4d285a3da2 is 29.586ms for 1109 entries. Aug 13 00:45:35.635384 systemd-journald[1050]: System Journal (/var/log/journal/3e5ea8801134437a8be3fc4d285a3da2) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:45:35.682708 systemd-journald[1050]: Received client request to flush runtime journal. Aug 13 00:45:35.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.636341 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:45:35.645048 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:45:35.655928 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:45:35.663990 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:45:35.690532 udevadm[1073]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:45:35.667689 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:45:35.669124 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:45:35.676996 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:45:35.689461 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:45:35.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.697942 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:45:35.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.760505 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:45:35.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:35.763988 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:45:35.828273 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:45:35.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:36.750086 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:45:36.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:36.752676 systemd[1]: Starting systemd-udevd.service... 
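The journald status entries above give the runtime journal's size budget and the cost of flushing it to /var/log/journal: 29.586 ms for 1109 entries. A quick arithmetic check of the per-entry cost those two numbers imply, roughly 27 microseconds per entry:

    # Sketch: per-entry flush cost implied by the "Time spent on flushing ...
    # is 29.586ms for 1109 entries" line above. Pure arithmetic, no I/O.
    flush_ms = 29.586
    entries = 1109
    per_entry_us = flush_ms * 1000 / entries
    print(f"~{per_entry_us:.1f} microseconds per flushed journal entry")  # ~26.7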
Aug 13 00:45:36.771568 systemd-udevd[1083]: Using default interface naming scheme 'v252'. Aug 13 00:45:36.787581 systemd[1]: Started systemd-udevd.service. Aug 13 00:45:36.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:36.793815 systemd[1]: Starting systemd-networkd.service... Aug 13 00:45:36.801116 systemd[1]: Starting systemd-userdbd.service... Aug 13 00:45:36.813536 systemd[1]: Found device dev-ttyS0.device. Aug 13 00:45:36.852461 systemd[1]: Started systemd-userdbd.service. Aug 13 00:45:36.858463 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 00:45:36.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:36.868269 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:45:36.945439 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:45:36.886000 audit[1087]: AVC avc: denied { confidentiality } for pid=1087 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:45:36.886000 audit[1087]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55cff35e1800 a1=338ac a2=7fe758453bc5 a3=5 items=110 ppid=1083 pid=1087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:36.886000 audit: CWD cwd="/" Aug 13 00:45:36.886000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=1 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=2 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=3 name=(null) inode=12954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=4 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=5 name=(null) inode=12955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=6 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=7 name=(null) inode=12956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: 
PATH item=8 name=(null) inode=12956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=9 name=(null) inode=12957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=10 name=(null) inode=12956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=11 name=(null) inode=12958 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=12 name=(null) inode=12956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=13 name=(null) inode=12959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=14 name=(null) inode=12956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=15 name=(null) inode=12960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=16 name=(null) inode=12956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=17 name=(null) inode=12961 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=18 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=19 name=(null) inode=12962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=20 name=(null) inode=12962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=21 name=(null) inode=12963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=22 name=(null) inode=12962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=23 name=(null) inode=12964 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=24 name=(null) inode=12962 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=25 name=(null) inode=12965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=26 name=(null) inode=12962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=27 name=(null) inode=12966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=28 name=(null) inode=12962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=29 name=(null) inode=12967 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=30 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=31 name=(null) inode=12968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=32 name=(null) inode=12968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=33 name=(null) inode=12969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=34 name=(null) inode=12968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=35 name=(null) inode=12970 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=36 name=(null) inode=12968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=37 name=(null) inode=12971 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=38 name=(null) inode=12968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=39 name=(null) inode=12972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=40 name=(null) inode=12968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=41 name=(null) inode=12973 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=42 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=43 name=(null) inode=12974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=44 name=(null) inode=12974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=45 name=(null) inode=12975 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=46 name=(null) inode=12974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=47 name=(null) inode=12976 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=48 name=(null) inode=12974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=49 name=(null) inode=12977 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=50 name=(null) inode=12974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=51 name=(null) inode=12978 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=52 name=(null) inode=12974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=53 name=(null) inode=12979 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=55 name=(null) inode=12980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=56 name=(null) inode=12980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH 
item=57 name=(null) inode=12981 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=58 name=(null) inode=12980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=59 name=(null) inode=12982 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=60 name=(null) inode=12980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=61 name=(null) inode=12983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=62 name=(null) inode=12983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=63 name=(null) inode=12984 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=64 name=(null) inode=12983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=65 name=(null) inode=12985 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=66 name=(null) inode=12983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=67 name=(null) inode=12986 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=68 name=(null) inode=12983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=69 name=(null) inode=12987 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=70 name=(null) inode=12983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=71 name=(null) inode=12988 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=72 name=(null) inode=12980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=73 name=(null) inode=12989 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=74 name=(null) inode=12989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=75 name=(null) inode=12990 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=76 name=(null) inode=12989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=77 name=(null) inode=12991 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=78 name=(null) inode=12989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=79 name=(null) inode=12992 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=80 name=(null) inode=12989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=81 name=(null) inode=12993 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=82 name=(null) inode=12989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=83 name=(null) inode=12994 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=84 name=(null) inode=12980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=85 name=(null) inode=12995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=86 name=(null) inode=12995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=87 name=(null) inode=12996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=88 name=(null) inode=12995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=89 name=(null) inode=12997 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=90 name=(null) inode=12995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=91 name=(null) inode=12998 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=92 name=(null) inode=12995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=93 name=(null) inode=12999 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=94 name=(null) inode=12995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=95 name=(null) inode=13000 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=96 name=(null) inode=12980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=97 name=(null) inode=13001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=98 name=(null) inode=13001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=99 name=(null) inode=13002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=100 name=(null) inode=13001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=101 name=(null) inode=13003 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=102 name=(null) inode=13001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=103 name=(null) inode=13004 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=104 name=(null) inode=13001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=105 name=(null) inode=13005 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 
audit: PATH item=106 name=(null) inode=13001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=107 name=(null) inode=13006 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PATH item=109 name=(null) inode=13007 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:45:36.886000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:45:36.997064 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:45:37.000790 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Aug 13 00:45:37.007211 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 00:45:37.007349 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 00:45:37.007469 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:45:37.007563 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:45:37.025648 systemd-networkd[1102]: lo: Link UP Aug 13 00:45:37.026047 systemd-networkd[1102]: lo: Gained carrier Aug 13 00:45:37.026636 systemd-networkd[1102]: Enumeration completed Aug 13 00:45:37.026833 systemd[1]: Started systemd-networkd.service. Aug 13 00:45:37.027130 systemd-networkd[1102]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:45:37.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.029157 systemd-networkd[1102]: eth0: Link UP Aug 13 00:45:37.029302 systemd-networkd[1102]: eth0: Gained carrier Aug 13 00:45:37.064229 systemd-networkd[1102]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:45:37.069194 kernel: kvm: Nested Virtualization enabled Aug 13 00:45:37.069252 kernel: SVM: kvm: Nested Paging enabled Aug 13 00:45:37.070487 kernel: SVM: Virtual VMLOAD VMSAVE supported Aug 13 00:45:37.070552 kernel: SVM: Virtual GIF supported Aug 13 00:45:37.086061 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:45:37.106692 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:45:37.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.109682 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:45:37.120007 lvm[1120]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:45:37.148550 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:45:37.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.149841 systemd[1]: Reached target cryptsetup.target. 
Aug 13 00:45:37.152579 systemd[1]: Starting lvm2-activation.service... Aug 13 00:45:37.158197 lvm[1122]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:45:37.194519 systemd[1]: Finished lvm2-activation.service. Aug 13 00:45:37.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.195861 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:45:37.197076 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:45:37.197115 systemd[1]: Reached target local-fs.target. Aug 13 00:45:37.198208 systemd[1]: Reached target machines.target. Aug 13 00:45:37.201194 systemd[1]: Starting ldconfig.service... Aug 13 00:45:37.202594 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.202671 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:45:37.204264 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:45:37.206623 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:45:37.209858 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:45:37.212789 systemd[1]: Starting systemd-sysext.service... Aug 13 00:45:37.214673 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1125 (bootctl) Aug 13 00:45:37.216359 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:45:37.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.220724 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:45:37.225141 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:45:37.228744 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:45:37.228960 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:45:37.253064 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 00:45:37.259751 systemd-fsck[1134]: fsck.fat 4.2 (2021-01-31) Aug 13 00:45:37.259751 systemd-fsck[1134]: /dev/vda1: 790 files, 119344/258078 clusters Aug 13 00:45:37.261582 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:45:37.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.264531 systemd[1]: Mounting boot.mount... Aug 13 00:45:37.282946 systemd[1]: Mounted boot.mount. Aug 13 00:45:37.295465 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:45:37.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.495020 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Aug 13 00:45:37.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.495689 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:45:37.499048 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:45:37.519046 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 00:45:37.525754 (sd-sysext)[1147]: Using extensions 'kubernetes'. Aug 13 00:45:37.526141 (sd-sysext)[1147]: Merged extensions into '/usr'. Aug 13 00:45:37.608732 ldconfig[1124]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:45:37.613948 systemd[1]: Finished ldconfig.service. Aug 13 00:45:37.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.622468 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:37.624179 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:45:37.625016 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.626169 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:45:37.627937 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:45:37.630764 systemd[1]: Starting modprobe@loop.service... Aug 13 00:45:37.631574 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.631682 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:45:37.631782 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:37.634503 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:45:37.635545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:45:37.635709 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:45:37.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.636909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:45:37.637070 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:45:37.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.638295 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Aug 13 00:45:37.638436 systemd[1]: Finished modprobe@loop.service. Aug 13 00:45:37.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.639559 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:45:37.639653 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.641201 systemd[1]: Finished systemd-sysext.service. Aug 13 00:45:37.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.643434 systemd[1]: Starting ensure-sysext.service... Aug 13 00:45:37.645092 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:45:37.648975 systemd[1]: Reloading. Aug 13 00:45:37.655560 systemd-tmpfiles[1162]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:45:37.656746 systemd-tmpfiles[1162]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:45:37.658396 systemd-tmpfiles[1162]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:45:37.724216 /usr/lib/systemd/system-generators/torcx-generator[1182]: time="2025-08-13T00:45:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:45:37.724242 /usr/lib/systemd/system-generators/torcx-generator[1182]: time="2025-08-13T00:45:37Z" level=info msg="torcx already run" Aug 13 00:45:37.790522 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:45:37.790540 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:45:37.809745 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:45:37.864329 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:45:37.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.868467 systemd[1]: Starting audit-rules.service... Aug 13 00:45:37.870502 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:45:37.872480 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:45:37.875240 systemd[1]: Starting systemd-resolved.service... Aug 13 00:45:37.877469 systemd[1]: Starting systemd-timesyncd.service... 
Aug 13 00:45:37.879358 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:45:37.880794 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:45:37.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.884203 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:45:37.885991 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.886000 audit[1239]: SYSTEM_BOOT pid=1239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.888790 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:45:37.890975 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:45:37.893080 systemd[1]: Starting modprobe@loop.service... Aug 13 00:45:37.893827 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.893978 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:45:37.894098 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:45:37.894971 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:45:37.895128 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:45:37.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.896376 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:45:37.896512 systemd[1]: Finished modprobe@loop.service. Aug 13 00:45:37.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.899258 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.900376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:45:37.900591 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:45:37.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:45:37.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.902305 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:45:37.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.906096 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.907674 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:45:37.909761 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:45:37.911940 systemd[1]: Starting modprobe@loop.service... Aug 13 00:45:37.913047 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.913195 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:45:37.913316 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:45:37.914061 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:45:37.914212 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:45:37.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.915934 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:45:37.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.917617 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:45:37.917743 systemd[1]: Finished modprobe@loop.service. Aug 13 00:45:37.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.919090 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.920661 systemd[1]: Starting systemd-update-done.service... Aug 13 00:45:37.924802 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:45:37.925073 systemd[1]: Finished modprobe@efi_pstore.service. 
Aug 13 00:45:37.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.926473 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.927953 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:45:37.930290 systemd[1]: Starting modprobe@drm.service... Aug 13 00:45:37.932502 systemd[1]: Starting modprobe@loop.service... Aug 13 00:45:37.933434 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.933572 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:45:37.934937 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:45:37.936119 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:45:37.936265 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:45:37.938111 systemd[1]: Finished systemd-update-done.service. Aug 13 00:45:37.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.940658 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:45:37.940820 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:45:37.943928 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:45:37.944103 systemd[1]: Finished modprobe@drm.service. Aug 13 00:45:37.945211 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:45:37.945352 systemd[1]: Finished modprobe@loop.service. Aug 13 00:45:37.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:45:37.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.947665 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.948752 systemd[1]: Finished ensure-sysext.service. Aug 13 00:45:37.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:37.951000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:45:37.951000 audit[1271]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc71aaf7e0 a2=420 a3=0 items=0 ppid=1232 pid=1271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:37.951000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:45:37.951660 augenrules[1271]: No rules Aug 13 00:45:37.952287 systemd[1]: Finished audit-rules.service. Aug 13 00:45:37.971482 systemd-resolved[1235]: Positive Trust Anchors: Aug 13 00:45:37.971494 systemd-resolved[1235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:45:37.971522 systemd-resolved[1235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:45:37.974763 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:45:37.975913 systemd[1]: Reached target time-set.target. Aug 13 00:45:37.976118 systemd-timesyncd[1238]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 00:45:37.976168 systemd-timesyncd[1238]: Initial clock synchronization to Wed 2025-08-13 00:45:38.014535 UTC. Aug 13 00:45:37.978973 systemd-resolved[1235]: Defaulting to hostname 'linux'. Aug 13 00:45:37.980397 systemd[1]: Started systemd-resolved.service. Aug 13 00:45:37.981240 systemd[1]: Reached target network.target. Aug 13 00:45:37.982001 systemd[1]: Reached target nss-lookup.target. Aug 13 00:45:37.982773 systemd[1]: Reached target sysinit.target. Aug 13 00:45:37.983588 systemd[1]: Started motdgen.path. Aug 13 00:45:37.984295 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:45:37.985480 systemd[1]: Started logrotate.timer. Aug 13 00:45:37.986265 systemd[1]: Started mdadm.timer. Aug 13 00:45:37.986921 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:45:37.987726 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:45:37.987797 systemd[1]: Reached target paths.target. Aug 13 00:45:37.988519 systemd[1]: Reached target timers.target. Aug 13 00:45:37.989482 systemd[1]: Listening on dbus.socket. Aug 13 00:45:37.991242 systemd[1]: Starting docker.socket... 
Aug 13 00:45:37.992722 systemd[1]: Listening on sshd.socket. Aug 13 00:45:37.993507 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:45:37.993763 systemd[1]: Listening on docker.socket. Aug 13 00:45:37.994512 systemd[1]: Reached target sockets.target. Aug 13 00:45:37.995276 systemd[1]: Reached target basic.target. Aug 13 00:45:37.996141 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:45:37.996183 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.996200 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:45:37.997200 systemd[1]: Starting containerd.service... Aug 13 00:45:37.998934 systemd[1]: Starting dbus.service... Aug 13 00:45:38.000610 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:45:38.002496 systemd[1]: Starting extend-filesystems.service... Aug 13 00:45:38.003435 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:45:38.004663 systemd[1]: Starting motdgen.service... Aug 13 00:45:38.005256 jq[1292]: false Aug 13 00:45:38.006593 systemd[1]: Starting prepare-helm.service... Aug 13 00:45:38.010078 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:45:38.012061 systemd[1]: Starting sshd-keygen.service... Aug 13 00:45:38.014816 systemd[1]: Starting systemd-logind.service... Aug 13 00:45:38.015573 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:45:38.015643 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:45:38.019671 systemd[1]: Starting update-engine.service... Aug 13 00:45:38.021908 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:45:38.025972 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:45:38.026320 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:45:38.030937 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:45:38.038514 jq[1313]: true Aug 13 00:45:38.038613 extend-filesystems[1293]: Found loop1 Aug 13 00:45:38.038613 extend-filesystems[1293]: Found sr0 Aug 13 00:45:38.038613 extend-filesystems[1293]: Found vda Aug 13 00:45:38.038613 extend-filesystems[1293]: Found vda1 Aug 13 00:45:38.038613 extend-filesystems[1293]: Found vda2 Aug 13 00:45:38.038613 extend-filesystems[1293]: Found vda3 Aug 13 00:45:38.038613 extend-filesystems[1293]: Found usr Aug 13 00:45:38.038613 extend-filesystems[1293]: Found vda4 Aug 13 00:45:38.038613 extend-filesystems[1293]: Found vda6 Aug 13 00:45:38.038613 extend-filesystems[1293]: Found vda7 Aug 13 00:45:38.038613 extend-filesystems[1293]: Found vda9 Aug 13 00:45:38.038613 extend-filesystems[1293]: Checking size of /dev/vda9 Aug 13 00:45:38.031200 systemd[1]: Finished motdgen.service. Aug 13 00:45:38.043862 dbus-daemon[1291]: [system] SELinux support is enabled Aug 13 00:45:38.064801 tar[1318]: linux-amd64/helm Aug 13 00:45:38.033394 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:45:38.033654 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Aug 13 00:45:38.065854 jq[1323]: true Aug 13 00:45:38.044432 systemd[1]: Started dbus.service. Aug 13 00:45:38.047351 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:45:38.047382 systemd[1]: Reached target system-config.target. Aug 13 00:45:38.048416 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:45:38.048436 systemd[1]: Reached target user-config.target. Aug 13 00:45:38.087350 extend-filesystems[1293]: Resized partition /dev/vda9 Aug 13 00:45:38.090082 update_engine[1310]: I0813 00:45:38.089711 1310 main.cc:92] Flatcar Update Engine starting Aug 13 00:45:38.092434 systemd[1]: Started update-engine.service. Aug 13 00:45:38.092685 update_engine[1310]: I0813 00:45:38.092517 1310 update_check_scheduler.cc:74] Next update check in 10m16s Aug 13 00:45:38.094023 extend-filesystems[1330]: resize2fs 1.46.5 (30-Dec-2021) Aug 13 00:45:38.096922 systemd[1]: Started locksmithd.service. Aug 13 00:45:38.099080 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 00:45:38.114605 systemd-logind[1304]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 00:45:38.114634 systemd-logind[1304]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:45:38.116522 systemd-logind[1304]: New seat seat0. Aug 13 00:45:38.121730 systemd[1]: Started systemd-logind.service. Aug 13 00:45:38.146273 env[1320]: time="2025-08-13T00:45:38.146220763Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:45:38.152060 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 00:45:38.230864 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:38.246571 env[1320]: time="2025-08-13T00:45:38.237398154Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:45:38.230921 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:38.247491 env[1320]: time="2025-08-13T00:45:38.247459703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:45:38.254948 extend-filesystems[1330]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:45:38.254948 extend-filesystems[1330]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:45:38.254948 extend-filesystems[1330]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 00:45:38.259643 env[1320]: time="2025-08-13T00:45:38.252044217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:45:38.259643 env[1320]: time="2025-08-13T00:45:38.252077780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:45:38.259643 env[1320]: time="2025-08-13T00:45:38.252358137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:45:38.259643 env[1320]: time="2025-08-13T00:45:38.252372800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:45:38.259643 env[1320]: time="2025-08-13T00:45:38.252388687Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:45:38.259643 env[1320]: time="2025-08-13T00:45:38.252404425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:45:38.259643 env[1320]: time="2025-08-13T00:45:38.252513681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:45:38.259643 env[1320]: time="2025-08-13T00:45:38.252751828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:45:38.259643 env[1320]: time="2025-08-13T00:45:38.252916220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:45:38.259643 env[1320]: time="2025-08-13T00:45:38.252934759Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:45:38.259883 bash[1348]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:45:38.251381 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:45:38.260160 extend-filesystems[1293]: Resized filesystem in /dev/vda9 Aug 13 00:45:38.261173 env[1320]: time="2025-08-13T00:45:38.252983849Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:45:38.261173 env[1320]: time="2025-08-13T00:45:38.253004657Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:45:38.251629 systemd[1]: Finished extend-filesystems.service. Aug 13 00:45:38.253108 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:45:38.263575 env[1320]: time="2025-08-13T00:45:38.263545362Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:45:38.263624 env[1320]: time="2025-08-13T00:45:38.263588225Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:45:38.263624 env[1320]: time="2025-08-13T00:45:38.263613182Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:45:38.263849 env[1320]: time="2025-08-13T00:45:38.263654608Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:45:38.263849 env[1320]: time="2025-08-13T00:45:38.263672907Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:45:38.263849 env[1320]: time="2025-08-13T00:45:38.263692520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:45:38.263849 env[1320]: time="2025-08-13T00:45:38.263704080Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Aug 13 00:45:38.263849 env[1320]: time="2025-08-13T00:45:38.263723533Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:45:38.263849 env[1320]: time="2025-08-13T00:45:38.263734871Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:45:38.263849 env[1320]: time="2025-08-13T00:45:38.263751773Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:45:38.263849 env[1320]: time="2025-08-13T00:45:38.263767330Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:45:38.263849 env[1320]: time="2025-08-13T00:45:38.263779050Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:45:38.264021 env[1320]: time="2025-08-13T00:45:38.263881950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:45:38.264021 env[1320]: time="2025-08-13T00:45:38.263967283Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:45:38.264428 env[1320]: time="2025-08-13T00:45:38.264370817Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:45:38.264428 env[1320]: time="2025-08-13T00:45:38.264402341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.264428 env[1320]: time="2025-08-13T00:45:38.264419052Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:45:38.264532 env[1320]: time="2025-08-13T00:45:38.264494726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.264532 env[1320]: time="2025-08-13T00:45:38.264507129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.264532 env[1320]: time="2025-08-13T00:45:38.264523880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.264587 env[1320]: time="2025-08-13T00:45:38.264533853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.264587 env[1320]: time="2025-08-13T00:45:38.264544599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.264587 env[1320]: time="2025-08-13T00:45:38.264573874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.264648 env[1320]: time="2025-08-13T00:45:38.264586176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.264648 env[1320]: time="2025-08-13T00:45:38.264600829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.264648 env[1320]: time="2025-08-13T00:45:38.264612438Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:45:38.264770 env[1320]: time="2025-08-13T00:45:38.264746360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Aug 13 00:45:38.264770 env[1320]: time="2025-08-13T00:45:38.264770624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.264834 env[1320]: time="2025-08-13T00:45:38.264781028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.264834 env[1320]: time="2025-08-13T00:45:38.264790971Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:45:38.264834 env[1320]: time="2025-08-13T00:45:38.264807872Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:45:38.264834 env[1320]: time="2025-08-13T00:45:38.264819653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:45:38.265018 env[1320]: time="2025-08-13T00:45:38.264845524Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:45:38.265018 env[1320]: time="2025-08-13T00:45:38.264887824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:45:38.265252 env[1320]: time="2025-08-13T00:45:38.265195126Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:45:38.266689 env[1320]: time="2025-08-13T00:45:38.265259862Z" level=info msg="Connect containerd service" Aug 13 
00:45:38.266689 env[1320]: time="2025-08-13T00:45:38.265302163Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:45:38.266689 env[1320]: time="2025-08-13T00:45:38.266094837Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:45:38.266689 env[1320]: time="2025-08-13T00:45:38.266391604Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:45:38.266689 env[1320]: time="2025-08-13T00:45:38.266431936Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:45:38.266689 env[1320]: time="2025-08-13T00:45:38.266474629Z" level=info msg="containerd successfully booted in 0.122582s" Aug 13 00:45:38.266568 systemd[1]: Started containerd.service. Aug 13 00:45:38.272421 env[1320]: time="2025-08-13T00:45:38.272355859Z" level=info msg="Start subscribing containerd event" Aug 13 00:45:38.272480 env[1320]: time="2025-08-13T00:45:38.272466994Z" level=info msg="Start recovering state" Aug 13 00:45:38.272589 env[1320]: time="2025-08-13T00:45:38.272568718Z" level=info msg="Start event monitor" Aug 13 00:45:38.272634 env[1320]: time="2025-08-13T00:45:38.272591726Z" level=info msg="Start snapshots syncer" Aug 13 00:45:38.272634 env[1320]: time="2025-08-13T00:45:38.272606720Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:45:38.272634 env[1320]: time="2025-08-13T00:45:38.272614323Z" level=info msg="Start streaming server" Aug 13 00:45:38.371954 locksmithd[1334]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:45:38.780353 tar[1318]: linux-amd64/LICENSE Aug 13 00:45:38.780353 tar[1318]: linux-amd64/README.md Aug 13 00:45:38.785383 systemd[1]: Finished prepare-helm.service. Aug 13 00:45:38.838295 systemd-networkd[1102]: eth0: Gained IPv6LL Aug 13 00:45:38.840197 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:45:38.841521 systemd[1]: Reached target network-online.target. Aug 13 00:45:38.843740 systemd[1]: Starting kubelet.service... Aug 13 00:45:39.240122 sshd_keygen[1314]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:45:39.296090 systemd[1]: Finished sshd-keygen.service. Aug 13 00:45:39.304614 systemd[1]: Starting issuegen.service... Aug 13 00:45:39.315433 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:45:39.316829 systemd[1]: Finished issuegen.service. Aug 13 00:45:39.322246 systemd[1]: Starting systemd-user-sessions.service... Aug 13 00:45:39.341926 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:45:39.348564 systemd[1]: Started getty@tty1.service. Aug 13 00:45:39.360327 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 00:45:39.367497 systemd[1]: Reached target getty.target. Aug 13 00:45:41.838146 systemd[1]: Started kubelet.service. Aug 13 00:45:41.843559 systemd[1]: Reached target multi-user.target. Aug 13 00:45:41.851580 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:45:41.866695 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:45:41.868173 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:45:41.875346 systemd[1]: Startup finished in 6.672s (kernel) + 11.309s (userspace) = 17.982s. 
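
The containerd CRI plugin came up but could not find a CNI network config yet, which is expected before a pod network add-on drops a file into /etc/cni/net.d; the "cni plugin not initialized" message is a warning, not a fatal error. A rough way to inspect that state on a node like this (illustrative commands, not part of the captured boot):

    # The directory the CRI plugin watches for network configs (empty here, hence the warning)
    ls -l /etc/cni/net.d/
    # Dump the merged containerd configuration and pick out the cni section of the cri plugin
    containerd config dump | grep -A4 cni
    # Ask the CRI endpoint directly, if crictl is installed
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info

The "Startup finished in 6.672s (kernel) + 11.309s (userspace)" line is the same breakdown systemd-analyze reports after boot; systemd-analyze blame would attribute it per unit.
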
Aug 13 00:45:42.693328 kubelet[1391]: E0813 00:45:42.693226 1391 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:45:42.695065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:45:42.695235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:45:47.646266 systemd[1]: Created slice system-sshd.slice. Aug 13 00:45:47.652541 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:34464.service. Aug 13 00:45:47.783781 sshd[1401]: Accepted publickey for core from 10.0.0.1 port 34464 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:45:47.787886 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:45:47.830392 systemd[1]: Created slice user-500.slice. Aug 13 00:45:47.835980 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:45:47.843926 systemd-logind[1304]: New session 1 of user core. Aug 13 00:45:47.860854 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:45:47.864771 systemd[1]: Starting user@500.service... Aug 13 00:45:47.877498 (systemd)[1406]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:45:48.028896 systemd[1406]: Queued start job for default target default.target. Aug 13 00:45:48.029313 systemd[1406]: Reached target paths.target. Aug 13 00:45:48.029348 systemd[1406]: Reached target sockets.target. Aug 13 00:45:48.029381 systemd[1406]: Reached target timers.target. Aug 13 00:45:48.029400 systemd[1406]: Reached target basic.target. Aug 13 00:45:48.029449 systemd[1406]: Reached target default.target. Aug 13 00:45:48.029478 systemd[1406]: Startup finished in 138ms. Aug 13 00:45:48.030116 systemd[1]: Started user@500.service. Aug 13 00:45:48.031585 systemd[1]: Started session-1.scope. Aug 13 00:45:48.123170 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:43622.service. Aug 13 00:45:48.205298 sshd[1415]: Accepted publickey for core from 10.0.0.1 port 43622 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:45:48.218815 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:45:48.243546 systemd-logind[1304]: New session 2 of user core. Aug 13 00:45:48.244790 systemd[1]: Started session-2.scope. Aug 13 00:45:48.338636 sshd[1415]: pam_unix(sshd:session): session closed for user core Aug 13 00:45:48.342671 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:43628.service. Aug 13 00:45:48.343532 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:43622.service: Deactivated successfully. Aug 13 00:45:48.345132 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:45:48.345227 systemd-logind[1304]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:45:48.346563 systemd-logind[1304]: Removed session 2. Aug 13 00:45:48.383090 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 43628 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:45:48.384779 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:45:48.391754 systemd-logind[1304]: New session 3 of user core. Aug 13 00:45:48.392669 systemd[1]: Started session-3.scope. 
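
The kubelet exit above is the usual symptom of a node that has not been bootstrapped yet: /var/lib/kubelet/config.yaml is normally written later by kubeadm during init or join, so until then the unit exits with status 1 and systemd keeps rescheduling it. A quick check, assuming shell access (illustrative, not from this log):

    # The file the kubelet complained about
    ls -l /var/lib/kubelet/config.yaml 2>/dev/null || echo 'kubelet not provisioned yet'
    # Recent kubelet output for this boot
    journalctl -b -u kubelet -n 20 --no-pager
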
Aug 13 00:45:48.453204 sshd[1421]: pam_unix(sshd:session): session closed for user core Aug 13 00:45:48.458441 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:43642.service. Aug 13 00:45:48.461395 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:43628.service: Deactivated successfully. Aug 13 00:45:48.464579 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:45:48.465358 systemd-logind[1304]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:45:48.467053 systemd-logind[1304]: Removed session 3. Aug 13 00:45:48.670180 sshd[1427]: Accepted publickey for core from 10.0.0.1 port 43642 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:45:48.677158 sshd[1427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:45:48.691454 systemd-logind[1304]: New session 4 of user core. Aug 13 00:45:48.693971 systemd[1]: Started session-4.scope. Aug 13 00:45:48.791260 sshd[1427]: pam_unix(sshd:session): session closed for user core Aug 13 00:45:48.808675 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:43656.service. Aug 13 00:45:48.814133 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:43642.service: Deactivated successfully. Aug 13 00:45:48.815955 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:45:48.821380 systemd-logind[1304]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:45:48.824730 systemd-logind[1304]: Removed session 4. Aug 13 00:45:48.865074 sshd[1434]: Accepted publickey for core from 10.0.0.1 port 43656 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:45:48.867859 sshd[1434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:45:48.881322 systemd-logind[1304]: New session 5 of user core. Aug 13 00:45:48.884536 systemd[1]: Started session-5.scope. Aug 13 00:45:48.989889 sudo[1440]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:45:48.992468 sudo[1440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:45:49.003477 dbus-daemon[1291]: \xd0\u000d\x83\xbd\xa3U: received setenforce notice (enforcing=1701202096) Aug 13 00:45:49.006673 sudo[1440]: pam_unix(sudo:session): session closed for user root Aug 13 00:45:49.009759 sshd[1434]: pam_unix(sshd:session): session closed for user core Aug 13 00:45:49.014541 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:43658.service. Aug 13 00:45:49.018272 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:43656.service: Deactivated successfully. Aug 13 00:45:49.019273 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:45:49.022579 systemd-logind[1304]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:45:49.025196 systemd-logind[1304]: Removed session 5. Aug 13 00:45:49.078876 sshd[1442]: Accepted publickey for core from 10.0.0.1 port 43658 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:45:49.082231 sshd[1442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:45:49.100307 systemd-logind[1304]: New session 6 of user core. Aug 13 00:45:49.101481 systemd[1]: Started session-6.scope. 
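
The setenforce 1 run in session 5 above switches SELinux to enforcing at runtime; the garbled-looking dbus-daemon line that follows is just the daemon acknowledging the change. Verifying the resulting mode would look roughly like this (illustrative, not captured here):

    getenforce                    # should print Enforcing after the setenforce 1 above
    cat /sys/fs/selinux/enforce   # 1 = enforcing, 0 = permissive
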
Aug 13 00:45:49.324009 sudo[1449]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:45:49.324233 sudo[1449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:45:49.327598 sudo[1449]: pam_unix(sudo:session): session closed for user root Aug 13 00:45:49.332189 sudo[1448]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:45:49.332382 sudo[1448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:45:49.342548 systemd[1]: Stopping audit-rules.service... Aug 13 00:45:49.343000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 00:45:49.344638 auditctl[1452]: No rules Aug 13 00:45:49.345025 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:45:49.345277 systemd[1]: Stopped audit-rules.service. Aug 13 00:45:49.346063 kernel: kauditd_printk_skb: 178 callbacks suppressed Aug 13 00:45:49.346116 kernel: audit: type=1305 audit(1755045949.343:156): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 00:45:49.343000 audit[1452]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffeb0dcd90 a2=420 a3=0 items=0 ppid=1 pid=1452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:49.347280 systemd[1]: Starting audit-rules.service... Aug 13 00:45:49.352989 kernel: audit: type=1300 audit(1755045949.343:156): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffeb0dcd90 a2=420 a3=0 items=0 ppid=1 pid=1452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:49.343000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Aug 13 00:45:49.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.358602 kernel: audit: type=1327 audit(1755045949.343:156): proctitle=2F7362696E2F617564697463746C002D44 Aug 13 00:45:49.358645 kernel: audit: type=1131 audit(1755045949.344:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.366942 augenrules[1470]: No rules Aug 13 00:45:49.367798 systemd[1]: Finished audit-rules.service. Aug 13 00:45:49.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.369257 sudo[1448]: pam_unix(sudo:session): session closed for user root Aug 13 00:45:49.370965 sshd[1442]: pam_unix(sshd:session): session closed for user core Aug 13 00:45:49.368000 audit[1448]: USER_END pid=1448 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Aug 13 00:45:49.374233 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:43672.service. Aug 13 00:45:49.376405 kernel: audit: type=1130 audit(1755045949.366:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.377726 kernel: audit: type=1106 audit(1755045949.368:159): pid=1448 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.378346 kernel: audit: type=1104 audit(1755045949.368:160): pid=1448 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.368000 audit[1448]: CRED_DISP pid=1448 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.376944 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:43658.service: Deactivated successfully. Aug 13 00:45:49.377966 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:45:49.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.21:22-10.0.0.1:43672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.382094 systemd-logind[1304]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:45:49.383070 systemd-logind[1304]: Removed session 6. Aug 13 00:45:49.384186 kernel: audit: type=1130 audit(1755045949.373:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.21:22-10.0.0.1:43672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.384237 kernel: audit: type=1106 audit(1755045949.374:162): pid=1442 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:45:49.374000 audit[1442]: USER_END pid=1442 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:45:49.388875 kernel: audit: type=1104 audit(1755045949.374:163): pid=1442 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:45:49.374000 audit[1442]: CRED_DISP pid=1442 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:45:49.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.21:22-10.0.0.1:43658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:45:49.411000 audit[1475]: USER_ACCT pid=1475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:45:49.411408 sshd[1475]: Accepted publickey for core from 10.0.0.1 port 43672 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:45:49.412000 audit[1475]: CRED_ACQ pid=1475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:45:49.412000 audit[1475]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff08ad3d00 a2=3 a3=0 items=0 ppid=1 pid=1475 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:49.412000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:45:49.413146 sshd[1475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:45:49.418278 systemd-logind[1304]: New session 7 of user core. Aug 13 00:45:49.419487 systemd[1]: Started session-7.scope. Aug 13 00:45:49.424000 audit[1475]: USER_START pid=1475 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:45:49.426000 audit[1480]: CRED_ACQ pid=1480 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:45:49.476000 audit[1481]: USER_ACCT pid=1481 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.477000 audit[1481]: CRED_REFR pid=1481 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.477828 sudo[1481]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:45:49.478249 sudo[1481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:45:49.480000 audit[1481]: USER_START pid=1481 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:45:49.631726 systemd[1]: Starting docker.service... 
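
The quick succession of sessions 1 through 7 and the sudo commands they ran (setenforce, the audit-rules reset, /home/core/install.sh) reads like an external harness provisioning the node over SSH before Docker comes up. Reviewing that activity afterwards could be done roughly like this (illustrative only):

    # Active and recent logind sessions for the core user
    loginctl list-sessions
    # Every sudo invocation recorded in the journal for this boot
    journalctl -b _COMM=sudo --no-pager
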
Aug 13 00:45:49.774434 env[1493]: time="2025-08-13T00:45:49.774355290Z" level=info msg="Starting up" Aug 13 00:45:49.775915 env[1493]: time="2025-08-13T00:45:49.775876797Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:45:49.775915 env[1493]: time="2025-08-13T00:45:49.775897600Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:45:49.776017 env[1493]: time="2025-08-13T00:45:49.775934894Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:45:49.776017 env[1493]: time="2025-08-13T00:45:49.775959569Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:45:49.778919 env[1493]: time="2025-08-13T00:45:49.778886728Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:45:49.778919 env[1493]: time="2025-08-13T00:45:49.778904392Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:45:49.779063 env[1493]: time="2025-08-13T00:45:49.778920712Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:45:49.779063 env[1493]: time="2025-08-13T00:45:49.778937694Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:45:50.716388 env[1493]: time="2025-08-13T00:45:50.716291824Z" level=warning msg="Your kernel does not support cgroup blkio weight" Aug 13 00:45:50.716388 env[1493]: time="2025-08-13T00:45:50.716338693Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Aug 13 00:45:50.716871 env[1493]: time="2025-08-13T00:45:50.716715928Z" level=info msg="Loading containers: start." 
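
dockerd connects to its bundled containerd over a unix socket and then warns that the kernel lacks blkio weight support; this host is also on legacy cgroup v1, per the "System is tainted: cgroupsv1" line earlier. Confirming what the daemon sees, as a rough illustration (not from this log):

    # Cgroup driver and version as reported by the running daemon
    docker info --format '{{.CgroupDriver}} {{.CgroupVersion}}'
    # tmpfs here indicates cgroup v1; cgroup2fs would mean the unified hierarchy
    stat -fc %T /sys/fs/cgroup
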
Aug 13 00:45:50.851000 audit[1528]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:50.851000 audit[1528]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdcfe4e570 a2=0 a3=7ffdcfe4e55c items=0 ppid=1493 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:50.851000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Aug 13 00:45:50.853000 audit[1530]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:50.853000 audit[1530]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd854d0310 a2=0 a3=7ffd854d02fc items=0 ppid=1493 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:50.853000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Aug 13 00:45:50.855000 audit[1532]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:50.855000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc3c7bfbc0 a2=0 a3=7ffc3c7bfbac items=0 ppid=1493 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:50.855000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 00:45:50.856000 audit[1534]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:50.856000 audit[1534]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc55bbc520 a2=0 a3=7ffc55bbc50c items=0 ppid=1493 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:50.856000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 00:45:50.859000 audit[1536]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:50.859000 audit[1536]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff17acf620 a2=0 a3=7fff17acf60c items=0 ppid=1493 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:50.859000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Aug 13 00:45:50.883000 audit[1541]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1541 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Aug 13 00:45:50.883000 audit[1541]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffca57d7280 a2=0 a3=7ffca57d726c items=0 ppid=1493 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:50.883000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Aug 13 00:45:50.998000 audit[1543]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:50.998000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdadc73cf0 a2=0 a3=7ffdadc73cdc items=0 ppid=1493 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:50.998000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Aug 13 00:45:51.005000 audit[1545]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.005000 audit[1545]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffecb1e21a0 a2=0 a3=7ffecb1e218c items=0 ppid=1493 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.005000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Aug 13 00:45:51.008000 audit[1547]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.008000 audit[1547]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc5e069690 a2=0 a3=7ffc5e06967c items=0 ppid=1493 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.008000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:45:51.019000 audit[1551]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.019000 audit[1551]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdf3af7850 a2=0 a3=7ffdf3af783c items=0 ppid=1493 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.019000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:45:51.024000 audit[1552]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.024000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffe75796a0 a2=0 a3=7fffe757968c items=0 ppid=1493 
pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.024000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:45:51.036091 kernel: Initializing XFRM netlink socket Aug 13 00:45:51.069095 env[1493]: time="2025-08-13T00:45:51.069056167Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 00:45:51.086000 audit[1560]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.086000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe29b97170 a2=0 a3=7ffe29b9715c items=0 ppid=1493 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.086000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Aug 13 00:45:51.095000 audit[1563]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.095000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffd66d1fe0 a2=0 a3=7fffd66d1fcc items=0 ppid=1493 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.095000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Aug 13 00:45:51.098000 audit[1566]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.098000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd4fed70d0 a2=0 a3=7ffd4fed70bc items=0 ppid=1493 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.098000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Aug 13 00:45:51.099000 audit[1568]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.099000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe9e025640 a2=0 a3=7ffe9e02562c items=0 ppid=1493 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.099000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Aug 13 00:45:51.102000 audit[1570]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.102000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffcd3413f40 a2=0 a3=7ffcd3413f2c items=0 ppid=1493 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.102000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Aug 13 00:45:51.103000 audit[1572]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.103000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffff7d93090 a2=0 a3=7ffff7d9307c items=0 ppid=1493 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.103000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Aug 13 00:45:51.105000 audit[1574]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.105000 audit[1574]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd9655be70 a2=0 a3=7ffd9655be5c items=0 ppid=1493 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.105000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Aug 13 00:45:51.112000 audit[1577]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.112000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe9d9ef580 a2=0 a3=7ffe9d9ef56c items=0 ppid=1493 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.112000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Aug 13 00:45:51.114000 audit[1579]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.114000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff65af5ac0 a2=0 a3=7fff65af5aac items=0 ppid=1493 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.114000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 00:45:51.115000 audit[1581]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.115000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffee720b150 a2=0 a3=7ffee720b13c items=0 ppid=1493 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.115000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 00:45:51.117000 audit[1583]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.117000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcfd8f57c0 a2=0 a3=7ffcfd8f57ac items=0 ppid=1493 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.117000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Aug 13 00:45:51.119073 systemd-networkd[1102]: docker0: Link UP Aug 13 00:45:51.127000 audit[1587]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.127000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffa7f45970 a2=0 a3=7fffa7f4595c items=0 ppid=1493 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.127000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:45:51.133000 audit[1588]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:45:51.133000 audit[1588]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff6c6c94e0 a2=0 a3=7fff6c6c94cc items=0 ppid=1493 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:45:51.133000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:45:51.135167 env[1493]: time="2025-08-13T00:45:51.135132848Z" level=info msg="Loading containers: done." 
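
The block of NETFILTER_CFG audit records above is dockerd building its standard chains (DOCKER, DOCKER-USER, DOCKER-ISOLATION-STAGE-1/2) plus the MASQUERADE rule for the default 172.17.0.0/16 bridge, after which docker0 comes up. The PROCTITLE fields are just the hex-encoded, NUL-separated iptables command lines. Inspecting the result would look roughly like this (illustrative commands, not part of the log):

    # Decode one of the PROCTITLE values above back into its command line (needs xxd)
    echo 2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E | xxd -r -p | tr '\0' ' '; echo
    # The chains and NAT rules dockerd just installed
    iptables -t filter -S DOCKER-USER
    iptables -t nat -S POSTROUTING
    ip addr show docker0
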
Aug 13 00:45:51.153678 env[1493]: time="2025-08-13T00:45:51.153616304Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:45:51.153879 env[1493]: time="2025-08-13T00:45:51.153852893Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 00:45:51.153992 env[1493]: time="2025-08-13T00:45:51.153972109Z" level=info msg="Daemon has completed initialization" Aug 13 00:45:51.172585 systemd[1]: Started docker.service. Aug 13 00:45:51.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:51.176538 env[1493]: time="2025-08-13T00:45:51.176484242Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:45:52.946480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:45:52.947793 systemd[1]: Stopped kubelet.service. Aug 13 00:45:52.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:52.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:52.953834 systemd[1]: Starting kubelet.service... Aug 13 00:45:53.198718 systemd[1]: Started kubelet.service. Aug 13 00:45:53.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:45:53.232227 env[1320]: time="2025-08-13T00:45:53.232146647Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:45:53.711624 kubelet[1631]: E0813 00:45:53.711543 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:45:53.714731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:45:53.714882 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:45:53.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:45:54.867432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2059677730.mount: Deactivated successfully. 
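
At this point Docker's API is listening on /run/docker.sock while the kubelet crash-loops on the missing config (restart counter 1 here) and the CRI image pulls proceed independently of that loop. Rough ways to watch both, purely as illustration:

    # Confirm the Docker API answers on its socket
    curl --unix-socket /run/docker.sock http://localhost/version
    # How many times systemd has restarted the kubelet so far, and its restart policy
    systemctl show kubelet -p NRestarts,Restart,RestartUSec
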
Aug 13 00:45:58.916415 env[1320]: time="2025-08-13T00:45:58.916328072Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:45:58.919519 env[1320]: time="2025-08-13T00:45:58.919431338Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:45:58.921614 env[1320]: time="2025-08-13T00:45:58.921582660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:45:58.923431 env[1320]: time="2025-08-13T00:45:58.923384136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:45:58.924292 env[1320]: time="2025-08-13T00:45:58.924259524Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 00:45:58.925066 env[1320]: time="2025-08-13T00:45:58.925023150Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:46:02.049515 env[1320]: time="2025-08-13T00:46:02.049380088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:02.054828 env[1320]: time="2025-08-13T00:46:02.054755866Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:02.060159 env[1320]: time="2025-08-13T00:46:02.060082457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:02.062559 env[1320]: time="2025-08-13T00:46:02.062520190Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:02.063848 env[1320]: time="2025-08-13T00:46:02.063772556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 00:46:02.064565 env[1320]: time="2025-08-13T00:46:02.064539264Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:46:03.969284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:46:03.973353 systemd[1]: Stopped kubelet.service. Aug 13 00:46:03.984241 kernel: kauditd_printk_skb: 88 callbacks suppressed Aug 13 00:46:03.984304 kernel: audit: type=1130 audit(1755045963.968:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:46:03.984340 kernel: audit: type=1131 audit(1755045963.968:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:03.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:03.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:03.977049 systemd[1]: Starting kubelet.service... Aug 13 00:46:04.180621 systemd[1]: Started kubelet.service. Aug 13 00:46:04.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:04.191731 kernel: audit: type=1130 audit(1755045964.178:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:04.557832 kubelet[1650]: E0813 00:46:04.554627 1650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:46:04.564809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:46:04.565149 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:46:04.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:46:04.578092 kernel: audit: type=1131 audit(1755045964.564:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Aug 13 00:46:05.303850 env[1320]: time="2025-08-13T00:46:05.303672278Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:05.311084 env[1320]: time="2025-08-13T00:46:05.310990140Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:05.315367 env[1320]: time="2025-08-13T00:46:05.315299029Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:05.321979 env[1320]: time="2025-08-13T00:46:05.318617875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:05.321979 env[1320]: time="2025-08-13T00:46:05.319581054Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 00:46:05.323979 env[1320]: time="2025-08-13T00:46:05.323170712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:46:07.416104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1778539751.mount: Deactivated successfully. Aug 13 00:46:09.144772 env[1320]: time="2025-08-13T00:46:09.144672950Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:09.159066 env[1320]: time="2025-08-13T00:46:09.158884995Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:09.162040 env[1320]: time="2025-08-13T00:46:09.161989553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:09.167926 env[1320]: time="2025-08-13T00:46:09.167884919Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:09.168579 env[1320]: time="2025-08-13T00:46:09.168536221Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 00:46:09.169696 env[1320]: time="2025-08-13T00:46:09.169670917Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:46:10.035319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031129552.mount: Deactivated successfully. 
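The containerd entries above pair each CRI ImageCreate/ImageUpdate event with a closing "PullImage ... returns image reference" line that maps a tag such as registry.k8s.io/kube-proxy:v1.31.11 to its local image ID. Below is a minimal sketch of reducing such journal output to a tag-to-ID table; the regular expression targets the escaped-quote logfmt form shown here, and feeding it a journalctl pipe is an illustrative assumption rather than anything the log prescribes.

import re
import sys

# Matches containerd lines such as:
#   msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec...\""
PULL_RE = re.compile(
    r'PullImage \\"(?P<tag>[^"\\]+)\\" returns image reference \\"(?P<ref>sha256:[0-9a-f]+)\\"'
)

def pulled_images(lines):
    """Yield (tag, image ID) pairs found in containerd journal output."""
    for line in lines:
        m = PULL_RE.search(line)
        if m:
            yield m.group("tag"), m.group("ref")

if __name__ == "__main__":
    # e.g.  journalctl -u containerd | python3 pulled_images.py
    for tag, ref in pulled_images(sys.stdin):
        print(f"{tag} -> {ref}")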
Aug 13 00:46:11.902611 env[1320]: time="2025-08-13T00:46:11.902532215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:11.905478 env[1320]: time="2025-08-13T00:46:11.905394778Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:11.907677 env[1320]: time="2025-08-13T00:46:11.907610865Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:11.910138 env[1320]: time="2025-08-13T00:46:11.910099390Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:11.911108 env[1320]: time="2025-08-13T00:46:11.911048079Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:46:11.911705 env[1320]: time="2025-08-13T00:46:11.911668659Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:46:12.500282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2784221723.mount: Deactivated successfully. Aug 13 00:46:12.506096 env[1320]: time="2025-08-13T00:46:12.506048095Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:12.508363 env[1320]: time="2025-08-13T00:46:12.508290980Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:12.510304 env[1320]: time="2025-08-13T00:46:12.510271833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:12.512311 env[1320]: time="2025-08-13T00:46:12.512258929Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:12.512590 env[1320]: time="2025-08-13T00:46:12.512552208Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:46:12.513324 env[1320]: time="2025-08-13T00:46:12.513268462Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:46:12.954883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715213019.mount: Deactivated successfully. Aug 13 00:46:14.816358 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:46:14.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:14.816565 systemd[1]: Stopped kubelet.service. 
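kubelet.service is crash-looping at this point: each start exits with "open /var/lib/kubelet/config.yaml: no such file or directory" and systemd schedules another restart (the counter has reached 3). On a kubeadm-style bootstrap that file is written later, so the failures are transient. The following is a small pre-flight sketch for the same condition; the config path is taken from the error above, while scanning journal text for the restart counter is an illustrative assumption.

import os
import re
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"   # path reported in the kubelet error above
RESTART_RE = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")

def highest_restart_counter(journal_lines):
    """Return the largest kubelet.service restart counter seen in the journal text."""
    highest = 0
    for line in journal_lines:
        m = RESTART_RE.search(line)
        if m:
            highest = max(highest, int(m.group(1)))
    return highest

if __name__ == "__main__":
    # e.g.  journalctl -u kubelet | python3 kubelet_preflight.py
    counter = highest_restart_counter(sys.stdin)
    if not os.path.exists(KUBELET_CONFIG):
        print(f"{KUBELET_CONFIG} is missing; kubelet will keep crash-looping "
              f"(restart counter seen so far: {counter})")
    else:
        print(f"{KUBELET_CONFIG} present; restart counter seen so far: {counter}")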
Aug 13 00:46:14.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:14.819212 systemd[1]: Starting kubelet.service... Aug 13 00:46:14.823149 kernel: audit: type=1130 audit(1755045974.815:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:14.823533 kernel: audit: type=1131 audit(1755045974.815:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:14.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:14.911821 systemd[1]: Started kubelet.service. Aug 13 00:46:14.916058 kernel: audit: type=1130 audit(1755045974.911:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:14.974545 kubelet[1667]: E0813 00:46:14.974463 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:46:14.976584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:46:14.976796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:46:14.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:46:14.981113 kernel: audit: type=1131 audit(1755045974.976:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Aug 13 00:46:17.791997 env[1320]: time="2025-08-13T00:46:17.791906459Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:17.796818 env[1320]: time="2025-08-13T00:46:17.796733492Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:17.800234 env[1320]: time="2025-08-13T00:46:17.800137063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:17.803515 env[1320]: time="2025-08-13T00:46:17.803453533Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:46:17.805655 env[1320]: time="2025-08-13T00:46:17.804609794Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:21.509503 systemd[1]: Stopped kubelet.service. Aug 13 00:46:21.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:21.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:21.513051 kernel: audit: type=1130 audit(1755045981.508:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:21.513092 kernel: audit: type=1131 audit(1755045981.512:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:21.514541 systemd[1]: Starting kubelet.service... Aug 13 00:46:21.535440 systemd[1]: Reloading. Aug 13 00:46:21.612969 /usr/lib/systemd/system-generators/torcx-generator[1724]: time="2025-08-13T00:46:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:46:21.613496 /usr/lib/systemd/system-generators/torcx-generator[1724]: time="2025-08-13T00:46:21Z" level=info msg="torcx already run" Aug 13 00:46:21.973417 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:46:21.973446 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:46:22.002419 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
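The reload pass also surfaces configuration debt: locksmithd.service still sets CPUShares= and MemoryLimit= (systemd asks for CPUWeight= and MemoryMax=), and docker.socket points ListenStream= at the legacy /var/run/ path. A rough sketch that scans unit directories for exactly those directives follows; the directory list and the replacement table merely restate the warnings in this log and are simplifying assumptions, not a complete migration tool.

import os

# Directive -> replacement suggested by the systemd reload warnings above.
LEGACY_DIRECTIVES = {
    "CPUShares=": "CPUWeight=",
    "MemoryLimit=": "MemoryMax=",
    "ListenStream=/var/run/": "ListenStream=/run/",
}

UNIT_DIRS = ["/etc/systemd/system", "/run/systemd/system", "/usr/lib/systemd/system"]

def scan_units(dirs=UNIT_DIRS):
    """Yield (unit path, line number, directive, replacement) for legacy settings."""
    for d in dirs:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            path = os.path.join(d, name)
            if not os.path.isfile(path):
                continue
            try:
                with open(path, errors="replace") as f:
                    for lineno, line in enumerate(f, start=1):
                        for old, new in LEGACY_DIRECTIVES.items():
                            if line.lstrip().startswith(old):
                                yield path, lineno, old, new
            except OSError:
                continue

if __name__ == "__main__":
    for path, lineno, old, new in scan_units():
        print(f"{path}:{lineno}: uses {old} consider {new}")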
Aug 13 00:46:22.157930 systemd[1]: Started kubelet.service. Aug 13 00:46:22.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:22.162313 systemd[1]: Stopping kubelet.service... Aug 13 00:46:22.163062 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:46:22.163500 systemd[1]: Stopped kubelet.service. Aug 13 00:46:22.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:22.165763 systemd[1]: Starting kubelet.service... Aug 13 00:46:22.169922 kernel: audit: type=1130 audit(1755045982.158:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:22.169993 kernel: audit: type=1131 audit(1755045982.162:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:22.295699 systemd[1]: Started kubelet.service. Aug 13 00:46:22.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:22.310221 kernel: audit: type=1130 audit(1755045982.298:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:22.382635 kubelet[1787]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:46:22.382635 kubelet[1787]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:46:22.382635 kubelet[1787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
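The freshly started kubelet (PID 1787) immediately notes that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated on the command line and, where still supported, belong in the file passed via --config. Here is a sketch that checks a running kubelet's command line for those same flags; it assumes a Linux /proc filesystem, and the lookup itself is illustrative rather than something the kubelet performs.

import os

# Flags the kubelet deprecation warnings above point at.
DEPRECATED_FLAGS = (
    "--container-runtime-endpoint",
    "--pod-infra-container-image",
    "--volume-plugin-dir",
)

def kubelet_cmdlines():
    """Yield (pid, argv) for processes whose executable name is 'kubelet'."""
    if not os.path.isdir("/proc"):
        return
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                argv = f.read().split(b"\0")
        except OSError:
            continue
        if argv and os.path.basename(argv[0]).decode(errors="replace") == "kubelet":
            yield int(pid), [a.decode(errors="replace") for a in argv if a]

if __name__ == "__main__":
    for pid, argv in kubelet_cmdlines():
        found = [a for a in argv if a.split("=", 1)[0] in DEPRECATED_FLAGS]
        if found:
            print(f"kubelet pid {pid} still uses deprecated flags: {', '.join(found)}")
        else:
            print(f"kubelet pid {pid}: none of the flagged options on the command line")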
Aug 13 00:46:22.383255 kubelet[1787]: I0813 00:46:22.382663 1787 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:46:22.660698 kubelet[1787]: I0813 00:46:22.660565 1787 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:46:22.660698 kubelet[1787]: I0813 00:46:22.660603 1787 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:46:22.660886 kubelet[1787]: I0813 00:46:22.660878 1787 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:46:22.686415 kubelet[1787]: E0813 00:46:22.686335 1787 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:22.688185 kubelet[1787]: I0813 00:46:22.688148 1787 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:46:22.695285 kubelet[1787]: E0813 00:46:22.695223 1787 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:46:22.695285 kubelet[1787]: I0813 00:46:22.695268 1787 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:46:22.701455 kubelet[1787]: I0813 00:46:22.701427 1787 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:46:22.701870 kubelet[1787]: I0813 00:46:22.701838 1787 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:46:22.702060 kubelet[1787]: I0813 00:46:22.702007 1787 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:46:22.702384 kubelet[1787]: I0813 00:46:22.702061 1787 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:46:22.702534 kubelet[1787]: I0813 00:46:22.702443 1787 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:46:22.702534 kubelet[1787]: I0813 00:46:22.702460 1787 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:46:22.702651 kubelet[1787]: I0813 00:46:22.702632 1787 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:46:22.712990 kubelet[1787]: I0813 00:46:22.712929 1787 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:46:22.713188 kubelet[1787]: I0813 00:46:22.713021 1787 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:46:22.713188 kubelet[1787]: I0813 00:46:22.713107 1787 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:46:22.713188 kubelet[1787]: W0813 00:46:22.713072 1787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:46:22.713188 kubelet[1787]: I0813 00:46:22.713151 1787 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:46:22.713188 kubelet[1787]: E0813 00:46:22.713153 1787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:22.727604 kubelet[1787]: I0813 00:46:22.727527 1787 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:46:22.728204 kubelet[1787]: I0813 00:46:22.728170 1787 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:46:22.728304 kubelet[1787]: W0813 00:46:22.728282 1787 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:46:22.728993 kubelet[1787]: W0813 00:46:22.728663 1787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:46:22.728993 kubelet[1787]: E0813 00:46:22.728735 1787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:22.730690 kubelet[1787]: I0813 00:46:22.730656 1787 server.go:1274] "Started kubelet" Aug 13 00:46:22.731000 audit[1787]: AVC avc: denied { mac_admin } for pid=1787 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:46:22.732957 kubelet[1787]: I0813 00:46:22.732376 1787 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 00:46:22.732957 kubelet[1787]: I0813 00:46:22.732423 1787 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 00:46:22.732957 kubelet[1787]: I0813 00:46:22.732548 1787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:46:22.734189 kubelet[1787]: I0813 00:46:22.734118 1787 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:46:22.735215 kubelet[1787]: I0813 00:46:22.735191 1787 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:46:22.735606 kubelet[1787]: I0813 00:46:22.735572 1787 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:46:22.731000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:46:22.736017 kubelet[1787]: I0813 00:46:22.735994 1787 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:46:22.744068 kernel: audit: type=1400 audit(1755045982.731:215): avc: denied { mac_admin } for pid=1787 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:46:22.744203 kernel: audit: type=1401 audit(1755045982.731:215): op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:46:22.744221 kernel: audit: type=1300 audit(1755045982.731:215): arch=c000003e syscall=188 success=no exit=-22 a0=c000af0f90 a1=c0009b6720 a2=c000af0f30 a3=25 items=0 ppid=1 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.744240 kernel: audit: type=1327 audit(1755045982.731:215): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:46:22.731000 audit[1787]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000af0f90 a1=c0009b6720 a2=c000af0f30 a3=25 items=0 ppid=1 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.731000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:46:22.744385 kubelet[1787]: I0813 00:46:22.737484 1787 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:46:22.744385 kubelet[1787]: I0813 00:46:22.737648 1787 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:46:22.744385 kubelet[1787]: I0813 00:46:22.737890 1787 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:46:22.744385 kubelet[1787]: I0813 00:46:22.737967 1787 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:46:22.744385 kubelet[1787]: E0813 00:46:22.738022 1787 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:46:22.744385 kubelet[1787]: E0813 00:46:22.738428 1787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms" Aug 13 00:46:22.744385 kubelet[1787]: W0813 00:46:22.738458 1787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:46:22.744385 kubelet[1787]: E0813 00:46:22.738523 1787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:22.744589 kubelet[1787]: E0813 00:46:22.736294 1787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d05cd22433a default 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:46:22.730601274 +0000 UTC m=+0.420202951,LastTimestamp:2025-08-13 00:46:22.730601274 +0000 UTC m=+0.420202951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:46:22.744589 kubelet[1787]: I0813 00:46:22.740567 1787 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:46:22.744589 kubelet[1787]: E0813 00:46:22.742337 1787 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:46:22.744589 kubelet[1787]: I0813 00:46:22.742818 1787 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:46:22.744589 kubelet[1787]: I0813 00:46:22.742835 1787 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:46:22.731000 audit[1787]: AVC avc: denied { mac_admin } for pid=1787 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:46:22.749256 kernel: audit: type=1400 audit(1755045982.731:216): avc: denied { mac_admin } for pid=1787 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:46:22.731000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:46:22.731000 audit[1787]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0005a1500 a1=c0009b6738 a2=c000af1050 a3=25 items=0 ppid=1 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.731000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:46:22.734000 audit[1800]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:22.734000 audit[1800]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffec1351710 a2=0 a3=7ffec13516fc items=0 ppid=1787 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 00:46:22.736000 audit[1801]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:22.736000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3cbf5c80 a2=0 a3=7fff3cbf5c6c items=0 ppid=1787 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 00:46:22.739000 audit[1803]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:22.739000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff589ab0e0 a2=0 a3=7fff589ab0cc items=0 ppid=1787 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:46:22.742000 audit[1805]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1805 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:22.742000 audit[1805]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff29f1c710 a2=0 a3=7fff29f1c6fc items=0 ppid=1787 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:46:22.750000 audit[1808]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1808 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:22.750000 audit[1808]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffa58a7510 a2=0 a3=7fffa58a74fc items=0 ppid=1787 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Aug 13 00:46:22.751448 kubelet[1787]: I0813 00:46:22.751412 1787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:46:22.751000 audit[1809]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1809 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:22.751000 audit[1809]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc45171f20 a2=0 a3=7ffc45171f0c items=0 ppid=1787 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 00:46:22.755614 kubelet[1787]: I0813 00:46:22.755571 1787 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:46:22.755654 kubelet[1787]: I0813 00:46:22.755629 1787 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:46:22.755698 kubelet[1787]: I0813 00:46:22.755678 1787 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:46:22.755798 kubelet[1787]: E0813 00:46:22.755742 1787 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:46:22.756000 audit[1811]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1811 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:22.756000 audit[1811]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffba07de50 a2=0 a3=7fffba07de3c items=0 ppid=1787 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 00:46:22.757000 audit[1812]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:22.757000 audit[1812]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0ef210a0 a2=0 a3=7ffc0ef2108c items=0 ppid=1787 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:46:22.757000 audit[1814]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1814 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:22.757000 audit[1814]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2dec4e90 a2=0 a3=7ffe2dec4e7c items=0 ppid=1787 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:46:22.758000 audit[1815]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:22.758000 audit[1815]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff330f2510 a2=0 a3=7fff330f24fc items=0 ppid=1787 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 00:46:22.759000 audit[1816]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:22.759000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff549b15a0 a2=0 a3=7fff549b158c items=0 ppid=1787 pid=1816 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.759000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:46:22.760000 audit[1817]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1817 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:22.760000 audit[1817]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff821bb1c0 a2=0 a3=7fff821bb1ac items=0 ppid=1787 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:22.760000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:46:22.764598 kubelet[1787]: W0813 00:46:22.764553 1787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:46:22.764725 kubelet[1787]: E0813 00:46:22.764700 1787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:22.779302 kubelet[1787]: I0813 00:46:22.779264 1787 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:46:22.779302 kubelet[1787]: I0813 00:46:22.779299 1787 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:46:22.779461 kubelet[1787]: I0813 00:46:22.779325 1787 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:46:22.838764 kubelet[1787]: E0813 00:46:22.838636 1787 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:46:22.856441 kubelet[1787]: E0813 00:46:22.856375 1787 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:46:22.926429 update_engine[1310]: I0813 00:46:22.926141 1310 update_attempter.cc:509] Updating boot flags... 
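All of the failures above share one cause: 10.0.0.21:6443 refuses connections because the kube-apiserver static pod has not started yet, so the client-go reflectors, the lease controller and node registration keep retrying while the reported interval backs off (200ms at first). A rough equivalent of that wait loop is sketched below; the address is taken from the log, but the bare TCP connect and the backoff constants are simplifying assumptions rather than the kubelet's actual retry logic.

import socket
import time

APISERVER = ("10.0.0.21", 6443)   # endpoint the kubelet is retrying in the log above

def wait_for_apiserver(addr=APISERVER, initial=0.2, factor=2.0, cap=7.0, timeout=120.0):
    """Retry a TCP connect with exponential backoff until the port accepts connections."""
    delay = initial
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection(addr, timeout=2.0):
                return True           # port is open; watches and lease updates can start succeeding
        except OSError as exc:         # e.g. ConnectionRefusedError while the static pod starts
            print(f"{addr[0]}:{addr[1]} not reachable ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
            delay = min(delay * factor, cap)
    return False

if __name__ == "__main__":
    ok = wait_for_apiserver()
    print("apiserver port open" if ok else "gave up waiting for the apiserver port")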
Aug 13 00:46:22.939717 kubelet[1787]: E0813 00:46:22.939657 1787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms" Aug 13 00:46:22.940003 kubelet[1787]: E0813 00:46:22.939948 1787 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:46:23.041508 kubelet[1787]: E0813 00:46:23.040745 1787 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:46:23.056954 kubelet[1787]: E0813 00:46:23.056789 1787 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:46:23.141732 kubelet[1787]: E0813 00:46:23.141548 1787 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:46:23.238182 kubelet[1787]: I0813 00:46:23.237444 1787 policy_none.go:49] "None policy: Start" Aug 13 00:46:23.239629 kubelet[1787]: I0813 00:46:23.238939 1787 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:46:23.239629 kubelet[1787]: I0813 00:46:23.238970 1787 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:46:23.241879 kubelet[1787]: E0813 00:46:23.241834 1787 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:46:23.260662 kubelet[1787]: I0813 00:46:23.259350 1787 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:46:23.260662 kubelet[1787]: I0813 00:46:23.259436 1787 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:46:23.260662 kubelet[1787]: I0813 00:46:23.259594 1787 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:46:23.260662 kubelet[1787]: I0813 00:46:23.259616 1787 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:46:23.256000 audit[1787]: AVC avc: denied { mac_admin } for pid=1787 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:46:23.256000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:46:23.256000 audit[1787]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e36870 a1=c000a60000 a2=c000e36840 a3=25 items=0 ppid=1 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:23.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:46:23.264569 kubelet[1787]: I0813 00:46:23.262866 1787 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:46:23.277897 kubelet[1787]: E0813 00:46:23.275613 1787 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 00:46:23.340583 kubelet[1787]: E0813 00:46:23.340512 1787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Aug 13 00:46:23.361441 kubelet[1787]: I0813 00:46:23.361385 1787 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:46:23.361985 kubelet[1787]: E0813 00:46:23.361935 1787 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Aug 13 00:46:23.543440 kubelet[1787]: I0813 00:46:23.543257 1787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a481a74a7f3e9489216db508eb4ae120-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a481a74a7f3e9489216db508eb4ae120\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:46:23.543440 kubelet[1787]: I0813 00:46:23.543326 1787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a481a74a7f3e9489216db508eb4ae120-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a481a74a7f3e9489216db508eb4ae120\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:46:23.543440 kubelet[1787]: I0813 00:46:23.543380 1787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:46:23.543922 kubelet[1787]: I0813 00:46:23.543449 1787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:46:23.543922 kubelet[1787]: I0813 00:46:23.543491 1787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:46:23.543922 kubelet[1787]: I0813 00:46:23.543514 1787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a481a74a7f3e9489216db508eb4ae120-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a481a74a7f3e9489216db508eb4ae120\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:46:23.543922 kubelet[1787]: I0813 00:46:23.543537 1787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:46:23.543922 kubelet[1787]: I0813 00:46:23.543559 1787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:46:23.544225 kubelet[1787]: I0813 00:46:23.543592 1787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:46:23.567288 kubelet[1787]: I0813 00:46:23.567253 1787 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:46:23.567795 kubelet[1787]: E0813 00:46:23.567720 1787 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Aug 13 00:46:23.708179 kubelet[1787]: E0813 00:46:23.708005 1787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d05cd22433a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 
00:46:22.730601274 +0000 UTC m=+0.420202951,LastTimestamp:2025-08-13 00:46:22.730601274 +0000 UTC m=+0.420202951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:46:23.763634 kubelet[1787]: E0813 00:46:23.763567 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:23.764116 kubelet[1787]: E0813 00:46:23.764068 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:23.764312 env[1320]: time="2025-08-13T00:46:23.764272530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a481a74a7f3e9489216db508eb4ae120,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:23.764607 env[1320]: time="2025-08-13T00:46:23.764445758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:23.766745 kubelet[1787]: E0813 00:46:23.766721 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:23.767396 env[1320]: time="2025-08-13T00:46:23.767364302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:23.839332 kubelet[1787]: W0813 00:46:23.839216 1787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:46:23.839332 kubelet[1787]: E0813 00:46:23.839280 1787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:23.884222 kubelet[1787]: W0813 00:46:23.884179 1787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:46:23.884222 kubelet[1787]: E0813 00:46:23.884220 1787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:23.891842 kubelet[1787]: W0813 00:46:23.891790 1787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:46:23.891902 kubelet[1787]: E0813 00:46:23.891837 1787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to 
list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:23.960928 kubelet[1787]: W0813 00:46:23.960872 1787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:46:23.960982 kubelet[1787]: E0813 00:46:23.960928 1787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:23.969065 kubelet[1787]: I0813 00:46:23.969043 1787 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:46:23.969767 kubelet[1787]: E0813 00:46:23.969712 1787 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Aug 13 00:46:24.141678 kubelet[1787]: E0813 00:46:24.141507 1787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="1.6s" Aug 13 00:46:24.448798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477100485.mount: Deactivated successfully. Aug 13 00:46:24.454330 env[1320]: time="2025-08-13T00:46:24.454284052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.456972 env[1320]: time="2025-08-13T00:46:24.456933786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.458600 env[1320]: time="2025-08-13T00:46:24.458572119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.459590 env[1320]: time="2025-08-13T00:46:24.459563801Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.461510 env[1320]: time="2025-08-13T00:46:24.461478937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.462942 env[1320]: time="2025-08-13T00:46:24.462899506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.464167 env[1320]: time="2025-08-13T00:46:24.464132579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.465946 
env[1320]: time="2025-08-13T00:46:24.465923938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.468799 env[1320]: time="2025-08-13T00:46:24.468774503Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.469588 env[1320]: time="2025-08-13T00:46:24.469539632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.470835 env[1320]: time="2025-08-13T00:46:24.470802585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.471446 env[1320]: time="2025-08-13T00:46:24.471412584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:24.514425 env[1320]: time="2025-08-13T00:46:24.514320071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:46:24.514721 env[1320]: time="2025-08-13T00:46:24.514676102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:46:24.514809 env[1320]: time="2025-08-13T00:46:24.514722133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:46:24.514983 env[1320]: time="2025-08-13T00:46:24.514945761Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3073e99b8d039e8a64d9d9dfd154857852bbfb241a9846706467c6d7379610c3 pid=1841 runtime=io.containerd.runc.v2 Aug 13 00:46:24.516788 env[1320]: time="2025-08-13T00:46:24.516731449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:46:24.516855 env[1320]: time="2025-08-13T00:46:24.516786728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:46:24.516855 env[1320]: time="2025-08-13T00:46:24.516811788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:46:24.520480 env[1320]: time="2025-08-13T00:46:24.520419958Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2249ec0af6ac3777582d48a131fb0f5e9f86faebff9974ee14c97c13d4c08dc2 pid=1856 runtime=io.containerd.runc.v2 Aug 13 00:46:24.535464 env[1320]: time="2025-08-13T00:46:24.534488526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:46:24.535464 env[1320]: time="2025-08-13T00:46:24.534552093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:46:24.535464 env[1320]: time="2025-08-13T00:46:24.534567564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:46:24.535464 env[1320]: time="2025-08-13T00:46:24.534749437Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2337593965eda4ab2145b639702fb7feda05d9ad7ce43fbf9ace591d822921fa pid=1885 runtime=io.containerd.runc.v2 Aug 13 00:46:24.711994 env[1320]: time="2025-08-13T00:46:24.711863962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"3073e99b8d039e8a64d9d9dfd154857852bbfb241a9846706467c6d7379610c3\"" Aug 13 00:46:24.713866 kubelet[1787]: E0813 00:46:24.713585 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:24.716371 env[1320]: time="2025-08-13T00:46:24.716301919Z" level=info msg="CreateContainer within sandbox \"3073e99b8d039e8a64d9d9dfd154857852bbfb241a9846706467c6d7379610c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:46:24.728867 env[1320]: time="2025-08-13T00:46:24.728813687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2249ec0af6ac3777582d48a131fb0f5e9f86faebff9974ee14c97c13d4c08dc2\"" Aug 13 00:46:24.729355 kubelet[1787]: E0813 00:46:24.729322 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:24.731425 env[1320]: time="2025-08-13T00:46:24.731390856Z" level=info msg="CreateContainer within sandbox \"2249ec0af6ac3777582d48a131fb0f5e9f86faebff9974ee14c97c13d4c08dc2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:46:24.737053 env[1320]: time="2025-08-13T00:46:24.736877618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a481a74a7f3e9489216db508eb4ae120,Namespace:kube-system,Attempt:0,} returns sandbox id \"2337593965eda4ab2145b639702fb7feda05d9ad7ce43fbf9ace591d822921fa\"" Aug 13 00:46:24.737596 kubelet[1787]: E0813 00:46:24.737557 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:24.739746 env[1320]: time="2025-08-13T00:46:24.739710939Z" level=info msg="CreateContainer within sandbox \"2337593965eda4ab2145b639702fb7feda05d9ad7ce43fbf9ace591d822921fa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:46:24.770618 kubelet[1787]: I0813 00:46:24.770566 1787 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:46:24.770951 kubelet[1787]: E0813 00:46:24.770927 1787 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Aug 13 00:46:24.835063 env[1320]: time="2025-08-13T00:46:24.834991334Z" level=info msg="CreateContainer within sandbox 
\"3073e99b8d039e8a64d9d9dfd154857852bbfb241a9846706467c6d7379610c3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc3f812a77a478f48ab2968aaf2f0a8613f47d23f60f900d42358c1f1fda35aa\"" Aug 13 00:46:24.835967 env[1320]: time="2025-08-13T00:46:24.835909157Z" level=info msg="StartContainer for \"fc3f812a77a478f48ab2968aaf2f0a8613f47d23f60f900d42358c1f1fda35aa\"" Aug 13 00:46:24.842389 env[1320]: time="2025-08-13T00:46:24.842351990Z" level=info msg="CreateContainer within sandbox \"2249ec0af6ac3777582d48a131fb0f5e9f86faebff9974ee14c97c13d4c08dc2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"34cac9f7524dab19baf78f2fc5129fba7477b15ebb9634b7e0dac3b877a202ad\"" Aug 13 00:46:24.843067 env[1320]: time="2025-08-13T00:46:24.843021267Z" level=info msg="StartContainer for \"34cac9f7524dab19baf78f2fc5129fba7477b15ebb9634b7e0dac3b877a202ad\"" Aug 13 00:46:24.844569 env[1320]: time="2025-08-13T00:46:24.844534711Z" level=info msg="CreateContainer within sandbox \"2337593965eda4ab2145b639702fb7feda05d9ad7ce43fbf9ace591d822921fa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e27082935e07498274c6ac2cfdfe2b40660deb2fbf684ce30942bed2f804390\"" Aug 13 00:46:24.845199 env[1320]: time="2025-08-13T00:46:24.845169158Z" level=info msg="StartContainer for \"3e27082935e07498274c6ac2cfdfe2b40660deb2fbf684ce30942bed2f804390\"" Aug 13 00:46:24.862211 kubelet[1787]: E0813 00:46:24.858430 1787 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:24.927833 env[1320]: time="2025-08-13T00:46:24.927757786Z" level=info msg="StartContainer for \"fc3f812a77a478f48ab2968aaf2f0a8613f47d23f60f900d42358c1f1fda35aa\" returns successfully" Aug 13 00:46:24.935602 env[1320]: time="2025-08-13T00:46:24.935538952Z" level=info msg="StartContainer for \"34cac9f7524dab19baf78f2fc5129fba7477b15ebb9634b7e0dac3b877a202ad\" returns successfully" Aug 13 00:46:24.947067 env[1320]: time="2025-08-13T00:46:24.944820917Z" level=info msg="StartContainer for \"3e27082935e07498274c6ac2cfdfe2b40660deb2fbf684ce30942bed2f804390\" returns successfully" Aug 13 00:46:25.777234 kubelet[1787]: E0813 00:46:25.777187 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:25.781235 kubelet[1787]: E0813 00:46:25.781138 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:25.784005 kubelet[1787]: E0813 00:46:25.783959 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:26.392960 kubelet[1787]: I0813 00:46:26.392911 1787 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:46:26.788681 kubelet[1787]: E0813 00:46:26.788643 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 
00:46:26.789498 kubelet[1787]: E0813 00:46:26.789119 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:26.789908 kubelet[1787]: E0813 00:46:26.789885 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:27.335980 kubelet[1787]: E0813 00:46:27.335940 1787 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:46:27.458060 kubelet[1787]: I0813 00:46:27.457989 1787 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:46:27.458060 kubelet[1787]: E0813 00:46:27.458043 1787 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:46:27.744925 kubelet[1787]: I0813 00:46:27.744874 1787 apiserver.go:52] "Watching apiserver" Aug 13 00:46:27.791594 kubelet[1787]: E0813 00:46:27.791554 1787 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:46:27.791594 kubelet[1787]: E0813 00:46:27.791554 1787 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 00:46:27.791994 kubelet[1787]: E0813 00:46:27.791726 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:27.791994 kubelet[1787]: E0813 00:46:27.791743 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:27.839146 kubelet[1787]: I0813 00:46:27.839108 1787 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:46:29.741282 kubelet[1787]: E0813 00:46:29.741229 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:29.791059 kubelet[1787]: E0813 00:46:29.790962 1787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:29.793052 systemd[1]: Reloading. Aug 13 00:46:29.876714 /usr/lib/systemd/system-generators/torcx-generator[2098]: time="2025-08-13T00:46:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:46:29.876752 /usr/lib/systemd/system-generators/torcx-generator[2098]: time="2025-08-13T00:46:29Z" level=info msg="torcx already run" Aug 13 00:46:30.174860 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
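The reload above flags legacy cgroup directives in locksmithd.service: CPUShares= here, and MemoryLimit= in the warning that follows. A minimal drop-in sketch using the replacements systemd itself suggests is shown below; the file path and the concrete values are illustrative assumptions, not settings read from this host:

  # /etc/systemd/system/locksmithd.service.d/10-cgroup.conf  (hypothetical drop-in path)
  [Service]
  # CPUWeight= replaces CPUShares=; a weight of 100 roughly matches the old default of 1024 shares
  CPUWeight=100
  # MemoryMax= replaces MemoryLimit=; the value here is only an example
  MemoryMax=128M

After writing such a drop-in, a systemctl daemon-reload picks up the new directives.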
Aug 13 00:46:30.174890 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:46:30.202731 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:46:30.311803 systemd[1]: Stopping kubelet.service... Aug 13 00:46:30.328303 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:46:30.328747 systemd[1]: Stopped kubelet.service. Aug 13 00:46:30.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:30.330673 kernel: kauditd_printk_skb: 43 callbacks suppressed Aug 13 00:46:30.330749 kernel: audit: type=1131 audit(1755045990.327:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:30.331615 systemd[1]: Starting kubelet.service... Aug 13 00:46:30.443058 systemd[1]: Started kubelet.service. Aug 13 00:46:30.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:30.448092 kernel: audit: type=1130 audit(1755045990.442:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:30.490024 kubelet[2155]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:46:30.490024 kubelet[2155]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:46:30.490024 kubelet[2155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:46:30.490610 kubelet[2155]: I0813 00:46:30.490072 2155 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:46:30.496895 kubelet[2155]: I0813 00:46:30.496823 2155 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:46:30.496895 kubelet[2155]: I0813 00:46:30.496865 2155 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:46:30.497189 kubelet[2155]: I0813 00:46:30.497163 2155 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:46:30.498825 kubelet[2155]: I0813 00:46:30.498737 2155 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 00:46:30.501411 kubelet[2155]: I0813 00:46:30.501378 2155 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:46:30.507702 kubelet[2155]: E0813 00:46:30.507657 2155 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:46:30.507702 kubelet[2155]: I0813 00:46:30.507700 2155 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:46:30.511493 kubelet[2155]: I0813 00:46:30.511452 2155 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:46:30.511877 kubelet[2155]: I0813 00:46:30.511861 2155 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:46:30.511994 kubelet[2155]: I0813 00:46:30.511956 2155 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:46:30.512167 kubelet[2155]: I0813 00:46:30.511987 2155 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:46:30.512263 kubelet[2155]: I0813 00:46:30.512175 2155 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:46:30.512263 kubelet[2155]: I0813 00:46:30.512184 2155 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:46:30.512263 kubelet[2155]: I0813 00:46:30.512210 2155 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:46:30.512332 kubelet[2155]: I0813 00:46:30.512308 2155 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:46:30.512332 kubelet[2155]: I0813 00:46:30.512321 2155 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:46:30.512382 kubelet[2155]: I0813 00:46:30.512348 
2155 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:46:30.512382 kubelet[2155]: I0813 00:46:30.512358 2155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:46:30.515235 kubelet[2155]: I0813 00:46:30.513855 2155 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:46:30.515235 kubelet[2155]: I0813 00:46:30.514544 2155 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:46:30.515599 kubelet[2155]: I0813 00:46:30.515576 2155 server.go:1274] "Started kubelet" Aug 13 00:46:30.527072 kernel: audit: type=1400 audit(1755045990.516:232): avc: denied { mac_admin } for pid=2155 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:46:30.527210 kernel: audit: type=1401 audit(1755045990.516:232): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:46:30.527254 kernel: audit: type=1300 audit(1755045990.516:232): arch=c000003e syscall=188 success=no exit=-22 a0=c0006d8f00 a1=c00048dcf8 a2=c0006d8ed0 a3=25 items=0 ppid=1 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:30.516000 audit[2155]: AVC avc: denied { mac_admin } for pid=2155 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:46:30.516000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:46:30.516000 audit[2155]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006d8f00 a1=c00048dcf8 a2=c0006d8ed0 a3=25 items=0 ppid=1 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:30.527405 kubelet[2155]: I0813 00:46:30.517540 2155 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 00:46:30.527405 kubelet[2155]: I0813 00:46:30.517692 2155 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 00:46:30.527405 kubelet[2155]: I0813 00:46:30.517804 2155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:46:30.527405 kubelet[2155]: I0813 00:46:30.525172 2155 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:46:30.527405 kubelet[2155]: I0813 00:46:30.526509 2155 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:46:30.528804 kubelet[2155]: I0813 00:46:30.528766 2155 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:46:30.529106 kubelet[2155]: I0813 00:46:30.529081 2155 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:46:30.535253 kernel: audit: type=1327 audit(1755045990.516:232): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:46:30.516000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:46:30.535418 kubelet[2155]: I0813 00:46:30.529303 2155 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:46:30.535418 kubelet[2155]: I0813 00:46:30.529562 2155 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:46:30.535418 kubelet[2155]: E0813 00:46:30.529968 2155 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:46:30.535418 kubelet[2155]: I0813 00:46:30.530665 2155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:46:30.535418 kubelet[2155]: I0813 00:46:30.534124 2155 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:46:30.535418 kubelet[2155]: I0813 00:46:30.534243 2155 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:46:30.539439 kernel: audit: type=1400 audit(1755045990.516:233): avc: denied { mac_admin } for pid=2155 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:46:30.516000 audit[2155]: AVC avc: denied { mac_admin } for pid=2155 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:46:30.539583 kubelet[2155]: I0813 00:46:30.538436 2155 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:46:30.541615 kernel: audit: type=1401 audit(1755045990.516:233): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:46:30.516000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:46:30.541698 kubelet[2155]: I0813 00:46:30.539643 2155 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:46:30.541698 kubelet[2155]: I0813 00:46:30.540378 2155 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:46:30.516000 audit[2155]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00085b720 a1=c00048dd10 a2=c0006d8f90 a3=25 items=0 ppid=1 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:30.542518 kubelet[2155]: I0813 00:46:30.542474 2155 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:46:30.542656 kubelet[2155]: I0813 00:46:30.542627 2155 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:46:30.542769 kubelet[2155]: I0813 00:46:30.542751 2155 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:46:30.542931 kubelet[2155]: E0813 00:46:30.542897 2155 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:46:30.546328 kernel: audit: type=1300 audit(1755045990.516:233): arch=c000003e syscall=188 success=no exit=-22 a0=c00085b720 a1=c00048dd10 a2=c0006d8f90 a3=25 items=0 ppid=1 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:30.546404 kernel: audit: type=1327 audit(1755045990.516:233): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:46:30.516000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:46:30.616283 kubelet[2155]: I0813 00:46:30.616246 2155 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:46:30.616283 kubelet[2155]: I0813 00:46:30.616267 2155 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:46:30.616283 kubelet[2155]: I0813 00:46:30.616293 2155 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:46:30.616554 kubelet[2155]: I0813 00:46:30.616499 2155 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:46:30.616554 kubelet[2155]: I0813 00:46:30.616511 2155 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:46:30.616554 kubelet[2155]: I0813 00:46:30.616532 2155 policy_none.go:49] "None policy: Start" Aug 13 00:46:30.617000 audit[2155]: AVC avc: denied { mac_admin } for pid=2155 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:46:30.617000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:46:30.617000 audit[2155]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c4e900 a1=c0002eb410 a2=c000c4e8d0 a3=25 items=0 ppid=1 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:30.617000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:46:30.622418 kubelet[2155]: I0813 00:46:30.617455 2155 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:46:30.622418 kubelet[2155]: I0813 00:46:30.617477 2155 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:46:30.622418 kubelet[2155]: I0813 00:46:30.617650 2155 state_mem.go:75] "Updated 
machine memory state" Aug 13 00:46:30.622418 kubelet[2155]: I0813 00:46:30.618958 2155 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:46:30.622418 kubelet[2155]: I0813 00:46:30.619023 2155 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:46:30.622418 kubelet[2155]: I0813 00:46:30.619166 2155 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:46:30.622418 kubelet[2155]: I0813 00:46:30.619177 2155 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:46:30.622418 kubelet[2155]: I0813 00:46:30.620354 2155 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:46:30.727088 kubelet[2155]: I0813 00:46:30.727016 2155 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:46:30.782942 kubelet[2155]: E0813 00:46:30.782878 2155 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:46:30.832408 kubelet[2155]: I0813 00:46:30.832327 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:46:30.832408 kubelet[2155]: I0813 00:46:30.832387 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a481a74a7f3e9489216db508eb4ae120-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a481a74a7f3e9489216db508eb4ae120\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:46:30.832408 kubelet[2155]: I0813 00:46:30.832425 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:46:30.832745 kubelet[2155]: I0813 00:46:30.832556 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:46:30.832745 kubelet[2155]: I0813 00:46:30.832609 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:46:30.832745 kubelet[2155]: I0813 00:46:30.832638 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:46:30.832745 kubelet[2155]: I0813 00:46:30.832676 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:46:30.832745 kubelet[2155]: I0813 00:46:30.832710 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a481a74a7f3e9489216db508eb4ae120-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a481a74a7f3e9489216db508eb4ae120\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:46:30.832931 kubelet[2155]: I0813 00:46:30.832753 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a481a74a7f3e9489216db508eb4ae120-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a481a74a7f3e9489216db508eb4ae120\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:46:30.898852 kubelet[2155]: I0813 00:46:30.898804 2155 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 00:46:30.899024 kubelet[2155]: I0813 00:46:30.898902 2155 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:46:31.075487 kubelet[2155]: E0813 00:46:31.074950 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:31.075487 kubelet[2155]: E0813 00:46:31.075104 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:31.083825 kubelet[2155]: E0813 00:46:31.083796 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:31.513067 kubelet[2155]: I0813 00:46:31.512889 2155 apiserver.go:52] "Watching apiserver" Aug 13 00:46:31.539196 kubelet[2155]: I0813 00:46:31.539151 2155 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:46:31.556109 kubelet[2155]: E0813 00:46:31.556069 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:31.556780 kubelet[2155]: E0813 00:46:31.556726 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:31.556967 kubelet[2155]: E0813 00:46:31.556943 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:32.557486 kubelet[2155]: E0813 00:46:32.557446 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:32.840279 
kubelet[2155]: I0813 00:46:32.839968 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.83994182 podStartE2EDuration="2.83994182s" podCreationTimestamp="2025-08-13 00:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:46:32.099207134 +0000 UTC m=+1.650064743" watchObservedRunningTime="2025-08-13 00:46:32.83994182 +0000 UTC m=+2.390799429" Aug 13 00:46:32.932982 kubelet[2155]: I0813 00:46:32.932925 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.932907497 podStartE2EDuration="3.932907497s" podCreationTimestamp="2025-08-13 00:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:46:32.840177218 +0000 UTC m=+2.391034827" watchObservedRunningTime="2025-08-13 00:46:32.932907497 +0000 UTC m=+2.483765106" Aug 13 00:46:32.933265 kubelet[2155]: I0813 00:46:32.933000 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.932996902 podStartE2EDuration="2.932996902s" podCreationTimestamp="2025-08-13 00:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:46:32.932762595 +0000 UTC m=+2.483620204" watchObservedRunningTime="2025-08-13 00:46:32.932996902 +0000 UTC m=+2.483854511" Aug 13 00:46:34.466533 kubelet[2155]: E0813 00:46:34.466495 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:34.560089 kubelet[2155]: E0813 00:46:34.560058 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:35.104537 kubelet[2155]: E0813 00:46:35.104485 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:35.550979 kubelet[2155]: I0813 00:46:35.550937 2155 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:46:35.551774 env[1320]: time="2025-08-13T00:46:35.551716491Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:46:35.552315 kubelet[2155]: I0813 00:46:35.551904 2155 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:46:35.561140 kubelet[2155]: E0813 00:46:35.561121 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:36.566774 kubelet[2155]: I0813 00:46:36.566731 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e68ba158-a09b-4114-895a-fdf47f10599c-xtables-lock\") pod \"kube-proxy-lwhhz\" (UID: \"e68ba158-a09b-4114-895a-fdf47f10599c\") " pod="kube-system/kube-proxy-lwhhz" Aug 13 00:46:36.566774 kubelet[2155]: I0813 00:46:36.566762 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e68ba158-a09b-4114-895a-fdf47f10599c-lib-modules\") pod \"kube-proxy-lwhhz\" (UID: \"e68ba158-a09b-4114-895a-fdf47f10599c\") " pod="kube-system/kube-proxy-lwhhz" Aug 13 00:46:36.566774 kubelet[2155]: I0813 00:46:36.566781 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e68ba158-a09b-4114-895a-fdf47f10599c-kube-proxy\") pod \"kube-proxy-lwhhz\" (UID: \"e68ba158-a09b-4114-895a-fdf47f10599c\") " pod="kube-system/kube-proxy-lwhhz" Aug 13 00:46:36.567296 kubelet[2155]: I0813 00:46:36.566799 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7hlw\" (UniqueName: \"kubernetes.io/projected/e68ba158-a09b-4114-895a-fdf47f10599c-kube-api-access-m7hlw\") pod \"kube-proxy-lwhhz\" (UID: \"e68ba158-a09b-4114-895a-fdf47f10599c\") " pod="kube-system/kube-proxy-lwhhz" Aug 13 00:46:36.666994 kubelet[2155]: I0813 00:46:36.666940 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85ttg\" (UniqueName: \"kubernetes.io/projected/0fe023bd-ebb8-4224-93bb-552d59709c1c-kube-api-access-85ttg\") pod \"tigera-operator-5bf8dfcb4-8zdpm\" (UID: \"0fe023bd-ebb8-4224-93bb-552d59709c1c\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-8zdpm" Aug 13 00:46:36.667197 kubelet[2155]: I0813 00:46:36.667028 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0fe023bd-ebb8-4224-93bb-552d59709c1c-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-8zdpm\" (UID: \"0fe023bd-ebb8-4224-93bb-552d59709c1c\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-8zdpm" Aug 13 00:46:36.673645 kubelet[2155]: I0813 00:46:36.673603 2155 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:46:36.695395 kubelet[2155]: E0813 00:46:36.695357 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:36.696017 env[1320]: time="2025-08-13T00:46:36.695958162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwhhz,Uid:e68ba158-a09b-4114-895a-fdf47f10599c,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:36.861861 env[1320]: time="2025-08-13T00:46:36.861756588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-8zdpm,Uid:0fe023bd-ebb8-4224-93bb-552d59709c1c,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:46:37.306411 env[1320]: time="2025-08-13T00:46:37.302295576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:46:37.306411 env[1320]: time="2025-08-13T00:46:37.302381902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:46:37.306411 env[1320]: time="2025-08-13T00:46:37.302414194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:46:37.306411 env[1320]: time="2025-08-13T00:46:37.302613017Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0747ef65cca7a9cd2c27b38f983f2397e3749a9e3dec5c2aea3ff833312d31a3 pid=2214 runtime=io.containerd.runc.v2 Aug 13 00:46:37.314557 env[1320]: time="2025-08-13T00:46:37.314480036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:46:37.314640 env[1320]: time="2025-08-13T00:46:37.314530223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:46:37.314640 env[1320]: time="2025-08-13T00:46:37.314566183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:46:37.314939 env[1320]: time="2025-08-13T00:46:37.314871892Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee992fc194c6570ae877be043ca1a209efd6c93779adcbf2f98e676d69e7f75d pid=2231 runtime=io.containerd.runc.v2 Aug 13 00:46:37.359055 env[1320]: time="2025-08-13T00:46:37.356494928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwhhz,Uid:e68ba158-a09b-4114-895a-fdf47f10599c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0747ef65cca7a9cd2c27b38f983f2397e3749a9e3dec5c2aea3ff833312d31a3\"" Aug 13 00:46:37.359216 kubelet[2155]: E0813 00:46:37.357350 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:37.360866 env[1320]: time="2025-08-13T00:46:37.360817131Z" level=info msg="CreateContainer within sandbox \"0747ef65cca7a9cd2c27b38f983f2397e3749a9e3dec5c2aea3ff833312d31a3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:46:37.374679 env[1320]: time="2025-08-13T00:46:37.374623699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-8zdpm,Uid:0fe023bd-ebb8-4224-93bb-552d59709c1c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ee992fc194c6570ae877be043ca1a209efd6c93779adcbf2f98e676d69e7f75d\"" Aug 13 00:46:37.377938 env[1320]: time="2025-08-13T00:46:37.376843889Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:46:37.379579 env[1320]: time="2025-08-13T00:46:37.379412252Z" level=info msg="CreateContainer within sandbox \"0747ef65cca7a9cd2c27b38f983f2397e3749a9e3dec5c2aea3ff833312d31a3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"981c2457d543499787b75fb687d3d71b9731c58452a8c60d9ee524d6a0380f5d\"" Aug 13 00:46:37.379845 env[1320]: time="2025-08-13T00:46:37.379811361Z" level=info msg="StartContainer for \"981c2457d543499787b75fb687d3d71b9731c58452a8c60d9ee524d6a0380f5d\"" Aug 13 00:46:37.432398 env[1320]: time="2025-08-13T00:46:37.432341565Z" level=info msg="StartContainer for \"981c2457d543499787b75fb687d3d71b9731c58452a8c60d9ee524d6a0380f5d\" returns successfully" Aug 13 00:46:37.551000 audit[2357]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.555973 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 13 00:46:37.556154 kernel: audit: type=1325 audit(1755045997.551:235): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.551000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd914a97c0 a2=0 a3=7ffd914a97ac items=0 ppid=2307 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.551000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:46:37.564099 kernel: audit: type=1300 audit(1755045997.551:235): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd914a97c0 a2=0 a3=7ffd914a97ac items=0 ppid=2307 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.564158 kernel: audit: type=1327 audit(1755045997.551:235): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:46:37.553000 audit[2360]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.566357 kernel: audit: type=1325 audit(1755045997.553:236): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.553000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb55af8e0 a2=0 a3=7ffdb55af8cc items=0 ppid=2307 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.570737 kernel: audit: type=1300 audit(1755045997.553:236): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb55af8e0 a2=0 a3=7ffdb55af8cc items=0 ppid=2307 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.571108 kernel: audit: type=1327 audit(1755045997.553:236): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:46:37.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:46:37.553000 audit[2358]: NETFILTER_CFG table=mangle:40 family=10 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.573675 kubelet[2155]: E0813 00:46:37.573643 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:37.576157 kernel: audit: type=1325 audit(1755045997.553:237): table=mangle:40 family=10 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.553000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe149efd00 a2=0 a3=7ffe149efcec items=0 ppid=2307 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.582282 kernel: audit: type=1300 audit(1755045997.553:237): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe149efd00 a2=0 a3=7ffe149efcec items=0 ppid=2307 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.582425 kernel: audit: type=1327 audit(1755045997.553:237): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:46:37.553000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:46:37.554000 audit[2361]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2361 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.588569 kernel: audit: type=1325 audit(1755045997.554:238): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.554000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff470efba0 a2=0 a3=7fff470efb8c items=0 ppid=2307 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.554000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:46:37.558000 audit[2362]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.558000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce78e9b80 a2=0 a3=7ffce78e9b6c items=0 ppid=2307 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:46:37.560000 audit[2363]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.560000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3f4cc7e0 a2=0 a3=7ffe3f4cc7cc items=0 ppid=2307 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:46:37.592086 kubelet[2155]: I0813 00:46:37.591966 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lwhhz" podStartSLOduration=1.59193986 podStartE2EDuration="1.59193986s" podCreationTimestamp="2025-08-13 00:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:46:37.590342139 +0000 UTC m=+7.141199778" watchObservedRunningTime="2025-08-13 00:46:37.59193986 +0000 UTC m=+7.142797469" Aug 13 00:46:37.653000 audit[2364]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.653000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffefdfad7e0 a2=0 a3=7ffefdfad7cc items=0 ppid=2307 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.653000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:46:37.658000 audit[2366]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.658000 audit[2366]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdfba0fbd0 a2=0 a3=7ffdfba0fbbc items=0 ppid=2307 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.658000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Aug 13 00:46:37.664000 audit[2369]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.664000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdd7ad51c0 a2=0 a3=7ffdd7ad51ac items=0 ppid=2307 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Aug 13 00:46:37.665000 audit[2370]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.665000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1aa1bfe0 a2=0 a3=7fff1aa1bfcc items=0 ppid=2307 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:46:37.668000 audit[2372]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.668000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc186d9350 a2=0 a3=7ffc186d933c items=0 ppid=2307 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.668000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:46:37.670000 audit[2373]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.670000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9168e940 a2=0 a3=7ffc9168e92c items=0 ppid=2307 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.670000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:46:37.673000 audit[2375]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.673000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff2c222b30 a2=0 a3=7fff2c222b1c items=0 ppid=2307 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.673000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:46:37.682000 audit[2378]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2378 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.682000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc37dca770 a2=0 a3=7ffc37dca75c items=0 ppid=2307 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Aug 13 00:46:37.682000 audit[2379]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.682000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5f465840 a2=0 a3=7ffc5f46582c items=0 ppid=2307 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:46:37.685000 audit[2381]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2381 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.685000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc6f13c70 a2=0 a3=7ffcc6f13c5c items=0 ppid=2307 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:46:37.686000 audit[2382]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.686000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe855a5650 a2=0 a3=7ffe855a563c items=0 ppid=2307 pid=2382 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:46:37.690000 audit[2384]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.690000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe44217c20 a2=0 a3=7ffe44217c0c items=0 ppid=2307 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:46:37.695000 audit[2387]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.695000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfd97cc50 a2=0 a3=7ffcfd97cc3c items=0 ppid=2307 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:46:37.704000 audit[2390]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2390 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.704000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc34347130 a2=0 a3=7ffc3434711c items=0 ppid=2307 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:46:37.705000 audit[2391]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.705000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc065d81c0 a2=0 a3=7ffc065d81ac items=0 ppid=2307 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:46:37.708000 audit[2393]: NETFILTER_CFG 
table=nat:59 family=2 entries=1 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.708000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffec53f3190 a2=0 a3=7ffec53f317c items=0 ppid=2307 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:46:37.713000 audit[2396]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.713000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd33411960 a2=0 a3=7ffd3341194c items=0 ppid=2307 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:46:37.715000 audit[2397]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.715000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca0bcc980 a2=0 a3=7ffca0bcc96c items=0 ppid=2307 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:46:37.719000 audit[2399]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:46:37.719000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdd15f5ff0 a2=0 a3=7ffdd15f5fdc items=0 ppid=2307 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 00:46:37.753000 audit[2405]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:37.753000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8608ef20 a2=0 a3=7ffc8608ef0c items=0 ppid=2307 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.753000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:37.764000 audit[2405]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:37.764000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc8608ef20 a2=0 a3=7ffc8608ef0c items=0 ppid=2307 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:37.766000 audit[2410]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.766000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff113b7040 a2=0 a3=7fff113b702c items=0 ppid=2307 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.766000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:46:37.768000 audit[2412]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.768000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc8599e410 a2=0 a3=7ffc8599e3fc items=0 ppid=2307 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.768000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Aug 13 00:46:37.772000 audit[2415]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.772000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffe01dadf0 a2=0 a3=7fffe01daddc items=0 ppid=2307 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Aug 13 00:46:37.773000 audit[2416]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.773000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4c4beda0 a2=0 a3=7ffe4c4bed8c items=0 ppid=2307 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:46:37.775000 audit[2418]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.775000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffd304fa30 a2=0 a3=7fffd304fa1c items=0 ppid=2307 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.775000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:46:37.776000 audit[2419]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.776000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6e917980 a2=0 a3=7ffc6e91796c items=0 ppid=2307 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.776000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:46:37.778000 audit[2421]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.778000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc66f5c310 a2=0 a3=7ffc66f5c2fc items=0 ppid=2307 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Aug 13 00:46:37.781000 audit[2424]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.781000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffff6f98c80 a2=0 a3=7ffff6f98c6c items=0 ppid=2307 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.781000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:46:37.782000 audit[2425]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2425 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.782000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc051d00a0 a2=0 a3=7ffc051d008c items=0 ppid=2307 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:46:37.784000 audit[2427]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.784000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8340e510 a2=0 a3=7fff8340e4fc items=0 ppid=2307 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:46:37.786000 audit[2428]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.786000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd128fde00 a2=0 a3=7ffd128fddec items=0 ppid=2307 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.786000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:46:37.788000 audit[2430]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2430 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.788000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcff6154d0 a2=0 a3=7ffcff6154bc items=0 ppid=2307 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:46:37.791000 audit[2433]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2433 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.791000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb6a8b9f0 a2=0 a3=7fffb6a8b9dc items=0 ppid=2307 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.791000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:46:37.795000 audit[2436]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2436 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.795000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2a010d20 a2=0 a3=7fff2a010d0c items=0 ppid=2307 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Aug 13 00:46:37.796000 audit[2437]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.796000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffec9363070 a2=0 a3=7ffec936305c items=0 ppid=2307 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.796000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:46:37.797000 audit[2439]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.797000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff07ff0f20 a2=0 a3=7fff07ff0f0c items=0 ppid=2307 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:46:37.800000 audit[2442]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.800000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd2b96ab30 a2=0 a3=7ffd2b96ab1c items=0 ppid=2307 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:46:37.801000 audit[2443]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.801000 audit[2443]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef59a90c0 a2=0 a3=7ffef59a90ac items=0 ppid=2307 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:46:37.807000 audit[2445]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.807000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc39f13170 a2=0 a3=7ffc39f1315c items=0 ppid=2307 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 00:46:37.808000 audit[2446]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.808000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde1f421c0 a2=0 a3=7ffde1f421ac items=0 ppid=2307 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 00:46:37.810000 audit[2448]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.810000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe35490440 a2=0 a3=7ffe3549042c items=0 ppid=2307 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:46:37.813000 audit[2451]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:46:37.813000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffec54df930 a2=0 a3=7ffec54df91c items=0 ppid=2307 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:46:37.816000 audit[2453]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:46:37.816000 audit[2453]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc82d37cb0 a2=0 a3=7ffc82d37c9c items=0 ppid=2307 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.816000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:37.818000 audit[2453]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:46:37.818000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc82d37cb0 a2=0 a3=7ffc82d37c9c items=0 ppid=2307 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:37.818000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:38.317963 kubelet[2155]: E0813 00:46:38.317895 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:38.577493 kubelet[2155]: E0813 00:46:38.576945 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:38.778596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138338825.mount: Deactivated successfully. Aug 13 00:46:39.583535 kubelet[2155]: E0813 00:46:39.581867 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:40.698435 env[1320]: time="2025-08-13T00:46:40.697356807Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:40.701264 env[1320]: time="2025-08-13T00:46:40.701172648Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:40.707859 env[1320]: time="2025-08-13T00:46:40.705733077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:40.713140 env[1320]: time="2025-08-13T00:46:40.712994779Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:40.716148 env[1320]: time="2025-08-13T00:46:40.713760027Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:46:40.726014 env[1320]: time="2025-08-13T00:46:40.725865703Z" level=info msg="CreateContainer within sandbox \"ee992fc194c6570ae877be043ca1a209efd6c93779adcbf2f98e676d69e7f75d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" 
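The proctitle= fields in the audit records above are the hex-encoded command line of each xtables call, with the arguments separated by NUL bytes, which is why the kube-proxy iptables/ip6tables invocations look opaque; `ausearch -i` performs the same decoding. A minimal sketch of that decoding (the helper name is illustrative; the sample hex value is copied verbatim from one of the records above):

# Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated) back into
# the command line that kube-proxy ran.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    # auditd hex-encodes the proctitle because argv entries are joined by NUL bytes
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

sample = (
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D504F5354524F5554494E47002D74006E6174"
)
print(decode_proctitle(sample))
# -> ip6tables -w 5 -W 100000 -N KUBE-POSTROUTING -t nat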
Aug 13 00:46:40.757894 env[1320]: time="2025-08-13T00:46:40.757631937Z" level=info msg="CreateContainer within sandbox \"ee992fc194c6570ae877be043ca1a209efd6c93779adcbf2f98e676d69e7f75d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5d66bd99e7e9f9fe4051ff940e1a04a792ac9c19efb1ebd1ca81b9eba41675b8\"" Aug 13 00:46:40.760785 env[1320]: time="2025-08-13T00:46:40.758988080Z" level=info msg="StartContainer for \"5d66bd99e7e9f9fe4051ff940e1a04a792ac9c19efb1ebd1ca81b9eba41675b8\"" Aug 13 00:46:41.442128 env[1320]: time="2025-08-13T00:46:41.440156381Z" level=info msg="StartContainer for \"5d66bd99e7e9f9fe4051ff940e1a04a792ac9c19efb1ebd1ca81b9eba41675b8\" returns successfully" Aug 13 00:46:47.501994 sudo[1481]: pam_unix(sudo:session): session closed for user root Aug 13 00:46:47.501000 audit[1481]: USER_END pid=1481 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:46:47.506354 kernel: kauditd_printk_skb: 143 callbacks suppressed Aug 13 00:46:47.506522 kernel: audit: type=1106 audit(1755046007.501:286): pid=1481 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:46:47.510064 kernel: audit: type=1104 audit(1755046007.505:287): pid=1481 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:46:47.505000 audit[1481]: CRED_DISP pid=1481 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:46:47.513483 sshd[1475]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:47.513000 audit[1475]: USER_END pid=1475 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:46:47.516484 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:43672.service: Deactivated successfully. Aug 13 00:46:47.518153 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:46:47.518726 systemd-logind[1304]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:46:47.519670 systemd-logind[1304]: Removed session 7. 
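The recurring kubelet "Nameserver limits exceeded" messages indicate that the node's resolv.conf lists more resolvers than the three the libc resolver supports, so kubelet keeps only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and warns about the rest. A rough, hypothetical sketch of that truncation, assuming a fourth resolver in the input for illustration (the extra entry and helper name are not taken from the log):

# Illustrative only: keep the first three nameserver entries, mirroring the
# MAXNS = 3 cap of the glibc resolver that kubelet's warning refers to.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:MAX_NAMESERVERS]  # extras are dropped, triggering the warning

example = (
    "nameserver 1.1.1.1\n"
    "nameserver 1.0.0.1\n"
    "nameserver 8.8.8.8\n"
    "nameserver 9.9.9.9\n"  # hypothetical fourth resolver, shown only to trigger the cut-off
)
print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']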
Aug 13 00:46:47.520269 kernel: audit: type=1106 audit(1755046007.513:288): pid=1475 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:46:47.513000 audit[1475]: CRED_DISP pid=1475 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:46:47.527086 kernel: audit: type=1104 audit(1755046007.513:289): pid=1475 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:46:47.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.21:22-10.0.0.1:43672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:47.532171 kernel: audit: type=1131 audit(1755046007.515:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.21:22-10.0.0.1:43672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:46:47.823536 kernel: audit: type=1325 audit(1755046007.813:291): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:47.823739 kernel: audit: type=1300 audit(1755046007.813:291): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa493b2a0 a2=0 a3=7fffa493b28c items=0 ppid=2307 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:47.813000 audit[2546]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:47.813000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa493b2a0 a2=0 a3=7fffa493b28c items=0 ppid=2307 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:47.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:47.828075 kernel: audit: type=1327 audit(1755046007.813:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:47.828000 audit[2546]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:47.838222 kernel: audit: type=1325 audit(1755046007.828:292): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:47.828000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa493b2a0 a2=0 a3=0 items=0 ppid=2307 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:47.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:47.850064 kernel: audit: type=1300 audit(1755046007.828:292): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa493b2a0 a2=0 a3=0 items=0 ppid=2307 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:47.903000 audit[2548]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:47.903000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff9b48f560 a2=0 a3=7fff9b48f54c items=0 ppid=2307 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:47.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:47.909000 audit[2548]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:47.909000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff9b48f560 a2=0 a3=0 items=0 ppid=2307 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:47.909000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:50.799000 audit[2551]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:50.799000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe93c08910 a2=0 a3=7ffe93c088fc items=0 ppid=2307 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:50.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:50.812000 audit[2551]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:50.812000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe93c08910 a2=0 a3=0 items=0 ppid=2307 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:50.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:50.843000 audit[2553]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 
00:46:50.843000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffffe20bf30 a2=0 a3=7ffffe20bf1c items=0 ppid=2307 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:50.843000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:50.848000 audit[2553]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:50.848000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffffe20bf30 a2=0 a3=0 items=0 ppid=2307 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:50.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:50.953465 kubelet[2155]: I0813 00:46:50.953396 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-8zdpm" podStartSLOduration=11.614164395 podStartE2EDuration="14.953375413s" podCreationTimestamp="2025-08-13 00:46:36 +0000 UTC" firstStartedPulling="2025-08-13 00:46:37.37614415 +0000 UTC m=+6.927001759" lastFinishedPulling="2025-08-13 00:46:40.715355168 +0000 UTC m=+10.266212777" observedRunningTime="2025-08-13 00:46:41.633765258 +0000 UTC m=+11.184622867" watchObservedRunningTime="2025-08-13 00:46:50.953375413 +0000 UTC m=+20.504233022" Aug 13 00:46:51.007862 kubelet[2155]: I0813 00:46:51.007734 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/91ee6a56-b38f-4d08-8f38-662a563c4304-typha-certs\") pod \"calico-typha-f7bc77b77-gdhp9\" (UID: \"91ee6a56-b38f-4d08-8f38-662a563c4304\") " pod="calico-system/calico-typha-f7bc77b77-gdhp9" Aug 13 00:46:51.007862 kubelet[2155]: I0813 00:46:51.007814 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91ee6a56-b38f-4d08-8f38-662a563c4304-tigera-ca-bundle\") pod \"calico-typha-f7bc77b77-gdhp9\" (UID: \"91ee6a56-b38f-4d08-8f38-662a563c4304\") " pod="calico-system/calico-typha-f7bc77b77-gdhp9" Aug 13 00:46:51.007862 kubelet[2155]: I0813 00:46:51.007848 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmz5j\" (UniqueName: \"kubernetes.io/projected/91ee6a56-b38f-4d08-8f38-662a563c4304-kube-api-access-cmz5j\") pod \"calico-typha-f7bc77b77-gdhp9\" (UID: \"91ee6a56-b38f-4d08-8f38-662a563c4304\") " pod="calico-system/calico-typha-f7bc77b77-gdhp9" Aug 13 00:46:51.267212 kubelet[2155]: E0813 00:46:51.267151 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:51.267941 env[1320]: time="2025-08-13T00:46:51.267904738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f7bc77b77-gdhp9,Uid:91ee6a56-b38f-4d08-8f38-662a563c4304,Namespace:calico-system,Attempt:0,}" Aug 13 00:46:51.290429 env[1320]: 
time="2025-08-13T00:46:51.290323373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:46:51.290429 env[1320]: time="2025-08-13T00:46:51.290386295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:46:51.290429 env[1320]: time="2025-08-13T00:46:51.290417306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:46:51.290725 env[1320]: time="2025-08-13T00:46:51.290664057Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6872fad81ebc0e94a04c8f37237cf35c4cce133c7e6c5ace180995ebb765d408 pid=2564 runtime=io.containerd.runc.v2 Aug 13 00:46:51.311159 kubelet[2155]: I0813 00:46:51.310844 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/28122c17-44bc-470c-93fa-7b72037b5a85-cni-bin-dir\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 00:46:51.311159 kubelet[2155]: I0813 00:46:51.310884 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/28122c17-44bc-470c-93fa-7b72037b5a85-cni-log-dir\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 00:46:51.311159 kubelet[2155]: I0813 00:46:51.310897 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/28122c17-44bc-470c-93fa-7b72037b5a85-cni-net-dir\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 00:46:51.311159 kubelet[2155]: I0813 00:46:51.310910 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/28122c17-44bc-470c-93fa-7b72037b5a85-node-certs\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 00:46:51.311159 kubelet[2155]: I0813 00:46:51.310927 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/28122c17-44bc-470c-93fa-7b72037b5a85-var-lib-calico\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 00:46:51.311423 kubelet[2155]: I0813 00:46:51.310942 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/28122c17-44bc-470c-93fa-7b72037b5a85-policysync\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 00:46:51.311423 kubelet[2155]: I0813 00:46:51.310956 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5lt\" (UniqueName: \"kubernetes.io/projected/28122c17-44bc-470c-93fa-7b72037b5a85-kube-api-access-dw5lt\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 
00:46:51.311423 kubelet[2155]: I0813 00:46:51.310974 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/28122c17-44bc-470c-93fa-7b72037b5a85-flexvol-driver-host\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 00:46:51.311423 kubelet[2155]: I0813 00:46:51.310985 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28122c17-44bc-470c-93fa-7b72037b5a85-lib-modules\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 00:46:51.311423 kubelet[2155]: I0813 00:46:51.310997 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28122c17-44bc-470c-93fa-7b72037b5a85-tigera-ca-bundle\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 00:46:51.311587 kubelet[2155]: I0813 00:46:51.311014 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28122c17-44bc-470c-93fa-7b72037b5a85-xtables-lock\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 00:46:51.311587 kubelet[2155]: I0813 00:46:51.311045 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/28122c17-44bc-470c-93fa-7b72037b5a85-var-run-calico\") pod \"calico-node-5469d\" (UID: \"28122c17-44bc-470c-93fa-7b72037b5a85\") " pod="calico-system/calico-node-5469d" Aug 13 00:46:51.358358 env[1320]: time="2025-08-13T00:46:51.358297975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f7bc77b77-gdhp9,Uid:91ee6a56-b38f-4d08-8f38-662a563c4304,Namespace:calico-system,Attempt:0,} returns sandbox id \"6872fad81ebc0e94a04c8f37237cf35c4cce133c7e6c5ace180995ebb765d408\"" Aug 13 00:46:51.360546 kubelet[2155]: E0813 00:46:51.360508 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:51.364820 env[1320]: time="2025-08-13T00:46:51.364783062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:46:51.416111 kubelet[2155]: E0813 00:46:51.416059 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:51.416111 kubelet[2155]: W0813 00:46:51.416091 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:51.416384 kubelet[2155]: E0813 00:46:51.416134 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:46:51.416425 kubelet[2155]: E0813 00:46:51.416390 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:51.416425 kubelet[2155]: W0813 00:46:51.416402 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:51.416493 kubelet[2155]: E0813 00:46:51.416429 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:51.420524 kubelet[2155]: E0813 00:46:51.420461 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:51.420524 kubelet[2155]: W0813 00:46:51.420487 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:51.420524 kubelet[2155]: E0813 00:46:51.420509 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:51.427397 kubelet[2155]: E0813 00:46:51.427341 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfdgb" podUID="222af0ae-3446-4630-b6ec-423608ba3718" Aug 13 00:46:51.437494 kubelet[2155]: E0813 00:46:51.437455 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:51.437718 kubelet[2155]: W0813 00:46:51.437692 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:51.438893 kubelet[2155]: E0813 00:46:51.438872 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:51.439853 kubelet[2155]: E0813 00:46:51.439837 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:51.439970 kubelet[2155]: W0813 00:46:51.439948 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:51.440258 kubelet[2155]: E0813 00:46:51.440237 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:46:51.440514 kubelet[2155]: E0813 00:46:51.440501 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:51.440607 kubelet[2155]: W0813 00:46:51.440588 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:51.440848 kubelet[2155]: E0813 00:46:51.440831 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
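The three kubelet messages repeated above are the FlexVolume prober at work: on every sync the kubelet executes each binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/ with the argument init and tries to unmarshal a JSON status object from its stdout. On this node the nodeagent~uds directory exists but the uds binary does not, so the exec fails, stdout stays empty, and the JSON decode reports "unexpected end of JSON input". A minimal sketch of the reply such a driver is expected to print for init (the driver path and name come from the log; the Go layout and field set follow the usual FlexVolume convention and are otherwise assumptions):

// flexvolume_init.go - minimal sketch of a FlexVolume driver answering "init".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON object the kubelet tries to unmarshal in
// driver-call.go; an empty stdout is what triggers the errors logged above.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this sketch does not implement.
	out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}

Installing such a binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, or removing the stale plugin directory, would presumably silence the repeated probe errors.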
Aug 13 00:46:51.508393 env[1320]: time="2025-08-13T00:46:51.507027254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5469d,Uid:28122c17-44bc-470c-93fa-7b72037b5a85,Namespace:calico-system,Attempt:0,}"
Aug 13 00:46:51.515757 kubelet[2155]: E0813 00:46:51.515706 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:51.515757 kubelet[2155]: W0813 00:46:51.515747 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:51.515900 kubelet[2155]: E0813 00:46:51.515782 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:51.515900 kubelet[2155]: I0813 00:46:51.515834 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/222af0ae-3446-4630-b6ec-423608ba3718-varrun\") pod \"csi-node-driver-dfdgb\" (UID: \"222af0ae-3446-4630-b6ec-423608ba3718\") " pod="calico-system/csi-node-driver-dfdgb" Aug 13 00:46:51.516570 kubelet[2155]: I0813 00:46:51.516502 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/222af0ae-3446-4630-b6ec-423608ba3718-registration-dir\") pod \"csi-node-driver-dfdgb\" (UID: \"222af0ae-3446-4630-b6ec-423608ba3718\") " pod="calico-system/csi-node-driver-dfdgb" Aug 13 00:46:51.517073 kubelet[2155]: I0813 00:46:51.516971 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/222af0ae-3446-4630-b6ec-423608ba3718-socket-dir\") pod \"csi-node-driver-dfdgb\" (UID: \"222af0ae-3446-4630-b6ec-423608ba3718\") " pod="calico-system/csi-node-driver-dfdgb" Aug 13 00:46:51.522741 kubelet[2155]: I0813 00:46:51.517678 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x7lj\" (UniqueName: \"kubernetes.io/projected/222af0ae-3446-4630-b6ec-423608ba3718-kube-api-access-6x7lj\") pod \"csi-node-driver-dfdgb\" (UID: \"222af0ae-3446-4630-b6ec-423608ba3718\") " pod="calico-system/csi-node-driver-dfdgb" Aug 13 00:46:51.523125 kubelet[2155]: I0813 00:46:51.518458 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222af0ae-3446-4630-b6ec-423608ba3718-kubelet-dir\") pod \"csi-node-driver-dfdgb\" (UID: \"222af0ae-3446-4630-b6ec-423608ba3718\") " pod="calico-system/csi-node-driver-dfdgb"
Aug 13 00:46:51.535318 env[1320]: time="2025-08-13T00:46:51.535068937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:46:51.535318 env[1320]: time="2025-08-13T00:46:51.535126018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:46:51.535318 env[1320]: time="2025-08-13T00:46:51.535140968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:46:51.535576 env[1320]: time="2025-08-13T00:46:51.535501530Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c4025a919faf9ede6987b3e7918ab19a72a3144404551c2569b9989710455f4 pid=2670 runtime=io.containerd.runc.v2 Aug 13 00:46:51.572390 env[1320]: time="2025-08-13T00:46:51.572327178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5469d,Uid:28122c17-44bc-470c-93fa-7b72037b5a85,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c4025a919faf9ede6987b3e7918ab19a72a3144404551c2569b9989710455f4\""
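The RunPodSandbox pair above is the CRI round trip for the calico-node pod: the kubelet asks the container runtime for a sandbox, containerd starts a runc v2 shim (the "starting signal loop" entry, pid=2670), and the call returns sandbox id 8c4025a9.... A rough sketch of that request against the CRI API, assuming the usual containerd socket path and the k8s.io/cri-api v1 client; none of this code is from the log and it is not the kubelet's own implementation:

// runpodsandbox_sketch.go - illustrative CRI client call.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI socket; adjust for the host in question.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Metadata mirrors the PodSandboxMetadata printed in the log above.
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-node-5469d",
				Uid:       "28122c17-44bc-470c-93fa-7b72037b5a85",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. 8c4025a9...
}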
Aug 13 00:46:51.870000 audit[2731]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:51.870000 audit[2731]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcb3f34e70 a2=0 a3=7ffcb3f34e5c items=0 ppid=2307 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:51.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:51.876000 audit[2731]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:46:51.876000 audit[2731]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb3f34e70 a2=0 a3=0 items=0 ppid=2307 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:46:51.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:46:52.139632 systemd[1]: run-containerd-runc-k8s.io-6872fad81ebc0e94a04c8f37237cf35c4cce133c7e6c5ace180995ebb765d408-runc.oXA0uz.mount: Deactivated successfully. Aug 13 00:46:52.997646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361648768.mount: Deactivated successfully.
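The PROCTITLE field in the two audit records above is the hex-encoded, NUL-separated argv of the audited process (comm="iptables-restor", exe=/usr/sbin/xtables-nft-multi). Decoding the value shown here should recover the full command line, iptables-restore -w 5 -W 100000 --noflush --counters, which matches a rule sync that restores the filter and nat tables without flushing existing chains. A small decoder, using only the hex string already present in the log:

// proctitle_decode.go - decodes the PROCTITLE value from the audit records above.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Copied verbatim from the PROCTITLE records above.
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// argv elements are NUL-separated in the audit record; join them with spaces.
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
	// Prints: iptables-restore -w 5 -W 100000 --noflush --counters
}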
Aug 13 00:46:53.547660 kubelet[2155]: E0813 00:46:53.547595 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfdgb" podUID="222af0ae-3446-4630-b6ec-423608ba3718" Aug 13 00:46:54.288199 env[1320]: time="2025-08-13T00:46:54.288089520Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:54.291093 env[1320]: time="2025-08-13T00:46:54.291054467Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:54.292976 env[1320]: time="2025-08-13T00:46:54.292927814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:54.294740 env[1320]: time="2025-08-13T00:46:54.294706077Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:54.295479 env[1320]: time="2025-08-13T00:46:54.295377160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 00:46:54.296672 env[1320]: time="2025-08-13T00:46:54.296624652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:46:54.304842 env[1320]: time="2025-08-13T00:46:54.304778844Z" level=info msg="CreateContainer within sandbox \"6872fad81ebc0e94a04c8f37237cf35c4cce133c7e6c5ace180995ebb765d408\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:46:54.322739 env[1320]: time="2025-08-13T00:46:54.322661353Z" level=info msg="CreateContainer within sandbox \"6872fad81ebc0e94a04c8f37237cf35c4cce133c7e6c5ace180995ebb765d408\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b7f3d4baec8720476d0f81edefe0d70ca66055b308f3fe83efd644c8e28086cd\"" Aug 13 00:46:54.323352 env[1320]: time="2025-08-13T00:46:54.323315643Z" level=info msg="StartContainer for \"b7f3d4baec8720476d0f81edefe0d70ca66055b308f3fe83efd644c8e28086cd\"" Aug 13 00:46:54.690791 env[1320]: time="2025-08-13T00:46:54.689954981Z" level=info msg="StartContainer for \"b7f3d4baec8720476d0f81edefe0d70ca66055b308f3fe83efd644c8e28086cd\" returns successfully" Aug 13 00:46:55.307929 systemd[1]: run-containerd-runc-k8s.io-b7f3d4baec8720476d0f81edefe0d70ca66055b308f3fe83efd644c8e28086cd-runc.BSrtiD.mount: Deactivated successfully. 
Aug 13 00:46:55.543845 kubelet[2155]: E0813 00:46:55.543685 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfdgb" podUID="222af0ae-3446-4630-b6ec-423608ba3718" Aug 13 00:46:55.701200 kubelet[2155]: E0813 00:46:55.701016 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:46:55.739730 kubelet[2155]: I0813 00:46:55.739636 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f7bc77b77-gdhp9" podStartSLOduration=2.805943611 podStartE2EDuration="5.739614552s" podCreationTimestamp="2025-08-13 00:46:50 +0000 UTC" firstStartedPulling="2025-08-13 00:46:51.362764022 +0000 UTC m=+20.913621631" lastFinishedPulling="2025-08-13 00:46:54.296434963 +0000 UTC m=+23.847292572" observedRunningTime="2025-08-13 00:46:55.739182514 +0000 UTC m=+25.290040143" watchObservedRunningTime="2025-08-13 00:46:55.739614552 +0000 UTC m=+25.290472152" Aug 13 00:46:55.767886 kubelet[2155]: E0813 00:46:55.767819 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.767886 kubelet[2155]: W0813 00:46:55.767857 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.767886 kubelet[2155]: E0813 00:46:55.767888 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
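The pod_startup_latency_tracker entry above reports both podStartSLOduration=2.805943611 and podStartE2EDuration="5.739614552s" for calico-typha. The two are consistent if the SLO figure is the end-to-end duration minus the image-pull window (a reading of the numbers, not something the log states): pulling ran from m=+20.913621631 to m=+23.847292572, about 2.933670941 s, and 5.739614552 - 2.933670941 = 2.805943611. A quick check:

// startup_latency_check.go - recomputes the durations printed by the tracker entry above.
package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) and durations copied from the log entry.
	const (
		firstStartedPulling = 20.913621631 // seconds since kubelet start
		lastFinishedPulling = 23.847292572
		podStartE2E         = 5.739614552
	)
	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull window: %.9f s\n", pullWindow)             // ~2.933670941
	fmt.Printf("E2E minus pulling: %.9f s\n", podStartE2E-pullWindow) // ~2.805943611, the reported SLO duration
}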
Aug 13 00:46:55.778845 kubelet[2155]: E0813 00:46:55.778827 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.778845 kubelet[2155]: W0813 00:46:55.778842 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.778999 kubelet[2155]: E0813 00:46:55.778860 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:46:55.779229 kubelet[2155]: E0813 00:46:55.779191 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.779229 kubelet[2155]: W0813 00:46:55.779209 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.779229 kubelet[2155]: E0813 00:46:55.779230 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.779548 kubelet[2155]: E0813 00:46:55.779529 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.779548 kubelet[2155]: W0813 00:46:55.779543 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.779643 kubelet[2155]: E0813 00:46:55.779560 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.779796 kubelet[2155]: E0813 00:46:55.779772 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.779926 kubelet[2155]: W0813 00:46:55.779791 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.779926 kubelet[2155]: E0813 00:46:55.779897 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.780398 kubelet[2155]: E0813 00:46:55.780347 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.780398 kubelet[2155]: W0813 00:46:55.780366 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.780603 kubelet[2155]: E0813 00:46:55.780474 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.780706 kubelet[2155]: E0813 00:46:55.780605 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.780706 kubelet[2155]: W0813 00:46:55.780614 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.780831 kubelet[2155]: E0813 00:46:55.780719 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:46:55.780946 kubelet[2155]: E0813 00:46:55.780917 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.781015 kubelet[2155]: W0813 00:46:55.780943 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.781015 kubelet[2155]: E0813 00:46:55.780972 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.782714 kubelet[2155]: E0813 00:46:55.782691 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.782714 kubelet[2155]: W0813 00:46:55.782707 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.782830 kubelet[2155]: E0813 00:46:55.782754 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.784023 kubelet[2155]: E0813 00:46:55.782979 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.784023 kubelet[2155]: W0813 00:46:55.782998 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.784023 kubelet[2155]: E0813 00:46:55.783173 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.784023 kubelet[2155]: W0813 00:46:55.783183 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.784023 kubelet[2155]: E0813 00:46:55.783187 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.784023 kubelet[2155]: E0813 00:46:55.783197 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.784023 kubelet[2155]: E0813 00:46:55.783604 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.784023 kubelet[2155]: W0813 00:46:55.783620 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.784023 kubelet[2155]: E0813 00:46:55.783641 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:46:55.784023 kubelet[2155]: E0813 00:46:55.783997 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.784396 kubelet[2155]: W0813 00:46:55.784008 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.784396 kubelet[2155]: E0813 00:46:55.784026 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.784902 kubelet[2155]: E0813 00:46:55.784655 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.784902 kubelet[2155]: W0813 00:46:55.784680 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.784902 kubelet[2155]: E0813 00:46:55.784796 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.798305 kubelet[2155]: E0813 00:46:55.798203 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.798305 kubelet[2155]: W0813 00:46:55.798249 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.798305 kubelet[2155]: E0813 00:46:55.798282 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.798914 kubelet[2155]: E0813 00:46:55.798854 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.798914 kubelet[2155]: W0813 00:46:55.798898 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.799022 kubelet[2155]: E0813 00:46:55.798936 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:46:55.799804 kubelet[2155]: E0813 00:46:55.799620 2155 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:46:55.799804 kubelet[2155]: W0813 00:46:55.799634 2155 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:46:55.799804 kubelet[2155]: E0813 00:46:55.799645 2155 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:46:55.945889 env[1320]: time="2025-08-13T00:46:55.945805466Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:55.950061 env[1320]: time="2025-08-13T00:46:55.950003940Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:55.953825 env[1320]: time="2025-08-13T00:46:55.953630203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:55.957239 env[1320]: time="2025-08-13T00:46:55.956955283Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:46:55.957435 env[1320]: time="2025-08-13T00:46:55.957288550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 00:46:55.962207 env[1320]: time="2025-08-13T00:46:55.962132286Z" level=info msg="CreateContainer within sandbox \"8c4025a919faf9ede6987b3e7918ab19a72a3144404551c2569b9989710455f4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:46:55.996268 env[1320]: time="2025-08-13T00:46:55.996169219Z" level=info msg="CreateContainer within sandbox \"8c4025a919faf9ede6987b3e7918ab19a72a3144404551c2569b9989710455f4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4c98dd96cee71e022519e2881e79b6ad2a4ea746ea947319402c1eb4e4f85060\"" Aug 13 00:46:55.997851 env[1320]: time="2025-08-13T00:46:55.997781918Z" level=info msg="StartContainer for \"4c98dd96cee71e022519e2881e79b6ad2a4ea746ea947319402c1eb4e4f85060\"" Aug 13 00:46:56.117434 env[1320]: time="2025-08-13T00:46:56.117371775Z" level=info msg="StartContainer for \"4c98dd96cee71e022519e2881e79b6ad2a4ea746ea947319402c1eb4e4f85060\" returns successfully" Aug 13 00:46:56.200923 env[1320]: time="2025-08-13T00:46:56.200684679Z" level=info msg="shim disconnected" id=4c98dd96cee71e022519e2881e79b6ad2a4ea746ea947319402c1eb4e4f85060 Aug 13 00:46:56.200923 env[1320]: time="2025-08-13T00:46:56.200794501Z" level=warning msg="cleaning up after shim disconnected" id=4c98dd96cee71e022519e2881e79b6ad2a4ea746ea947319402c1eb4e4f85060 namespace=k8s.io Aug 13 00:46:56.200923 env[1320]: time="2025-08-13T00:46:56.200813850Z" level=info msg="cleaning up dead shim" Aug 13 00:46:56.220370 env[1320]: time="2025-08-13T00:46:56.219558136Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:46:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2854 runtime=io.containerd.runc.v2\n" Aug 13 00:46:56.310517 systemd[1]: run-containerd-runc-k8s.io-4c98dd96cee71e022519e2881e79b6ad2a4ea746ea947319402c1eb4e4f85060-runc.mROfk5.mount: Deactivated successfully. Aug 13 00:46:56.312369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c98dd96cee71e022519e2881e79b6ad2a4ea746ea947319402c1eb4e4f85060-rootfs.mount: Deactivated successfully. 
Aug 13 00:46:56.705881 env[1320]: time="2025-08-13T00:46:56.705721626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:46:57.544003 kubelet[2155]: E0813 00:46:57.543915 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfdgb" podUID="222af0ae-3446-4630-b6ec-423608ba3718" Aug 13 00:46:59.544059 kubelet[2155]: E0813 00:46:59.543984 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfdgb" podUID="222af0ae-3446-4630-b6ec-423608ba3718" Aug 13 00:47:01.130199 env[1320]: time="2025-08-13T00:47:01.130134672Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:01.131909 env[1320]: time="2025-08-13T00:47:01.131847367Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:01.133210 env[1320]: time="2025-08-13T00:47:01.133161515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:01.134653 env[1320]: time="2025-08-13T00:47:01.134605301Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:01.135211 env[1320]: time="2025-08-13T00:47:01.135165993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 00:47:01.140572 env[1320]: time="2025-08-13T00:47:01.140526319Z" level=info msg="CreateContainer within sandbox \"8c4025a919faf9ede6987b3e7918ab19a72a3144404551c2569b9989710455f4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:47:01.158295 env[1320]: time="2025-08-13T00:47:01.158233418Z" level=info msg="CreateContainer within sandbox \"8c4025a919faf9ede6987b3e7918ab19a72a3144404551c2569b9989710455f4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e2ecb7c0c386f55d806396ac75fe8181fb6dd490ccc8469b200179a1519fb646\"" Aug 13 00:47:01.158884 env[1320]: time="2025-08-13T00:47:01.158854618Z" level=info msg="StartContainer for \"e2ecb7c0c386f55d806396ac75fe8181fb6dd490ccc8469b200179a1519fb646\"" Aug 13 00:47:01.543644 kubelet[2155]: E0813 00:47:01.543548 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfdgb" podUID="222af0ae-3446-4630-b6ec-423608ba3718" Aug 13 00:47:02.091461 env[1320]: time="2025-08-13T00:47:02.091349272Z" level=info msg="StartContainer for \"e2ecb7c0c386f55d806396ac75fe8181fb6dd490ccc8469b200179a1519fb646\" returns successfully" Aug 13 
00:47:03.019942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2ecb7c0c386f55d806396ac75fe8181fb6dd490ccc8469b200179a1519fb646-rootfs.mount: Deactivated successfully. Aug 13 00:47:03.022021 kubelet[2155]: I0813 00:47:03.021388 2155 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:47:03.022657 env[1320]: time="2025-08-13T00:47:03.021627715Z" level=info msg="shim disconnected" id=e2ecb7c0c386f55d806396ac75fe8181fb6dd490ccc8469b200179a1519fb646 Aug 13 00:47:03.022657 env[1320]: time="2025-08-13T00:47:03.021715233Z" level=warning msg="cleaning up after shim disconnected" id=e2ecb7c0c386f55d806396ac75fe8181fb6dd490ccc8469b200179a1519fb646 namespace=k8s.io Aug 13 00:47:03.022657 env[1320]: time="2025-08-13T00:47:03.021727076Z" level=info msg="cleaning up dead shim" Aug 13 00:47:03.030337 env[1320]: time="2025-08-13T00:47:03.030278268Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:47:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2925 runtime=io.containerd.runc.v2\n" Aug 13 00:47:03.101761 env[1320]: time="2025-08-13T00:47:03.100738154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:47:03.239194 kubelet[2155]: I0813 00:47:03.239113 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txmdt\" (UniqueName: \"kubernetes.io/projected/34f400c3-01be-4d31-9cf0-4542774ed01f-kube-api-access-txmdt\") pod \"calico-apiserver-7dfcc9cb65-9wsgx\" (UID: \"34f400c3-01be-4d31-9cf0-4542774ed01f\") " pod="calico-apiserver/calico-apiserver-7dfcc9cb65-9wsgx" Aug 13 00:47:03.239194 kubelet[2155]: I0813 00:47:03.239171 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghmjd\" (UniqueName: \"kubernetes.io/projected/1bcb8657-554c-4e97-97e8-2254cdca9102-kube-api-access-ghmjd\") pod \"whisker-65bd9f9cbb-kc8mn\" (UID: \"1bcb8657-554c-4e97-97e8-2254cdca9102\") " pod="calico-system/whisker-65bd9f9cbb-kc8mn" Aug 13 00:47:03.239194 kubelet[2155]: I0813 00:47:03.239186 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/34f400c3-01be-4d31-9cf0-4542774ed01f-calico-apiserver-certs\") pod \"calico-apiserver-7dfcc9cb65-9wsgx\" (UID: \"34f400c3-01be-4d31-9cf0-4542774ed01f\") " pod="calico-apiserver/calico-apiserver-7dfcc9cb65-9wsgx" Aug 13 00:47:03.239194 kubelet[2155]: I0813 00:47:03.239203 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9e603b-de35-4367-b9d4-819dfaad6063-config\") pod \"goldmane-58fd7646b9-n995t\" (UID: \"3c9e603b-de35-4367-b9d4-819dfaad6063\") " pod="calico-system/goldmane-58fd7646b9-n995t" Aug 13 00:47:03.239524 kubelet[2155]: I0813 00:47:03.239236 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqstv\" (UniqueName: \"kubernetes.io/projected/3c9e603b-de35-4367-b9d4-819dfaad6063-kube-api-access-kqstv\") pod \"goldmane-58fd7646b9-n995t\" (UID: \"3c9e603b-de35-4367-b9d4-819dfaad6063\") " pod="calico-system/goldmane-58fd7646b9-n995t" Aug 13 00:47:03.239524 kubelet[2155]: I0813 00:47:03.239264 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c322dc19-862d-4e41-8d08-222e739f135c-config-volume\") pod \"coredns-7c65d6cfc9-8tqtr\" (UID: \"c322dc19-862d-4e41-8d08-222e739f135c\") " pod="kube-system/coredns-7c65d6cfc9-8tqtr" Aug 13 00:47:03.239524 kubelet[2155]: I0813 00:47:03.239280 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7zf\" (UniqueName: \"kubernetes.io/projected/c322dc19-862d-4e41-8d08-222e739f135c-kube-api-access-vl7zf\") pod \"coredns-7c65d6cfc9-8tqtr\" (UID: \"c322dc19-862d-4e41-8d08-222e739f135c\") " pod="kube-system/coredns-7c65d6cfc9-8tqtr" Aug 13 00:47:03.239524 kubelet[2155]: I0813 00:47:03.239298 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzpbt\" (UniqueName: \"kubernetes.io/projected/e908a700-6b74-4dfa-9e94-825de953b563-kube-api-access-wzpbt\") pod \"calico-apiserver-7dfcc9cb65-vpjwq\" (UID: \"e908a700-6b74-4dfa-9e94-825de953b563\") " pod="calico-apiserver/calico-apiserver-7dfcc9cb65-vpjwq" Aug 13 00:47:03.239524 kubelet[2155]: I0813 00:47:03.239324 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e908a700-6b74-4dfa-9e94-825de953b563-calico-apiserver-certs\") pod \"calico-apiserver-7dfcc9cb65-vpjwq\" (UID: \"e908a700-6b74-4dfa-9e94-825de953b563\") " pod="calico-apiserver/calico-apiserver-7dfcc9cb65-vpjwq" Aug 13 00:47:03.239656 kubelet[2155]: I0813 00:47:03.239337 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bcb8657-554c-4e97-97e8-2254cdca9102-whisker-ca-bundle\") pod \"whisker-65bd9f9cbb-kc8mn\" (UID: \"1bcb8657-554c-4e97-97e8-2254cdca9102\") " pod="calico-system/whisker-65bd9f9cbb-kc8mn" Aug 13 00:47:03.239656 kubelet[2155]: I0813 00:47:03.239364 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9bw\" (UniqueName: \"kubernetes.io/projected/7f9c7221-f628-46e4-b71d-64f1c6b90e3b-kube-api-access-ch9bw\") pod \"calico-kube-controllers-75878946fc-mwwk8\" (UID: \"7f9c7221-f628-46e4-b71d-64f1c6b90e3b\") " pod="calico-system/calico-kube-controllers-75878946fc-mwwk8" Aug 13 00:47:03.239656 kubelet[2155]: I0813 00:47:03.239379 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1bcb8657-554c-4e97-97e8-2254cdca9102-whisker-backend-key-pair\") pod \"whisker-65bd9f9cbb-kc8mn\" (UID: \"1bcb8657-554c-4e97-97e8-2254cdca9102\") " pod="calico-system/whisker-65bd9f9cbb-kc8mn" Aug 13 00:47:03.239656 kubelet[2155]: I0813 00:47:03.239393 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e4136fa-8827-4ebd-a668-a252e7a55a56-config-volume\") pod \"coredns-7c65d6cfc9-n9cpw\" (UID: \"1e4136fa-8827-4ebd-a668-a252e7a55a56\") " pod="kube-system/coredns-7c65d6cfc9-n9cpw" Aug 13 00:47:03.239656 kubelet[2155]: I0813 00:47:03.239409 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f9c7221-f628-46e4-b71d-64f1c6b90e3b-tigera-ca-bundle\") pod \"calico-kube-controllers-75878946fc-mwwk8\" (UID: \"7f9c7221-f628-46e4-b71d-64f1c6b90e3b\") " 
pod="calico-system/calico-kube-controllers-75878946fc-mwwk8" Aug 13 00:47:03.239795 kubelet[2155]: I0813 00:47:03.239429 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3c9e603b-de35-4367-b9d4-819dfaad6063-goldmane-key-pair\") pod \"goldmane-58fd7646b9-n995t\" (UID: \"3c9e603b-de35-4367-b9d4-819dfaad6063\") " pod="calico-system/goldmane-58fd7646b9-n995t" Aug 13 00:47:03.239795 kubelet[2155]: I0813 00:47:03.239443 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpgk7\" (UniqueName: \"kubernetes.io/projected/1e4136fa-8827-4ebd-a668-a252e7a55a56-kube-api-access-wpgk7\") pod \"coredns-7c65d6cfc9-n9cpw\" (UID: \"1e4136fa-8827-4ebd-a668-a252e7a55a56\") " pod="kube-system/coredns-7c65d6cfc9-n9cpw" Aug 13 00:47:03.239795 kubelet[2155]: I0813 00:47:03.239458 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c9e603b-de35-4367-b9d4-819dfaad6063-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-n995t\" (UID: \"3c9e603b-de35-4367-b9d4-819dfaad6063\") " pod="calico-system/goldmane-58fd7646b9-n995t" Aug 13 00:47:03.546238 env[1320]: time="2025-08-13T00:47:03.546181943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfdgb,Uid:222af0ae-3446-4630-b6ec-423608ba3718,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:03.634113 env[1320]: time="2025-08-13T00:47:03.634008131Z" level=error msg="Failed to destroy network for sandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.634695 env[1320]: time="2025-08-13T00:47:03.634636121Z" level=error msg="encountered an error cleaning up failed sandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.634760 env[1320]: time="2025-08-13T00:47:03.634712198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfdgb,Uid:222af0ae-3446-4630-b6ec-423608ba3718,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.635277 kubelet[2155]: E0813 00:47:03.635220 2155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.635353 kubelet[2155]: E0813 00:47:03.635307 2155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dfdgb" Aug 13 00:47:03.635353 kubelet[2155]: E0813 00:47:03.635331 2155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dfdgb" Aug 13 00:47:03.635434 kubelet[2155]: E0813 00:47:03.635398 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dfdgb_calico-system(222af0ae-3446-4630-b6ec-423608ba3718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dfdgb_calico-system(222af0ae-3446-4630-b6ec-423608ba3718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dfdgb" podUID="222af0ae-3446-4630-b6ec-423608ba3718" Aug 13 00:47:03.652436 kubelet[2155]: E0813 00:47:03.652395 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:03.652931 env[1320]: time="2025-08-13T00:47:03.652904462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n9cpw,Uid:1e4136fa-8827-4ebd-a668-a252e7a55a56,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:03.657024 env[1320]: time="2025-08-13T00:47:03.656986440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfcc9cb65-9wsgx,Uid:34f400c3-01be-4d31-9cf0-4542774ed01f,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:47:03.660476 env[1320]: time="2025-08-13T00:47:03.660429145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75878946fc-mwwk8,Uid:7f9c7221-f628-46e4-b71d-64f1c6b90e3b,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:03.667003 env[1320]: time="2025-08-13T00:47:03.666950886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-n995t,Uid:3c9e603b-de35-4367-b9d4-819dfaad6063,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:03.669458 env[1320]: time="2025-08-13T00:47:03.669431817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfcc9cb65-vpjwq,Uid:e908a700-6b74-4dfa-9e94-825de953b563,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:47:03.670637 kubelet[2155]: E0813 00:47:03.670606 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:03.671071 env[1320]: time="2025-08-13T00:47:03.670912030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8tqtr,Uid:c322dc19-862d-4e41-8d08-222e739f135c,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:03.677198 env[1320]: 
time="2025-08-13T00:47:03.677143932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bd9f9cbb-kc8mn,Uid:1bcb8657-554c-4e97-97e8-2254cdca9102,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:03.730740 env[1320]: time="2025-08-13T00:47:03.730667975Z" level=error msg="Failed to destroy network for sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.731021 env[1320]: time="2025-08-13T00:47:03.730979987Z" level=error msg="encountered an error cleaning up failed sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.731111 env[1320]: time="2025-08-13T00:47:03.731052466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n9cpw,Uid:1e4136fa-8827-4ebd-a668-a252e7a55a56,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.731374 kubelet[2155]: E0813 00:47:03.731327 2155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.731458 kubelet[2155]: E0813 00:47:03.731410 2155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-n9cpw" Aug 13 00:47:03.731458 kubelet[2155]: E0813 00:47:03.731429 2155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-n9cpw" Aug 13 00:47:03.731563 kubelet[2155]: E0813 00:47:03.731471 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-n9cpw_kube-system(1e4136fa-8827-4ebd-a668-a252e7a55a56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-n9cpw_kube-system(1e4136fa-8827-4ebd-a668-a252e7a55a56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-n9cpw" podUID="1e4136fa-8827-4ebd-a668-a252e7a55a56" Aug 13 00:47:03.793577 env[1320]: time="2025-08-13T00:47:03.793498887Z" level=error msg="Failed to destroy network for sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.793928 env[1320]: time="2025-08-13T00:47:03.793896233Z" level=error msg="encountered an error cleaning up failed sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.793982 env[1320]: time="2025-08-13T00:47:03.793948493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfcc9cb65-9wsgx,Uid:34f400c3-01be-4d31-9cf0-4542774ed01f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.794278 kubelet[2155]: E0813 00:47:03.794210 2155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.794365 kubelet[2155]: E0813 00:47:03.794311 2155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dfcc9cb65-9wsgx" Aug 13 00:47:03.794395 kubelet[2155]: E0813 00:47:03.794373 2155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dfcc9cb65-9wsgx" Aug 13 00:47:03.794481 kubelet[2155]: E0813 00:47:03.794451 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dfcc9cb65-9wsgx_calico-apiserver(34f400c3-01be-4d31-9cf0-4542774ed01f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dfcc9cb65-9wsgx_calico-apiserver(34f400c3-01be-4d31-9cf0-4542774ed01f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dfcc9cb65-9wsgx" podUID="34f400c3-01be-4d31-9cf0-4542774ed01f" Aug 13 00:47:03.820603 env[1320]: time="2025-08-13T00:47:03.819817484Z" level=error msg="Failed to destroy network for sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.821126 env[1320]: time="2025-08-13T00:47:03.821096459Z" level=error msg="encountered an error cleaning up failed sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.821306 env[1320]: time="2025-08-13T00:47:03.821279382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bd9f9cbb-kc8mn,Uid:1bcb8657-554c-4e97-97e8-2254cdca9102,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.821839 kubelet[2155]: E0813 00:47:03.821636 2155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.821839 kubelet[2155]: E0813 00:47:03.821710 2155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65bd9f9cbb-kc8mn" Aug 13 00:47:03.821839 kubelet[2155]: E0813 00:47:03.821738 2155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65bd9f9cbb-kc8mn" Aug 13 00:47:03.821958 kubelet[2155]: E0813 00:47:03.821785 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65bd9f9cbb-kc8mn_calico-system(1bcb8657-554c-4e97-97e8-2254cdca9102)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65bd9f9cbb-kc8mn_calico-system(1bcb8657-554c-4e97-97e8-2254cdca9102)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65bd9f9cbb-kc8mn" podUID="1bcb8657-554c-4e97-97e8-2254cdca9102" Aug 13 00:47:03.828060 env[1320]: time="2025-08-13T00:47:03.827996759Z" level=error msg="Failed to destroy network for sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.828533 env[1320]: time="2025-08-13T00:47:03.828500059Z" level=error msg="encountered an error cleaning up failed sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.828646 env[1320]: time="2025-08-13T00:47:03.828617085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-n995t,Uid:3c9e603b-de35-4367-b9d4-819dfaad6063,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.829015 kubelet[2155]: E0813 00:47:03.828966 2155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.829110 kubelet[2155]: E0813 00:47:03.829057 2155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-n995t" Aug 13 00:47:03.829110 kubelet[2155]: E0813 00:47:03.829078 2155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-n995t" Aug 13 00:47:03.829168 kubelet[2155]: E0813 00:47:03.829149 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-n995t_calico-system(3c9e603b-de35-4367-b9d4-819dfaad6063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-58fd7646b9-n995t_calico-system(3c9e603b-de35-4367-b9d4-819dfaad6063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-n995t" podUID="3c9e603b-de35-4367-b9d4-819dfaad6063" Aug 13 00:47:03.838633 env[1320]: time="2025-08-13T00:47:03.838576029Z" level=error msg="Failed to destroy network for sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.839137 env[1320]: time="2025-08-13T00:47:03.839111030Z" level=error msg="encountered an error cleaning up failed sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.839265 env[1320]: time="2025-08-13T00:47:03.839234399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75878946fc-mwwk8,Uid:7f9c7221-f628-46e4-b71d-64f1c6b90e3b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.839691 kubelet[2155]: E0813 00:47:03.839640 2155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.839774 kubelet[2155]: E0813 00:47:03.839728 2155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75878946fc-mwwk8" Aug 13 00:47:03.839774 kubelet[2155]: E0813 00:47:03.839746 2155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75878946fc-mwwk8" Aug 13 00:47:03.839835 kubelet[2155]: E0813 00:47:03.839806 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-75878946fc-mwwk8_calico-system(7f9c7221-f628-46e4-b71d-64f1c6b90e3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75878946fc-mwwk8_calico-system(7f9c7221-f628-46e4-b71d-64f1c6b90e3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75878946fc-mwwk8" podUID="7f9c7221-f628-46e4-b71d-64f1c6b90e3b" Aug 13 00:47:03.846394 env[1320]: time="2025-08-13T00:47:03.846364391Z" level=error msg="Failed to destroy network for sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.846815 env[1320]: time="2025-08-13T00:47:03.846788949Z" level=error msg="encountered an error cleaning up failed sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.846946 env[1320]: time="2025-08-13T00:47:03.846917036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfcc9cb65-vpjwq,Uid:e908a700-6b74-4dfa-9e94-825de953b563,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.847175 env[1320]: time="2025-08-13T00:47:03.847140607Z" level=error msg="Failed to destroy network for sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.847293 kubelet[2155]: E0813 00:47:03.847264 2155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.847380 kubelet[2155]: E0813 00:47:03.847312 2155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dfcc9cb65-vpjwq" Aug 13 00:47:03.847380 kubelet[2155]: E0813 00:47:03.847328 2155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dfcc9cb65-vpjwq" Aug 13 00:47:03.847380 kubelet[2155]: E0813 00:47:03.847368 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dfcc9cb65-vpjwq_calico-apiserver(e908a700-6b74-4dfa-9e94-825de953b563)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dfcc9cb65-vpjwq_calico-apiserver(e908a700-6b74-4dfa-9e94-825de953b563)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dfcc9cb65-vpjwq" podUID="e908a700-6b74-4dfa-9e94-825de953b563" Aug 13 00:47:03.847517 env[1320]: time="2025-08-13T00:47:03.847490242Z" level=error msg="encountered an error cleaning up failed sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.847546 env[1320]: time="2025-08-13T00:47:03.847526751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8tqtr,Uid:c322dc19-862d-4e41-8d08-222e739f135c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.847765 kubelet[2155]: E0813 00:47:03.847717 2155 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:03.847835 kubelet[2155]: E0813 00:47:03.847798 2155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8tqtr" Aug 13 00:47:03.847871 kubelet[2155]: E0813 00:47:03.847818 2155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-8tqtr" Aug 13 00:47:03.847913 kubelet[2155]: E0813 00:47:03.847885 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8tqtr_kube-system(c322dc19-862d-4e41-8d08-222e739f135c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8tqtr_kube-system(c322dc19-862d-4e41-8d08-222e739f135c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8tqtr" podUID="c322dc19-862d-4e41-8d08-222e739f135c" Aug 13 00:47:04.102525 kubelet[2155]: I0813 00:47:04.102393 2155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:04.103753 env[1320]: time="2025-08-13T00:47:04.103713146Z" level=info msg="StopPodSandbox for \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\"" Aug 13 00:47:04.104862 kubelet[2155]: I0813 00:47:04.104828 2155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:04.105412 env[1320]: time="2025-08-13T00:47:04.105363394Z" level=info msg="StopPodSandbox for \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\"" Aug 13 00:47:04.106468 kubelet[2155]: I0813 00:47:04.106442 2155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:04.108630 env[1320]: time="2025-08-13T00:47:04.107527653Z" level=info msg="StopPodSandbox for \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\"" Aug 13 00:47:04.109070 kubelet[2155]: I0813 00:47:04.108997 2155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:04.109718 env[1320]: time="2025-08-13T00:47:04.109643008Z" level=info msg="StopPodSandbox for \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\"" Aug 13 00:47:04.113052 kubelet[2155]: I0813 00:47:04.112985 2155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:04.113999 env[1320]: time="2025-08-13T00:47:04.113953630Z" level=info msg="StopPodSandbox for \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\"" Aug 13 00:47:04.115438 kubelet[2155]: I0813 00:47:04.115097 2155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:04.117675 kubelet[2155]: I0813 00:47:04.117154 2155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:04.118637 env[1320]: time="2025-08-13T00:47:04.118493675Z" level=info msg="StopPodSandbox for \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\"" Aug 13 00:47:04.118792 env[1320]: time="2025-08-13T00:47:04.118618875Z" level=info msg="StopPodSandbox for 
\"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\"" Aug 13 00:47:04.120630 kubelet[2155]: I0813 00:47:04.120178 2155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:04.120958 env[1320]: time="2025-08-13T00:47:04.120920930Z" level=info msg="StopPodSandbox for \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\"" Aug 13 00:47:04.141635 env[1320]: time="2025-08-13T00:47:04.141555795Z" level=error msg="StopPodSandbox for \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\" failed" error="failed to destroy network for sandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:04.141944 kubelet[2155]: E0813 00:47:04.141845 2155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:04.142048 kubelet[2155]: E0813 00:47:04.141914 2155 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af"} Aug 13 00:47:04.142048 kubelet[2155]: E0813 00:47:04.142011 2155 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"222af0ae-3446-4630-b6ec-423608ba3718\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:47:04.142182 kubelet[2155]: E0813 00:47:04.142051 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"222af0ae-3446-4630-b6ec-423608ba3718\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dfdgb" podUID="222af0ae-3446-4630-b6ec-423608ba3718" Aug 13 00:47:04.150530 env[1320]: time="2025-08-13T00:47:04.150462850Z" level=error msg="StopPodSandbox for \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\" failed" error="failed to destroy network for sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:04.150770 kubelet[2155]: E0813 00:47:04.150716 2155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:04.150862 kubelet[2155]: E0813 00:47:04.150782 2155 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da"} Aug 13 00:47:04.150862 kubelet[2155]: E0813 00:47:04.150819 2155 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1bcb8657-554c-4e97-97e8-2254cdca9102\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:47:04.150985 kubelet[2155]: E0813 00:47:04.150857 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1bcb8657-554c-4e97-97e8-2254cdca9102\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65bd9f9cbb-kc8mn" podUID="1bcb8657-554c-4e97-97e8-2254cdca9102" Aug 13 00:47:04.159337 env[1320]: time="2025-08-13T00:47:04.159268471Z" level=error msg="StopPodSandbox for \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\" failed" error="failed to destroy network for sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:04.159911 kubelet[2155]: E0813 00:47:04.159846 2155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:04.159986 kubelet[2155]: E0813 00:47:04.159955 2155 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340"} Aug 13 00:47:04.160023 kubelet[2155]: E0813 00:47:04.160012 2155 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34f400c3-01be-4d31-9cf0-4542774ed01f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" Aug 13 00:47:04.160108 kubelet[2155]: E0813 00:47:04.160067 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34f400c3-01be-4d31-9cf0-4542774ed01f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dfcc9cb65-9wsgx" podUID="34f400c3-01be-4d31-9cf0-4542774ed01f" Aug 13 00:47:04.169184 env[1320]: time="2025-08-13T00:47:04.169117711Z" level=error msg="StopPodSandbox for \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\" failed" error="failed to destroy network for sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:04.169709 kubelet[2155]: E0813 00:47:04.169626 2155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:04.169709 kubelet[2155]: E0813 00:47:04.169700 2155 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557"} Aug 13 00:47:04.169815 kubelet[2155]: E0813 00:47:04.169751 2155 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e4136fa-8827-4ebd-a668-a252e7a55a56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:47:04.169815 kubelet[2155]: E0813 00:47:04.169774 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e4136fa-8827-4ebd-a668-a252e7a55a56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-n9cpw" podUID="1e4136fa-8827-4ebd-a668-a252e7a55a56" Aug 13 00:47:04.183785 env[1320]: time="2025-08-13T00:47:04.183725206Z" level=error msg="StopPodSandbox for \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\" failed" error="failed to destroy network for sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:04.184060 kubelet[2155]: E0813 00:47:04.183998 2155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:04.184136 kubelet[2155]: E0813 00:47:04.184073 2155 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0"} Aug 13 00:47:04.184179 kubelet[2155]: E0813 00:47:04.184165 2155 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e908a700-6b74-4dfa-9e94-825de953b563\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:47:04.184248 kubelet[2155]: E0813 00:47:04.184196 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e908a700-6b74-4dfa-9e94-825de953b563\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dfcc9cb65-vpjwq" podUID="e908a700-6b74-4dfa-9e94-825de953b563" Aug 13 00:47:04.184920 env[1320]: time="2025-08-13T00:47:04.184883397Z" level=error msg="StopPodSandbox for \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\" failed" error="failed to destroy network for sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:04.185053 kubelet[2155]: E0813 00:47:04.185010 2155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:04.185115 kubelet[2155]: E0813 00:47:04.185070 2155 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7"} Aug 13 00:47:04.185115 kubelet[2155]: E0813 00:47:04.185093 2155 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f9c7221-f628-46e4-b71d-64f1c6b90e3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:47:04.185193 kubelet[2155]: E0813 00:47:04.185109 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f9c7221-f628-46e4-b71d-64f1c6b90e3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75878946fc-mwwk8" podUID="7f9c7221-f628-46e4-b71d-64f1c6b90e3b" Aug 13 00:47:04.187217 env[1320]: time="2025-08-13T00:47:04.187132439Z" level=error msg="StopPodSandbox for \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\" failed" error="failed to destroy network for sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:04.187413 kubelet[2155]: E0813 00:47:04.187365 2155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:04.187413 kubelet[2155]: E0813 00:47:04.187395 2155 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5"} Aug 13 00:47:04.187497 kubelet[2155]: E0813 00:47:04.187416 2155 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c322dc19-862d-4e41-8d08-222e739f135c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:47:04.187497 kubelet[2155]: E0813 00:47:04.187434 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c322dc19-862d-4e41-8d08-222e739f135c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8tqtr" podUID="c322dc19-862d-4e41-8d08-222e739f135c" Aug 13 00:47:04.198746 env[1320]: time="2025-08-13T00:47:04.198663098Z" level=error msg="StopPodSandbox for 
\"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\" failed" error="failed to destroy network for sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:04.198986 kubelet[2155]: E0813 00:47:04.198936 2155 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:04.199085 kubelet[2155]: E0813 00:47:04.199002 2155 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88"} Aug 13 00:47:04.199085 kubelet[2155]: E0813 00:47:04.199054 2155 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c9e603b-de35-4367-b9d4-819dfaad6063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:47:04.199171 kubelet[2155]: E0813 00:47:04.199080 2155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c9e603b-de35-4367-b9d4-819dfaad6063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-n995t" podUID="3c9e603b-de35-4367-b9d4-819dfaad6063" Aug 13 00:47:10.122294 kubelet[2155]: I0813 00:47:10.120187 2155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:47:10.122294 kubelet[2155]: E0813 00:47:10.120636 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:10.152345 kubelet[2155]: E0813 00:47:10.152285 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:10.363682 kernel: kauditd_printk_skb: 25 callbacks suppressed Aug 13 00:47:10.365918 kernel: audit: type=1325 audit(1755046030.354:301): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:10.354000 audit[3367]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:10.354000 audit[3367]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc69306670 a2=0 a3=7ffc6930665c items=0 ppid=2307 pid=3367 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:10.354000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:10.378419 kernel: audit: type=1300 audit(1755046030.354:301): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc69306670 a2=0 a3=7ffc6930665c items=0 ppid=2307 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:10.378585 kernel: audit: type=1327 audit(1755046030.354:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:10.378619 kernel: audit: type=1325 audit(1755046030.365:302): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:10.365000 audit[3367]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:10.365000 audit[3367]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc69306670 a2=0 a3=7ffc6930665c items=0 ppid=2307 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:10.399407 kernel: audit: type=1300 audit(1755046030.365:302): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc69306670 a2=0 a3=7ffc6930665c items=0 ppid=2307 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:10.399573 kernel: audit: type=1327 audit(1755046030.365:302): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:10.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:13.031772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount378696318.mount: Deactivated successfully. 
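A minimal sketch (not Calico's actual source) of the condition behind the repeated CreatePodSandbox and StopPodSandbox failures above: the CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes only once it is running with /var/lib/calico/ mounted, and until then every sandbox add or delete fails with "no such file or directory". The entries that follow show the calico/node image pull completing and calico-node starting, after which the teardowns begin to succeed. The path below is taken verbatim from the log; everything else is an assumed illustration.

package main

import (
    "fmt"
    "os"
    "strings"
)

// nodenameFile is the path named in the errors above; calico/node creates it
// once it is running and has the /var/lib/calico host path mounted.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
    data, err := os.ReadFile(nodenameFile)
    if err != nil {
        // Mirrors the failure mode in the log: no nodename file, so the CNI
        // plugin cannot set up or tear down pod networking.
        fmt.Printf("calico/node not ready: %v\n", err)
        os.Exit(1)
    }
    fmt.Printf("calico/node ready, nodename=%q\n", strings.TrimSpace(string(data)))
}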
Aug 13 00:47:14.027238 env[1320]: time="2025-08-13T00:47:14.027167962Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:14.135051 env[1320]: time="2025-08-13T00:47:14.133659375Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:14.157010 env[1320]: time="2025-08-13T00:47:14.155362151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:14.158075 env[1320]: time="2025-08-13T00:47:14.158020301Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:14.158689 env[1320]: time="2025-08-13T00:47:14.158611724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 00:47:14.179670 env[1320]: time="2025-08-13T00:47:14.179617677Z" level=info msg="CreateContainer within sandbox \"8c4025a919faf9ede6987b3e7918ab19a72a3144404551c2569b9989710455f4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:47:14.387746 env[1320]: time="2025-08-13T00:47:14.387573007Z" level=info msg="CreateContainer within sandbox \"8c4025a919faf9ede6987b3e7918ab19a72a3144404551c2569b9989710455f4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ae1fd114b1a39c38ae3e7cdd8445431f56d5b9405f1ff1d2b50cffd0226cbc1f\"" Aug 13 00:47:14.391480 env[1320]: time="2025-08-13T00:47:14.391424853Z" level=info msg="StartContainer for \"ae1fd114b1a39c38ae3e7cdd8445431f56d5b9405f1ff1d2b50cffd0226cbc1f\"" Aug 13 00:47:14.486371 env[1320]: time="2025-08-13T00:47:14.486254415Z" level=info msg="StartContainer for \"ae1fd114b1a39c38ae3e7cdd8445431f56d5b9405f1ff1d2b50cffd0226cbc1f\" returns successfully" Aug 13 00:47:14.637385 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:47:14.638324 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 00:47:14.879351 env[1320]: time="2025-08-13T00:47:14.879294743Z" level=info msg="StopPodSandbox for \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\"" Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:14.955 [INFO][3433] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:14.955 [INFO][3433] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" iface="eth0" netns="/var/run/netns/cni-4f33acc2-62e6-22b4-be81-bb6ce08dee2e" Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:14.957 [INFO][3433] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" iface="eth0" netns="/var/run/netns/cni-4f33acc2-62e6-22b4-be81-bb6ce08dee2e" Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:14.957 [INFO][3433] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" iface="eth0" netns="/var/run/netns/cni-4f33acc2-62e6-22b4-be81-bb6ce08dee2e" Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:14.957 [INFO][3433] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:14.957 [INFO][3433] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:15.030 [INFO][3442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" HandleID="k8s-pod-network.df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Workload="localhost-k8s-whisker--65bd9f9cbb--kc8mn-eth0" Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:15.030 [INFO][3442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:15.030 [INFO][3442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:15.037 [WARNING][3442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" HandleID="k8s-pod-network.df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Workload="localhost-k8s-whisker--65bd9f9cbb--kc8mn-eth0" Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:15.037 [INFO][3442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" HandleID="k8s-pod-network.df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Workload="localhost-k8s-whisker--65bd9f9cbb--kc8mn-eth0" Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:15.039 [INFO][3442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:15.042676 env[1320]: 2025-08-13 00:47:15.040 [INFO][3433] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:15.043428 env[1320]: time="2025-08-13T00:47:15.042823253Z" level=info msg="TearDown network for sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\" successfully" Aug 13 00:47:15.043428 env[1320]: time="2025-08-13T00:47:15.042878449Z" level=info msg="StopPodSandbox for \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\" returns successfully" Aug 13 00:47:15.154847 kubelet[2155]: I0813 00:47:15.154698 2155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1bcb8657-554c-4e97-97e8-2254cdca9102-whisker-backend-key-pair\") pod \"1bcb8657-554c-4e97-97e8-2254cdca9102\" (UID: \"1bcb8657-554c-4e97-97e8-2254cdca9102\") " Aug 13 00:47:15.154847 kubelet[2155]: I0813 00:47:15.154783 2155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bcb8657-554c-4e97-97e8-2254cdca9102-whisker-ca-bundle\") pod \"1bcb8657-554c-4e97-97e8-2254cdca9102\" (UID: \"1bcb8657-554c-4e97-97e8-2254cdca9102\") " Aug 13 00:47:15.154847 kubelet[2155]: I0813 00:47:15.154816 2155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghmjd\" (UniqueName: \"kubernetes.io/projected/1bcb8657-554c-4e97-97e8-2254cdca9102-kube-api-access-ghmjd\") pod \"1bcb8657-554c-4e97-97e8-2254cdca9102\" (UID: \"1bcb8657-554c-4e97-97e8-2254cdca9102\") " Aug 13 00:47:15.155514 kubelet[2155]: I0813 00:47:15.155272 2155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcb8657-554c-4e97-97e8-2254cdca9102-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1bcb8657-554c-4e97-97e8-2254cdca9102" (UID: "1bcb8657-554c-4e97-97e8-2254cdca9102"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:47:15.158338 kubelet[2155]: I0813 00:47:15.158277 2155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bcb8657-554c-4e97-97e8-2254cdca9102-kube-api-access-ghmjd" (OuterVolumeSpecName: "kube-api-access-ghmjd") pod "1bcb8657-554c-4e97-97e8-2254cdca9102" (UID: "1bcb8657-554c-4e97-97e8-2254cdca9102"). InnerVolumeSpecName "kube-api-access-ghmjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:47:15.158588 kubelet[2155]: I0813 00:47:15.158550 2155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcb8657-554c-4e97-97e8-2254cdca9102-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1bcb8657-554c-4e97-97e8-2254cdca9102" (UID: "1bcb8657-554c-4e97-97e8-2254cdca9102"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:47:15.178108 systemd[1]: run-netns-cni\x2d4f33acc2\x2d62e6\x2d22b4\x2dbe81\x2dbb6ce08dee2e.mount: Deactivated successfully. Aug 13 00:47:15.178305 systemd[1]: var-lib-kubelet-pods-1bcb8657\x2d554c\x2d4e97\x2d97e8\x2d2254cdca9102-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dghmjd.mount: Deactivated successfully. Aug 13 00:47:15.178437 systemd[1]: var-lib-kubelet-pods-1bcb8657\x2d554c\x2d4e97\x2d97e8\x2d2254cdca9102-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 00:47:15.229576 kubelet[2155]: I0813 00:47:15.229503 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5469d" podStartSLOduration=1.642809639 podStartE2EDuration="24.229483351s" podCreationTimestamp="2025-08-13 00:46:51 +0000 UTC" firstStartedPulling="2025-08-13 00:46:51.573510272 +0000 UTC m=+21.124367881" lastFinishedPulling="2025-08-13 00:47:14.160183984 +0000 UTC m=+43.711041593" observedRunningTime="2025-08-13 00:47:15.229275653 +0000 UTC m=+44.780133262" watchObservedRunningTime="2025-08-13 00:47:15.229483351 +0000 UTC m=+44.780340950" Aug 13 00:47:15.255153 kubelet[2155]: I0813 00:47:15.255087 2155 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1bcb8657-554c-4e97-97e8-2254cdca9102-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 00:47:15.255374 kubelet[2155]: I0813 00:47:15.255358 2155 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bcb8657-554c-4e97-97e8-2254cdca9102-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 00:47:15.255458 kubelet[2155]: I0813 00:47:15.255442 2155 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghmjd\" (UniqueName: \"kubernetes.io/projected/1bcb8657-554c-4e97-97e8-2254cdca9102-kube-api-access-ghmjd\") on node \"localhost\" DevicePath \"\"" Aug 13 00:47:15.356278 kubelet[2155]: I0813 00:47:15.356226 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e68d9eb-007c-40d1-89e5-f63f90095299-whisker-ca-bundle\") pod \"whisker-5748bbc6f8-mwsj9\" (UID: \"8e68d9eb-007c-40d1-89e5-f63f90095299\") " pod="calico-system/whisker-5748bbc6f8-mwsj9" Aug 13 00:47:15.356500 kubelet[2155]: I0813 00:47:15.356288 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8e68d9eb-007c-40d1-89e5-f63f90095299-whisker-backend-key-pair\") pod \"whisker-5748bbc6f8-mwsj9\" (UID: \"8e68d9eb-007c-40d1-89e5-f63f90095299\") " pod="calico-system/whisker-5748bbc6f8-mwsj9" Aug 13 00:47:15.356500 kubelet[2155]: I0813 00:47:15.356320 2155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhqdc\" (UniqueName: \"kubernetes.io/projected/8e68d9eb-007c-40d1-89e5-f63f90095299-kube-api-access-lhqdc\") pod \"whisker-5748bbc6f8-mwsj9\" (UID: \"8e68d9eb-007c-40d1-89e5-f63f90095299\") " pod="calico-system/whisker-5748bbc6f8-mwsj9" Aug 13 00:47:15.553087 env[1320]: time="2025-08-13T00:47:15.552907600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5748bbc6f8-mwsj9,Uid:8e68d9eb-007c-40d1-89e5-f63f90095299,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:15.859176 systemd-networkd[1102]: cali346dffbba28: Link UP Aug 13 00:47:15.866603 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:47:15.866685 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali346dffbba28: link becomes ready Aug 13 00:47:15.865503 systemd-networkd[1102]: cali346dffbba28: Gained carrier Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.631 [INFO][3488] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.656 [INFO][3488] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0 whisker-5748bbc6f8- calico-system 8e68d9eb-007c-40d1-89e5-f63f90095299 910 0 2025-08-13 00:47:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5748bbc6f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5748bbc6f8-mwsj9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali346dffbba28 [] [] }} ContainerID="d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" Namespace="calico-system" Pod="whisker-5748bbc6f8-mwsj9" WorkloadEndpoint="localhost-k8s-whisker--5748bbc6f8--mwsj9-" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.656 [INFO][3488] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" Namespace="calico-system" Pod="whisker-5748bbc6f8-mwsj9" WorkloadEndpoint="localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.716 [INFO][3501] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" HandleID="k8s-pod-network.d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" Workload="localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.717 [INFO][3501] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" HandleID="k8s-pod-network.d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" Workload="localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5748bbc6f8-mwsj9", "timestamp":"2025-08-13 00:47:15.716343569 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.719 [INFO][3501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.719 [INFO][3501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.719 [INFO][3501] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.735 [INFO][3501] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" host="localhost" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.758 [INFO][3501] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.773 [INFO][3501] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.781 [INFO][3501] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.790 [INFO][3501] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.790 [INFO][3501] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" host="localhost" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.799 [INFO][3501] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089 Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.805 [INFO][3501] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" host="localhost" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.815 [INFO][3501] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" host="localhost" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.815 [INFO][3501] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" host="localhost" Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.815 [INFO][3501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
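The IPAM walk above shows the host localhost confirming its affinity for the block 192.168.88.128/26 and then claiming 192.168.88.129/26 for the whisker pod; the workload endpoint written a few entries later carries that address as 192.168.88.129/32. The snippet below only illustrates the block arithmetic visible in these entries, under the assumption that the first address handed out follows the block's base address; it is not Calico's IPAM implementation.

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    // Block for which this host holds an affinity in the IPAM entries above.
    block := netip.MustParsePrefix("192.168.88.128/26")
    // First address claimed in the log: the one after the block's base address.
    first := block.Addr().Next()
    fmt.Printf("block=%s assigned=%s/32\n", block, first) // 192.168.88.129/32
}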
Aug 13 00:47:15.907421 env[1320]: 2025-08-13 00:47:15.815 [INFO][3501] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" HandleID="k8s-pod-network.d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" Workload="localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0" Aug 13 00:47:15.908318 env[1320]: 2025-08-13 00:47:15.824 [INFO][3488] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" Namespace="calico-system" Pod="whisker-5748bbc6f8-mwsj9" WorkloadEndpoint="localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0", GenerateName:"whisker-5748bbc6f8-", Namespace:"calico-system", SelfLink:"", UID:"8e68d9eb-007c-40d1-89e5-f63f90095299", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5748bbc6f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5748bbc6f8-mwsj9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali346dffbba28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:15.908318 env[1320]: 2025-08-13 00:47:15.824 [INFO][3488] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" Namespace="calico-system" Pod="whisker-5748bbc6f8-mwsj9" WorkloadEndpoint="localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0" Aug 13 00:47:15.908318 env[1320]: 2025-08-13 00:47:15.824 [INFO][3488] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali346dffbba28 ContainerID="d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" Namespace="calico-system" Pod="whisker-5748bbc6f8-mwsj9" WorkloadEndpoint="localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0" Aug 13 00:47:15.908318 env[1320]: 2025-08-13 00:47:15.866 [INFO][3488] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" Namespace="calico-system" Pod="whisker-5748bbc6f8-mwsj9" WorkloadEndpoint="localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0" Aug 13 00:47:15.908318 env[1320]: 2025-08-13 00:47:15.867 [INFO][3488] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" Namespace="calico-system" Pod="whisker-5748bbc6f8-mwsj9" WorkloadEndpoint="localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0", GenerateName:"whisker-5748bbc6f8-", Namespace:"calico-system", SelfLink:"", UID:"8e68d9eb-007c-40d1-89e5-f63f90095299", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5748bbc6f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089", Pod:"whisker-5748bbc6f8-mwsj9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali346dffbba28", MAC:"ca:54:30:62:ed:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:15.908318 env[1320]: 2025-08-13 00:47:15.903 [INFO][3488] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089" Namespace="calico-system" Pod="whisker-5748bbc6f8-mwsj9" WorkloadEndpoint="localhost-k8s-whisker--5748bbc6f8--mwsj9-eth0" Aug 13 00:47:15.929634 env[1320]: time="2025-08-13T00:47:15.929526499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:15.929634 env[1320]: time="2025-08-13T00:47:15.929582487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:15.929634 env[1320]: time="2025-08-13T00:47:15.929598226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:15.930151 env[1320]: time="2025-08-13T00:47:15.930017559Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089 pid=3523 runtime=io.containerd.runc.v2 Aug 13 00:47:15.962310 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:47:15.990751 env[1320]: time="2025-08-13T00:47:15.990698486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5748bbc6f8-mwsj9,Uid:8e68d9eb-007c-40d1-89e5-f63f90095299,Namespace:calico-system,Attempt:0,} returns sandbox id \"d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089\"" Aug 13 00:47:15.992020 env[1320]: time="2025-08-13T00:47:15.991990829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:47:16.133000 audit[3595]: AVC avc: denied { write } for pid=3595 comm="tee" name="fd" dev="proc" ino=24155 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:47:16.140000 kernel: audit: type=1400 audit(1755046036.133:303): avc: denied { write } for pid=3595 comm="tee" name="fd" dev="proc" ino=24155 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:47:16.138000 audit[3611]: AVC avc: denied { write } for pid=3611 comm="tee" name="fd" dev="proc" ino=26760 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:47:16.149477 kernel: audit: type=1400 audit(1755046036.138:304): avc: denied { write } for pid=3611 comm="tee" name="fd" dev="proc" ino=26760 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:47:16.149637 kernel: audit: type=1300 audit(1755046036.138:304): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec1df97f1 a2=241 a3=1b6 items=1 ppid=3569 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.138000 audit[3611]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec1df97f1 a2=241 a3=1b6 items=1 ppid=3569 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.138000 audit: CWD cwd="/etc/service/enabled/bird6/log" Aug 13 00:47:16.159471 kernel: audit: type=1307 audit(1755046036.138:304): cwd="/etc/service/enabled/bird6/log" Aug 13 00:47:16.159634 kernel: audit: type=1302 audit(1755046036.138:304): item=0 name="/dev/fd/63" inode=25884 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:47:16.138000 audit: PATH item=0 name="/dev/fd/63" inode=25884 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:47:16.163471 kernel: audit: type=1327 audit(1755046036.138:304): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:47:16.138000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:47:16.133000 audit[3595]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8228d7f3 a2=241 a3=1b6 items=1 ppid=3566 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.175027 kernel: audit: type=1300 audit(1755046036.133:303): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8228d7f3 a2=241 a3=1b6 items=1 ppid=3566 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.175078 kernel: audit: type=1307 audit(1755046036.133:303): cwd="/etc/service/enabled/cni/log" Aug 13 00:47:16.133000 audit: CWD cwd="/etc/service/enabled/cni/log" Aug 13 00:47:16.133000 audit: PATH item=0 name="/dev/fd/63" inode=24152 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:47:16.186542 kernel: audit: type=1302 audit(1755046036.133:303): item=0 name="/dev/fd/63" inode=24152 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:47:16.186596 kernel: audit: type=1327 audit(1755046036.133:303): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:47:16.133000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:47:16.188000 audit[3623]: AVC avc: denied { write } for pid=3623 comm="tee" name="fd" dev="proc" ino=24163 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:47:16.188000 audit[3623]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe67c3e7e1 a2=241 a3=1b6 items=1 ppid=3568 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.188000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Aug 13 00:47:16.188000 audit: PATH item=0 name="/dev/fd/63" inode=26769 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:47:16.188000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:47:16.209000 audit[3636]: AVC avc: denied { write } for pid=3636 comm="tee" name="fd" dev="proc" ino=26776 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:47:16.209000 audit[3636]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff58dd07f2 a2=241 a3=1b6 items=1 ppid=3583 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 00:47:16.209000 audit: CWD cwd="/etc/service/enabled/bird/log" Aug 13 00:47:16.209000 audit: PATH item=0 name="/dev/fd/63" inode=25892 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:47:16.209000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:47:16.210000 audit[3619]: AVC avc: denied { write } for pid=3619 comm="tee" name="fd" dev="proc" ino=26780 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:47:16.210000 audit[3619]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc663847f1 a2=241 a3=1b6 items=1 ppid=3581 pid=3619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.210000 audit: CWD cwd="/etc/service/enabled/felix/log" Aug 13 00:47:16.210000 audit: PATH item=0 name="/dev/fd/63" inode=26766 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:47:16.210000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:47:16.214000 audit[3652]: AVC avc: denied { write } for pid=3652 comm="tee" name="fd" dev="proc" ino=24168 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:47:16.214000 audit[3652]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe3db9f7e2 a2=241 a3=1b6 items=1 ppid=3578 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.214000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Aug 13 00:47:16.214000 audit: PATH item=0 name="/dev/fd/63" inode=24165 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:47:16.214000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:47:16.220000 audit[3643]: AVC avc: denied { write } for pid=3643 comm="tee" name="fd" dev="proc" ino=25003 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:47:16.220000 audit[3643]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffbe7707f1 a2=241 a3=1b6 items=1 ppid=3573 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.220000 audit: CWD cwd="/etc/service/enabled/confd/log" Aug 13 00:47:16.220000 audit: PATH item=0 name="/dev/fd/63" inode=24965 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:47:16.220000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:47:16.382000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.382000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.382000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.382000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.382000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.382000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.382000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.382000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.382000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.382000 audit: BPF prog-id=10 op=LOAD Aug 13 00:47:16.382000 audit[3708]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1aea4be0 a2=98 a3=1fffffffffffffff items=0 ppid=3582 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.382000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:47:16.383000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit: BPF prog-id=11 op=LOAD Aug 13 00:47:16.383000 audit[3708]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1aea4ac0 a2=94 a3=3 items=0 ppid=3582 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.383000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:47:16.383000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { bpf } for pid=3708 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit: BPF prog-id=12 op=LOAD Aug 13 00:47:16.383000 audit[3708]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1aea4b00 a2=94 a3=7fff1aea4ce0 items=0 ppid=3582 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.383000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:47:16.383000 audit: BPF prog-id=12 op=UNLOAD Aug 13 00:47:16.383000 audit[3708]: AVC avc: denied { perfmon } for pid=3708 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.383000 audit[3708]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fff1aea4bd0 a2=50 a3=a000000085 items=0 ppid=3582 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.383000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:47:16.388000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.388000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.388000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.388000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.388000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.388000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.388000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.388000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.388000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.388000 audit: BPF prog-id=13 op=LOAD Aug 13 00:47:16.388000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff52c78920 a2=98 a3=3 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.388000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.389000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:47:16.390000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.390000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.390000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.390000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.390000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.390000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.390000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.390000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.390000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.390000 audit: BPF prog-id=14 op=LOAD Aug 13 00:47:16.390000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff52c78710 a2=94 a3=54428f items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.390000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.391000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:47:16.391000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:47:16.391000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.391000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.391000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.391000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.391000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.391000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.391000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.391000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.391000 audit: BPF prog-id=15 op=LOAD Aug 13 00:47:16.391000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff52c78740 a2=94 a3=2 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.391000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.391000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:47:16.498000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.498000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.498000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.498000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.498000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.498000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.498000 
audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.498000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.498000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.498000 audit: BPF prog-id=16 op=LOAD Aug 13 00:47:16.498000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff52c78600 a2=94 a3=1 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.498000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:47:16.498000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.498000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff52c786d0 a2=50 a3=7fff52c787b0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.507000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.507000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52c78610 a2=28 a3=0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.507000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.507000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff52c78640 a2=28 a3=0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.507000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.507000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff52c78550 a2=28 a3=0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.507000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.507000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52c78660 a2=28 a3=0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.507000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.507000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52c78640 a2=28 a3=0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.507000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.507000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52c78630 a2=28 a3=0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.507000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.507000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52c78660 a2=28 a3=0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.507000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.507000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff52c78640 a2=28 a3=0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.507000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.507000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff52c78660 a2=28 a3=0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.507000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.507000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff52c78630 a2=28 a3=0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.507000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.507000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff52c786a0 a2=28 a3=0 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.507000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff52c78450 a2=50 a3=1 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.508000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { 
perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit: BPF prog-id=17 op=LOAD Aug 13 00:47:16.508000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff52c78450 a2=94 a3=5 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.508000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.508000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff52c78500 a2=50 a3=1 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.508000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff52c78620 a2=4 a3=38 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.508000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { confidentiality } for pid=3709 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:47:16.508000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff52c78670 a2=94 a3=6 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.508000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { confidentiality } for pid=3709 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:47:16.508000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff52c77e20 a2=94 a3=88 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.508000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.508000 audit[3709]: AVC avc: denied { confidentiality } for pid=3709 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Aug 13 00:47:16.508000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff52c77e20 a2=94 a3=88 items=0 ppid=3582 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.508000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit: BPF prog-id=18 op=LOAD Aug 13 00:47:16.517000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffaccb1770 a2=98 a3=1999999999999999 items=0 ppid=3582 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.517000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:47:16.517000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.517000 audit: BPF prog-id=19 op=LOAD Aug 13 00:47:16.517000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffaccb1650 a2=94 a3=ffff items=0 ppid=3582 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.517000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:47:16.518000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:47:16.518000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.518000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.518000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.518000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.518000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.518000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:47:16.518000 audit[3712]: AVC avc: denied { perfmon } for pid=3712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.518000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.518000 audit[3712]: AVC avc: denied { bpf } for pid=3712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.518000 audit: BPF prog-id=20 op=LOAD Aug 13 00:47:16.518000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffaccb1690 a2=94 a3=7fffaccb1870 items=0 ppid=3582 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.518000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:47:16.518000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:47:16.545763 kubelet[2155]: I0813 00:47:16.545711 2155 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bcb8657-554c-4e97-97e8-2254cdca9102" path="/var/lib/kubelet/pods/1bcb8657-554c-4e97-97e8-2254cdca9102/volumes" Aug 13 00:47:16.947964 systemd-networkd[1102]: vxlan.calico: Link UP Aug 13 00:47:16.947974 systemd-networkd[1102]: vxlan.calico: Gained carrier Aug 13 00:47:16.980000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.980000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.980000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.980000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.980000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.980000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.980000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.980000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:47:16.980000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.980000 audit: BPF prog-id=21 op=LOAD Aug 13 00:47:16.980000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4ff3e000 a2=98 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.980000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.980000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit: BPF prog-id=22 op=LOAD Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4ff3de10 a2=94 a3=54428f items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:47:16.981000 
audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit: BPF prog-id=23 op=LOAD Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4ff3de40 a2=94 a3=2 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc4ff3dd10 a2=28 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc4ff3dd40 a2=28 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc4ff3dc50 a2=28 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc4ff3dd60 a2=28 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc4ff3dd40 a2=28 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc4ff3dd30 a2=28 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc4ff3dd60 a2=28 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc4ff3dd40 a2=28 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc4ff3dd60 a2=28 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc4ff3dd30 a2=28 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 
00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc4ff3dda0 a2=28 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.981000 audit: BPF prog-id=24 op=LOAD Aug 13 00:47:16.981000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc4ff3dc10 a2=94 a3=0 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.981000 audit: BPF prog-id=24 op=UNLOAD Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffc4ff3dc00 a2=50 a3=2800 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.982000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffc4ff3dc00 a2=50 a3=2800 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.982000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit: BPF prog-id=25 op=LOAD Aug 13 00:47:16.982000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc4ff3d420 a2=94 a3=2 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.982000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.982000 audit: BPF prog-id=25 op=UNLOAD Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { perfmon } for pid=3739 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit[3739]: AVC avc: denied { bpf } for pid=3739 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.982000 audit: BPF prog-id=26 op=LOAD Aug 13 00:47:16.982000 audit[3739]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc4ff3d520 a2=94 a3=30 items=0 ppid=3582 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.982000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit: BPF prog-id=27 op=LOAD Aug 13 00:47:16.985000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff18159b50 a2=98 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.985000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:16.985000 audit: BPF prog-id=27 op=UNLOAD Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit: BPF prog-id=28 op=LOAD Aug 13 00:47:16.985000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff18159940 a2=94 a3=54428f items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.985000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:16.985000 audit: BPF prog-id=28 op=UNLOAD Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:16.985000 audit: BPF prog-id=29 op=LOAD Aug 13 00:47:16.985000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff18159970 a2=94 a3=2 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:16.985000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:16.986000 audit: BPF prog-id=29 op=UNLOAD Aug 13 00:47:17.095000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.095000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.095000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.095000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.095000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.095000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.095000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.095000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.095000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.095000 audit: BPF prog-id=30 op=LOAD Aug 13 00:47:17.095000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff18159830 a2=94 a3=1 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.095000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.095000 audit: BPF prog-id=30 op=UNLOAD Aug 13 00:47:17.095000 audit[3741]: AVC avc: 
denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.095000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff18159900 a2=50 a3=7fff181599e0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.095000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.103000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.103000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff18159840 a2=28 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.103000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.103000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff18159870 a2=28 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.103000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.103000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff18159780 a2=28 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.103000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.103000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff18159890 a2=28 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.103000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.103000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff18159870 a2=28 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.103000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.103000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff18159860 a2=28 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.103000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.103000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff18159890 a2=28 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.103000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.103000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff18159870 a2=28 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.103000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.103000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff18159890 a2=28 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.103000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.103000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff18159860 a2=28 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.103000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.103000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff181598d0 a2=28 a3=0 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.103000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff18159680 a2=50 a3=1 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.104000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit: BPF prog-id=31 op=LOAD Aug 13 00:47:17.104000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff18159680 a2=94 a3=5 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.104000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.104000 audit: BPF prog-id=31 op=UNLOAD Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff18159730 a2=50 a3=1 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.104000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff18159850 a2=4 a3=38 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.104000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { confidentiality } for pid=3741 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:47:17.104000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff181598a0 a2=94 a3=6 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.104000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { confidentiality } for pid=3741 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:47:17.104000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff18159050 a2=94 a3=88 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.104000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { perfmon } for pid=3741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { confidentiality } for pid=3741 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:47:17.104000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff18159050 a2=94 a3=88 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.104000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.104000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.104000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff1815aa80 a2=10 a3=208 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.104000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.105000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.105000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff1815a920 a2=10 a3=3 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 
00:47:17.105000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.105000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff1815a8c0 a2=10 a3=3 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.105000 audit[3741]: AVC avc: denied { bpf } for pid=3741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:47:17.105000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff1815a8c0 a2=10 a3=7 items=0 ppid=3582 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:47:17.113000 audit: BPF prog-id=26 op=UNLOAD Aug 13 00:47:17.158000 audit[3774]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3774 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:47:17.158000 audit[3774]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc82c5d3b0 a2=0 a3=7ffc82c5d39c items=0 ppid=3582 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.158000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:47:17.162000 audit[3773]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3773 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:47:17.162000 audit[3773]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff679ffaf0 a2=0 a3=7fff679ffadc items=0 ppid=3582 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.162000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:47:17.166000 audit[3772]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=3772 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:47:17.166000 audit[3772]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffa11f29b0 a2=0 a3=7fffa11f299c items=0 ppid=3582 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.166000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:47:17.170000 audit[3777]: NETFILTER_CFG table=filter:104 family=2 entries=94 op=nft_register_chain pid=3777 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:47:17.170000 audit[3777]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffd17590670 a2=0 a3=7ffd1759065c items=0 ppid=3582 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.170000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:47:17.270220 systemd-networkd[1102]: cali346dffbba28: Gained IPv6LL Aug 13 00:47:17.546014 env[1320]: time="2025-08-13T00:47:17.544841493Z" level=info msg="StopPodSandbox for \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\"" Aug 13 00:47:17.546014 env[1320]: time="2025-08-13T00:47:17.544841823Z" level=info msg="StopPodSandbox for \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\"" Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.594 [INFO][3809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.595 [INFO][3809] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" iface="eth0" netns="/var/run/netns/cni-ade5a0c3-9da6-8b25-5dc6-b141bd4a19c7" Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.595 [INFO][3809] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" iface="eth0" netns="/var/run/netns/cni-ade5a0c3-9da6-8b25-5dc6-b141bd4a19c7" Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.595 [INFO][3809] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" iface="eth0" netns="/var/run/netns/cni-ade5a0c3-9da6-8b25-5dc6-b141bd4a19c7" Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.595 [INFO][3809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.595 [INFO][3809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.624 [INFO][3823] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" HandleID="k8s-pod-network.01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.624 [INFO][3823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.624 [INFO][3823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
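The audit records above log each command twice: readably (comm=, exe=) and as a PROCTITLE field, which is the process argv encoded as NUL-separated hex. A minimal decoder follows (plain Python, standard library only; decode_proctitle is an illustrative name, not a tool referenced in this log). The bpftool record decodes to `bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A`, and the xtables records decode to `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000`. For reference, on x86_64 (arch=c000003e) syscall 321 is bpf(2) and syscall 46 is sendmsg(2), consistent with the bpftool and netlink/nftables activity recorded here.

    # Decode the hex-encoded, NUL-separated PROCTITLE field from an audit record.
    # Illustrative helper only; not part of the log or of any tool named above.
    def decode_proctitle(hex_field: str) -> list[str]:
        raw = bytes.fromhex(hex_field)
        return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

    if __name__ == "__main__":
        bpftool = ("627066746F6F6C002D2D6A736F6E002D2D707265747479"
                   "0070726F670073686F770070696E6E6564"
                   "002F7379732F66732F6270662F63616C69636F2F7864702F"
                   "70726566696C7465725F76315F63616C69636F5F746D705F41")
        print(decode_proctitle(bpftool))
        # -> ['bpftool', '--json', '--pretty', 'prog', 'show', 'pinned',
        #     '/sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A']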
Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.630 [WARNING][3823] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" HandleID="k8s-pod-network.01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.630 [INFO][3823] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" HandleID="k8s-pod-network.01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.631 [INFO][3823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:17.635777 env[1320]: 2025-08-13 00:47:17.634 [INFO][3809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:17.637212 env[1320]: time="2025-08-13T00:47:17.637166927Z" level=info msg="TearDown network for sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\" successfully" Aug 13 00:47:17.637398 env[1320]: time="2025-08-13T00:47:17.637289312Z" level=info msg="StopPodSandbox for \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\" returns successfully" Aug 13 00:47:17.638253 kubelet[2155]: E0813 00:47:17.637712 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:17.638795 env[1320]: time="2025-08-13T00:47:17.638206255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8tqtr,Uid:c322dc19-862d-4e41-8d08-222e739f135c,Namespace:kube-system,Attempt:1,}" Aug 13 00:47:17.639196 systemd[1]: run-netns-cni\x2dade5a0c3\x2d9da6\x2d8b25\x2d5dc6\x2db141bd4a19c7.mount: Deactivated successfully. Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.596 [INFO][3808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.596 [INFO][3808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" iface="eth0" netns="/var/run/netns/cni-446843e7-c8d3-5fcf-1050-23358366f14c" Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.597 [INFO][3808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" iface="eth0" netns="/var/run/netns/cni-446843e7-c8d3-5fcf-1050-23358366f14c" Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.597 [INFO][3808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" iface="eth0" netns="/var/run/netns/cni-446843e7-c8d3-5fcf-1050-23358366f14c" Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.597 [INFO][3808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.597 [INFO][3808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.625 [INFO][3825] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" HandleID="k8s-pod-network.5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.625 [INFO][3825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.631 [INFO][3825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.640 [WARNING][3825] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" HandleID="k8s-pod-network.5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.641 [INFO][3825] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" HandleID="k8s-pod-network.5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.660 [INFO][3825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:17.663882 env[1320]: 2025-08-13 00:47:17.662 [INFO][3808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:17.664823 env[1320]: time="2025-08-13T00:47:17.664766842Z" level=info msg="TearDown network for sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\" successfully" Aug 13 00:47:17.664823 env[1320]: time="2025-08-13T00:47:17.664815114Z" level=info msg="StopPodSandbox for \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\" returns successfully" Aug 13 00:47:17.665665 env[1320]: time="2025-08-13T00:47:17.665638549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75878946fc-mwwk8,Uid:7f9c7221-f628-46e4-b71d-64f1c6b90e3b,Namespace:calico-system,Attempt:1,}" Aug 13 00:47:17.667280 systemd[1]: run-netns-cni\x2d446843e7\x2dc8d3\x2d5fcf\x2d1050\x2d23358366f14c.mount: Deactivated successfully. 
Aug 13 00:47:17.812389 systemd-networkd[1102]: calif6e14598f81: Link UP Aug 13 00:47:17.814656 systemd-networkd[1102]: calif6e14598f81: Gained carrier Aug 13 00:47:17.815064 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif6e14598f81: link becomes ready Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.737 [INFO][3852] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0 calico-kube-controllers-75878946fc- calico-system 7f9c7221-f628-46e4-b71d-64f1c6b90e3b 922 0 2025-08-13 00:46:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75878946fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-75878946fc-mwwk8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif6e14598f81 [] [] }} ContainerID="33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" Namespace="calico-system" Pod="calico-kube-controllers-75878946fc-mwwk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.737 [INFO][3852] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" Namespace="calico-system" Pod="calico-kube-controllers-75878946fc-mwwk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.761 [INFO][3873] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" HandleID="k8s-pod-network.33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.762 [INFO][3873] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" HandleID="k8s-pod-network.33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-75878946fc-mwwk8", "timestamp":"2025-08-13 00:47:17.761832851 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.762 [INFO][3873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.762 [INFO][3873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.762 [INFO][3873] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.768 [INFO][3873] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" host="localhost" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.777 [INFO][3873] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.783 [INFO][3873] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.785 [INFO][3873] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.789 [INFO][3873] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.789 [INFO][3873] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" host="localhost" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.792 [INFO][3873] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380 Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.797 [INFO][3873] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" host="localhost" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.803 [INFO][3873] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" host="localhost" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.803 [INFO][3873] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" host="localhost" Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.805 [INFO][3873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
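The IPAM sequence above claims 192.168.88.130 from the host-affine block 192.168.88.128/26, the same /26 that later yields .131 and .132 for the other pods on this node. A quick standard-library check of what that block covers, included only as a reading aid and not something any component above executes:

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")   # host-affine block from the log
    claimed = ipaddress.ip_address("192.168.88.130")    # address claimed for the pod

    print(block.num_addresses)       # 64 addresses: 192.168.88.128 .. 192.168.88.191
    print(claimed in block)          # True
    print(block.broadcast_address)   # 192.168.88.191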
Aug 13 00:47:17.829165 env[1320]: 2025-08-13 00:47:17.805 [INFO][3873] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" HandleID="k8s-pod-network.33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:17.829931 env[1320]: 2025-08-13 00:47:17.807 [INFO][3852] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" Namespace="calico-system" Pod="calico-kube-controllers-75878946fc-mwwk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0", GenerateName:"calico-kube-controllers-75878946fc-", Namespace:"calico-system", SelfLink:"", UID:"7f9c7221-f628-46e4-b71d-64f1c6b90e3b", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75878946fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-75878946fc-mwwk8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6e14598f81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:17.829931 env[1320]: 2025-08-13 00:47:17.808 [INFO][3852] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" Namespace="calico-system" Pod="calico-kube-controllers-75878946fc-mwwk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:17.829931 env[1320]: 2025-08-13 00:47:17.808 [INFO][3852] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6e14598f81 ContainerID="33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" Namespace="calico-system" Pod="calico-kube-controllers-75878946fc-mwwk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:17.829931 env[1320]: 2025-08-13 00:47:17.815 [INFO][3852] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" Namespace="calico-system" Pod="calico-kube-controllers-75878946fc-mwwk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:17.829931 env[1320]: 2025-08-13 00:47:17.815 [INFO][3852] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" Namespace="calico-system" Pod="calico-kube-controllers-75878946fc-mwwk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0", GenerateName:"calico-kube-controllers-75878946fc-", Namespace:"calico-system", SelfLink:"", UID:"7f9c7221-f628-46e4-b71d-64f1c6b90e3b", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75878946fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380", Pod:"calico-kube-controllers-75878946fc-mwwk8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6e14598f81", MAC:"5e:40:ca:37:95:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:17.829931 env[1320]: 2025-08-13 00:47:17.827 [INFO][3852] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380" Namespace="calico-system" Pod="calico-kube-controllers-75878946fc-mwwk8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:17.841314 env[1320]: time="2025-08-13T00:47:17.841242710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:17.841314 env[1320]: time="2025-08-13T00:47:17.841291384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:17.841314 env[1320]: time="2025-08-13T00:47:17.841306343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:17.840000 audit[3900]: NETFILTER_CFG table=filter:105 family=2 entries=36 op=nft_register_chain pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:47:17.840000 audit[3900]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7fff77788f00 a2=0 a3=7fff77788eec items=0 ppid=3582 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.840000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:47:17.841963 env[1320]: time="2025-08-13T00:47:17.841858387Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380 pid=3903 runtime=io.containerd.runc.v2 Aug 13 00:47:17.867198 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:47:17.907141 env[1320]: time="2025-08-13T00:47:17.906676487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75878946fc-mwwk8,Uid:7f9c7221-f628-46e4-b71d-64f1c6b90e3b,Namespace:calico-system,Attempt:1,} returns sandbox id \"33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380\"" Aug 13 00:47:17.912558 systemd-networkd[1102]: cali95007698936: Link UP Aug 13 00:47:17.914152 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali95007698936: link becomes ready Aug 13 00:47:17.915906 systemd-networkd[1102]: cali95007698936: Gained carrier Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.736 [INFO][3841] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0 coredns-7c65d6cfc9- kube-system c322dc19-862d-4e41-8d08-222e739f135c 923 0 2025-08-13 00:46:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-8tqtr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95007698936 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8tqtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8tqtr-" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.736 [INFO][3841] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8tqtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.768 [INFO][3871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" HandleID="k8s-pod-network.3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.768 [INFO][3871] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" HandleID="k8s-pod-network.3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-8tqtr", "timestamp":"2025-08-13 00:47:17.768525407 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.768 [INFO][3871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.803 [INFO][3871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.803 [INFO][3871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.869 [INFO][3871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" host="localhost" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.877 [INFO][3871] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.881 [INFO][3871] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.882 [INFO][3871] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.886 [INFO][3871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.886 [INFO][3871] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" host="localhost" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.889 [INFO][3871] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4 Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.893 [INFO][3871] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" host="localhost" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.900 [INFO][3871] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" host="localhost" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.900 [INFO][3871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" host="localhost" Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.900 [INFO][3871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:47:17.930593 env[1320]: 2025-08-13 00:47:17.900 [INFO][3871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" HandleID="k8s-pod-network.3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:17.931501 env[1320]: 2025-08-13 00:47:17.903 [INFO][3841] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8tqtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c322dc19-862d-4e41-8d08-222e739f135c", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-8tqtr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95007698936", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:17.931501 env[1320]: 2025-08-13 00:47:17.903 [INFO][3841] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8tqtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:17.931501 env[1320]: 2025-08-13 00:47:17.903 [INFO][3841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95007698936 ContainerID="3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8tqtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:17.931501 env[1320]: 2025-08-13 00:47:17.918 [INFO][3841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8tqtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:17.931501 env[1320]: 2025-08-13 00:47:17.918 
[INFO][3841] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8tqtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c322dc19-862d-4e41-8d08-222e739f135c", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4", Pod:"coredns-7c65d6cfc9-8tqtr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95007698936", MAC:"ae:51:7e:53:6a:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:17.931501 env[1320]: 2025-08-13 00:47:17.928 [INFO][3841] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8tqtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:17.955000 audit[3945]: NETFILTER_CFG table=filter:106 family=2 entries=46 op=nft_register_chain pid=3945 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:47:17.955000 audit[3945]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffcffa2e110 a2=0 a3=7ffcffa2e0fc items=0 ppid=3582 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:17.955000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:47:17.962222 env[1320]: time="2025-08-13T00:47:17.962144166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:17.962356 env[1320]: time="2025-08-13T00:47:17.962232304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:17.962356 env[1320]: time="2025-08-13T00:47:17.962261501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:17.964346 env[1320]: time="2025-08-13T00:47:17.962876857Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4 pid=3953 runtime=io.containerd.runc.v2 Aug 13 00:47:17.988480 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:47:18.016273 env[1320]: time="2025-08-13T00:47:18.016214040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8tqtr,Uid:c322dc19-862d-4e41-8d08-222e739f135c,Namespace:kube-system,Attempt:1,} returns sandbox id \"3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4\"" Aug 13 00:47:18.018315 kubelet[2155]: E0813 00:47:18.017394 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:18.020800 env[1320]: time="2025-08-13T00:47:18.020753196Z" level=info msg="CreateContainer within sandbox \"3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:47:18.029849 env[1320]: time="2025-08-13T00:47:18.029778415Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:18.032826 env[1320]: time="2025-08-13T00:47:18.032803026Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:18.034441 env[1320]: time="2025-08-13T00:47:18.034410518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:18.040136 env[1320]: time="2025-08-13T00:47:18.040082619Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:18.040368 env[1320]: time="2025-08-13T00:47:18.040321947Z" level=info msg="CreateContainer within sandbox \"3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b18acf00a1be13325b9713da1f4dc0ffc283a70df06441a9c4ed21adfc61ed60\"" Aug 13 00:47:18.040769 env[1320]: time="2025-08-13T00:47:18.040719015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 00:47:18.040823 env[1320]: time="2025-08-13T00:47:18.040785613Z" level=info msg="StartContainer for \"b18acf00a1be13325b9713da1f4dc0ffc283a70df06441a9c4ed21adfc61ed60\"" Aug 13 00:47:18.042605 env[1320]: time="2025-08-13T00:47:18.042580092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:47:18.043476 env[1320]: time="2025-08-13T00:47:18.043454353Z" 
level=info msg="CreateContainer within sandbox \"d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:47:18.061792 env[1320]: time="2025-08-13T00:47:18.061727457Z" level=info msg="CreateContainer within sandbox \"d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"eaf41a477f65bcfdae562092e14e55dd9a0bf6c00c0cde1397c5f47479ee57a8\"" Aug 13 00:47:18.063913 env[1320]: time="2025-08-13T00:47:18.063778338Z" level=info msg="StartContainer for \"eaf41a477f65bcfdae562092e14e55dd9a0bf6c00c0cde1397c5f47479ee57a8\"" Aug 13 00:47:18.352679 env[1320]: time="2025-08-13T00:47:18.352515293Z" level=info msg="StartContainer for \"b18acf00a1be13325b9713da1f4dc0ffc283a70df06441a9c4ed21adfc61ed60\" returns successfully" Aug 13 00:47:18.455648 env[1320]: time="2025-08-13T00:47:18.455560869Z" level=info msg="StartContainer for \"eaf41a477f65bcfdae562092e14e55dd9a0bf6c00c0cde1397c5f47479ee57a8\" returns successfully" Aug 13 00:47:18.460636 kubelet[2155]: E0813 00:47:18.460591 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:18.544489 env[1320]: time="2025-08-13T00:47:18.544403238Z" level=info msg="StopPodSandbox for \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\"" Aug 13 00:47:18.545128 env[1320]: time="2025-08-13T00:47:18.545083919Z" level=info msg="StopPodSandbox for \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\"" Aug 13 00:47:18.545223 env[1320]: time="2025-08-13T00:47:18.545195522Z" level=info msg="StopPodSandbox for \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\"" Aug 13 00:47:18.635000 audit[4118]: NETFILTER_CFG table=filter:107 family=2 entries=20 op=nft_register_rule pid=4118 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:18.635000 audit[4118]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd27092240 a2=0 a3=7ffd2709222c items=0 ppid=2307 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:18.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:18.641763 kubelet[2155]: I0813 00:47:18.641457 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8tqtr" podStartSLOduration=42.641428376 podStartE2EDuration="42.641428376s" podCreationTimestamp="2025-08-13 00:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:18.584142679 +0000 UTC m=+48.135000398" watchObservedRunningTime="2025-08-13 00:47:18.641428376 +0000 UTC m=+48.192285985" Aug 13 00:47:18.651000 audit[4118]: NETFILTER_CFG table=nat:108 family=2 entries=14 op=nft_register_rule pid=4118 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:18.651000 audit[4118]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd27092240 a2=0 a3=0 items=0 ppid=2307 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:18.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.636 [INFO][4094] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.636 [INFO][4094] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" iface="eth0" netns="/var/run/netns/cni-b5f49312-7fdc-7a8b-a610-574686e897b1" Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.636 [INFO][4094] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" iface="eth0" netns="/var/run/netns/cni-b5f49312-7fdc-7a8b-a610-574686e897b1" Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.636 [INFO][4094] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" iface="eth0" netns="/var/run/netns/cni-b5f49312-7fdc-7a8b-a610-574686e897b1" Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.636 [INFO][4094] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.636 [INFO][4094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.710 [INFO][4120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" HandleID="k8s-pod-network.12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.710 [INFO][4120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.710 [INFO][4120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.716 [WARNING][4120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" HandleID="k8s-pod-network.12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.716 [INFO][4120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" HandleID="k8s-pod-network.12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.718 [INFO][4120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:18.725735 env[1320]: 2025-08-13 00:47:18.721 [INFO][4094] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:18.730060 systemd[1]: run-netns-cni\x2db5f49312\x2d7fdc\x2d7a8b\x2da610\x2d574686e897b1.mount: Deactivated successfully. 
Aug 13 00:47:18.732447 env[1320]: time="2025-08-13T00:47:18.732328007Z" level=info msg="TearDown network for sandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\" successfully" Aug 13 00:47:18.732523 env[1320]: time="2025-08-13T00:47:18.732445492Z" level=info msg="StopPodSandbox for \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\" returns successfully" Aug 13 00:47:18.733736 env[1320]: time="2025-08-13T00:47:18.733695852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfdgb,Uid:222af0ae-3446-4630-b6ec-423608ba3718,Namespace:calico-system,Attempt:1,}" Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.650 [INFO][4091] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.651 [INFO][4091] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" iface="eth0" netns="/var/run/netns/cni-a5f47ebb-60f5-f66e-9c01-6453504bfe68" Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.651 [INFO][4091] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" iface="eth0" netns="/var/run/netns/cni-a5f47ebb-60f5-f66e-9c01-6453504bfe68" Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.651 [INFO][4091] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" iface="eth0" netns="/var/run/netns/cni-a5f47ebb-60f5-f66e-9c01-6453504bfe68" Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.651 [INFO][4091] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.651 [INFO][4091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.752 [INFO][4127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" HandleID="k8s-pod-network.7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.753 [INFO][4127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.753 [INFO][4127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.776 [WARNING][4127] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" HandleID="k8s-pod-network.7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.776 [INFO][4127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" HandleID="k8s-pod-network.7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.782 [INFO][4127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:18.794855 env[1320]: 2025-08-13 00:47:18.784 [INFO][4091] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:18.799329 systemd[1]: run-netns-cni\x2da5f47ebb\x2d60f5\x2df66e\x2d9c01\x2d6453504bfe68.mount: Deactivated successfully. Aug 13 00:47:18.801430 env[1320]: time="2025-08-13T00:47:18.801342401Z" level=info msg="TearDown network for sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\" successfully" Aug 13 00:47:18.801565 env[1320]: time="2025-08-13T00:47:18.801541161Z" level=info msg="StopPodSandbox for \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\" returns successfully" Aug 13 00:47:18.802551 env[1320]: time="2025-08-13T00:47:18.802494633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfcc9cb65-vpjwq,Uid:e908a700-6b74-4dfa-9e94-825de953b563,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.665 [INFO][4098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.665 [INFO][4098] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" iface="eth0" netns="/var/run/netns/cni-0538b91e-4178-b505-86df-62df269fc162" Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.666 [INFO][4098] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" iface="eth0" netns="/var/run/netns/cni-0538b91e-4178-b505-86df-62df269fc162" Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.666 [INFO][4098] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" iface="eth0" netns="/var/run/netns/cni-0538b91e-4178-b505-86df-62df269fc162" Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.666 [INFO][4098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.666 [INFO][4098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.781 [INFO][4133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" HandleID="k8s-pod-network.047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.781 [INFO][4133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.802 [INFO][4133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.808 [WARNING][4133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" HandleID="k8s-pod-network.047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.809 [INFO][4133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" HandleID="k8s-pod-network.047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.810 [INFO][4133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:18.819068 env[1320]: 2025-08-13 00:47:18.813 [INFO][4098] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:18.819635 env[1320]: time="2025-08-13T00:47:18.819237072Z" level=info msg="TearDown network for sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\" successfully" Aug 13 00:47:18.819635 env[1320]: time="2025-08-13T00:47:18.819272920Z" level=info msg="StopPodSandbox for \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\" returns successfully" Aug 13 00:47:18.821486 env[1320]: time="2025-08-13T00:47:18.821425114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfcc9cb65-9wsgx,Uid:34f400c3-01be-4d31-9cf0-4542774ed01f,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:47:18.871221 systemd-networkd[1102]: vxlan.calico: Gained IPv6LL Aug 13 00:47:18.952385 systemd-networkd[1102]: calie20057f85cc: Link UP Aug 13 00:47:18.957442 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:47:18.957584 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie20057f85cc: link becomes ready Aug 13 00:47:18.957777 systemd-networkd[1102]: calie20057f85cc: Gained carrier Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.851 [INFO][4147] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dfdgb-eth0 csi-node-driver- calico-system 222af0ae-3446-4630-b6ec-423608ba3718 951 0 2025-08-13 00:46:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dfdgb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie20057f85cc [] [] }} ContainerID="72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" Namespace="calico-system" Pod="csi-node-driver-dfdgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfdgb-" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.852 [INFO][4147] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" Namespace="calico-system" Pod="csi-node-driver-dfdgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.900 [INFO][4191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" HandleID="k8s-pod-network.72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.900 [INFO][4191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" HandleID="k8s-pod-network.72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dfdgb", "timestamp":"2025-08-13 00:47:18.90076945 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.901 [INFO][4191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.901 [INFO][4191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.901 [INFO][4191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.914 [INFO][4191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" host="localhost" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.919 [INFO][4191] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.924 [INFO][4191] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.926 [INFO][4191] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.929 [INFO][4191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.929 [INFO][4191] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" host="localhost" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.930 [INFO][4191] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172 Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.935 [INFO][4191] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" host="localhost" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.940 [INFO][4191] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" host="localhost" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.940 [INFO][4191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" host="localhost" Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.940 [INFO][4191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:47:18.973127 env[1320]: 2025-08-13 00:47:18.940 [INFO][4191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" HandleID="k8s-pod-network.72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:18.974803 env[1320]: 2025-08-13 00:47:18.950 [INFO][4147] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" Namespace="calico-system" Pod="csi-node-driver-dfdgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfdgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dfdgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"222af0ae-3446-4630-b6ec-423608ba3718", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dfdgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie20057f85cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:18.974803 env[1320]: 2025-08-13 00:47:18.950 [INFO][4147] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" Namespace="calico-system" Pod="csi-node-driver-dfdgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:18.974803 env[1320]: 2025-08-13 00:47:18.950 [INFO][4147] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie20057f85cc ContainerID="72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" Namespace="calico-system" Pod="csi-node-driver-dfdgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:18.974803 env[1320]: 2025-08-13 00:47:18.952 [INFO][4147] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" Namespace="calico-system" Pod="csi-node-driver-dfdgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:18.974803 env[1320]: 2025-08-13 00:47:18.958 [INFO][4147] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" Namespace="calico-system" Pod="csi-node-driver-dfdgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfdgb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dfdgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"222af0ae-3446-4630-b6ec-423608ba3718", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172", Pod:"csi-node-driver-dfdgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie20057f85cc", MAC:"fe:04:a7:6f:51:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:18.974803 env[1320]: 2025-08-13 00:47:18.970 [INFO][4147] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172" Namespace="calico-system" Pod="csi-node-driver-dfdgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:18.983000 audit[4229]: NETFILTER_CFG table=filter:109 family=2 entries=44 op=nft_register_chain pid=4229 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:47:18.983000 audit[4229]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffd5d633cc0 a2=0 a3=7ffd5d633cac items=0 ppid=3582 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:18.983000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:47:18.985661 env[1320]: time="2025-08-13T00:47:18.985587448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:18.985772 env[1320]: time="2025-08-13T00:47:18.985633105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:18.985772 env[1320]: time="2025-08-13T00:47:18.985651049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:18.985908 env[1320]: time="2025-08-13T00:47:18.985846493Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172 pid=4234 runtime=io.containerd.runc.v2 Aug 13 00:47:19.010766 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:47:19.023755 env[1320]: time="2025-08-13T00:47:19.023150000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfdgb,Uid:222af0ae-3446-4630-b6ec-423608ba3718,Namespace:calico-system,Attempt:1,} returns sandbox id \"72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172\"" Aug 13 00:47:19.045557 systemd-networkd[1102]: cali2d93e9e302a: Link UP Aug 13 00:47:19.048367 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2d93e9e302a: link becomes ready Aug 13 00:47:19.048203 systemd-networkd[1102]: cali2d93e9e302a: Gained carrier Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:18.856 [INFO][4161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0 calico-apiserver-7dfcc9cb65- calico-apiserver e908a700-6b74-4dfa-9e94-825de953b563 952 0 2025-08-13 00:46:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dfcc9cb65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7dfcc9cb65-vpjwq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2d93e9e302a [] [] }} ContainerID="6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-vpjwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:18.856 [INFO][4161] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-vpjwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:18.906 [INFO][4193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" HandleID="k8s-pod-network.6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:18.907 [INFO][4193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" HandleID="k8s-pod-network.6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9940), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7dfcc9cb65-vpjwq", "timestamp":"2025-08-13 00:47:18.906689525 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:18.907 [INFO][4193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:18.940 [INFO][4193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:18.940 [INFO][4193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.015 [INFO][4193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" host="localhost" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.019 [INFO][4193] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.024 [INFO][4193] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.026 [INFO][4193] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.029 [INFO][4193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.029 [INFO][4193] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" host="localhost" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.030 [INFO][4193] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19 Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.034 [INFO][4193] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" host="localhost" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.040 [INFO][4193] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" host="localhost" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.040 [INFO][4193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" host="localhost" Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.041 [INFO][4193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:47:19.058999 env[1320]: 2025-08-13 00:47:19.041 [INFO][4193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" HandleID="k8s-pod-network.6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:19.059727 env[1320]: 2025-08-13 00:47:19.043 [INFO][4161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-vpjwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0", GenerateName:"calico-apiserver-7dfcc9cb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"e908a700-6b74-4dfa-9e94-825de953b563", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfcc9cb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7dfcc9cb65-vpjwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d93e9e302a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:19.059727 env[1320]: 2025-08-13 00:47:19.043 [INFO][4161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-vpjwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:19.059727 env[1320]: 2025-08-13 00:47:19.043 [INFO][4161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d93e9e302a ContainerID="6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-vpjwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:19.059727 env[1320]: 2025-08-13 00:47:19.045 [INFO][4161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-vpjwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:19.059727 env[1320]: 2025-08-13 00:47:19.046 [INFO][4161] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-vpjwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0", GenerateName:"calico-apiserver-7dfcc9cb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"e908a700-6b74-4dfa-9e94-825de953b563", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfcc9cb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19", Pod:"calico-apiserver-7dfcc9cb65-vpjwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d93e9e302a", MAC:"fa:09:97:fb:46:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:19.059727 env[1320]: 2025-08-13 00:47:19.056 [INFO][4161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-vpjwq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:19.071000 audit[4283]: NETFILTER_CFG table=filter:110 family=2 entries=62 op=nft_register_chain pid=4283 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:47:19.071000 audit[4283]: SYSCALL arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7ffd9429e250 a2=0 a3=7ffd9429e23c items=0 ppid=3582 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:19.071000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:47:19.074975 env[1320]: time="2025-08-13T00:47:19.072062603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:19.074975 env[1320]: time="2025-08-13T00:47:19.072155510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:19.074975 env[1320]: time="2025-08-13T00:47:19.072185057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:19.074975 env[1320]: time="2025-08-13T00:47:19.072332028Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19 pid=4284 runtime=io.containerd.runc.v2 Aug 13 00:47:19.098869 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:47:19.130285 env[1320]: time="2025-08-13T00:47:19.130223083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfcc9cb65-vpjwq,Uid:e908a700-6b74-4dfa-9e94-825de953b563,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19\"" Aug 13 00:47:19.152621 systemd-networkd[1102]: cali0814756a6c1: Link UP Aug 13 00:47:19.156155 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0814756a6c1: link becomes ready Aug 13 00:47:19.155601 systemd-networkd[1102]: cali0814756a6c1: Gained carrier Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:18.894 [INFO][4174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0 calico-apiserver-7dfcc9cb65- calico-apiserver 34f400c3-01be-4d31-9cf0-4542774ed01f 954 0 2025-08-13 00:46:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dfcc9cb65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7dfcc9cb65-9wsgx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0814756a6c1 [] [] }} ContainerID="44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-9wsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:18.894 [INFO][4174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-9wsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:18.931 [INFO][4208] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" HandleID="k8s-pod-network.44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:18.931 [INFO][4208] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" HandleID="k8s-pod-network.44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7dfcc9cb65-9wsgx", "timestamp":"2025-08-13 00:47:18.931368563 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:18.931 [INFO][4208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.040 [INFO][4208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.041 [INFO][4208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.116 [INFO][4208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" host="localhost" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.123 [INFO][4208] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.129 [INFO][4208] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.131 [INFO][4208] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.133 [INFO][4208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.134 [INFO][4208] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" host="localhost" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.135 [INFO][4208] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234 Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.139 [INFO][4208] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" host="localhost" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.144 [INFO][4208] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" host="localhost" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.144 [INFO][4208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" host="localhost" Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.144 [INFO][4208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
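The IPAM requests in this stretch ([4191], [4193], [4208]) serialize on the host-wide IPAM lock: each one logs "About to acquire host-wide IPAM lock." and later "Acquired host-wide IPAM lock.", so the gap between those two timestamps is its wait time (for instance [4208] waits from 00:47:18.931 to 00:47:19.040). A rough Python sketch, not part of the log, for pulling those wait times out of a saved copy of this journal; the filename in the usage comment is hypothetical:

import re
from datetime import datetime

# The embedded Calico lines look like:
#   2025-08-13 00:47:18.907 [INFO][4193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
PATTERN = re.compile(
    r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[INFO\]\[(\d+)\] ipam/ipam_plugin\.go \d+: "
    r"(About to acquire|Acquired) host-wide IPAM lock\."
)

def lock_wait_times(journal_text: str) -> dict[str, float]:
    """Return seconds each tagged IPAM request spent waiting for the host-wide lock."""
    started, waits = {}, {}
    for ts, tag, event in PATTERN.findall(journal_text):
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
        if event == "About to acquire":
            started[tag] = t
        elif tag in started:
            waits[tag] = (t - started.pop(tag)).total_seconds()
    return waits

# Hypothetical usage against a saved copy of this journal section:
# print(lock_wait_times(open("journal.txt").read()))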
Aug 13 00:47:19.169782 env[1320]: 2025-08-13 00:47:19.144 [INFO][4208] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" HandleID="k8s-pod-network.44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:19.170466 env[1320]: 2025-08-13 00:47:19.148 [INFO][4174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-9wsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0", GenerateName:"calico-apiserver-7dfcc9cb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"34f400c3-01be-4d31-9cf0-4542774ed01f", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfcc9cb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7dfcc9cb65-9wsgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0814756a6c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:19.170466 env[1320]: 2025-08-13 00:47:19.148 [INFO][4174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-9wsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:19.170466 env[1320]: 2025-08-13 00:47:19.148 [INFO][4174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0814756a6c1 ContainerID="44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-9wsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:19.170466 env[1320]: 2025-08-13 00:47:19.156 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-9wsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:19.170466 env[1320]: 2025-08-13 00:47:19.156 [INFO][4174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-9wsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0", GenerateName:"calico-apiserver-7dfcc9cb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"34f400c3-01be-4d31-9cf0-4542774ed01f", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfcc9cb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234", Pod:"calico-apiserver-7dfcc9cb65-9wsgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0814756a6c1", MAC:"9e:e4:45:42:1d:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:19.170466 env[1320]: 2025-08-13 00:47:19.165 [INFO][4174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234" Namespace="calico-apiserver" Pod="calico-apiserver-7dfcc9cb65-9wsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:19.178000 audit[4329]: NETFILTER_CFG table=filter:111 family=2 entries=53 op=nft_register_chain pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:47:19.178000 audit[4329]: SYSCALL arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7fff6e8e8fc0 a2=0 a3=7fff6e8e8fac items=0 ppid=3582 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:19.178000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:47:19.182353 env[1320]: time="2025-08-13T00:47:19.182274323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:19.182353 env[1320]: time="2025-08-13T00:47:19.182319509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:19.182353 env[1320]: time="2025-08-13T00:47:19.182329739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:19.182752 env[1320]: time="2025-08-13T00:47:19.182694616Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234 pid=4338 runtime=io.containerd.runc.v2 Aug 13 00:47:19.203945 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:47:19.226855 env[1320]: time="2025-08-13T00:47:19.226797781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfcc9cb65-9wsgx,Uid:34f400c3-01be-4d31-9cf0-4542774ed01f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234\"" Aug 13 00:47:19.254234 systemd-networkd[1102]: cali95007698936: Gained IPv6LL Aug 13 00:47:19.471632 kubelet[2155]: E0813 00:47:19.471575 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:19.500000 audit[4372]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:19.500000 audit[4372]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc89d47d70 a2=0 a3=7ffc89d47d5c items=0 ppid=2307 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:19.500000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:19.510000 audit[4372]: NETFILTER_CFG table=nat:113 family=2 entries=35 op=nft_register_chain pid=4372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:19.510000 audit[4372]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc89d47d70 a2=0 a3=7ffc89d47d5c items=0 ppid=2307 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:19.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:19.544416 env[1320]: time="2025-08-13T00:47:19.544372199Z" level=info msg="StopPodSandbox for \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\"" Aug 13 00:47:19.544416 env[1320]: time="2025-08-13T00:47:19.544384092Z" level=info msg="StopPodSandbox for \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\"" Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.589 [INFO][4397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.589 [INFO][4397] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" iface="eth0" netns="/var/run/netns/cni-006515bb-44b9-935d-6b62-8edd186694b7" Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.590 [INFO][4397] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" iface="eth0" netns="/var/run/netns/cni-006515bb-44b9-935d-6b62-8edd186694b7" Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.590 [INFO][4397] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" iface="eth0" netns="/var/run/netns/cni-006515bb-44b9-935d-6b62-8edd186694b7" Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.590 [INFO][4397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.590 [INFO][4397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.622 [INFO][4413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" HandleID="k8s-pod-network.c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.623 [INFO][4413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.623 [INFO][4413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.632 [WARNING][4413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" HandleID="k8s-pod-network.c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.633 [INFO][4413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" HandleID="k8s-pod-network.c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.635 [INFO][4413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:19.652804 env[1320]: 2025-08-13 00:47:19.637 [INFO][4397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:19.652804 env[1320]: time="2025-08-13T00:47:19.647014668Z" level=info msg="TearDown network for sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\" successfully" Aug 13 00:47:19.652804 env[1320]: time="2025-08-13T00:47:19.647105863Z" level=info msg="StopPodSandbox for \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\" returns successfully" Aug 13 00:47:19.648243 systemd[1]: run-netns-cni\x2d0538b91e\x2d4178\x2db505\x2d86df\x2d62df269fc162.mount: Deactivated successfully. 
Aug 13 00:47:19.653640 kubelet[2155]: E0813 00:47:19.647483 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:19.658129 env[1320]: time="2025-08-13T00:47:19.654280870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n9cpw,Uid:1e4136fa-8827-4ebd-a668-a252e7a55a56,Namespace:kube-system,Attempt:1,}" Aug 13 00:47:19.656688 systemd[1]: run-netns-cni\x2d006515bb\x2d44b9\x2d935d\x2d6b62\x2d8edd186694b7.mount: Deactivated successfully. Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.614 [INFO][4398] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.614 [INFO][4398] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" iface="eth0" netns="/var/run/netns/cni-5de33666-d896-e1d9-e5c6-e7e6ba4ca9cb" Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.614 [INFO][4398] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" iface="eth0" netns="/var/run/netns/cni-5de33666-d896-e1d9-e5c6-e7e6ba4ca9cb" Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.614 [INFO][4398] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" iface="eth0" netns="/var/run/netns/cni-5de33666-d896-e1d9-e5c6-e7e6ba4ca9cb" Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.614 [INFO][4398] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.614 [INFO][4398] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.660 [INFO][4421] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" HandleID="k8s-pod-network.4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.660 [INFO][4421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.660 [INFO][4421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.666 [WARNING][4421] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" HandleID="k8s-pod-network.4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.666 [INFO][4421] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" HandleID="k8s-pod-network.4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.667 [INFO][4421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:19.671783 env[1320]: 2025-08-13 00:47:19.669 [INFO][4398] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:19.672584 env[1320]: time="2025-08-13T00:47:19.671929055Z" level=info msg="TearDown network for sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\" successfully" Aug 13 00:47:19.672584 env[1320]: time="2025-08-13T00:47:19.671974402Z" level=info msg="StopPodSandbox for \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\" returns successfully" Aug 13 00:47:19.672848 env[1320]: time="2025-08-13T00:47:19.672818435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-n995t,Uid:3c9e603b-de35-4367-b9d4-819dfaad6063,Namespace:calico-system,Attempt:1,}" Aug 13 00:47:19.674645 systemd[1]: run-netns-cni\x2d5de33666\x2dd896\x2de1d9\x2de5c6\x2de7e6ba4ca9cb.mount: Deactivated successfully. Aug 13 00:47:19.791322 systemd-networkd[1102]: cali5a49022853e: Link UP Aug 13 00:47:19.792408 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5a49022853e: link becomes ready Aug 13 00:47:19.792736 systemd-networkd[1102]: cali5a49022853e: Gained carrier Aug 13 00:47:19.830374 systemd-networkd[1102]: calif6e14598f81: Gained IPv6LL Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.718 [INFO][4431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0 coredns-7c65d6cfc9- kube-system 1e4136fa-8827-4ebd-a668-a252e7a55a56 976 0 2025-08-13 00:46:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-n9cpw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5a49022853e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n9cpw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n9cpw-" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.718 [INFO][4431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n9cpw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.751 [INFO][4460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" HandleID="k8s-pod-network.25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" 
Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.751 [INFO][4460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" HandleID="k8s-pod-network.25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-n9cpw", "timestamp":"2025-08-13 00:47:19.751316428 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.751 [INFO][4460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.751 [INFO][4460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.751 [INFO][4460] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.760 [INFO][4460] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" host="localhost" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.766 [INFO][4460] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.770 [INFO][4460] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.772 [INFO][4460] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.774 [INFO][4460] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.774 [INFO][4460] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" host="localhost" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.776 [INFO][4460] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.779 [INFO][4460] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" host="localhost" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.785 [INFO][4460] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" host="localhost" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.785 [INFO][4460] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" host="localhost" Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.785 [INFO][4460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:47:19.841177 env[1320]: 2025-08-13 00:47:19.785 [INFO][4460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" HandleID="k8s-pod-network.25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:19.842214 env[1320]: 2025-08-13 00:47:19.788 [INFO][4431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n9cpw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1e4136fa-8827-4ebd-a668-a252e7a55a56", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-n9cpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a49022853e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:19.842214 env[1320]: 2025-08-13 00:47:19.788 [INFO][4431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n9cpw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:19.842214 env[1320]: 2025-08-13 00:47:19.788 [INFO][4431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a49022853e ContainerID="25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n9cpw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:19.842214 env[1320]: 2025-08-13 00:47:19.793 [INFO][4431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n9cpw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:19.842214 env[1320]: 2025-08-13 00:47:19.793 
[INFO][4431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n9cpw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1e4136fa-8827-4ebd-a668-a252e7a55a56", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b", Pod:"coredns-7c65d6cfc9-n9cpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a49022853e", MAC:"d6:17:7d:0d:41:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:19.842214 env[1320]: 2025-08-13 00:47:19.833 [INFO][4431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n9cpw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:19.860170 env[1320]: time="2025-08-13T00:47:19.857062557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:19.860170 env[1320]: time="2025-08-13T00:47:19.857112252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:19.860170 env[1320]: time="2025-08-13T00:47:19.857124666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:19.860170 env[1320]: time="2025-08-13T00:47:19.857278971Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b pid=4494 runtime=io.containerd.runc.v2 Aug 13 00:47:19.863000 audit[4509]: NETFILTER_CFG table=filter:114 family=2 entries=52 op=nft_register_chain pid=4509 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:47:19.863000 audit[4509]: SYSCALL arch=c000003e syscall=46 success=yes exit=23908 a0=3 a1=7fff998dca50 a2=0 a3=7fff998dca3c items=0 ppid=3582 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:19.863000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:47:19.894376 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:47:19.931026 systemd-networkd[1102]: cali70022f28513: Link UP Aug 13 00:47:19.935298 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali70022f28513: link becomes ready Aug 13 00:47:19.934694 systemd-networkd[1102]: cali70022f28513: Gained carrier Aug 13 00:47:19.961207 env[1320]: time="2025-08-13T00:47:19.954530671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n9cpw,Uid:1e4136fa-8827-4ebd-a668-a252e7a55a56,Namespace:kube-system,Attempt:1,} returns sandbox id \"25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b\"" Aug 13 00:47:19.961207 env[1320]: time="2025-08-13T00:47:19.959193721Z" level=info msg="CreateContainer within sandbox \"25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:47:19.961448 kubelet[2155]: E0813 00:47:19.955668 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.750 [INFO][4443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--n995t-eth0 goldmane-58fd7646b9- calico-system 3c9e603b-de35-4367-b9d4-819dfaad6063 977 0 2025-08-13 00:46:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-n995t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali70022f28513 [] [] }} ContainerID="db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" Namespace="calico-system" Pod="goldmane-58fd7646b9-n995t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n995t-" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.750 [INFO][4443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" Namespace="calico-system" Pod="goldmane-58fd7646b9-n995t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.795 
[INFO][4470] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" HandleID="k8s-pod-network.db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.816 [INFO][4470] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" HandleID="k8s-pod-network.db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-n995t", "timestamp":"2025-08-13 00:47:19.795634715 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.816 [INFO][4470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.816 [INFO][4470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.816 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.861 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" host="localhost" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.867 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.873 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.875 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.878 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.878 [INFO][4470] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" host="localhost" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.885 [INFO][4470] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094 Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.893 [INFO][4470] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" host="localhost" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.913 [INFO][4470] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" host="localhost" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.913 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] 
handle="k8s-pod-network.db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" host="localhost" Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.913 [INFO][4470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:19.986834 env[1320]: 2025-08-13 00:47:19.913 [INFO][4470] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" HandleID="k8s-pod-network.db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:19.987655 env[1320]: 2025-08-13 00:47:19.919 [INFO][4443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" Namespace="calico-system" Pod="goldmane-58fd7646b9-n995t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--n995t-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3c9e603b-de35-4367-b9d4-819dfaad6063", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-n995t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70022f28513", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:19.987655 env[1320]: 2025-08-13 00:47:19.919 [INFO][4443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" Namespace="calico-system" Pod="goldmane-58fd7646b9-n995t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:19.987655 env[1320]: 2025-08-13 00:47:19.919 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70022f28513 ContainerID="db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" Namespace="calico-system" Pod="goldmane-58fd7646b9-n995t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:19.987655 env[1320]: 2025-08-13 00:47:19.935 [INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" Namespace="calico-system" Pod="goldmane-58fd7646b9-n995t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:19.987655 env[1320]: 2025-08-13 00:47:19.936 [INFO][4443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" Namespace="calico-system" Pod="goldmane-58fd7646b9-n995t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--n995t-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3c9e603b-de35-4367-b9d4-819dfaad6063", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094", Pod:"goldmane-58fd7646b9-n995t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70022f28513", MAC:"fa:7b:29:47:e5:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:19.987655 env[1320]: 2025-08-13 00:47:19.982 [INFO][4443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094" Namespace="calico-system" Pod="goldmane-58fd7646b9-n995t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:20.001000 audit[4542]: NETFILTER_CFG table=filter:115 family=2 entries=68 op=nft_register_chain pid=4542 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:47:20.001000 audit[4542]: SYSCALL arch=c000003e syscall=46 success=yes exit=32308 a0=3 a1=7fff2f953d20 a2=0 a3=7fff2f953d0c items=0 ppid=3582 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:20.001000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:47:20.072374 env[1320]: time="2025-08-13T00:47:20.071669839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:20.072374 env[1320]: time="2025-08-13T00:47:20.071751866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:20.072374 env[1320]: time="2025-08-13T00:47:20.071780280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:20.073608 env[1320]: time="2025-08-13T00:47:20.073525883Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094 pid=4552 runtime=io.containerd.runc.v2 Aug 13 00:47:20.119056 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:47:20.151056 systemd-networkd[1102]: cali2d93e9e302a: Gained IPv6LL Aug 13 00:47:20.169689 env[1320]: time="2025-08-13T00:47:20.169625536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-n995t,Uid:3c9e603b-de35-4367-b9d4-819dfaad6063,Namespace:calico-system,Attempt:1,} returns sandbox id \"db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094\"" Aug 13 00:47:20.278383 env[1320]: time="2025-08-13T00:47:20.277558496Z" level=info msg="CreateContainer within sandbox \"25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dec53b92f6e752671e0b6636ef258f4ca5c704062c08987d0e94cc7bc9ad057d\"" Aug 13 00:47:20.282445 env[1320]: time="2025-08-13T00:47:20.282303468Z" level=info msg="StartContainer for \"dec53b92f6e752671e0b6636ef258f4ca5c704062c08987d0e94cc7bc9ad057d\"" Aug 13 00:47:20.598269 systemd-networkd[1102]: calie20057f85cc: Gained IPv6LL Aug 13 00:47:20.726187 systemd-networkd[1102]: cali0814756a6c1: Gained IPv6LL Aug 13 00:47:20.752058 env[1320]: time="2025-08-13T00:47:20.751995116Z" level=info msg="StartContainer for \"dec53b92f6e752671e0b6636ef258f4ca5c704062c08987d0e94cc7bc9ad057d\" returns successfully" Aug 13 00:47:20.763575 kubelet[2155]: E0813 00:47:20.763536 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:20.768364 kubelet[2155]: E0813 00:47:20.768270 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:20.828000 audit[4623]: NETFILTER_CFG table=filter:116 family=2 entries=14 op=nft_register_rule pid=4623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:20.828000 audit[4623]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd17a8f060 a2=0 a3=7ffd17a8f04c items=0 ppid=2307 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:20.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:20.830998 kubelet[2155]: I0813 00:47:20.830511 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-n9cpw" podStartSLOduration=44.830488338 podStartE2EDuration="44.830488338s" podCreationTimestamp="2025-08-13 00:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:20.816229138 +0000 UTC m=+50.367086767" watchObservedRunningTime="2025-08-13 00:47:20.830488338 +0000 UTC m=+50.381345947" Aug 13 00:47:20.834000 audit[4623]: NETFILTER_CFG table=nat:117 family=2 
entries=44 op=nft_register_rule pid=4623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:20.834000 audit[4623]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd17a8f060 a2=0 a3=7ffd17a8f04c items=0 ppid=2307 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:20.834000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:20.849000 audit[4625]: NETFILTER_CFG table=filter:118 family=2 entries=14 op=nft_register_rule pid=4625 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:20.849000 audit[4625]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe42b91e90 a2=0 a3=7ffe42b91e7c items=0 ppid=2307 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:20.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:20.862000 audit[4625]: NETFILTER_CFG table=nat:119 family=2 entries=56 op=nft_register_chain pid=4625 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:20.862000 audit[4625]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe42b91e90 a2=0 a3=7ffe42b91e7c items=0 ppid=2307 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:20.862000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:21.366200 systemd-networkd[1102]: cali5a49022853e: Gained IPv6LL Aug 13 00:47:21.622739 systemd-networkd[1102]: cali70022f28513: Gained IPv6LL Aug 13 00:47:21.769938 kubelet[2155]: E0813 00:47:21.769891 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:22.089879 env[1320]: time="2025-08-13T00:47:22.089818905Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:22.095603 env[1320]: time="2025-08-13T00:47:22.095554408Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:22.097869 env[1320]: time="2025-08-13T00:47:22.097827375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:22.101020 env[1320]: time="2025-08-13T00:47:22.100954773Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:22.101301 env[1320]: 
time="2025-08-13T00:47:22.101265326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 00:47:22.104058 env[1320]: time="2025-08-13T00:47:22.103982170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:47:22.117067 env[1320]: time="2025-08-13T00:47:22.112392567Z" level=info msg="CreateContainer within sandbox \"33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:47:22.270189 env[1320]: time="2025-08-13T00:47:22.270094625Z" level=info msg="CreateContainer within sandbox \"33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"007395cb433dc5a2dda10c110cfccf43242a501b2429d844c581535f40355780\"" Aug 13 00:47:22.271011 env[1320]: time="2025-08-13T00:47:22.270942814Z" level=info msg="StartContainer for \"007395cb433dc5a2dda10c110cfccf43242a501b2429d844c581535f40355780\"" Aug 13 00:47:22.501406 env[1320]: time="2025-08-13T00:47:22.501303460Z" level=info msg="StartContainer for \"007395cb433dc5a2dda10c110cfccf43242a501b2429d844c581535f40355780\" returns successfully" Aug 13 00:47:22.775381 kubelet[2155]: E0813 00:47:22.774452 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:23.215536 kubelet[2155]: I0813 00:47:23.215400 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75878946fc-mwwk8" podStartSLOduration=28.022862007 podStartE2EDuration="32.215377594s" podCreationTimestamp="2025-08-13 00:46:51 +0000 UTC" firstStartedPulling="2025-08-13 00:47:17.910458619 +0000 UTC m=+47.461316228" lastFinishedPulling="2025-08-13 00:47:22.102974206 +0000 UTC m=+51.653831815" observedRunningTime="2025-08-13 00:47:23.146377837 +0000 UTC m=+52.697235446" watchObservedRunningTime="2025-08-13 00:47:23.215377594 +0000 UTC m=+52.766235203" Aug 13 00:47:23.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.21:22-10.0.0.1:35218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:23.250333 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:35218.service. Aug 13 00:47:23.252304 kernel: kauditd_printk_skb: 592 callbacks suppressed Aug 13 00:47:23.252370 kernel: audit: type=1130 audit(1755046043.249:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.21:22-10.0.0.1:35218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:47:23.309710 sshd[4705]: Accepted publickey for core from 10.0.0.1 port 35218 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:47:23.308000 audit[4705]: USER_ACCT pid=4705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.315062 kernel: audit: type=1101 audit(1755046043.308:428): pid=4705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.314000 audit[4705]: CRED_ACQ pid=4705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.316409 sshd[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:47:23.322646 kernel: audit: type=1103 audit(1755046043.314:429): pid=4705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.322720 kernel: audit: type=1006 audit(1755046043.314:430): pid=4705 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Aug 13 00:47:23.322770 kernel: audit: type=1300 audit(1755046043.314:430): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0d959160 a2=3 a3=0 items=0 ppid=1 pid=4705 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:23.314000 audit[4705]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0d959160 a2=3 a3=0 items=0 ppid=1 pid=4705 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:23.314000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:23.329871 kernel: audit: type=1327 audit(1755046043.314:430): proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:23.332845 systemd-logind[1304]: New session 8 of user core. Aug 13 00:47:23.333952 systemd[1]: Started session-8.scope. 
Aug 13 00:47:23.339000 audit[4705]: USER_START pid=4705 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.350462 kernel: audit: type=1105 audit(1755046043.339:431): pid=4705 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.350576 kernel: audit: type=1103 audit(1755046043.345:432): pid=4708 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.345000 audit[4708]: CRED_ACQ pid=4708 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.536793 sshd[4705]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:23.536000 audit[4705]: USER_END pid=4705 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.539401 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:35218.service: Deactivated successfully. Aug 13 00:47:23.540368 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:47:23.541440 systemd-logind[1304]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:47:23.542329 systemd-logind[1304]: Removed session 8. Aug 13 00:47:23.536000 audit[4705]: CRED_DISP pid=4705 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.546184 kernel: audit: type=1106 audit(1755046043.536:433): pid=4705 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.546240 kernel: audit: type=1104 audit(1755046043.536:434): pid=4705 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:23.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.21:22-10.0.0.1:35218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:25.489308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840935859.mount: Deactivated successfully. 
Aug 13 00:47:26.010795 env[1320]: time="2025-08-13T00:47:26.010717523Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:26.012823 env[1320]: time="2025-08-13T00:47:26.012794551Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:26.014561 env[1320]: time="2025-08-13T00:47:26.014523445Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:26.016344 env[1320]: time="2025-08-13T00:47:26.016303616Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:26.017016 env[1320]: time="2025-08-13T00:47:26.016969064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 00:47:26.018338 env[1320]: time="2025-08-13T00:47:26.018304529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:47:26.019477 env[1320]: time="2025-08-13T00:47:26.019428740Z" level=info msg="CreateContainer within sandbox \"d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:47:26.034094 env[1320]: time="2025-08-13T00:47:26.034008830Z" level=info msg="CreateContainer within sandbox \"d22dc4066784fab296454997fc00b8189a6f34fba8efab96e712271601d0c089\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"24c1d4148c39763c57846a8ec7b486e1c68edaf49698e69d0e057a9d2b6d0716\"" Aug 13 00:47:26.034977 env[1320]: time="2025-08-13T00:47:26.034921338Z" level=info msg="StartContainer for \"24c1d4148c39763c57846a8ec7b486e1c68edaf49698e69d0e057a9d2b6d0716\"" Aug 13 00:47:26.106063 env[1320]: time="2025-08-13T00:47:26.105987640Z" level=info msg="StartContainer for \"24c1d4148c39763c57846a8ec7b486e1c68edaf49698e69d0e057a9d2b6d0716\" returns successfully" Aug 13 00:47:26.807000 audit[4760]: NETFILTER_CFG table=filter:120 family=2 entries=13 op=nft_register_rule pid=4760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:26.807000 audit[4760]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc7e423a50 a2=0 a3=7ffc7e423a3c items=0 ppid=2307 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:26.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:26.812000 audit[4760]: NETFILTER_CFG table=nat:121 family=2 entries=27 op=nft_register_chain pid=4760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:26.812000 audit[4760]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc7e423a50 a2=0 a3=7ffc7e423a3c items=0 ppid=2307 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:26.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:27.592592 env[1320]: time="2025-08-13T00:47:27.592494512Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:27.594675 env[1320]: time="2025-08-13T00:47:27.594604089Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:27.596279 env[1320]: time="2025-08-13T00:47:27.596245345Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:27.597791 env[1320]: time="2025-08-13T00:47:27.597736615Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:27.598316 env[1320]: time="2025-08-13T00:47:27.598279560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 00:47:27.599407 env[1320]: time="2025-08-13T00:47:27.599382240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:47:27.600989 env[1320]: time="2025-08-13T00:47:27.600947622Z" level=info msg="CreateContainer within sandbox \"72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:47:27.621241 env[1320]: time="2025-08-13T00:47:27.621177502Z" level=info msg="CreateContainer within sandbox \"72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3ed3d33414e5845903ff700da2cc2c89dbabc8c0a366475471953df0ec48e78f\"" Aug 13 00:47:27.621855 env[1320]: time="2025-08-13T00:47:27.621829744Z" level=info msg="StartContainer for \"3ed3d33414e5845903ff700da2cc2c89dbabc8c0a366475471953df0ec48e78f\"" Aug 13 00:47:27.675578 env[1320]: time="2025-08-13T00:47:27.675528322Z" level=info msg="StartContainer for \"3ed3d33414e5845903ff700da2cc2c89dbabc8c0a366475471953df0ec48e78f\" returns successfully" Aug 13 00:47:27.801977 kubelet[2155]: I0813 00:47:27.801358 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5748bbc6f8-mwsj9" podStartSLOduration=2.775056034 podStartE2EDuration="12.801334087s" podCreationTimestamp="2025-08-13 00:47:15 +0000 UTC" firstStartedPulling="2025-08-13 00:47:15.991769796 +0000 UTC m=+45.542627405" lastFinishedPulling="2025-08-13 00:47:26.018047849 +0000 UTC m=+55.568905458" observedRunningTime="2025-08-13 00:47:26.794335485 +0000 UTC m=+56.345193094" watchObservedRunningTime="2025-08-13 00:47:27.801334087 +0000 UTC m=+57.352191697" Aug 13 00:47:28.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.21:22-10.0.0.1:53160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:47:28.540216 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:53160.service. Aug 13 00:47:28.541949 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:47:28.542066 kernel: audit: type=1130 audit(1755046048.539:438): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.21:22-10.0.0.1:53160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:28.582000 audit[4825]: USER_ACCT pid=4825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:28.584353 sshd[4825]: Accepted publickey for core from 10.0.0.1 port 53160 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:47:28.588000 audit[4825]: CRED_ACQ pid=4825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:28.589820 sshd[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:47:28.592847 kernel: audit: type=1101 audit(1755046048.582:439): pid=4825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:28.592904 kernel: audit: type=1103 audit(1755046048.588:440): pid=4825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:28.592939 kernel: audit: type=1006 audit(1755046048.588:441): pid=4825 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Aug 13 00:47:28.593619 systemd-logind[1304]: New session 9 of user core. Aug 13 00:47:28.594645 systemd[1]: Started session-9.scope. 
Aug 13 00:47:28.595089 kernel: audit: type=1300 audit(1755046048.588:441): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcadd1b40 a2=3 a3=0 items=0 ppid=1 pid=4825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:28.588000 audit[4825]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcadd1b40 a2=3 a3=0 items=0 ppid=1 pid=4825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:28.588000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:28.600041 kernel: audit: type=1327 audit(1755046048.588:441): proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:28.598000 audit[4825]: USER_START pid=4825 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:28.604243 kernel: audit: type=1105 audit(1755046048.598:442): pid=4825 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:28.604304 kernel: audit: type=1103 audit(1755046048.599:443): pid=4828 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:28.599000 audit[4828]: CRED_ACQ pid=4828 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:28.770793 sshd[4825]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:28.770000 audit[4825]: USER_END pid=4825 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:28.772883 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:53160.service: Deactivated successfully. Aug 13 00:47:28.773655 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:47:28.776130 kernel: audit: type=1106 audit(1755046048.770:444): pid=4825 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:28.770000 audit[4825]: CRED_DISP pid=4825 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:28.776631 systemd-logind[1304]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:47:28.777548 systemd-logind[1304]: Removed session 9. 
Aug 13 00:47:28.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.21:22-10.0.0.1:53160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:28.780111 kernel: audit: type=1104 audit(1755046048.770:445): pid=4825 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:30.544712 env[1320]: time="2025-08-13T00:47:30.544664247Z" level=info msg="StopPodSandbox for \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\"" Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.579 [WARNING][4854] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0", GenerateName:"calico-kube-controllers-75878946fc-", Namespace:"calico-system", SelfLink:"", UID:"7f9c7221-f628-46e4-b71d-64f1c6b90e3b", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75878946fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380", Pod:"calico-kube-controllers-75878946fc-mwwk8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6e14598f81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.579 [INFO][4854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.579 [INFO][4854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" iface="eth0" netns="" Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.579 [INFO][4854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.579 [INFO][4854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.608 [INFO][4863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" HandleID="k8s-pod-network.5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.608 [INFO][4863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.608 [INFO][4863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.615 [WARNING][4863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" HandleID="k8s-pod-network.5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.615 [INFO][4863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" HandleID="k8s-pod-network.5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.617 [INFO][4863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:30.621335 env[1320]: 2025-08-13 00:47:30.619 [INFO][4854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:30.621335 env[1320]: time="2025-08-13T00:47:30.621301125Z" level=info msg="TearDown network for sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\" successfully" Aug 13 00:47:30.621335 env[1320]: time="2025-08-13T00:47:30.621333687Z" level=info msg="StopPodSandbox for \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\" returns successfully" Aug 13 00:47:30.622159 env[1320]: time="2025-08-13T00:47:30.621936294Z" level=info msg="RemovePodSandbox for \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\"" Aug 13 00:47:30.622159 env[1320]: time="2025-08-13T00:47:30.621974306Z" level=info msg="Forcibly stopping sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\"" Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.656 [WARNING][4881] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0", GenerateName:"calico-kube-controllers-75878946fc-", Namespace:"calico-system", SelfLink:"", UID:"7f9c7221-f628-46e4-b71d-64f1c6b90e3b", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75878946fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33f77dd3a14dd24c9d578f3c182bb5fe57e0f716e44d52c2a745936d71b2d380", Pod:"calico-kube-controllers-75878946fc-mwwk8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6e14598f81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.657 [INFO][4881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.657 [INFO][4881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" iface="eth0" netns="" Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.658 [INFO][4881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.658 [INFO][4881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.680 [INFO][4891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" HandleID="k8s-pod-network.5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.680 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.680 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.687 [WARNING][4891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" HandleID="k8s-pod-network.5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.687 [INFO][4891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" HandleID="k8s-pod-network.5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Workload="localhost-k8s-calico--kube--controllers--75878946fc--mwwk8-eth0" Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.688 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:30.695519 env[1320]: 2025-08-13 00:47:30.693 [INFO][4881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7" Aug 13 00:47:30.696120 env[1320]: time="2025-08-13T00:47:30.695557684Z" level=info msg="TearDown network for sandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\" successfully" Aug 13 00:47:30.757721 env[1320]: time="2025-08-13T00:47:30.757649034Z" level=info msg="RemovePodSandbox \"5820ae14eaaaf37130329b56398ac32c8917629d35f83027d73f5bf83304abd7\" returns successfully" Aug 13 00:47:30.758465 env[1320]: time="2025-08-13T00:47:30.758430100Z" level=info msg="StopPodSandbox for \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\"" Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.808 [WARNING][4910] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--n995t-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3c9e603b-de35-4367-b9d4-819dfaad6063", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094", Pod:"goldmane-58fd7646b9-n995t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70022f28513", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.809 [INFO][4910] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.809 [INFO][4910] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" iface="eth0" netns="" Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.809 [INFO][4910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.809 [INFO][4910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.832 [INFO][4918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" HandleID="k8s-pod-network.4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.832 [INFO][4918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.833 [INFO][4918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.838 [WARNING][4918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" HandleID="k8s-pod-network.4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.838 [INFO][4918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" HandleID="k8s-pod-network.4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.840 [INFO][4918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:30.844984 env[1320]: 2025-08-13 00:47:30.842 [INFO][4910] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:30.844984 env[1320]: time="2025-08-13T00:47:30.843909253Z" level=info msg="TearDown network for sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\" successfully" Aug 13 00:47:30.844984 env[1320]: time="2025-08-13T00:47:30.843968455Z" level=info msg="StopPodSandbox for \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\" returns successfully" Aug 13 00:47:30.844984 env[1320]: time="2025-08-13T00:47:30.844726328Z" level=info msg="RemovePodSandbox for \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\"" Aug 13 00:47:30.844984 env[1320]: time="2025-08-13T00:47:30.844770732Z" level=info msg="Forcibly stopping sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\"" Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.884 [WARNING][4936] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--n995t-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3c9e603b-de35-4367-b9d4-819dfaad6063", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094", Pod:"goldmane-58fd7646b9-n995t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70022f28513", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.884 [INFO][4936] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.884 [INFO][4936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" iface="eth0" netns="" Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.884 [INFO][4936] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.884 [INFO][4936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.906 [INFO][4945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" HandleID="k8s-pod-network.4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.906 [INFO][4945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.906 [INFO][4945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.913 [WARNING][4945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" HandleID="k8s-pod-network.4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.913 [INFO][4945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" HandleID="k8s-pod-network.4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Workload="localhost-k8s-goldmane--58fd7646b9--n995t-eth0" Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.915 [INFO][4945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:30.919529 env[1320]: 2025-08-13 00:47:30.917 [INFO][4936] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88" Aug 13 00:47:30.920258 env[1320]: time="2025-08-13T00:47:30.919561357Z" level=info msg="TearDown network for sandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\" successfully" Aug 13 00:47:30.923714 env[1320]: time="2025-08-13T00:47:30.923672820Z" level=info msg="RemovePodSandbox \"4b2e7026f8cf5905ad04ff0ede72a9c20b947207170eb1d643c5c133e4876e88\" returns successfully" Aug 13 00:47:30.924397 env[1320]: time="2025-08-13T00:47:30.924326173Z" level=info msg="StopPodSandbox for \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\"" Aug 13 00:47:30.995412 env[1320]: 2025-08-13 00:47:30.962 [WARNING][4963] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0", GenerateName:"calico-apiserver-7dfcc9cb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"e908a700-6b74-4dfa-9e94-825de953b563", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfcc9cb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19", Pod:"calico-apiserver-7dfcc9cb65-vpjwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d93e9e302a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:30.995412 env[1320]: 2025-08-13 00:47:30.962 [INFO][4963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:30.995412 env[1320]: 2025-08-13 
00:47:30.962 [INFO][4963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" iface="eth0" netns="" Aug 13 00:47:30.995412 env[1320]: 2025-08-13 00:47:30.962 [INFO][4963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:30.995412 env[1320]: 2025-08-13 00:47:30.962 [INFO][4963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:30.995412 env[1320]: 2025-08-13 00:47:30.984 [INFO][4972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" HandleID="k8s-pod-network.7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:30.995412 env[1320]: 2025-08-13 00:47:30.984 [INFO][4972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:30.995412 env[1320]: 2025-08-13 00:47:30.984 [INFO][4972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:30.995412 env[1320]: 2025-08-13 00:47:30.990 [WARNING][4972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" HandleID="k8s-pod-network.7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:30.995412 env[1320]: 2025-08-13 00:47:30.990 [INFO][4972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" HandleID="k8s-pod-network.7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:30.995412 env[1320]: 2025-08-13 00:47:30.991 [INFO][4972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:30.995412 env[1320]: 2025-08-13 00:47:30.993 [INFO][4963] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:30.996417 env[1320]: time="2025-08-13T00:47:30.995449699Z" level=info msg="TearDown network for sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\" successfully" Aug 13 00:47:30.996417 env[1320]: time="2025-08-13T00:47:30.995485387Z" level=info msg="StopPodSandbox for \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\" returns successfully" Aug 13 00:47:30.996417 env[1320]: time="2025-08-13T00:47:30.996197253Z" level=info msg="RemovePodSandbox for \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\"" Aug 13 00:47:30.996417 env[1320]: time="2025-08-13T00:47:30.996248179Z" level=info msg="Forcibly stopping sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\"" Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.029 [WARNING][4989] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0", GenerateName:"calico-apiserver-7dfcc9cb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"e908a700-6b74-4dfa-9e94-825de953b563", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfcc9cb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19", Pod:"calico-apiserver-7dfcc9cb65-vpjwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d93e9e302a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.031 [INFO][4989] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.031 [INFO][4989] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" iface="eth0" netns="" Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.031 [INFO][4989] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.031 [INFO][4989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.054 [INFO][4998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" HandleID="k8s-pod-network.7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.055 [INFO][4998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.055 [INFO][4998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.061 [WARNING][4998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" HandleID="k8s-pod-network.7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.061 [INFO][4998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" HandleID="k8s-pod-network.7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--vpjwq-eth0" Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.062 [INFO][4998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:31.066793 env[1320]: 2025-08-13 00:47:31.064 [INFO][4989] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0" Aug 13 00:47:31.067340 env[1320]: time="2025-08-13T00:47:31.066816802Z" level=info msg="TearDown network for sandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\" successfully" Aug 13 00:47:31.070610 env[1320]: time="2025-08-13T00:47:31.070582757Z" level=info msg="RemovePodSandbox \"7a4f913ecb2c039c85aefdc6cfd1d2258045241d5ac39a046b5a65159f819ee0\" returns successfully" Aug 13 00:47:31.071131 env[1320]: time="2025-08-13T00:47:31.071106683Z" level=info msg="StopPodSandbox for \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\"" Aug 13 00:47:31.149181 env[1320]: 2025-08-13 00:47:31.107 [WARNING][5016] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dfdgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"222af0ae-3446-4630-b6ec-423608ba3718", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172", Pod:"csi-node-driver-dfdgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie20057f85cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:31.149181 env[1320]: 2025-08-13 00:47:31.107 [INFO][5016] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:31.149181 env[1320]: 
2025-08-13 00:47:31.107 [INFO][5016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" iface="eth0" netns="" Aug 13 00:47:31.149181 env[1320]: 2025-08-13 00:47:31.107 [INFO][5016] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:31.149181 env[1320]: 2025-08-13 00:47:31.107 [INFO][5016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:31.149181 env[1320]: 2025-08-13 00:47:31.130 [INFO][5025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" HandleID="k8s-pod-network.12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:31.149181 env[1320]: 2025-08-13 00:47:31.131 [INFO][5025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:31.149181 env[1320]: 2025-08-13 00:47:31.131 [INFO][5025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:31.149181 env[1320]: 2025-08-13 00:47:31.138 [WARNING][5025] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" HandleID="k8s-pod-network.12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:31.149181 env[1320]: 2025-08-13 00:47:31.138 [INFO][5025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" HandleID="k8s-pod-network.12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:31.149181 env[1320]: 2025-08-13 00:47:31.139 [INFO][5025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:31.149181 env[1320]: 2025-08-13 00:47:31.147 [INFO][5016] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:31.149181 env[1320]: time="2025-08-13T00:47:31.149142899Z" level=info msg="TearDown network for sandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\" successfully" Aug 13 00:47:31.149181 env[1320]: time="2025-08-13T00:47:31.149176673Z" level=info msg="StopPodSandbox for \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\" returns successfully" Aug 13 00:47:31.150247 env[1320]: time="2025-08-13T00:47:31.150200631Z" level=info msg="RemovePodSandbox for \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\"" Aug 13 00:47:31.150444 env[1320]: time="2025-08-13T00:47:31.150357941Z" level=info msg="Forcibly stopping sandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\"" Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.187 [WARNING][5043] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dfdgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"222af0ae-3446-4630-b6ec-423608ba3718", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172", Pod:"csi-node-driver-dfdgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie20057f85cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.188 [INFO][5043] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.188 [INFO][5043] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" iface="eth0" netns="" Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.188 [INFO][5043] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.188 [INFO][5043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.210 [INFO][5052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" HandleID="k8s-pod-network.12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.211 [INFO][5052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.211 [INFO][5052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.218 [WARNING][5052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" HandleID="k8s-pod-network.12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.218 [INFO][5052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" HandleID="k8s-pod-network.12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Workload="localhost-k8s-csi--node--driver--dfdgb-eth0" Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.220 [INFO][5052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:31.223910 env[1320]: 2025-08-13 00:47:31.222 [INFO][5043] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af" Aug 13 00:47:31.224447 env[1320]: time="2025-08-13T00:47:31.223994327Z" level=info msg="TearDown network for sandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\" successfully" Aug 13 00:47:31.433572 env[1320]: time="2025-08-13T00:47:31.433403480Z" level=info msg="RemovePodSandbox \"12af653715988ae5cd92c1742e069b2bd261609a3d3b0db608414ea1c77a71af\" returns successfully" Aug 13 00:47:31.434270 env[1320]: time="2025-08-13T00:47:31.434238549Z" level=info msg="StopPodSandbox for \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\"" Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.565 [WARNING][5069] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" WorkloadEndpoint="localhost-k8s-whisker--65bd9f9cbb--kc8mn-eth0" Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.565 [INFO][5069] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.565 [INFO][5069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" iface="eth0" netns="" Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.565 [INFO][5069] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.565 [INFO][5069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.583 [INFO][5078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" HandleID="k8s-pod-network.df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Workload="localhost-k8s-whisker--65bd9f9cbb--kc8mn-eth0" Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.583 [INFO][5078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.583 [INFO][5078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.592 [WARNING][5078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" HandleID="k8s-pod-network.df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Workload="localhost-k8s-whisker--65bd9f9cbb--kc8mn-eth0" Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.592 [INFO][5078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" HandleID="k8s-pod-network.df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Workload="localhost-k8s-whisker--65bd9f9cbb--kc8mn-eth0" Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.593 [INFO][5078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:31.598473 env[1320]: 2025-08-13 00:47:31.595 [INFO][5069] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:31.599691 env[1320]: time="2025-08-13T00:47:31.598527672Z" level=info msg="TearDown network for sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\" successfully" Aug 13 00:47:31.599691 env[1320]: time="2025-08-13T00:47:31.598567187Z" level=info msg="StopPodSandbox for \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\" returns successfully" Aug 13 00:47:31.599691 env[1320]: time="2025-08-13T00:47:31.599149324Z" level=info msg="RemovePodSandbox for \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\"" Aug 13 00:47:31.599691 env[1320]: time="2025-08-13T00:47:31.599181886Z" level=info msg="Forcibly stopping sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\"" Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.626 [WARNING][5097] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" WorkloadEndpoint="localhost-k8s-whisker--65bd9f9cbb--kc8mn-eth0" Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.627 [INFO][5097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.627 [INFO][5097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" iface="eth0" netns="" Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.627 [INFO][5097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.627 [INFO][5097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.645 [INFO][5105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" HandleID="k8s-pod-network.df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Workload="localhost-k8s-whisker--65bd9f9cbb--kc8mn-eth0" Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.645 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.645 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.651 [WARNING][5105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" HandleID="k8s-pod-network.df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Workload="localhost-k8s-whisker--65bd9f9cbb--kc8mn-eth0" Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.651 [INFO][5105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" HandleID="k8s-pod-network.df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Workload="localhost-k8s-whisker--65bd9f9cbb--kc8mn-eth0" Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.652 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:31.655226 env[1320]: 2025-08-13 00:47:31.653 [INFO][5097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da" Aug 13 00:47:31.716173 env[1320]: time="2025-08-13T00:47:31.655449533Z" level=info msg="TearDown network for sandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\" successfully" Aug 13 00:47:32.401845 env[1320]: time="2025-08-13T00:47:32.401772432Z" level=info msg="RemovePodSandbox \"df0ed7056a41fa02a4c4a5d8c3b893cad7f10a66432edfcee332d2e39c3210da\" returns successfully" Aug 13 00:47:32.402428 env[1320]: time="2025-08-13T00:47:32.402394996Z" level=info msg="StopPodSandbox for \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\"" Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.457 [WARNING][5122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c322dc19-862d-4e41-8d08-222e739f135c", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4", Pod:"coredns-7c65d6cfc9-8tqtr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95007698936", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.458 [INFO][5122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.458 [INFO][5122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" iface="eth0" netns="" Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.458 [INFO][5122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.458 [INFO][5122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.478 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" HandleID="k8s-pod-network.01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.478 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.478 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.485 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" HandleID="k8s-pod-network.01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.485 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" HandleID="k8s-pod-network.01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.487 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:32.490692 env[1320]: 2025-08-13 00:47:32.489 [INFO][5122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:32.491273 env[1320]: time="2025-08-13T00:47:32.490709166Z" level=info msg="TearDown network for sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\" successfully" Aug 13 00:47:32.491273 env[1320]: time="2025-08-13T00:47:32.490742439Z" level=info msg="StopPodSandbox for \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\" returns successfully" Aug 13 00:47:32.491512 env[1320]: time="2025-08-13T00:47:32.491434775Z" level=info msg="RemovePodSandbox for \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\"" Aug 13 00:47:32.491583 env[1320]: time="2025-08-13T00:47:32.491515489Z" level=info msg="Forcibly stopping sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\"" Aug 13 00:47:32.496193 env[1320]: time="2025-08-13T00:47:32.496162887Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:32.501176 env[1320]: time="2025-08-13T00:47:32.501130353Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:32.506973 env[1320]: time="2025-08-13T00:47:32.506901298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:32.509811 env[1320]: time="2025-08-13T00:47:32.509767359Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:32.510428 env[1320]: time="2025-08-13T00:47:32.510387959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:47:32.512696 env[1320]: time="2025-08-13T00:47:32.512642296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:47:32.513553 env[1320]: time="2025-08-13T00:47:32.513432028Z" level=info msg="CreateContainer within sandbox \"6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:47:32.539378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815559758.mount: Deactivated successfully. 
Aug 13 00:47:32.550267 env[1320]: time="2025-08-13T00:47:32.550192717Z" level=info msg="CreateContainer within sandbox \"6c7029d71292e46209a7cfc6da6d009222101abf405e56370fcba71fe43a4d19\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5524f1f82fc1ab421ec9a3622d8307cf15173786a56b96f27e7ef4b419e73e99\"" Aug 13 00:47:32.551515 env[1320]: time="2025-08-13T00:47:32.551463734Z" level=info msg="StartContainer for \"5524f1f82fc1ab421ec9a3622d8307cf15173786a56b96f27e7ef4b419e73e99\"" Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.526 [WARNING][5149] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c322dc19-862d-4e41-8d08-222e739f135c", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e0ed2fc88b0f1485aee9665c4793216859b27e52be25f424d0bcaa02dc006f4", Pod:"coredns-7c65d6cfc9-8tqtr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95007698936", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.526 [INFO][5149] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.526 [INFO][5149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" iface="eth0" netns="" Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.526 [INFO][5149] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.526 [INFO][5149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.553 [INFO][5158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" HandleID="k8s-pod-network.01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.553 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.553 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.565 [WARNING][5158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" HandleID="k8s-pod-network.01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.565 [INFO][5158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" HandleID="k8s-pod-network.01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Workload="localhost-k8s-coredns--7c65d6cfc9--8tqtr-eth0" Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.567 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:32.572556 env[1320]: 2025-08-13 00:47:32.569 [INFO][5149] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5" Aug 13 00:47:32.576541 env[1320]: time="2025-08-13T00:47:32.572566907Z" level=info msg="TearDown network for sandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\" successfully" Aug 13 00:47:32.585211 env[1320]: time="2025-08-13T00:47:32.585141209Z" level=info msg="RemovePodSandbox \"01f2902860fcd624ae3ef2831799b4821b7e607de6644e57c17cf4458c8497a5\" returns successfully" Aug 13 00:47:32.585549 env[1320]: time="2025-08-13T00:47:32.585518857Z" level=info msg="StopPodSandbox for \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\"" Aug 13 00:47:32.623737 env[1320]: time="2025-08-13T00:47:32.623655212Z" level=info msg="StartContainer for \"5524f1f82fc1ab421ec9a3622d8307cf15173786a56b96f27e7ef4b419e73e99\" returns successfully" Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.627 [WARNING][5200] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1e4136fa-8827-4ebd-a668-a252e7a55a56", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b", Pod:"coredns-7c65d6cfc9-n9cpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a49022853e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.628 [INFO][5200] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.628 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" iface="eth0" netns="" Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.628 [INFO][5200] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.628 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.652 [INFO][5219] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" HandleID="k8s-pod-network.c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.652 [INFO][5219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.652 [INFO][5219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.661 [WARNING][5219] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" HandleID="k8s-pod-network.c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.661 [INFO][5219] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" HandleID="k8s-pod-network.c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.666 [INFO][5219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:32.676637 env[1320]: 2025-08-13 00:47:32.674 [INFO][5200] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:32.676637 env[1320]: time="2025-08-13T00:47:32.676598365Z" level=info msg="TearDown network for sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\" successfully" Aug 13 00:47:32.677395 env[1320]: time="2025-08-13T00:47:32.676640766Z" level=info msg="StopPodSandbox for \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\" returns successfully" Aug 13 00:47:32.678374 env[1320]: time="2025-08-13T00:47:32.678316553Z" level=info msg="RemovePodSandbox for \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\"" Aug 13 00:47:32.678438 env[1320]: time="2025-08-13T00:47:32.678356248Z" level=info msg="Forcibly stopping sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\"" Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.718 [WARNING][5242] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1e4136fa-8827-4ebd-a668-a252e7a55a56", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25e94e8423f89681b17a2445321d337af244433863a2ebac2a0fd0d97d07476b", Pod:"coredns-7c65d6cfc9-n9cpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a49022853e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.719 [INFO][5242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.719 [INFO][5242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" iface="eth0" netns="" Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.719 [INFO][5242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.719 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.743 [INFO][5251] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" HandleID="k8s-pod-network.c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.743 [INFO][5251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.743 [INFO][5251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.752 [WARNING][5251] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" HandleID="k8s-pod-network.c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.752 [INFO][5251] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" HandleID="k8s-pod-network.c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Workload="localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0" Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.754 [INFO][5251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:32.759116 env[1320]: 2025-08-13 00:47:32.757 [INFO][5242] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557" Aug 13 00:47:32.759826 env[1320]: time="2025-08-13T00:47:32.759139807Z" level=info msg="TearDown network for sandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\" successfully" Aug 13 00:47:32.763522 env[1320]: time="2025-08-13T00:47:32.763467927Z" level=info msg="RemovePodSandbox \"c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557\" returns successfully" Aug 13 00:47:32.764147 env[1320]: time="2025-08-13T00:47:32.764116300Z" level=info msg="StopPodSandbox for \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\"" Aug 13 00:47:32.834703 kubelet[2155]: I0813 00:47:32.834387 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dfcc9cb65-vpjwq" podStartSLOduration=32.454017463 podStartE2EDuration="45.834362179s" podCreationTimestamp="2025-08-13 00:46:47 +0000 UTC" firstStartedPulling="2025-08-13 00:47:19.131467411 +0000 UTC m=+48.682325020" lastFinishedPulling="2025-08-13 00:47:32.511812127 +0000 UTC m=+62.062669736" observedRunningTime="2025-08-13 00:47:32.832524524 +0000 UTC m=+62.383382133" watchObservedRunningTime="2025-08-13 00:47:32.834362179 +0000 UTC m=+62.385219788" Aug 13 00:47:32.844000 audit[5285]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=5285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:32.844000 audit[5285]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd58e6cfa0 a2=0 a3=7ffd58e6cf8c items=0 ppid=2307 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:32.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:32.849000 audit[5285]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=5285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:32.849000 audit[5285]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd58e6cfa0 a2=0 a3=7ffd58e6cf8c items=0 ppid=2307 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:32.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:32.857999 env[1320]: 2025-08-13 
00:47:32.805 [WARNING][5269] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0", GenerateName:"calico-apiserver-7dfcc9cb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"34f400c3-01be-4d31-9cf0-4542774ed01f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfcc9cb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234", Pod:"calico-apiserver-7dfcc9cb65-9wsgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0814756a6c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:32.857999 env[1320]: 2025-08-13 00:47:32.805 [INFO][5269] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:32.857999 env[1320]: 2025-08-13 00:47:32.805 [INFO][5269] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" iface="eth0" netns="" Aug 13 00:47:32.857999 env[1320]: 2025-08-13 00:47:32.805 [INFO][5269] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:32.857999 env[1320]: 2025-08-13 00:47:32.805 [INFO][5269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:32.857999 env[1320]: 2025-08-13 00:47:32.844 [INFO][5278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" HandleID="k8s-pod-network.047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:32.857999 env[1320]: 2025-08-13 00:47:32.844 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:32.857999 env[1320]: 2025-08-13 00:47:32.844 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:32.857999 env[1320]: 2025-08-13 00:47:32.851 [WARNING][5278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" HandleID="k8s-pod-network.047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:32.857999 env[1320]: 2025-08-13 00:47:32.851 [INFO][5278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" HandleID="k8s-pod-network.047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:32.857999 env[1320]: 2025-08-13 00:47:32.853 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:32.857999 env[1320]: 2025-08-13 00:47:32.855 [INFO][5269] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:32.858593 env[1320]: time="2025-08-13T00:47:32.858038264Z" level=info msg="TearDown network for sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\" successfully" Aug 13 00:47:32.858593 env[1320]: time="2025-08-13T00:47:32.858076396Z" level=info msg="StopPodSandbox for \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\" returns successfully" Aug 13 00:47:32.858874 env[1320]: time="2025-08-13T00:47:32.858830271Z" level=info msg="RemovePodSandbox for \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\"" Aug 13 00:47:32.858950 env[1320]: time="2025-08-13T00:47:32.858890555Z" level=info msg="Forcibly stopping sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\"" Aug 13 00:47:32.879614 env[1320]: time="2025-08-13T00:47:32.879522341Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:32.883393 env[1320]: time="2025-08-13T00:47:32.883312930Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:32.885840 env[1320]: time="2025-08-13T00:47:32.885761195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:32.887478 env[1320]: time="2025-08-13T00:47:32.887443585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:32.888215 env[1320]: time="2025-08-13T00:47:32.888191948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:47:32.890490 env[1320]: time="2025-08-13T00:47:32.890456254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:47:32.891494 env[1320]: time="2025-08-13T00:47:32.891429064Z" level=info msg="CreateContainer within sandbox \"44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:47:32.908318 env[1320]: time="2025-08-13T00:47:32.908253720Z" level=info msg="CreateContainer within sandbox 
\"44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c0219a5fbf77f34d0ff9b3e6a21907fbfc5d0d0b55fc7c1c67690f5c070883c5\"" Aug 13 00:47:32.909412 env[1320]: time="2025-08-13T00:47:32.909365144Z" level=info msg="StartContainer for \"c0219a5fbf77f34d0ff9b3e6a21907fbfc5d0d0b55fc7c1c67690f5c070883c5\"" Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.913 [WARNING][5298] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0", GenerateName:"calico-apiserver-7dfcc9cb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"34f400c3-01be-4d31-9cf0-4542774ed01f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 46, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfcc9cb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44b4d9278f791314d81dfdfd3d623f279b1fbcc554b286587d205383af4ad234", Pod:"calico-apiserver-7dfcc9cb65-9wsgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0814756a6c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.913 [INFO][5298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.913 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" iface="eth0" netns="" Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.913 [INFO][5298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.913 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.943 [INFO][5311] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" HandleID="k8s-pod-network.047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.943 [INFO][5311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.943 [INFO][5311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.950 [WARNING][5311] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" HandleID="k8s-pod-network.047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.950 [INFO][5311] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" HandleID="k8s-pod-network.047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Workload="localhost-k8s-calico--apiserver--7dfcc9cb65--9wsgx-eth0" Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.952 [INFO][5311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:32.959086 env[1320]: 2025-08-13 00:47:32.954 [INFO][5298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340" Aug 13 00:47:32.959086 env[1320]: time="2025-08-13T00:47:32.957309000Z" level=info msg="TearDown network for sandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\" successfully" Aug 13 00:47:32.965822 env[1320]: time="2025-08-13T00:47:32.965737007Z" level=info msg="RemovePodSandbox \"047788ecf9af356105959f2e54cc0e70b738456fa5bc2ef1b51d1ade900ed340\" returns successfully" Aug 13 00:47:32.998214 env[1320]: time="2025-08-13T00:47:32.998132224Z" level=info msg="StartContainer for \"c0219a5fbf77f34d0ff9b3e6a21907fbfc5d0d0b55fc7c1c67690f5c070883c5\" returns successfully" Aug 13 00:47:33.692453 systemd[1]: run-containerd-runc-k8s.io-007395cb433dc5a2dda10c110cfccf43242a501b2429d844c581535f40355780-runc.1FRYZH.mount: Deactivated successfully. Aug 13 00:47:33.774523 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:53170.service. Aug 13 00:47:33.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.21:22-10.0.0.1:53170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:47:33.776911 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:47:33.777012 kernel: audit: type=1130 audit(1755046053.773:449): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.21:22-10.0.0.1:53170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:33.816000 audit[5374]: USER_ACCT pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:33.817825 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 53170 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:47:33.827088 kernel: audit: type=1101 audit(1755046053.816:450): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:33.827265 kernel: audit: type=1103 audit(1755046053.821:451): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:33.821000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:33.823483 sshd[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:47:33.835852 kernel: audit: type=1006 audit(1755046053.821:452): pid=5374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Aug 13 00:47:33.835911 kernel: audit: type=1300 audit(1755046053.821:452): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe266b9e60 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:33.821000 audit[5374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe266b9e60 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:33.833644 systemd[1]: Started session-10.scope. Aug 13 00:47:33.835080 systemd-logind[1304]: New session 10 of user core. 
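Note on the two sandbox teardowns logged above (c5bc82e4… for coredns-7c65d6cfc9-n9cpw and 047788ec… for calico-apiserver-7dfcc9cb65-9wsgx): the Calico CNI plugin runs the same fixed sequence each time. It skips the WEP delete when CNI_CONTAINERID does not match, skips the netns cleanup when no netns name is passed, then takes the host-wide IPAM lock, tries to release the allocation by handle ID, falls back to the workload ID when the handle has nothing ("Asked to release address but it doesn't exist. Ignoring"), and finally drops the lock. The Go sketch below only illustrates that ordering against an assumed in-memory store; the ipamStore type and its fields are invented for the example and are not Calico's IPAM API, while the handle and workload strings are copied from the log.

package main

import (
	"fmt"
	"sync"
)

// ipamStore stands in for Calico's IPAM backend. The type, its fields and
// its method are assumptions made for this sketch only, not Calico's API.
type ipamStore struct {
	mu         sync.Mutex          // models the "host-wide IPAM lock" in the log
	byHandle   map[string][]string // allocations keyed by handle ID
	byWorkload map[string][]string // allocations keyed by workload ID
}

// releaseForTeardown follows the logged order: take the host-wide lock, try
// to release by handle ID, fall back to workload ID, then drop the lock.
func (s *ipamStore) releaseForTeardown(handleID, workloadID string) {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if ips, ok := s.byHandle[handleID]; ok {
		delete(s.byHandle, handleID)
		fmt.Printf("released %v via handle %s\n", ips, handleID)
		return
	}
	// "Asked to release address but it doesn't exist. Ignoring" -> fall back.
	if ips, ok := s.byWorkload[workloadID]; ok {
		delete(s.byWorkload, workloadID)
		fmt.Printf("released %v via workload %s\n", ips, workloadID)
		return
	}
	fmt.Println("nothing allocated under either key; teardown is a no-op")
}

func main() {
	s := &ipamStore{
		byHandle: map[string][]string{},
		byWorkload: map[string][]string{
			"localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0": {"192.168.88.135/32"},
		},
	}
	s.releaseForTeardown(
		"k8s-pod-network.c5bc82e4e2501c9c0fb5fa89fd91f6a28c7202ceefb46ac14c731b0107553557",
		"localhost-k8s-coredns--7c65d6cfc9--n9cpw-eth0",
	)
}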
Aug 13 00:47:33.821000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:33.839089 kernel: audit: type=1327 audit(1755046053.821:452): proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:33.840000 audit[5374]: USER_START pid=5374 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:33.850441 kernel: audit: type=1105 audit(1755046053.840:453): pid=5374 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:33.850485 kernel: audit: type=1103 audit(1755046053.842:454): pid=5377 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:33.842000 audit[5377]: CRED_ACQ pid=5377 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:33.875981 kubelet[2155]: I0813 00:47:33.875899 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dfcc9cb65-9wsgx" podStartSLOduration=33.214415825 podStartE2EDuration="46.875876156s" podCreationTimestamp="2025-08-13 00:46:47 +0000 UTC" firstStartedPulling="2025-08-13 00:47:19.228182467 +0000 UTC m=+48.779040076" lastFinishedPulling="2025-08-13 00:47:32.889642807 +0000 UTC m=+62.440500407" observedRunningTime="2025-08-13 00:47:33.859895443 +0000 UTC m=+63.410753052" watchObservedRunningTime="2025-08-13 00:47:33.875876156 +0000 UTC m=+63.426733766" Aug 13 00:47:33.876000 audit[5379]: NETFILTER_CFG table=filter:124 family=2 entries=12 op=nft_register_rule pid=5379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:33.880224 kernel: audit: type=1325 audit(1755046053.876:455): table=filter:124 family=2 entries=12 op=nft_register_rule pid=5379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:33.876000 audit[5379]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff3d730010 a2=0 a3=7fff3d72fffc items=0 ppid=2307 pid=5379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:33.889061 kernel: audit: type=1300 audit(1755046053.876:455): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff3d730010 a2=0 a3=7fff3d72fffc items=0 ppid=2307 pid=5379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:33.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:33.892000 audit[5379]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=5379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:33.892000 
audit[5379]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff3d730010 a2=0 a3=7fff3d72fffc items=0 ppid=2307 pid=5379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:33.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:33.907000 audit[5385]: NETFILTER_CFG table=filter:126 family=2 entries=11 op=nft_register_rule pid=5385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:33.907000 audit[5385]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffec60065b0 a2=0 a3=7ffec600659c items=0 ppid=2307 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:33.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:33.914000 audit[5385]: NETFILTER_CFG table=nat:127 family=2 entries=29 op=nft_register_chain pid=5385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:33.914000 audit[5385]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffec60065b0 a2=0 a3=7ffec600659c items=0 ppid=2307 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:33.914000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:34.211025 sshd[5374]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:34.211000 audit[5374]: USER_END pid=5374 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:34.211000 audit[5374]: CRED_DISP pid=5374 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:34.215172 systemd-logind[1304]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:47:34.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.21:22-10.0.0.1:53170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:34.218463 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:53170.service: Deactivated successfully. Aug 13 00:47:34.219513 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:47:34.222230 systemd-logind[1304]: Removed session 10. 
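Note on the pod_startup_latency_tracker entries above (calico-apiserver-7dfcc9cb65-vpjwq and -9wsgx): the logged figures are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. the SLO figure excludes time spent pulling images. The snippet below just replays that arithmetic for the vpjwq entry using timestamps copied from the log; it is a sanity check on the numbers, inferred from the values rather than quoted from kubelet code.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the calico-apiserver-7dfcc9cb65-vpjwq entry.
	created := time.Date(2025, 8, 13, 0, 46, 47, 0, time.UTC)         // podCreationTimestamp
	firstPull := time.Date(2025, 8, 13, 0, 47, 19, 131467411, time.UTC) // firstStartedPulling
	lastPull := time.Date(2025, 8, 13, 0, 47, 32, 511812127, time.UTC)  // lastFinishedPulling
	observed := time.Date(2025, 8, 13, 0, 47, 32, 834362179, time.UTC)  // watchObservedRunningTime

	e2e := observed.Sub(created)          // matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull)  // matches podStartSLOduration

	fmt.Println(e2e) // 45.834362179s, as logged
	fmt.Println(slo) // 32.454017463s, as logged
}

The -9wsgx entry works out the same way to within a few nanoseconds of rounding.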
Aug 13 00:47:34.929000 audit[5396]: NETFILTER_CFG table=filter:128 family=2 entries=10 op=nft_register_rule pid=5396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:34.929000 audit[5396]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffebb0714f0 a2=0 a3=7ffebb0714dc items=0 ppid=2307 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:34.929000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:34.935000 audit[5396]: NETFILTER_CFG table=nat:129 family=2 entries=36 op=nft_register_chain pid=5396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:34.935000 audit[5396]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffebb0714f0 a2=0 a3=7ffebb0714dc items=0 ppid=2307 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:34.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:36.048590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2587328987.mount: Deactivated successfully. Aug 13 00:47:36.765833 env[1320]: time="2025-08-13T00:47:36.765754408Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:36.767806 env[1320]: time="2025-08-13T00:47:36.767762362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:37.003648 env[1320]: time="2025-08-13T00:47:37.003580063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:37.011789 env[1320]: time="2025-08-13T00:47:37.011715043Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:37.012560 env[1320]: time="2025-08-13T00:47:37.012521705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 00:47:37.014005 env[1320]: time="2025-08-13T00:47:37.013958253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:47:37.016386 env[1320]: time="2025-08-13T00:47:37.016266367Z" level=info msg="CreateContainer within sandbox \"db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:47:37.031581 env[1320]: time="2025-08-13T00:47:37.031522191Z" level=info msg="CreateContainer within sandbox \"db2b0b805f39debb021afac8abc73320312dfdc6e8bd36b954b22a8d24d9f094\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id 
\"bfb34db1dc61601e94d4e108eae4c98b2d05730e1984f90ab710ceb0da870c52\"" Aug 13 00:47:37.032325 env[1320]: time="2025-08-13T00:47:37.032279049Z" level=info msg="StartContainer for \"bfb34db1dc61601e94d4e108eae4c98b2d05730e1984f90ab710ceb0da870c52\"" Aug 13 00:47:37.098802 env[1320]: time="2025-08-13T00:47:37.098748680Z" level=info msg="StartContainer for \"bfb34db1dc61601e94d4e108eae4c98b2d05730e1984f90ab710ceb0da870c52\" returns successfully" Aug 13 00:47:38.148745 kubelet[2155]: I0813 00:47:38.148661 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-n995t" podStartSLOduration=30.24533934 podStartE2EDuration="47.086754739s" podCreationTimestamp="2025-08-13 00:46:51 +0000 UTC" firstStartedPulling="2025-08-13 00:47:20.172326844 +0000 UTC m=+49.723184443" lastFinishedPulling="2025-08-13 00:47:37.013742233 +0000 UTC m=+66.564599842" observedRunningTime="2025-08-13 00:47:38.085118852 +0000 UTC m=+67.635976461" watchObservedRunningTime="2025-08-13 00:47:38.086754739 +0000 UTC m=+67.637612348" Aug 13 00:47:38.162000 audit[5466]: NETFILTER_CFG table=filter:130 family=2 entries=10 op=nft_register_rule pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:38.162000 audit[5466]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffcca3fbd40 a2=0 a3=7ffcca3fbd2c items=0 ppid=2307 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:38.162000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:38.167000 audit[5466]: NETFILTER_CFG table=nat:131 family=2 entries=24 op=nft_register_rule pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:47:38.167000 audit[5466]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffcca3fbd40 a2=0 a3=7ffcca3fbd2c items=0 ppid=2307 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:38.167000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:47:39.219915 kernel: kauditd_printk_skb: 25 callbacks suppressed Aug 13 00:47:39.220504 kernel: audit: type=1130 audit(1755046059.212:466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.21:22-10.0.0.1:40204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:39.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.21:22-10.0.0.1:40204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:39.213798 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:40204.service. 
Aug 13 00:47:39.255000 audit[5490]: USER_ACCT pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:39.256596 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 40204 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:47:39.261812 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:47:39.260000 audit[5490]: CRED_ACQ pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:39.267801 kernel: audit: type=1101 audit(1755046059.255:467): pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:39.267964 kernel: audit: type=1103 audit(1755046059.260:468): pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:39.272310 kernel: audit: type=1006 audit(1755046059.260:469): pid=5490 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Aug 13 00:47:39.278886 kernel: audit: type=1300 audit(1755046059.260:469): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4bf64020 a2=3 a3=0 items=0 ppid=1 pid=5490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:39.260000 audit[5490]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4bf64020 a2=3 a3=0 items=0 ppid=1 pid=5490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:39.271886 systemd-logind[1304]: New session 11 of user core. Aug 13 00:47:39.279429 env[1320]: time="2025-08-13T00:47:39.271463113Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:39.279429 env[1320]: time="2025-08-13T00:47:39.277585435Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:39.272499 systemd[1]: Started session-11.scope. 
Aug 13 00:47:39.279962 env[1320]: time="2025-08-13T00:47:39.279932511Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:39.260000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:39.283112 kernel: audit: type=1327 audit(1755046059.260:469): proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:39.282000 audit[5490]: USER_START pid=5490 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:39.284262 env[1320]: time="2025-08-13T00:47:39.283829920Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:47:39.283000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:39.288439 env[1320]: time="2025-08-13T00:47:39.284353484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 00:47:39.288439 env[1320]: time="2025-08-13T00:47:39.287343981Z" level=info msg="CreateContainer within sandbox \"72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:47:39.292043 kernel: audit: type=1105 audit(1755046059.282:470): pid=5490 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:39.292136 kernel: audit: type=1103 audit(1755046059.283:471): pid=5493 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:39.318016 env[1320]: time="2025-08-13T00:47:39.317941905Z" level=info msg="CreateContainer within sandbox \"72eae21c7924e4fc9ef005daf2346824b098771b80769ed6bd53b0aa68dc6172\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f78f0b50fbf2c632b985bb96d882cbead2086d84229d7aa4009f955a321e50b6\"" Aug 13 00:47:39.318722 env[1320]: time="2025-08-13T00:47:39.318671019Z" level=info msg="StartContainer for \"f78f0b50fbf2c632b985bb96d882cbead2086d84229d7aa4009f955a321e50b6\"" Aug 13 00:47:39.760301 env[1320]: time="2025-08-13T00:47:39.760239475Z" level=info msg="StartContainer for \"f78f0b50fbf2c632b985bb96d882cbead2086d84229d7aa4009f955a321e50b6\" returns successfully" Aug 13 00:47:39.852120 kubelet[2155]: I0813 00:47:39.852083 2155 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:47:39.857496 kubelet[2155]: I0813 
00:47:39.857464 2155 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:47:40.109185 sshd[5490]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:40.109000 audit[5490]: USER_END pid=5490 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.112339 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:40216.service. Aug 13 00:47:40.121003 kernel: audit: type=1106 audit(1755046060.109:472): pid=5490 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.121372 kernel: audit: type=1130 audit(1755046060.114:473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.21:22-10.0.0.1:40216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:40.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.21:22-10.0.0.1:40216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:40.114000 audit[5490]: CRED_DISP pid=5490 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.21:22-10.0.0.1:40204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:40.117307 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:40204.service: Deactivated successfully. Aug 13 00:47:40.118502 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:47:40.118987 systemd-logind[1304]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:47:40.119831 systemd-logind[1304]: Removed session 11. 
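Note on the csi_plugin.go lines above: the kubelet validates and then registers the csi.tigera.io driver through the socket it logged, /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. Validation amounts to dialing that endpoint and asking the driver's CSI Identity service who it is. The sketch below does roughly that with gRPC and the CSI Go bindings; it is an illustration of the handshake, not a copy of the kubelet's client code.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial the registration endpoint the kubelet logged for csi.tigera.io.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Ask the driver's Identity service for its name and version, which is
	// the gist of what the kubelet's "validate a new CSI Driver" step does.
	info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(info.GetName(), info.GetVendorVersion()) // expect csi.tigera.io
}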
Aug 13 00:47:40.152000 audit[5543]: USER_ACCT pid=5543 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.153491 sshd[5543]: Accepted publickey for core from 10.0.0.1 port 40216 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:47:40.153000 audit[5543]: CRED_ACQ pid=5543 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.153000 audit[5543]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5d4a12a0 a2=3 a3=0 items=0 ppid=1 pid=5543 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:40.153000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:40.155336 sshd[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:47:40.159644 systemd-logind[1304]: New session 12 of user core. Aug 13 00:47:40.160341 systemd[1]: Started session-12.scope. Aug 13 00:47:40.163000 audit[5543]: USER_START pid=5543 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.165000 audit[5548]: CRED_ACQ pid=5548 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.314883 sshd[5543]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:40.315000 audit[5543]: USER_END pid=5543 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.315000 audit[5543]: CRED_DISP pid=5543 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.318421 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:40228.service. Aug 13 00:47:40.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.21:22-10.0.0.1:40228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:40.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.21:22-10.0.0.1:40216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:40.320647 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:40216.service: Deactivated successfully. Aug 13 00:47:40.322211 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:47:40.322999 systemd-logind[1304]: Session 12 logged out. Waiting for processes to exit. 
Aug 13 00:47:40.329880 systemd-logind[1304]: Removed session 12. Aug 13 00:47:40.361000 audit[5555]: USER_ACCT pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.362845 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 40228 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:47:40.362000 audit[5555]: CRED_ACQ pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.363000 audit[5555]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe803ddac0 a2=3 a3=0 items=0 ppid=1 pid=5555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:40.363000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:40.364466 sshd[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:47:40.369366 systemd-logind[1304]: New session 13 of user core. Aug 13 00:47:40.370379 systemd[1]: Started session-13.scope. Aug 13 00:47:40.376000 audit[5555]: USER_START pid=5555 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.377000 audit[5560]: CRED_ACQ pid=5560 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.502915 sshd[5555]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:40.503000 audit[5555]: USER_END pid=5555 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.503000 audit[5555]: CRED_DISP pid=5555 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:40.505920 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:40228.service: Deactivated successfully. Aug 13 00:47:40.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.21:22-10.0.0.1:40228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:40.507082 systemd-logind[1304]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:47:40.507105 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:47:40.508062 systemd-logind[1304]: Removed session 13. 
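Note on the PROCTITLE fields in the audit records above: they are hex-encoded command lines with NUL-separated arguments. The two strings that recur in this section decode to "iptables-restore -w 5 -W 100000 --noflush --counters" (the kube-proxy/Calico rule reloads behind the NETFILTER_CFG entries) and "sshd: core [priv]" (the privileged sshd process for each login). A small decoder:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into the command
// line; the kernel stores argv with NUL bytes between the arguments.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	for _, h := range []string{
		"69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
		"737368643A20636F7265205B707269765D",
	} {
		s, err := decodeProctitle(h)
		if err != nil {
			fmt.Println("bad proctitle:", err)
			continue
		}
		fmt.Println(s)
	}
	// Output:
	// iptables-restore -w 5 -W 100000 --noflush --counters
	// sshd: core [priv]
}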
Aug 13 00:47:44.544631 kubelet[2155]: E0813 00:47:44.544586 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:45.506238 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:40234.service. Aug 13 00:47:45.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.21:22-10.0.0.1:40234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:45.507716 kernel: kauditd_printk_skb: 23 callbacks suppressed Aug 13 00:47:45.507790 kernel: audit: type=1130 audit(1755046065.505:493): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.21:22-10.0.0.1:40234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:45.541000 audit[5571]: USER_ACCT pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.543301 sshd[5571]: Accepted publickey for core from 10.0.0.1 port 40234 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:47:45.547052 kernel: audit: type=1101 audit(1755046065.541:494): pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.547107 kernel: audit: type=1103 audit(1755046065.545:495): pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.545000 audit[5571]: CRED_ACQ pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.547427 sshd[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:47:45.551068 systemd-logind[1304]: New session 14 of user core. Aug 13 00:47:45.552149 systemd[1]: Started session-14.scope. 
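Note on the dns.go "Nameserver limits exceeded" error above: the node's resolv.conf carries more nameserver entries than the resolver limit of three, so the kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest, logging this warning each time it rebuilds pod DNS config. The check below reproduces the condition; it is a minimal standalone sketch rather than the kubelet's DNS configurer, and the limit of three is the conventional glibc MAXNS value.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the resolver limit the kubelet warning refers to.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(servers) > maxNameservers {
		// The situation behind "Nameserver limits exceeded": only the first
		// three entries are applied, the remainder are omitted.
		fmt.Printf("too many nameservers (%d); applied line would be: %s\n",
			len(servers), strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Println("nameservers:", strings.Join(servers, " "))
}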
Aug 13 00:47:45.552828 kernel: audit: type=1006 audit(1755046065.546:496): pid=5571 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Aug 13 00:47:45.552898 kernel: audit: type=1300 audit(1755046065.546:496): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc2cb6480 a2=3 a3=0 items=0 ppid=1 pid=5571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:45.546000 audit[5571]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc2cb6480 a2=3 a3=0 items=0 ppid=1 pid=5571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:45.546000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:45.557692 kernel: audit: type=1327 audit(1755046065.546:496): proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:45.556000 audit[5571]: USER_START pid=5571 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.562209 kernel: audit: type=1105 audit(1755046065.556:497): pid=5571 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.562258 kernel: audit: type=1103 audit(1755046065.557:498): pid=5574 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.557000 audit[5574]: CRED_ACQ pid=5574 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.687921 sshd[5571]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:45.687000 audit[5571]: USER_END pid=5571 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.690194 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:40234.service: Deactivated successfully. Aug 13 00:47:45.691442 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:47:45.691468 systemd-logind[1304]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:47:45.692422 systemd-logind[1304]: Removed session 14. 
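Note on the SSH audit trails that repeat through this section: every login leaves the same sequence keyed by the ses= field, USER_ACCT and CRED_ACQ around the publickey acceptance, USER_START when the PAM session opens, then USER_END and CRED_DISP when it closes, bracketed by SERVICE_START/SERVICE_STOP for the per-connection sshd@… unit. The snippet below groups that lifecycle by session id from a few abridged copies of the session-14 records above (the sample strings are shortened, not verbatim).

package main

import (
	"fmt"
	"regexp"
)

// Abridged audit records for session 14, trimmed from the log above.
var records = []string{
	`audit[5571]: USER_ACCT pid=5571 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:accounting acct="core" res=success'`,
	`audit[5571]: CRED_ACQ pid=5571 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred acct="core" res=success'`,
	`audit[5571]: USER_START pid=5571 uid=0 auid=500 ses=14 msg='op=PAM:session_open acct="core" res=success'`,
	`audit[5571]: USER_END pid=5571 uid=0 auid=500 ses=14 msg='op=PAM:session_close acct="core" res=success'`,
	`audit[5571]: CRED_DISP pid=5571 uid=0 auid=500 ses=14 msg='op=PAM:setcred acct="core" res=success'`,
}

// Pull out the record type and the audit session id from each line.
var re = regexp.MustCompile(`audit\[\d+\]: (\w+) .*?ses=(\d+)`)

func main() {
	for _, r := range records {
		if m := re.FindStringSubmatch(r); m != nil {
			fmt.Printf("ses=%s\t%s\n", m[2], m[1])
		}
	}
	// ses=4294967295 is the pre-login value; once the session is assigned,
	// all later records for this login carry ses=14.
}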
Aug 13 00:47:45.687000 audit[5571]: CRED_DISP pid=5571 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.696339 kernel: audit: type=1106 audit(1755046065.687:499): pid=5571 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.696405 kernel: audit: type=1104 audit(1755046065.687:500): pid=5571 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:45.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.21:22-10.0.0.1:40234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:50.691989 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:46178.service. Aug 13 00:47:50.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.21:22-10.0.0.1:46178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:50.693246 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:47:50.693304 kernel: audit: type=1130 audit(1755046070.691:502): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.21:22-10.0.0.1:46178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:50.730000 audit[5611]: USER_ACCT pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.731394 sshd[5611]: Accepted publickey for core from 10.0.0.1 port 46178 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:47:50.737068 kernel: audit: type=1101 audit(1755046070.730:503): pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.737186 kernel: audit: type=1103 audit(1755046070.734:504): pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.734000 audit[5611]: CRED_ACQ pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.735974 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:47:50.740391 systemd-logind[1304]: New session 15 of user core. Aug 13 00:47:50.741369 systemd[1]: Started session-15.scope. 
Aug 13 00:47:50.741922 kernel: audit: type=1006 audit(1755046070.734:505): pid=5611 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Aug 13 00:47:50.734000 audit[5611]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffffe27c460 a2=3 a3=0 items=0 ppid=1 pid=5611 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:50.734000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:50.748270 kernel: audit: type=1300 audit(1755046070.734:505): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffffe27c460 a2=3 a3=0 items=0 ppid=1 pid=5611 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:50.748330 kernel: audit: type=1327 audit(1755046070.734:505): proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:50.746000 audit[5611]: USER_START pid=5611 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.752789 kernel: audit: type=1105 audit(1755046070.746:506): pid=5611 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.752865 kernel: audit: type=1103 audit(1755046070.747:507): pid=5614 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.747000 audit[5614]: CRED_ACQ pid=5614 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.928672 sshd[5611]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:50.929000 audit[5611]: USER_END pid=5611 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.990237 kernel: audit: type=1106 audit(1755046070.929:508): pid=5611 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.990334 kernel: audit: type=1104 audit(1755046070.929:509): pid=5611 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.929000 audit[5611]: CRED_DISP pid=5611 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:50.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.21:22-10.0.0.1:46178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:50.932111 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:46178.service: Deactivated successfully. Aug 13 00:47:50.933056 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:47:50.938179 systemd-logind[1304]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:47:50.939116 systemd-logind[1304]: Removed session 15. Aug 13 00:47:55.544455 kubelet[2155]: E0813 00:47:55.544392 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:55.544455 kubelet[2155]: E0813 00:47:55.544467 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:55.932647 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:46194.service. Aug 13 00:47:55.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.21:22-10.0.0.1:46194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:47:55.934760 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:47:55.934848 kernel: audit: type=1130 audit(1755046075.932:511): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.21:22-10.0.0.1:46194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:47:55.972000 audit[5626]: USER_ACCT pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:55.973932 sshd[5626]: Accepted publickey for core from 10.0.0.1 port 46194 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:47:55.978057 kernel: audit: type=1101 audit(1755046075.972:512): pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:55.978118 kernel: audit: type=1103 audit(1755046075.977:513): pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:55.977000 audit[5626]: CRED_ACQ pid=5626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:55.978481 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:47:55.987054 kernel: audit: type=1006 audit(1755046075.977:514): pid=5626 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Aug 13 00:47:55.987129 kernel: audit: type=1300 audit(1755046075.977:514): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8d297890 a2=3 a3=0 items=0 ppid=1 pid=5626 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:55.977000 audit[5626]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8d297890 a2=3 a3=0 items=0 ppid=1 pid=5626 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:47:55.983151 systemd[1]: Started session-16.scope. Aug 13 00:47:55.983293 systemd-logind[1304]: New session 16 of user core. 
Aug 13 00:47:55.977000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:55.987000 audit[5626]: USER_START pid=5626 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:55.996488 kernel: audit: type=1327 audit(1755046075.977:514): proctitle=737368643A20636F7265205B707269765D Aug 13 00:47:55.996537 kernel: audit: type=1105 audit(1755046075.987:515): pid=5626 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:55.996559 kernel: audit: type=1103 audit(1755046075.989:516): pid=5629 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:55.989000 audit[5629]: CRED_ACQ pid=5629 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:56.181244 sshd[5626]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:56.181000 audit[5626]: USER_END pid=5626 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:56.185795 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:46194.service: Deactivated successfully. Aug 13 00:47:56.187754 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:47:56.182000 audit[5626]: CRED_DISP pid=5626 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:56.189088 systemd-logind[1304]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:47:56.190758 systemd-logind[1304]: Removed session 16. Aug 13 00:47:56.192446 kernel: audit: type=1106 audit(1755046076.181:517): pid=5626 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:56.192712 kernel: audit: type=1104 audit(1755046076.182:518): pid=5626 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:47:56.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.21:22-10.0.0.1:46194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:01.185459 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:47362.service. 
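The PROCTITLE records above and below store the sshd process title as a hex string. A minimal Python sketch (illustrative only; the hex value is copied verbatim from those records, the variable name is mine) recovers the readable title:

# Decode the audit PROCTITLE hex; NUL bytes, if present, separate argv words.
h = "737368643A20636F7265205B707269765D"
print(bytes.fromhex(h).replace(b"\x00", b" ").decode())  # -> sshd: core [priv]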
Aug 13 00:48:01.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.21:22-10.0.0.1:47362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:01.186847 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:48:01.186932 kernel: audit: type=1130 audit(1755046081.184:520): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.21:22-10.0.0.1:47362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:01.220000 audit[5670]: USER_ACCT pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.221603 sshd[5670]: Accepted publickey for core from 10.0.0.1 port 47362 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:48:01.223399 sshd[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:48:01.221000 audit[5670]: CRED_ACQ pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.227408 systemd-logind[1304]: New session 17 of user core. Aug 13 00:48:01.228509 systemd[1]: Started session-17.scope. Aug 13 00:48:01.229126 kernel: audit: type=1101 audit(1755046081.220:521): pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.229197 kernel: audit: type=1103 audit(1755046081.221:522): pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.229221 kernel: audit: type=1006 audit(1755046081.222:523): pid=5670 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Aug 13 00:48:01.235255 kernel: audit: type=1300 audit(1755046081.222:523): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc61f2ccc0 a2=3 a3=0 items=0 ppid=1 pid=5670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:01.222000 audit[5670]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc61f2ccc0 a2=3 a3=0 items=0 ppid=1 pid=5670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:01.222000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:01.237189 kernel: audit: type=1327 audit(1755046081.222:523): proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:01.237254 kernel: audit: type=1105 audit(1755046081.232:524): pid=5670 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.232000 audit[5670]: USER_START pid=5670 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.234000 audit[5673]: CRED_ACQ pid=5673 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.244478 kernel: audit: type=1103 audit(1755046081.234:525): pid=5673 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.348870 sshd[5670]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:01.348000 audit[5670]: USER_END pid=5670 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.351799 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:47378.service. Aug 13 00:48:01.352462 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:47362.service: Deactivated successfully. Aug 13 00:48:01.353803 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:48:01.354557 systemd-logind[1304]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:48:01.349000 audit[5670]: CRED_DISP pid=5670 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.355743 systemd-logind[1304]: Removed session 17. Aug 13 00:48:01.358308 kernel: audit: type=1106 audit(1755046081.348:526): pid=5670 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.358410 kernel: audit: type=1104 audit(1755046081.349:527): pid=5670 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.21:22-10.0.0.1:47378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:01.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.21:22-10.0.0.1:47362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:48:01.389000 audit[5682]: USER_ACCT pid=5682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.390646 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 47378 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:48:01.390000 audit[5682]: CRED_ACQ pid=5682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.390000 audit[5682]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf186bcd0 a2=3 a3=0 items=0 ppid=1 pid=5682 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:01.390000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:01.391724 sshd[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:48:01.395249 systemd-logind[1304]: New session 18 of user core. Aug 13 00:48:01.396025 systemd[1]: Started session-18.scope. Aug 13 00:48:01.398000 audit[5682]: USER_START pid=5682 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.399000 audit[5687]: CRED_ACQ pid=5687 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.890171 sshd[5682]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:01.890000 audit[5682]: USER_END pid=5682 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.890000 audit[5682]: CRED_DISP pid=5682 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.893413 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:47384.service. Aug 13 00:48:01.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.21:22-10.0.0.1:47384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:01.895694 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:47378.service: Deactivated successfully. Aug 13 00:48:01.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.21:22-10.0.0.1:47378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:01.896555 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:48:01.897616 systemd-logind[1304]: Session 18 logged out. Waiting for processes to exit. 
Aug 13 00:48:01.898551 systemd-logind[1304]: Removed session 18. Aug 13 00:48:01.928000 audit[5695]: USER_ACCT pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.929871 sshd[5695]: Accepted publickey for core from 10.0.0.1 port 47384 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:48:01.929000 audit[5695]: CRED_ACQ pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.929000 audit[5695]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf4231570 a2=3 a3=0 items=0 ppid=1 pid=5695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:01.929000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:01.930919 sshd[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:48:01.934008 systemd-logind[1304]: New session 19 of user core. Aug 13 00:48:01.934988 systemd[1]: Started session-19.scope. Aug 13 00:48:01.938000 audit[5695]: USER_START pid=5695 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:01.939000 audit[5699]: CRED_ACQ pid=5699 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:03.682024 systemd[1]: run-containerd-runc-k8s.io-007395cb433dc5a2dda10c110cfccf43242a501b2429d844c581535f40355780-runc.2RkwqB.mount: Deactivated successfully. 
Aug 13 00:48:03.718000 audit[5752]: NETFILTER_CFG table=filter:132 family=2 entries=22 op=nft_register_rule pid=5752 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:48:03.718000 audit[5752]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7fff27786df0 a2=0 a3=7fff27786ddc items=0 ppid=2307 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:03.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:48:03.723000 audit[5752]: NETFILTER_CFG table=nat:133 family=2 entries=24 op=nft_register_rule pid=5752 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:48:03.723000 audit[5752]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7fff27786df0 a2=0 a3=0 items=0 ppid=2307 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:03.723000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:48:03.878000 audit[5759]: NETFILTER_CFG table=filter:134 family=2 entries=34 op=nft_register_rule pid=5759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:48:03.878000 audit[5759]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffc07eeedc0 a2=0 a3=7ffc07eeedac items=0 ppid=2307 pid=5759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:03.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:48:03.888000 audit[5759]: NETFILTER_CFG table=nat:135 family=2 entries=24 op=nft_register_rule pid=5759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:48:03.888000 audit[5759]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffc07eeedc0 a2=0 a3=0 items=0 ppid=2307 pid=5759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:03.888000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:48:03.950739 sshd[5695]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:03.951000 audit[5695]: USER_END pid=5695 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:03.951000 audit[5695]: CRED_DISP pid=5695 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:03.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@19-10.0.0.21:22-10.0.0.1:47394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:03.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.21:22-10.0.0.1:47384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:03.952801 systemd[1]: Started sshd@19-10.0.0.21:22-10.0.0.1:47394.service. Aug 13 00:48:03.954311 systemd[1]: sshd@18-10.0.0.21:22-10.0.0.1:47384.service: Deactivated successfully. Aug 13 00:48:03.955262 systemd-logind[1304]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:48:03.955286 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:48:03.956099 systemd-logind[1304]: Removed session 19. Aug 13 00:48:03.990000 audit[5760]: USER_ACCT pid=5760 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:03.991953 sshd[5760]: Accepted publickey for core from 10.0.0.1 port 47394 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:48:03.991000 audit[5760]: CRED_ACQ pid=5760 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:03.991000 audit[5760]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9faf88d0 a2=3 a3=0 items=0 ppid=1 pid=5760 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:03.991000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:04.001000 audit[5760]: USER_START pid=5760 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:04.002000 audit[5765]: CRED_ACQ pid=5765 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:03.996860 systemd-logind[1304]: New session 20 of user core. Aug 13 00:48:03.993267 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:48:03.997554 systemd[1]: Started session-20.scope. 
Aug 13 00:48:04.170068 kubelet[2155]: I0813 00:48:04.169792 2155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dfdgb" podStartSLOduration=52.908493571 podStartE2EDuration="1m13.169763434s" podCreationTimestamp="2025-08-13 00:46:51 +0000 UTC" firstStartedPulling="2025-08-13 00:47:19.02450572 +0000 UTC m=+48.575363319" lastFinishedPulling="2025-08-13 00:47:39.285775573 +0000 UTC m=+68.836633182" observedRunningTime="2025-08-13 00:47:39.993998959 +0000 UTC m=+69.544856568" watchObservedRunningTime="2025-08-13 00:48:04.169763434 +0000 UTC m=+93.720621043" Aug 13 00:48:04.562488 sshd[5760]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:04.563000 audit[5760]: USER_END pid=5760 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:04.563000 audit[5760]: CRED_DISP pid=5760 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:04.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.21:22-10.0.0.1:47400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:04.565430 systemd[1]: Started sshd@20-10.0.0.21:22-10.0.0.1:47400.service. Aug 13 00:48:04.566400 systemd[1]: sshd@19-10.0.0.21:22-10.0.0.1:47394.service: Deactivated successfully. Aug 13 00:48:04.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.21:22-10.0.0.1:47394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:04.568180 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:48:04.568702 systemd-logind[1304]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:48:04.569725 systemd-logind[1304]: Removed session 20. Aug 13 00:48:04.603000 audit[5773]: USER_ACCT pid=5773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:04.605052 sshd[5773]: Accepted publickey for core from 10.0.0.1 port 47400 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:48:04.605000 audit[5773]: CRED_ACQ pid=5773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:04.605000 audit[5773]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf5f7e9e0 a2=3 a3=0 items=0 ppid=1 pid=5773 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:04.605000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:04.606767 sshd[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:48:04.611477 systemd-logind[1304]: New session 21 of user core. 
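The "Observed pod startup duration" record above is internally consistent: podStartSLOduration is the end-to-end duration minus the image-pull window. A quick check, as a sketch using only the m= monotonic offsets quoted in that record (variable names are mine):

# Figures copied verbatim from the kubelet pod_startup_latency_tracker record above.
e2e = 73.169763434            # podStartE2EDuration (1m13.169763434s)
pull_start = 48.575363319     # firstStartedPulling, m=+48.575363319
pull_end = 68.836633182       # lastFinishedPulling, m=+68.836633182
print(f"{e2e - (pull_end - pull_start):.9f}")  # -> 52.908493571, i.e. podStartSLOduration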
Aug 13 00:48:04.612587 systemd[1]: Started session-21.scope. Aug 13 00:48:04.617000 audit[5773]: USER_START pid=5773 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:04.619000 audit[5778]: CRED_ACQ pid=5778 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:04.675373 systemd[1]: run-containerd-runc-k8s.io-bfb34db1dc61601e94d4e108eae4c98b2d05730e1984f90ab710ceb0da870c52-runc.IdBhgn.mount: Deactivated successfully. Aug 13 00:48:04.739426 sshd[5773]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:04.739000 audit[5773]: USER_END pid=5773 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:04.739000 audit[5773]: CRED_DISP pid=5773 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:04.742239 systemd-logind[1304]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:48:04.742614 systemd[1]: sshd@20-10.0.0.21:22-10.0.0.1:47400.service: Deactivated successfully. Aug 13 00:48:04.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.21:22-10.0.0.1:47400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:04.743714 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:48:04.744259 systemd-logind[1304]: Removed session 21. 
Aug 13 00:48:04.906000 audit[5790]: NETFILTER_CFG table=filter:136 family=2 entries=33 op=nft_register_rule pid=5790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:48:04.906000 audit[5790]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffed9e70370 a2=0 a3=7ffed9e7035c items=0 ppid=2307 pid=5790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:04.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:48:04.911000 audit[5790]: NETFILTER_CFG table=nat:137 family=2 entries=31 op=nft_register_chain pid=5790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:48:04.911000 audit[5790]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffed9e70370 a2=0 a3=7ffed9e7035c items=0 ppid=2307 pid=5790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:04.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:48:07.543643 kubelet[2155]: E0813 00:48:07.543574 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:09.742772 systemd[1]: Started sshd@21-10.0.0.21:22-10.0.0.1:50576.service. Aug 13 00:48:09.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.21:22-10.0.0.1:50576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:09.744019 kernel: kauditd_printk_skb: 63 callbacks suppressed Aug 13 00:48:09.744183 kernel: audit: type=1130 audit(1755046089.741:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.21:22-10.0.0.1:50576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:48:09.780000 audit[5793]: USER_ACCT pid=5793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.782131 sshd[5793]: Accepted publickey for core from 10.0.0.1 port 50576 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:48:09.783336 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:48:09.786103 kernel: audit: type=1101 audit(1755046089.780:572): pid=5793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.786157 kernel: audit: type=1103 audit(1755046089.781:573): pid=5793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.781000 audit[5793]: CRED_ACQ pid=5793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.787903 systemd-logind[1304]: New session 22 of user core. Aug 13 00:48:09.789283 systemd[1]: Started session-22.scope. Aug 13 00:48:09.792319 kernel: audit: type=1006 audit(1755046089.782:574): pid=5793 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Aug 13 00:48:09.792370 kernel: audit: type=1300 audit(1755046089.782:574): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc91535d70 a2=3 a3=0 items=0 ppid=1 pid=5793 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:09.782000 audit[5793]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc91535d70 a2=3 a3=0 items=0 ppid=1 pid=5793 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:09.782000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:09.797921 kernel: audit: type=1327 audit(1755046089.782:574): proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:09.797987 kernel: audit: type=1105 audit(1755046089.794:575): pid=5793 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.794000 audit[5793]: USER_START pid=5793 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.801982 kernel: audit: type=1103 audit(1755046089.795:576): pid=5796 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.795000 audit[5796]: CRED_ACQ pid=5796 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.911604 sshd[5793]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:09.911000 audit[5793]: USER_END pid=5793 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.915100 systemd[1]: sshd@21-10.0.0.21:22-10.0.0.1:50576.service: Deactivated successfully. Aug 13 00:48:09.916488 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:48:09.916503 systemd-logind[1304]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:48:09.911000 audit[5793]: CRED_DISP pid=5793 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.917802 systemd-logind[1304]: Removed session 22. Aug 13 00:48:09.920978 kernel: audit: type=1106 audit(1755046089.911:577): pid=5793 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.921071 kernel: audit: type=1104 audit(1755046089.911:578): pid=5793 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:09.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.21:22-10.0.0.1:50576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:48:11.842000 audit[5808]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=5808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:48:11.842000 audit[5808]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe6d9331a0 a2=0 a3=7ffe6d93318c items=0 ppid=2307 pid=5808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:11.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:48:11.851000 audit[5808]: NETFILTER_CFG table=nat:139 family=2 entries=110 op=nft_register_chain pid=5808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:48:11.851000 audit[5808]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffe6d9331a0 a2=0 a3=7ffe6d93318c items=0 ppid=2307 pid=5808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:11.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:48:14.915440 systemd[1]: Started sshd@22-10.0.0.21:22-10.0.0.1:50584.service. Aug 13 00:48:14.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.21:22-10.0.0.1:50584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:14.917267 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:48:14.917314 kernel: audit: type=1130 audit(1755046094.915:582): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.21:22-10.0.0.1:50584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:14.950000 audit[5810]: USER_ACCT pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:14.951480 sshd[5810]: Accepted publickey for core from 10.0.0.1 port 50584 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:48:14.953677 sshd[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:48:14.952000 audit[5810]: CRED_ACQ pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:14.957673 systemd-logind[1304]: New session 23 of user core. 
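The NETFILTER_CFG/SYSCALL pairs in this stretch (pids 5752, 5759, 5790 and 5808 above) carry a longer hex PROCTITLE. Decoding it the same way (a sketch; hex copied verbatim from those records, NUL bytes rendered as spaces) shows the restore invocation behind these nft_register_rule/nft_register_chain events:

# Decode the iptables-restore PROCTITLE hex from the audit records above.
h = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
print(bytes.fromhex(h).replace(b"\x00", b" ").decode())
# -> iptables-restore -w 5 -W 100000 --noflush --counters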
Aug 13 00:48:14.962331 kernel: audit: type=1101 audit(1755046094.950:583): pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:14.962389 kernel: audit: type=1103 audit(1755046094.952:584): pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:14.962419 kernel: audit: type=1006 audit(1755046094.952:585): pid=5810 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Aug 13 00:48:14.958560 systemd[1]: Started session-23.scope. Aug 13 00:48:14.952000 audit[5810]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfaeea010 a2=3 a3=0 items=0 ppid=1 pid=5810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:14.968279 kernel: audit: type=1300 audit(1755046094.952:585): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfaeea010 a2=3 a3=0 items=0 ppid=1 pid=5810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:14.968317 kernel: audit: type=1327 audit(1755046094.952:585): proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:14.952000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:14.963000 audit[5810]: USER_START pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:14.974397 kernel: audit: type=1105 audit(1755046094.963:586): pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:14.974469 kernel: audit: type=1103 audit(1755046094.964:587): pid=5813 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:14.964000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:15.082377 sshd[5810]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:15.082000 audit[5810]: USER_END pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:15.085526 systemd[1]: sshd@22-10.0.0.21:22-10.0.0.1:50584.service: Deactivated successfully. 
Aug 13 00:48:15.086761 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:48:15.087221 systemd-logind[1304]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:48:15.088242 systemd-logind[1304]: Removed session 23. Aug 13 00:48:15.082000 audit[5810]: CRED_DISP pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:15.092425 kernel: audit: type=1106 audit(1755046095.082:588): pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:15.092504 kernel: audit: type=1104 audit(1755046095.082:589): pid=5810 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:15.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.21:22-10.0.0.1:50584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:20.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.21:22-10.0.0.1:42368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:20.086342 systemd[1]: Started sshd@23-10.0.0.21:22-10.0.0.1:42368.service. Aug 13 00:48:20.087703 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:48:20.087876 kernel: audit: type=1130 audit(1755046100.085:591): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.21:22-10.0.0.1:42368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:48:20.127000 audit[5824]: USER_ACCT pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.128657 sshd[5824]: Accepted publickey for core from 10.0.0.1 port 42368 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:48:20.131751 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:48:20.130000 audit[5824]: CRED_ACQ pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.136764 kernel: audit: type=1101 audit(1755046100.127:592): pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.136842 kernel: audit: type=1103 audit(1755046100.130:593): pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.136902 kernel: audit: type=1006 audit(1755046100.130:594): pid=5824 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Aug 13 00:48:20.137343 systemd-logind[1304]: New session 24 of user core. Aug 13 00:48:20.138423 systemd[1]: Started session-24.scope. 
Aug 13 00:48:20.139551 kernel: audit: type=1300 audit(1755046100.130:594): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff167fbf0 a2=3 a3=0 items=0 ppid=1 pid=5824 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:20.130000 audit[5824]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff167fbf0 a2=3 a3=0 items=0 ppid=1 pid=5824 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:20.144203 kernel: audit: type=1327 audit(1755046100.130:594): proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:20.130000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:20.145679 kernel: audit: type=1105 audit(1755046100.143:595): pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.143000 audit[5824]: USER_START pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.149803 kernel: audit: type=1103 audit(1755046100.145:596): pid=5827 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.145000 audit[5827]: CRED_ACQ pid=5827 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.254089 sshd[5824]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:20.254000 audit[5824]: USER_END pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.256531 systemd[1]: sshd@23-10.0.0.21:22-10.0.0.1:42368.service: Deactivated successfully. Aug 13 00:48:20.258160 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:48:20.258980 systemd-logind[1304]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:48:20.254000 audit[5824]: CRED_DISP pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.260240 systemd-logind[1304]: Removed session 24. 
Aug 13 00:48:20.262896 kernel: audit: type=1106 audit(1755046100.254:597): pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.262970 kernel: audit: type=1104 audit(1755046100.254:598): pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:20.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.21:22-10.0.0.1:42368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:23.544463 kubelet[2155]: E0813 00:48:23.544420 2155 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:25.258082 systemd[1]: Started sshd@24-10.0.0.21:22-10.0.0.1:42376.service. Aug 13 00:48:25.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.21:22-10.0.0.1:42376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:25.259581 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:48:25.259649 kernel: audit: type=1130 audit(1755046105.257:600): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.21:22-10.0.0.1:42376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:25.292000 audit[5839]: USER_ACCT pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.293976 sshd[5839]: Accepted publickey for core from 10.0.0.1 port 42376 ssh2: RSA SHA256:DN6hQsuMl7HvE06uqvETgpBVuL0aNxeZ6UYS2doxNak Aug 13 00:48:25.295919 sshd[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:48:25.294000 audit[5839]: CRED_ACQ pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.300252 systemd-logind[1304]: New session 25 of user core. Aug 13 00:48:25.301169 systemd[1]: Started session-25.scope. 
Aug 13 00:48:25.302780 kernel: audit: type=1101 audit(1755046105.292:601): pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.302859 kernel: audit: type=1103 audit(1755046105.294:602): pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.294000 audit[5839]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7bd86ff0 a2=3 a3=0 items=0 ppid=1 pid=5839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:25.310724 kernel: audit: type=1006 audit(1755046105.294:603): pid=5839 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Aug 13 00:48:25.310812 kernel: audit: type=1300 audit(1755046105.294:603): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7bd86ff0 a2=3 a3=0 items=0 ppid=1 pid=5839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:48:25.310845 kernel: audit: type=1327 audit(1755046105.294:603): proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:25.294000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:48:25.305000 audit[5839]: USER_START pid=5839 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.316095 kernel: audit: type=1105 audit(1755046105.305:604): pid=5839 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.316135 kernel: audit: type=1103 audit(1755046105.307:605): pid=5842 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.307000 audit[5842]: CRED_ACQ pid=5842 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.410670 sshd[5839]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:25.410000 audit[5839]: USER_END pid=5839 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.412693 systemd[1]: sshd@24-10.0.0.21:22-10.0.0.1:42376.service: Deactivated successfully. Aug 13 00:48:25.413886 systemd[1]: session-25.scope: Deactivated successfully. 
Aug 13 00:48:25.414477 systemd-logind[1304]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:48:25.415378 systemd-logind[1304]: Removed session 25. Aug 13 00:48:25.410000 audit[5839]: CRED_DISP pid=5839 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.419912 kernel: audit: type=1106 audit(1755046105.410:606): pid=5839 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.419993 kernel: audit: type=1104 audit(1755046105.410:607): pid=5839 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Aug 13 00:48:25.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.21:22-10.0.0.1:42376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'