Sep 6 00:20:43.187174 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025 Sep 6 00:20:43.187198 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:20:43.187208 kernel: BIOS-provided physical RAM map: Sep 6 00:20:43.187214 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 6 00:20:43.187219 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 6 00:20:43.187225 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 6 00:20:43.187246 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 6 00:20:43.187261 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 6 00:20:43.187275 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 6 00:20:43.187288 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 6 00:20:43.187294 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 6 00:20:43.187300 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Sep 6 00:20:43.187306 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 6 00:20:43.187311 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 6 00:20:43.187319 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 6 00:20:43.187338 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 6 00:20:43.187356 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 6 00:20:43.187365 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 6 00:20:43.187378 kernel: NX (Execute Disable) protection: active Sep 6 00:20:43.187399 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Sep 6 00:20:43.187418 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Sep 6 00:20:43.187427 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Sep 6 00:20:43.187436 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Sep 6 00:20:43.187454 kernel: extended physical RAM map: Sep 6 00:20:43.187472 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 6 00:20:43.187495 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 6 00:20:43.187516 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 6 00:20:43.187541 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 6 00:20:43.187565 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 6 00:20:43.187606 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 6 00:20:43.187616 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 6 00:20:43.187637 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Sep 6 00:20:43.187659 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Sep 6 00:20:43.187669 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable 
Sep 6 00:20:43.187676 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Sep 6 00:20:43.187685 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Sep 6 00:20:43.187697 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Sep 6 00:20:43.187711 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 6 00:20:43.187721 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 6 00:20:43.187733 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 6 00:20:43.187756 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 6 00:20:43.187775 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 6 00:20:43.187791 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 6 00:20:43.187810 kernel: efi: EFI v2.70 by EDK II Sep 6 00:20:43.187819 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Sep 6 00:20:43.187829 kernel: random: crng init done Sep 6 00:20:43.187839 kernel: SMBIOS 2.8 present. Sep 6 00:20:43.187849 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Sep 6 00:20:43.187858 kernel: Hypervisor detected: KVM Sep 6 00:20:43.187867 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 6 00:20:43.187876 kernel: kvm-clock: cpu 0, msr 6619f001, primary cpu clock Sep 6 00:20:43.187885 kernel: kvm-clock: using sched offset of 5083237936 cycles Sep 6 00:20:43.187904 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 6 00:20:43.187914 kernel: tsc: Detected 2794.750 MHz processor Sep 6 00:20:43.187923 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 6 00:20:43.187932 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 6 00:20:43.187940 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 6 00:20:43.187947 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 6 00:20:43.187954 kernel: Using GB pages for direct mapping Sep 6 00:20:43.187977 kernel: Secure boot disabled Sep 6 00:20:43.187987 kernel: ACPI: Early table checksum verification disabled Sep 6 00:20:43.187997 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 6 00:20:43.188004 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 6 00:20:43.188011 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:43.188018 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:43.188028 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 6 00:20:43.188035 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:43.188041 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:43.188061 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:43.188079 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:43.188099 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 6 00:20:43.189065 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 6 00:20:43.189081 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 6 00:20:43.189089 kernel: ACPI: Reserving FACS table 
memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 6 00:20:43.189096 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 6 00:20:43.189102 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 6 00:20:43.189109 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 6 00:20:43.189116 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 6 00:20:43.189123 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 6 00:20:43.189133 kernel: No NUMA configuration found Sep 6 00:20:43.189140 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 6 00:20:43.189146 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 6 00:20:43.189163 kernel: Zone ranges: Sep 6 00:20:43.189189 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 6 00:20:43.189215 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 6 00:20:43.189245 kernel: Normal empty Sep 6 00:20:43.189265 kernel: Movable zone start for each node Sep 6 00:20:43.189287 kernel: Early memory node ranges Sep 6 00:20:43.189308 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 6 00:20:43.189318 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 6 00:20:43.189327 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 6 00:20:43.189336 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 6 00:20:43.189346 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 6 00:20:43.189354 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 6 00:20:43.189364 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 6 00:20:43.189374 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 6 00:20:43.189383 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 6 00:20:43.189393 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 6 00:20:43.189405 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 6 00:20:43.189415 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 6 00:20:43.189424 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 6 00:20:43.189433 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 6 00:20:43.189443 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 6 00:20:43.189453 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 6 00:20:43.189462 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 6 00:20:43.189476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 6 00:20:43.189486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 6 00:20:43.189498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 6 00:20:43.189507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 6 00:20:43.189516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 6 00:20:43.189529 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 6 00:20:43.189539 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 6 00:20:43.189546 kernel: TSC deadline timer available Sep 6 00:20:43.189553 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 6 00:20:43.189560 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 6 00:20:43.189566 kernel: kvm-guest: setup PV sched yield Sep 6 00:20:43.189575 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 6 00:20:43.189600 kernel: Booting paravirtualized kernel on KVM Sep 6 00:20:43.189612 
kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 6 00:20:43.189621 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Sep 6 00:20:43.189628 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Sep 6 00:20:43.189635 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Sep 6 00:20:43.189642 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 6 00:20:43.189649 kernel: kvm-guest: setup async PF for cpu 0 Sep 6 00:20:43.189656 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Sep 6 00:20:43.189663 kernel: kvm-guest: PV spinlocks enabled Sep 6 00:20:43.189677 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 6 00:20:43.189689 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Sep 6 00:20:43.189704 kernel: Policy zone: DMA32 Sep 6 00:20:43.189715 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:20:43.189728 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 00:20:43.189735 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 6 00:20:43.189765 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:20:43.190444 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 00:20:43.190456 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 169308K reserved, 0K cma-reserved) Sep 6 00:20:43.190466 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 6 00:20:43.190476 kernel: ftrace: allocating 34612 entries in 136 pages Sep 6 00:20:43.190486 kernel: ftrace: allocated 136 pages with 2 groups Sep 6 00:20:43.190497 kernel: rcu: Hierarchical RCU implementation. Sep 6 00:20:43.190527 kernel: rcu: RCU event tracing is enabled. Sep 6 00:20:43.190540 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 6 00:20:43.190556 kernel: Rude variant of Tasks RCU enabled. Sep 6 00:20:43.190566 kernel: Tracing variant of Tasks RCU enabled. Sep 6 00:20:43.190577 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 6 00:20:43.190613 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 6 00:20:43.190623 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 6 00:20:43.190633 kernel: Console: colour dummy device 80x25 Sep 6 00:20:43.190653 kernel: printk: console [ttyS0] enabled Sep 6 00:20:43.190676 kernel: ACPI: Core revision 20210730 Sep 6 00:20:43.190692 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 6 00:20:43.190713 kernel: APIC: Switch to symmetric I/O mode setup Sep 6 00:20:43.190740 kernel: x2apic enabled Sep 6 00:20:43.190752 kernel: Switched APIC routing to physical x2apic. Sep 6 00:20:43.190762 kernel: kvm-guest: setup PV IPIs Sep 6 00:20:43.190772 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 6 00:20:43.190782 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 6 00:20:43.190794 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Sep 6 00:20:43.190804 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 6 00:20:43.190818 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 6 00:20:43.190833 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 6 00:20:43.190842 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 6 00:20:43.190853 kernel: Spectre V2 : Mitigation: Retpolines Sep 6 00:20:43.190863 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 6 00:20:43.190874 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 6 00:20:43.190883 kernel: active return thunk: retbleed_return_thunk Sep 6 00:20:43.190893 kernel: RETBleed: Mitigation: untrained return thunk Sep 6 00:20:43.190907 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 6 00:20:43.190918 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Sep 6 00:20:43.190931 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 6 00:20:43.190941 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 6 00:20:43.190951 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 6 00:20:43.190961 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 6 00:20:43.190970 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 6 00:20:43.190979 kernel: Freeing SMP alternatives memory: 32K Sep 6 00:20:43.190988 kernel: pid_max: default: 32768 minimum: 301 Sep 6 00:20:43.190996 kernel: LSM: Security Framework initializing Sep 6 00:20:43.191006 kernel: SELinux: Initializing. Sep 6 00:20:43.191017 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:20:43.191027 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:20:43.191037 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 6 00:20:43.191048 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 6 00:20:43.191061 kernel: ... version: 0 Sep 6 00:20:43.191072 kernel: ... bit width: 48 Sep 6 00:20:43.191082 kernel: ... generic registers: 6 Sep 6 00:20:43.191092 kernel: ... value mask: 0000ffffffffffff Sep 6 00:20:43.191102 kernel: ... max period: 00007fffffffffff Sep 6 00:20:43.191115 kernel: ... fixed-purpose events: 0 Sep 6 00:20:43.191126 kernel: ... event mask: 000000000000003f Sep 6 00:20:43.191136 kernel: signal: max sigframe size: 1776 Sep 6 00:20:43.191147 kernel: rcu: Hierarchical SRCU implementation. Sep 6 00:20:43.191157 kernel: smp: Bringing up secondary CPUs ... Sep 6 00:20:43.191168 kernel: x86: Booting SMP configuration: Sep 6 00:20:43.191178 kernel: .... 
node #0, CPUs: #1 Sep 6 00:20:43.191188 kernel: kvm-clock: cpu 1, msr 6619f041, secondary cpu clock Sep 6 00:20:43.191213 kernel: kvm-guest: setup async PF for cpu 1 Sep 6 00:20:43.191228 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Sep 6 00:20:43.191246 kernel: #2 Sep 6 00:20:43.191255 kernel: kvm-clock: cpu 2, msr 6619f081, secondary cpu clock Sep 6 00:20:43.191262 kernel: kvm-guest: setup async PF for cpu 2 Sep 6 00:20:43.191270 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Sep 6 00:20:43.191276 kernel: #3 Sep 6 00:20:43.191284 kernel: kvm-clock: cpu 3, msr 6619f0c1, secondary cpu clock Sep 6 00:20:43.191291 kernel: kvm-guest: setup async PF for cpu 3 Sep 6 00:20:43.191298 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Sep 6 00:20:43.191310 kernel: smp: Brought up 1 node, 4 CPUs Sep 6 00:20:43.191317 kernel: smpboot: Max logical packages: 1 Sep 6 00:20:43.191325 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Sep 6 00:20:43.191332 kernel: devtmpfs: initialized Sep 6 00:20:43.191339 kernel: x86/mm: Memory block size: 128MB Sep 6 00:20:43.191346 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 6 00:20:43.191353 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 6 00:20:43.191360 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 6 00:20:43.191367 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 6 00:20:43.191376 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 6 00:20:43.191399 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 00:20:43.191421 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 6 00:20:43.191440 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 00:20:43.191457 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 00:20:43.191479 kernel: audit: initializing netlink subsys (disabled) Sep 6 00:20:43.191500 kernel: audit: type=2000 audit(1757118042.387:1): state=initialized audit_enabled=0 res=1 Sep 6 00:20:43.191519 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 00:20:43.191534 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 6 00:20:43.191644 kernel: cpuidle: using governor menu Sep 6 00:20:43.191652 kernel: ACPI: bus type PCI registered Sep 6 00:20:43.191659 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 00:20:43.191666 kernel: dca service started, version 1.12.1 Sep 6 00:20:43.191674 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 6 00:20:43.191681 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Sep 6 00:20:43.191688 kernel: PCI: Using configuration type 1 for base access Sep 6 00:20:43.191695 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 6 00:20:43.191705 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 00:20:43.191714 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 00:20:43.191721 kernel: ACPI: Added _OSI(Module Device) Sep 6 00:20:43.191728 kernel: ACPI: Added _OSI(Processor Device) Sep 6 00:20:43.191735 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 00:20:43.191742 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 00:20:43.191749 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 00:20:43.191756 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 00:20:43.191763 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 6 00:20:43.191770 kernel: ACPI: Interpreter enabled Sep 6 00:20:43.191781 kernel: ACPI: PM: (supports S0 S3 S5) Sep 6 00:20:43.191788 kernel: ACPI: Using IOAPIC for interrupt routing Sep 6 00:20:43.191795 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 6 00:20:43.191802 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 6 00:20:43.191809 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 6 00:20:43.192046 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 6 00:20:43.192858 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 6 00:20:43.192989 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 6 00:20:43.193011 kernel: PCI host bridge to bus 0000:00 Sep 6 00:20:43.193144 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 6 00:20:43.193220 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 6 00:20:43.193300 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 6 00:20:43.193378 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 6 00:20:43.193481 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 6 00:20:43.193601 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Sep 6 00:20:43.193704 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 6 00:20:43.193872 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 6 00:20:43.194026 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 6 00:20:43.195157 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 6 00:20:43.195262 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 6 00:20:43.195360 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 6 00:20:43.195463 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 6 00:20:43.195563 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 6 00:20:43.195744 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 6 00:20:43.195883 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 6 00:20:43.196061 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 6 00:20:43.196972 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 6 00:20:43.197128 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 6 00:20:43.197261 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 6 00:20:43.197380 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 6 00:20:43.197491 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 6 00:20:43.197633 kernel: pci 0000:00:04.0: [1af4:1000] type 00 
class 0x020000 Sep 6 00:20:43.197714 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 6 00:20:43.197812 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 6 00:20:43.197895 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 6 00:20:43.198039 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 6 00:20:43.198165 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 6 00:20:43.198254 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 6 00:20:43.198400 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 6 00:20:43.198547 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 6 00:20:43.198674 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 6 00:20:43.199475 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 6 00:20:43.199642 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 6 00:20:43.199658 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 6 00:20:43.199668 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 6 00:20:43.199677 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 6 00:20:43.199684 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 6 00:20:43.199691 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 6 00:20:43.199698 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 6 00:20:43.199705 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 6 00:20:43.199720 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 6 00:20:43.199727 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 6 00:20:43.199735 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 6 00:20:43.199742 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 6 00:20:43.199749 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 6 00:20:43.199756 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 6 00:20:43.199763 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 6 00:20:43.199771 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 6 00:20:43.199779 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 6 00:20:43.199789 kernel: iommu: Default domain type: Translated Sep 6 00:20:43.199797 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 6 00:20:43.199885 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 6 00:20:43.200698 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 6 00:20:43.200822 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 6 00:20:43.200838 kernel: vgaarb: loaded Sep 6 00:20:43.200852 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 00:20:43.200863 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 00:20:43.200878 kernel: PTP clock support registered Sep 6 00:20:43.200889 kernel: Registered efivars operations Sep 6 00:20:43.200899 kernel: PCI: Using ACPI for IRQ routing Sep 6 00:20:43.201908 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 6 00:20:43.201921 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 6 00:20:43.201931 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 6 00:20:43.201939 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Sep 6 00:20:43.201947 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Sep 6 00:20:43.201954 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 6 00:20:43.201966 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 6 00:20:43.201973 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 6 00:20:43.201981 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 6 00:20:43.201988 kernel: clocksource: Switched to clocksource kvm-clock Sep 6 00:20:43.201995 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:20:43.202003 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:20:43.202010 kernel: pnp: PnP ACPI init Sep 6 00:20:43.202671 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 6 00:20:43.202690 kernel: pnp: PnP ACPI: found 6 devices Sep 6 00:20:43.202703 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 6 00:20:43.202712 kernel: NET: Registered PF_INET protocol family Sep 6 00:20:43.202736 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 6 00:20:43.202756 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 6 00:20:43.202779 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:20:43.202809 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 00:20:43.203624 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 6 00:20:43.203643 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 6 00:20:43.203655 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:20:43.203665 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:20:43.203672 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 00:20:43.203679 kernel: NET: Registered PF_XDP protocol family Sep 6 00:20:43.203785 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 6 00:20:43.203973 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 6 00:20:43.204114 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 6 00:20:43.204303 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 6 00:20:43.204475 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 6 00:20:43.204581 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 6 00:20:43.204744 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 6 00:20:43.204869 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Sep 6 00:20:43.204886 kernel: PCI: CLS 0 bytes, default 64 Sep 6 00:20:43.204896 kernel: Initialise system trusted keyrings Sep 6 00:20:43.204906 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 6 00:20:43.204916 kernel: Key type asymmetric registered Sep 
6 00:20:43.204925 kernel: Asymmetric key parser 'x509' registered Sep 6 00:20:43.204940 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 00:20:43.204950 kernel: io scheduler mq-deadline registered Sep 6 00:20:43.204997 kernel: io scheduler kyber registered Sep 6 00:20:43.205024 kernel: io scheduler bfq registered Sep 6 00:20:43.205041 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 6 00:20:43.205062 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 6 00:20:43.205077 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 6 00:20:43.205088 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 6 00:20:43.205098 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 00:20:43.205112 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 6 00:20:43.205123 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 6 00:20:43.205133 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 6 00:20:43.205143 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 6 00:20:43.205153 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 6 00:20:43.205352 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 6 00:20:43.205512 kernel: rtc_cmos 00:04: registered as rtc0 Sep 6 00:20:43.205719 kernel: rtc_cmos 00:04: setting system clock to 2025-09-06T00:20:42 UTC (1757118042) Sep 6 00:20:43.206482 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 6 00:20:43.206500 kernel: efifb: probing for efifb Sep 6 00:20:43.206512 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 6 00:20:43.206522 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 6 00:20:43.206532 kernel: efifb: scrolling: redraw Sep 6 00:20:43.206542 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 6 00:20:43.206553 kernel: Console: switching to colour frame buffer device 160x50 Sep 6 00:20:43.206563 kernel: fb0: EFI VGA frame buffer device Sep 6 00:20:43.206573 kernel: pstore: Registered efi as persistent store backend Sep 6 00:20:43.206605 kernel: NET: Registered PF_INET6 protocol family Sep 6 00:20:43.206616 kernel: Segment Routing with IPv6 Sep 6 00:20:43.206630 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 00:20:43.206642 kernel: NET: Registered PF_PACKET protocol family Sep 6 00:20:43.206652 kernel: Key type dns_resolver registered Sep 6 00:20:43.206664 kernel: IPI shorthand broadcast: enabled Sep 6 00:20:43.206676 kernel: sched_clock: Marking stable (538049022, 135941846)->(701276233, -27285365) Sep 6 00:20:43.206687 kernel: registered taskstats version 1 Sep 6 00:20:43.206698 kernel: Loading compiled-in X.509 certificates Sep 6 00:20:43.206709 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb' Sep 6 00:20:43.206719 kernel: Key type .fscrypt registered Sep 6 00:20:43.206729 kernel: Key type fscrypt-provisioning registered Sep 6 00:20:43.206740 kernel: pstore: Using crash dump compression: deflate Sep 6 00:20:43.206751 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 6 00:20:43.206765 kernel: ima: Allocated hash algorithm: sha1 Sep 6 00:20:43.206776 kernel: ima: No architecture policies found Sep 6 00:20:43.206786 kernel: clk: Disabling unused clocks Sep 6 00:20:43.206797 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 6 00:20:43.206807 kernel: Write protecting the kernel read-only data: 28672k Sep 6 00:20:43.206816 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 6 00:20:43.206826 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 6 00:20:43.206836 kernel: Run /init as init process Sep 6 00:20:43.206845 kernel: with arguments: Sep 6 00:20:43.206858 kernel: /init Sep 6 00:20:43.206869 kernel: with environment: Sep 6 00:20:43.206879 kernel: HOME=/ Sep 6 00:20:43.206889 kernel: TERM=linux Sep 6 00:20:43.206900 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 00:20:43.206914 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:20:43.206930 systemd[1]: Detected virtualization kvm. Sep 6 00:20:43.206942 systemd[1]: Detected architecture x86-64. Sep 6 00:20:43.206963 systemd[1]: Running in initrd. Sep 6 00:20:43.206975 systemd[1]: No hostname configured, using default hostname. Sep 6 00:20:43.206986 systemd[1]: Hostname set to . Sep 6 00:20:43.206999 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:20:43.207011 systemd[1]: Queued start job for default target initrd.target. Sep 6 00:20:43.207022 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:20:43.207034 systemd[1]: Reached target cryptsetup.target. Sep 6 00:20:43.207045 systemd[1]: Reached target paths.target. Sep 6 00:20:43.207059 systemd[1]: Reached target slices.target. Sep 6 00:20:43.207070 systemd[1]: Reached target swap.target. Sep 6 00:20:43.207081 systemd[1]: Reached target timers.target. Sep 6 00:20:43.207094 systemd[1]: Listening on iscsid.socket. Sep 6 00:20:43.207105 systemd[1]: Listening on iscsiuio.socket. Sep 6 00:20:43.207116 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:20:43.207127 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:20:43.207138 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:20:43.207152 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:20:43.207163 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:20:43.207174 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:20:43.207186 systemd[1]: Reached target sockets.target. Sep 6 00:20:43.207197 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:20:43.207208 systemd[1]: Finished network-cleanup.service. Sep 6 00:20:43.207219 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:20:43.207242 systemd[1]: Starting systemd-journald.service... Sep 6 00:20:43.207253 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:20:43.207268 systemd[1]: Starting systemd-resolved.service... Sep 6 00:20:43.207280 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:20:43.207291 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:20:43.207303 kernel: audit: type=1130 audit(1757118043.188:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:43.207315 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:20:43.207326 kernel: audit: type=1130 audit(1757118043.194:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.207337 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:20:43.207349 kernel: audit: type=1130 audit(1757118043.201:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.207369 systemd-journald[199]: Journal started Sep 6 00:20:43.207462 systemd-journald[199]: Runtime Journal (/run/log/journal/85e6255b3dd04f07ad818711cfd7f065) is 6.0M, max 48.4M, 42.4M free. Sep 6 00:20:43.207519 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:20:43.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.183920 systemd-modules-load[200]: Inserted module 'overlay' Sep 6 00:20:43.210769 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:20:43.212608 systemd[1]: Started systemd-journald.service. Sep 6 00:20:43.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.216631 kernel: audit: type=1130 audit(1757118043.212:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.223758 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:20:43.228623 kernel: audit: type=1130 audit(1757118043.223:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.234315 systemd-resolved[201]: Positive Trust Anchors: Sep 6 00:20:43.234664 systemd-resolved[201]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:20:43.234884 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:20:43.235852 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:20:43.241472 kernel: audit: type=1130 audit(1757118043.236:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.237424 systemd-resolved[201]: Defaulting to hostname 'linux'. Sep 6 00:20:43.240507 systemd[1]: Starting dracut-cmdline.service... Sep 6 00:20:43.251807 kernel: audit: type=1130 audit(1757118043.243:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.251843 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 00:20:43.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.242399 systemd[1]: Started systemd-resolved.service. Sep 6 00:20:43.252993 dracut-cmdline[216]: dracut-dracut-053 Sep 6 00:20:43.244433 systemd[1]: Reached target nss-lookup.target. Sep 6 00:20:43.255143 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:20:43.265886 systemd-modules-load[200]: Inserted module 'br_netfilter' Sep 6 00:20:43.267013 kernel: Bridge firewalling registered Sep 6 00:20:43.289629 kernel: SCSI subsystem initialized Sep 6 00:20:43.305072 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 00:20:43.305147 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:20:43.306637 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:20:43.310532 systemd-modules-load[200]: Inserted module 'dm_multipath' Sep 6 00:20:43.312498 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:20:43.317892 kernel: audit: type=1130 audit(1757118043.312:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:43.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.313418 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:20:43.326620 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:20:43.328500 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:20:43.333892 kernel: audit: type=1130 audit(1757118043.329:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.347638 kernel: iscsi: registered transport (tcp) Sep 6 00:20:43.374654 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:20:43.374756 kernel: QLogic iSCSI HBA Driver Sep 6 00:20:43.416440 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:20:43.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.418538 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:20:43.475641 kernel: raid6: avx2x4 gen() 20529 MB/s Sep 6 00:20:43.492639 kernel: raid6: avx2x4 xor() 6294 MB/s Sep 6 00:20:43.509635 kernel: raid6: avx2x2 gen() 31602 MB/s Sep 6 00:20:43.526652 kernel: raid6: avx2x2 xor() 15548 MB/s Sep 6 00:20:43.543639 kernel: raid6: avx2x1 gen() 20154 MB/s Sep 6 00:20:43.560645 kernel: raid6: avx2x1 xor() 11311 MB/s Sep 6 00:20:43.577646 kernel: raid6: sse2x4 gen() 13596 MB/s Sep 6 00:20:43.594696 kernel: raid6: sse2x4 xor() 6186 MB/s Sep 6 00:20:43.611674 kernel: raid6: sse2x2 gen() 12306 MB/s Sep 6 00:20:43.628672 kernel: raid6: sse2x2 xor() 8226 MB/s Sep 6 00:20:43.645670 kernel: raid6: sse2x1 gen() 9008 MB/s Sep 6 00:20:43.663351 kernel: raid6: sse2x1 xor() 5271 MB/s Sep 6 00:20:43.663445 kernel: raid6: using algorithm avx2x2 gen() 31602 MB/s Sep 6 00:20:43.663460 kernel: raid6: .... xor() 15548 MB/s, rmw enabled Sep 6 00:20:43.664126 kernel: raid6: using avx2x2 recovery algorithm Sep 6 00:20:43.680657 kernel: xor: automatically using best checksumming function avx Sep 6 00:20:43.794663 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 00:20:43.805309 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:20:43.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.806000 audit: BPF prog-id=7 op=LOAD Sep 6 00:20:43.806000 audit: BPF prog-id=8 op=LOAD Sep 6 00:20:43.807807 systemd[1]: Starting systemd-udevd.service... Sep 6 00:20:43.825576 systemd-udevd[401]: Using default interface naming scheme 'v252'. Sep 6 00:20:43.831138 systemd[1]: Started systemd-udevd.service. Sep 6 00:20:43.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.833955 systemd[1]: Starting dracut-pre-trigger.service... 
Sep 6 00:20:43.849612 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Sep 6 00:20:43.882496 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:20:43.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.884443 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:20:43.932198 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:20:43.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:43.976615 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:20:43.980699 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 6 00:20:44.008383 kernel: AVX2 version of gcm_enc/dec engaged. Sep 6 00:20:44.008400 kernel: AES CTR mode by8 optimization enabled Sep 6 00:20:44.008409 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:20:44.008418 kernel: GPT:9289727 != 19775487 Sep 6 00:20:44.008436 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:20:44.008445 kernel: GPT:9289727 != 19775487 Sep 6 00:20:44.008453 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:20:44.008462 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:20:44.013622 kernel: libata version 3.00 loaded. Sep 6 00:20:44.025349 kernel: ahci 0000:00:1f.2: version 3.0 Sep 6 00:20:44.045848 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 6 00:20:44.045873 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 6 00:20:44.046024 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 6 00:20:44.046167 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (447) Sep 6 00:20:44.046183 kernel: scsi host0: ahci Sep 6 00:20:44.046365 kernel: scsi host1: ahci Sep 6 00:20:44.046503 kernel: scsi host2: ahci Sep 6 00:20:44.046662 kernel: scsi host3: ahci Sep 6 00:20:44.046795 kernel: scsi host4: ahci Sep 6 00:20:44.046948 kernel: scsi host5: ahci Sep 6 00:20:44.047084 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 6 00:20:44.047099 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 6 00:20:44.047112 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 6 00:20:44.047125 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 6 00:20:44.047137 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 6 00:20:44.047149 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 6 00:20:44.038535 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:20:44.049555 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:20:44.054895 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:20:44.056044 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:20:44.066834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:20:44.069172 systemd[1]: Starting disk-uuid.service... Sep 6 00:20:44.078174 disk-uuid[527]: Primary Header is updated. Sep 6 00:20:44.078174 disk-uuid[527]: Secondary Entries is updated. 
Sep 6 00:20:44.078174 disk-uuid[527]: Secondary Header is updated. Sep 6 00:20:44.082927 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:20:44.087632 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:20:44.356124 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 6 00:20:44.356223 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 6 00:20:44.357389 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 6 00:20:44.357493 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 6 00:20:44.358632 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 6 00:20:44.359627 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 6 00:20:44.360618 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 6 00:20:44.362346 kernel: ata3.00: applying bridge limits Sep 6 00:20:44.363166 kernel: ata3.00: configured for UDMA/100 Sep 6 00:20:44.363626 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 6 00:20:44.401309 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 6 00:20:44.418806 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 6 00:20:44.418829 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 6 00:20:45.088623 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:20:45.088697 disk-uuid[528]: The operation has completed successfully. Sep 6 00:20:45.128131 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:20:45.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.128284 systemd[1]: Finished disk-uuid.service. Sep 6 00:20:45.134505 systemd[1]: Starting verity-setup.service... Sep 6 00:20:45.161620 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 6 00:20:45.192848 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:20:45.196428 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:20:45.199613 systemd[1]: Finished verity-setup.service. Sep 6 00:20:45.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.288622 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:20:45.289376 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:20:45.290521 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:20:45.291535 systemd[1]: Starting ignition-setup.service... Sep 6 00:20:45.294722 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:20:45.302676 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:20:45.302746 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:20:45.302760 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:20:45.314011 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:20:45.323929 systemd[1]: Finished ignition-setup.service. Sep 6 00:20:45.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:45.326125 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:20:45.394948 ignition[640]: Ignition 2.14.0 Sep 6 00:20:45.394962 ignition[640]: Stage: fetch-offline Sep 6 00:20:45.395033 ignition[640]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:45.395045 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:45.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.398547 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:20:45.402000 audit: BPF prog-id=9 op=LOAD Sep 6 00:20:45.395201 ignition[640]: parsed url from cmdline: "" Sep 6 00:20:45.395206 ignition[640]: no config URL provided Sep 6 00:20:45.395213 ignition[640]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:20:45.403427 systemd[1]: Starting systemd-networkd.service... Sep 6 00:20:45.395224 ignition[640]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:20:45.395250 ignition[640]: op(1): [started] loading QEMU firmware config module Sep 6 00:20:45.395256 ignition[640]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 6 00:20:45.400553 ignition[640]: op(1): [finished] loading QEMU firmware config module Sep 6 00:20:45.453259 ignition[640]: parsing config with SHA512: f7c02259cf0ff7ffda63123ab927d3894bf4c89af580d0a41833cb2ebd096b98cefd64c42e9871b777627c15eebf620808c46d5fb4d11c5e931fab3daeedb5d7 Sep 6 00:20:45.462059 unknown[640]: fetched base config from "system" Sep 6 00:20:45.462076 unknown[640]: fetched user config from "qemu" Sep 6 00:20:45.462809 ignition[640]: fetch-offline: fetch-offline passed Sep 6 00:20:45.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.464650 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:20:45.462874 ignition[640]: Ignition finished successfully Sep 6 00:20:45.486848 systemd-networkd[721]: lo: Link UP Sep 6 00:20:45.486863 systemd-networkd[721]: lo: Gained carrier Sep 6 00:20:45.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.487520 systemd-networkd[721]: Enumeration completed Sep 6 00:20:45.487668 systemd[1]: Started systemd-networkd.service. Sep 6 00:20:45.487849 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:20:45.489667 systemd[1]: Reached target network.target. Sep 6 00:20:45.490363 systemd-networkd[721]: eth0: Link UP Sep 6 00:20:45.490368 systemd-networkd[721]: eth0: Gained carrier Sep 6 00:20:45.491718 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 6 00:20:45.492687 systemd[1]: Starting ignition-kargs.service... Sep 6 00:20:45.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.494708 systemd[1]: Starting iscsiuio.service... Sep 6 00:20:45.501194 systemd[1]: Started iscsiuio.service. 
Sep 6 00:20:45.506010 ignition[723]: Ignition 2.14.0 Sep 6 00:20:45.504902 systemd[1]: Starting iscsid.service... Sep 6 00:20:45.506018 ignition[723]: Stage: kargs Sep 6 00:20:45.510391 iscsid[733]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:20:45.510391 iscsid[733]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 6 00:20:45.510391 iscsid[733]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:20:45.510391 iscsid[733]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:20:45.510391 iscsid[733]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:20:45.510391 iscsid[733]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:20:45.510391 iscsid[733]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:20:45.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.508075 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:20:45.506147 ignition[723]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:45.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.510783 systemd[1]: Finished ignition-kargs.service. Sep 6 00:20:45.506160 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:45.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.514198 systemd[1]: Started iscsid.service. Sep 6 00:20:45.507752 ignition[723]: kargs: kargs passed Sep 6 00:20:45.520035 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:20:45.507814 ignition[723]: Ignition finished successfully Sep 6 00:20:45.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.526383 systemd[1]: Starting ignition-disks.service... Sep 6 00:20:45.535658 ignition[736]: Ignition 2.14.0 Sep 6 00:20:45.534276 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:20:45.535666 ignition[736]: Stage: disks Sep 6 00:20:45.535519 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:20:45.535796 ignition[736]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:45.537466 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:20:45.535807 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:45.538553 systemd[1]: Reached target remote-fs.target.
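The iscsid warning above spells out the expected shape of an initiator name, iqn.yyyy-mm.<reversed domain name>[:identifier]. The sketch below builds such a line in Python; the domain, identifier, and date are placeholders chosen to reproduce the example given in the warning, not values taken from this system.

```python
# Minimal sketch: build an InitiatorName line of the shape the iscsid warning
# above asks for, InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
# The inputs below are placeholders, not values read from this machine.
from datetime import date

def initiator_name(domain, identifier="", when=None):
    when = when or date.today()
    reversed_domain = ".".join(reversed(domain.split(".")))
    name = "iqn.{:%Y-%m}.{}".format(when, reversed_domain)
    if identifier:
        name += ":" + identifier
    return "InitiatorName=" + name

if __name__ == "__main__":
    # Mirrors the example in the warning: InitiatorName=iqn.2001-04.com.redhat:fc6
    print(initiator_name("redhat.com", "fc6", date(2001, 4, 1)))
```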
Sep 6 00:20:45.537199 ignition[736]: disks: disks passed Sep 6 00:20:45.539668 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:20:45.537250 ignition[736]: Ignition finished successfully Sep 6 00:20:45.541621 systemd[1]: Finished ignition-disks.service. Sep 6 00:20:45.543893 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:20:45.545047 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:20:45.572508 systemd-fsck[755]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 6 00:20:45.546868 systemd[1]: Reached target local-fs.target. Sep 6 00:20:45.546952 systemd[1]: Reached target sysinit.target. Sep 6 00:20:45.547365 systemd[1]: Reached target basic.target. Sep 6 00:20:45.550571 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:20:45.553800 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:20:45.579496 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:20:45.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.582382 systemd[1]: Mounting sysroot.mount... Sep 6 00:20:45.591641 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:20:45.592061 systemd[1]: Mounted sysroot.mount. Sep 6 00:20:45.593568 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:20:45.596199 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:20:45.597853 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:20:45.597891 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:20:45.597912 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:20:45.603124 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:20:45.605277 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:20:45.611137 initrd-setup-root[765]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:20:45.616485 initrd-setup-root[773]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:20:45.620731 initrd-setup-root[781]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:20:45.624777 initrd-setup-root[789]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:20:45.657693 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:20:45.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.659741 systemd[1]: Starting ignition-mount.service... Sep 6 00:20:45.662421 systemd[1]: Starting sysroot-boot.service... Sep 6 00:20:45.670270 bash[806]: umount: /sysroot/usr/share/oem: not mounted. Sep 6 00:20:45.680319 ignition[808]: INFO : Ignition 2.14.0 Sep 6 00:20:45.681574 ignition[808]: INFO : Stage: mount Sep 6 00:20:45.681574 ignition[808]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:45.681574 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:45.685263 ignition[808]: INFO : mount: mount passed Sep 6 00:20:45.685263 ignition[808]: INFO : Ignition finished successfully Sep 6 00:20:45.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:45.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:45.684400 systemd[1]: Finished ignition-mount.service. Sep 6 00:20:45.686488 systemd[1]: Finished sysroot-boot.service. Sep 6 00:20:46.208685 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:20:46.218473 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Sep 6 00:20:46.221210 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:20:46.221248 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:20:46.221264 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:20:46.227126 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:20:46.229850 systemd[1]: Starting ignition-files.service... Sep 6 00:20:46.247281 ignition[836]: INFO : Ignition 2.14.0 Sep 6 00:20:46.247281 ignition[836]: INFO : Stage: files Sep 6 00:20:46.249354 ignition[836]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:46.249354 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:46.249354 ignition[836]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:20:46.253786 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:20:46.253786 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:20:46.258422 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:20:46.260018 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:20:46.261654 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:20:46.261167 unknown[836]: wrote ssh authorized keys file for user: core Sep 6 00:20:46.264653 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:20:46.264653 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:20:46.264653 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 00:20:46.264653 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 6 00:20:46.355505 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 00:20:46.502052 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 00:20:46.504340 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:20:46.506368 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:20:46.508324 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:20:46.511738 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:20:46.513695 ignition[836]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:20:46.515769 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:20:46.517800 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:20:46.519804 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:20:46.522057 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:20:46.524135 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:20:46.526179 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:20:46.529018 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:20:46.531712 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:20:46.533708 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 6 00:20:46.811819 systemd-networkd[721]: eth0: Gained IPv6LL Sep 6 00:20:46.901173 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 6 00:20:47.627692 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:20:47.627692 ignition[836]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(10): op(11): [finished] 
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Sep 6 00:20:47.632614 ignition[836]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:20:47.686999 ignition[836]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:20:47.688967 ignition[836]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Sep 6 00:20:47.688967 ignition[836]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:20:47.688967 ignition[836]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:20:47.688967 ignition[836]: INFO : files: files passed Sep 6 00:20:47.688967 ignition[836]: INFO : Ignition finished successfully Sep 6 00:20:47.713919 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 6 00:20:47.713951 kernel: audit: type=1130 audit(1757118047.689:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.713963 kernel: audit: type=1130 audit(1757118047.702:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.713974 kernel: audit: type=1130 audit(1757118047.706:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.713983 kernel: audit: type=1131 audit(1757118047.706:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.688915 systemd[1]: Finished ignition-files.service. Sep 6 00:20:47.690980 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Sep 6 00:20:47.696785 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:20:47.718833 initrd-setup-root-after-ignition[859]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 6 00:20:47.697439 systemd[1]: Starting ignition-quench.service... Sep 6 00:20:47.721264 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:20:47.699840 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:20:47.702378 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:20:47.702462 systemd[1]: Finished ignition-quench.service. Sep 6 00:20:47.706709 systemd[1]: Reached target ignition-complete.target. Sep 6 00:20:47.714695 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:20:47.728374 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:20:47.728462 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:20:47.737277 kernel: audit: type=1130 audit(1757118047.730:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.737975 kernel: audit: type=1131 audit(1757118047.730:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.730279 systemd[1]: Reached target initrd-fs.target. Sep 6 00:20:47.737272 systemd[1]: Reached target initrd.target. Sep 6 00:20:47.738043 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:20:47.738835 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:20:47.750040 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:20:47.755198 kernel: audit: type=1130 audit(1757118047.750:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.751715 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:20:47.759714 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:20:47.760569 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:20:47.762152 systemd[1]: Stopped target timers.target. Sep 6 00:20:47.763719 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:20:47.769809 kernel: audit: type=1131 audit(1757118047.765:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:47.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.763812 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:20:47.796498 kernel: audit: type=1131 audit(1757118047.771:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.796530 kernel: audit: type=1131 audit(1757118047.774:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.765307 systemd[1]: Stopped target initrd.target. Sep 6 00:20:47.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.798663 iscsid[733]: iscsid shutting down. Sep 6 00:20:47.769865 systemd[1]: Stopped target basic.target. Sep 6 00:20:47.801707 ignition[876]: INFO : Ignition 2.14.0 Sep 6 00:20:47.801707 ignition[876]: INFO : Stage: umount Sep 6 00:20:47.801707 ignition[876]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:47.801707 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:47.801707 ignition[876]: INFO : umount: umount passed Sep 6 00:20:47.801707 ignition[876]: INFO : Ignition finished successfully Sep 6 00:20:47.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:20:47.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.769978 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:20:47.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.770152 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:20:47.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.770308 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:20:47.770468 systemd[1]: Stopped target remote-fs.target. Sep 6 00:20:47.770666 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:20:47.770799 systemd[1]: Stopped target sysinit.target. Sep 6 00:20:47.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.770952 systemd[1]: Stopped target local-fs.target. Sep 6 00:20:47.771108 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:20:47.771276 systemd[1]: Stopped target swap.target. Sep 6 00:20:47.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.771416 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:20:47.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.771515 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:20:47.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.771688 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:20:47.774936 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:20:47.775025 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:20:47.775301 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:20:47.775400 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:20:47.778639 systemd[1]: Stopped target paths.target. Sep 6 00:20:47.778701 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 6 00:20:47.785658 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:20:47.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.786100 systemd[1]: Stopped target slices.target. Sep 6 00:20:47.786271 systemd[1]: Stopped target sockets.target. Sep 6 00:20:47.786473 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:20:47.786647 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:20:47.786888 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:20:47.787010 systemd[1]: Stopped ignition-files.service. Sep 6 00:20:47.788368 systemd[1]: Stopping ignition-mount.service... Sep 6 00:20:47.789097 systemd[1]: Stopping iscsid.service... Sep 6 00:20:47.789246 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:20:47.789423 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:20:47.790893 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:20:47.791214 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:20:47.791419 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:20:47.793707 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:20:47.793852 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:20:47.796222 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:20:47.796344 systemd[1]: Stopped iscsid.service. Sep 6 00:20:47.797559 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:20:47.797674 systemd[1]: Closed iscsid.socket. Sep 6 00:20:47.799444 systemd[1]: Stopping iscsiuio.service... Sep 6 00:20:47.801031 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:20:47.801118 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:20:47.802756 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:20:47.802824 systemd[1]: Stopped ignition-mount.service. Sep 6 00:20:47.803536 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:20:47.803632 systemd[1]: Stopped iscsiuio.service. Sep 6 00:20:47.805534 systemd[1]: Stopped target network.target. Sep 6 00:20:47.806400 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:20:47.806429 systemd[1]: Closed iscsiuio.socket. Sep 6 00:20:47.808183 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:20:47.808220 systemd[1]: Stopped ignition-disks.service. Sep 6 00:20:47.809554 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:20:47.809597 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:20:47.811802 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:20:47.811835 systemd[1]: Stopped ignition-setup.service. Sep 6 00:20:47.813430 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:20:47.815050 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:20:47.816658 systemd-networkd[721]: eth0: DHCPv6 lease lost Sep 6 00:20:47.871000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:20:47.817619 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:20:47.817694 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:20:47.820215 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:20:47.820242 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:20:47.821715 systemd[1]: Stopping network-cleanup.service... 
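The teardown of the initrd units is mirrored by the audit SERVICE_START/SERVICE_STOP records interleaved above. As a rough aid for reading a saved copy of this console log, the sketch below extracts just the event type and unit name from lines in that format; the log file path is supplied by the caller.

```python
# Minimal sketch: pull systemd unit start/stop audit records out of a saved
# copy of this console log. The regular expression targets only the
# SERVICE_START / SERVICE_STOP lines in the format shown above.
import re
import sys

AUDIT_RE = re.compile(
    r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?msg='unit=([\w@\\.-]+)"
)

def service_events(log_path):
    with open(log_path, errors="replace") as f:
        for line in f:
            # findall handles several audit records packed onto one long line
            for event, unit in AUDIT_RE.findall(line):
                yield event, unit

if __name__ == "__main__":
    for event, unit in service_events(sys.argv[1]):
        print(f"{event:13s} {unit}")
```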
Sep 6 00:20:47.822576 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:20:47.822650 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:20:47.823565 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:20:47.823624 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:20:47.826017 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:20:47.826051 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:20:47.829098 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:20:47.831957 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:20:47.832456 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:20:47.832565 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:20:47.861556 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:20:47.872519 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:20:47.873364 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:20:47.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.888000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:20:47.889415 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:20:47.890607 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:20:47.892397 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:20:47.892433 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:20:47.894985 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:20:47.895035 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:20:47.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.897533 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:20:47.897567 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:20:47.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.899401 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:20:47.900146 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:20:47.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.903550 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:20:47.905253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:20:47.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.905298 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:20:47.908442 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:20:47.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:47.909184 systemd[1]: Stopped network-cleanup.service. Sep 6 00:20:47.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.910755 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:20:47.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.910852 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:20:47.912792 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:20:47.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:47.912902 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:20:47.914925 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:20:47.916865 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:20:47.916919 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:20:47.919830 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:20:47.925451 systemd[1]: Switching root. Sep 6 00:20:47.927000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:20:47.927000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:20:47.927000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:20:47.928000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:20:47.928000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:20:47.944904 systemd-journald[199]: Journal stopped Sep 6 00:20:52.113782 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). Sep 6 00:20:52.113856 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:20:52.113877 kernel: SELinux: Class anon_inode not defined in policy. Sep 6 00:20:52.113887 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:20:52.113898 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:20:52.113908 kernel: SELinux: policy capability open_perms=1 Sep 6 00:20:52.113918 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:20:52.113928 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:20:52.113937 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:20:52.113947 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:20:52.113966 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:20:52.113975 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:20:52.113985 systemd[1]: Successfully loaded SELinux policy in 38.925ms. Sep 6 00:20:52.114016 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.885ms. Sep 6 00:20:52.114029 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:20:52.114041 systemd[1]: Detected virtualization kvm. 
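The systemd version banner above lists the build's compile-time features as +/- tokens (for example, -BPF_FRAMEWORK is consistent with the later note that BPF/cgroup firewalling is unavailable). A small sketch for splitting such a banner into enabled and disabled feature sets follows; the example string is abbreviated from the banner above.

```python
# Minimal sketch: split a systemd version banner's compile-time feature string
# (as printed above) into features built in ("+") and compiled out ("-").
# Tokens without a +/- prefix, such as default-hierarchy=unified, are ignored.
def split_features(banner: str):
    enabled, disabled = set(), set()
    for token in banner.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

if __name__ == "__main__":
    example = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK -BPF_FRAMEWORK"
    on, off = split_features(example)
    print("enabled:", sorted(on))
    print("disabled:", sorted(off))
```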
Sep 6 00:20:52.114054 systemd[1]: Detected architecture x86-64. Sep 6 00:20:52.114066 systemd[1]: Detected first boot. Sep 6 00:20:52.114076 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:20:52.114087 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:20:52.114101 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:20:52.114112 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:20:52.114124 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:20:52.114136 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:20:52.114148 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:20:52.114160 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 6 00:20:52.114171 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:20:52.114182 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:20:52.114193 systemd[1]: Created slice system-getty.slice. Sep 6 00:20:52.114204 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:20:52.114214 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:20:52.114233 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:20:52.114244 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:20:52.114254 systemd[1]: Created slice user.slice. Sep 6 00:20:52.114266 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:20:52.114277 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:20:52.114288 systemd[1]: Set up automount boot.automount. Sep 6 00:20:52.114298 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:20:52.114308 systemd[1]: Reached target integritysetup.target. Sep 6 00:20:52.114319 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:20:52.114330 systemd[1]: Reached target remote-fs.target. Sep 6 00:20:52.114340 systemd[1]: Reached target slices.target. Sep 6 00:20:52.114352 systemd[1]: Reached target swap.target. Sep 6 00:20:52.114362 systemd[1]: Reached target torcx.target. Sep 6 00:20:52.114373 systemd[1]: Reached target veritysetup.target. Sep 6 00:20:52.114383 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:20:52.114396 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:20:52.114407 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:20:52.114417 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:20:52.114427 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:20:52.114437 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:20:52.114447 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:20:52.114469 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:20:52.114483 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:20:52.114498 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:20:52.114513 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:20:52.114527 systemd[1]: Mounting media.mount... Sep 6 00:20:52.114540 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 6 00:20:52.114553 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:20:52.114563 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:20:52.114573 systemd[1]: Mounting tmp.mount... Sep 6 00:20:52.114598 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:20:52.114610 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:52.114620 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:20:52.114630 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:20:52.114652 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:52.114662 systemd[1]: Starting modprobe@drm.service... Sep 6 00:20:52.114673 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:52.114683 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:20:52.114708 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:52.114728 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:20:52.114750 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 6 00:20:52.114783 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 6 00:20:52.114818 systemd[1]: Starting systemd-journald.service... Sep 6 00:20:52.114830 kernel: loop: module loaded Sep 6 00:20:52.114840 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:20:52.114871 kernel: fuse: init (API version 7.34) Sep 6 00:20:52.114904 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:20:52.114934 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:20:52.114963 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:20:52.114975 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:52.114985 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:20:52.114995 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:20:52.115015 systemd[1]: Mounted media.mount. Sep 6 00:20:52.115032 systemd-journald[1017]: Journal started Sep 6 00:20:52.115080 systemd-journald[1017]: Runtime Journal (/run/log/journal/85e6255b3dd04f07ad818711cfd7f065) is 6.0M, max 48.4M, 42.4M free. Sep 6 00:20:52.111000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:20:52.111000 audit[1017]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffefa7b8660 a2=4000 a3=7ffefa7b86fc items=0 ppid=1 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:52.111000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:20:52.119170 systemd[1]: Started systemd-journald.service. Sep 6 00:20:52.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.118872 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:20:52.119835 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:20:52.120794 systemd[1]: Mounted tmp.mount. Sep 6 00:20:52.122303 systemd[1]: Finished kmod-static-nodes.service. 
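journald reports its journal sizes above in human-readable form ("Runtime Journal (...) is 6.0M, max 48.4M, 42.4M free"). The sketch below converts those size tokens to bytes, assuming the K/M/G suffixes denote the usual 1024-based multiples; it operates on a copied line of text, not on the running journal.

```python
# Minimal sketch: convert the human-readable sizes journald prints for its
# runtime/system journals into bytes, assuming 1024-based K/M/G multiples.
import re

_SUFFIX = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}

def parse_size(text: str) -> int:
    m = re.fullmatch(r"([\d.]+)([KMG])", text.strip())
    if not m:
        raise ValueError(f"unrecognised size: {text!r}")
    return int(float(m.group(1)) * _SUFFIX[m.group(2)])

if __name__ == "__main__":
    line = "Runtime Journal (...) is 6.0M, max 48.4M, 42.4M free."
    for token in re.findall(r"[\d.]+[KMG]", line):
        print(token, "=", parse_size(token), "bytes")
```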
Sep 6 00:20:52.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.123479 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:20:52.123789 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:20:52.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.125051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:52.125283 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:52.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.126502 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:20:52.126711 systemd[1]: Finished modprobe@drm.service. Sep 6 00:20:52.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.128110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:52.128326 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:52.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.130010 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:20:52.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.131236 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:20:52.131514 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:20:52.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:52.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.133150 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:52.133383 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:52.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.134718 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:20:52.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.136032 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:20:52.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.137833 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:20:52.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.139126 systemd[1]: Reached target network-pre.target. Sep 6 00:20:52.141276 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:20:52.143204 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:20:52.144195 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:20:52.145622 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:20:52.148010 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:20:52.149088 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:52.150821 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:20:52.151835 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:52.153162 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:20:52.173266 systemd-journald[1017]: Time spent on flushing to /var/log/journal/85e6255b3dd04f07ad818711cfd7f065 is 16.711ms for 1100 entries. Sep 6 00:20:52.173266 systemd-journald[1017]: System Journal (/var/log/journal/85e6255b3dd04f07ad818711cfd7f065) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:20:52.211144 systemd-journald[1017]: Received client request to flush runtime journal. Sep 6 00:20:52.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:52.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.172335 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:20:52.177867 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:20:52.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.179012 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:20:52.213416 udevadm[1058]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 6 00:20:52.179938 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:20:52.182293 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:20:52.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.187496 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:20:52.188699 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:20:52.200816 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:20:52.212069 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:20:52.214231 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:20:52.216429 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:20:52.233937 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:20:52.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.916342 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:20:52.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.919859 systemd[1]: Starting systemd-udevd.service... Sep 6 00:20:52.921713 kernel: kauditd_printk_skb: 76 callbacks suppressed Sep 6 00:20:52.921803 kernel: audit: type=1130 audit(1757118052.916:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.940676 systemd-udevd[1069]: Using default interface naming scheme 'v252'. Sep 6 00:20:52.953389 systemd[1]: Started systemd-udevd.service. Sep 6 00:20:52.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.956158 systemd[1]: Starting systemd-networkd.service... 
Sep 6 00:20:52.959750 kernel: audit: type=1130 audit(1757118052.954:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.963365 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:20:52.996040 systemd[1]: Started systemd-userdbd.service. Sep 6 00:20:53.003041 kernel: audit: type=1130 audit(1757118052.997:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.015391 systemd[1]: Found device dev-ttyS0.device. Sep 6 00:20:53.020803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:20:53.038657 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:20:53.049405 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:20:53.052047 systemd-networkd[1075]: lo: Link UP Sep 6 00:20:53.052058 systemd-networkd[1075]: lo: Gained carrier Sep 6 00:20:53.052472 systemd-networkd[1075]: Enumeration completed Sep 6 00:20:53.052581 systemd-networkd[1075]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:20:53.052681 systemd[1]: Started systemd-networkd.service. Sep 6 00:20:53.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.054310 systemd-networkd[1075]: eth0: Link UP Sep 6 00:20:53.054318 systemd-networkd[1075]: eth0: Gained carrier Sep 6 00:20:53.057644 kernel: audit: type=1130 audit(1757118053.053:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:53.068721 systemd-networkd[1075]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:20:53.059000 audit[1070]: AVC avc: denied { confidentiality } for pid=1070 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:20:53.085664 kernel: audit: type=1400 audit(1757118053.059:117): avc: denied { confidentiality } for pid=1070 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:20:53.059000 audit[1070]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e5d5e368c0 a1=338ec a2=7f3f5dea5bc5 a3=5 items=110 ppid=1069 pid=1070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:53.059000 audit: CWD cwd="/" Sep 6 00:20:53.095362 kernel: audit: type=1300 audit(1757118053.059:117): arch=c000003e syscall=175 success=yes exit=0 a0=55e5d5e368c0 a1=338ec a2=7f3f5dea5bc5 a3=5 items=110 ppid=1069 pid=1070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:53.095414 kernel: audit: type=1307 audit(1757118053.059:117): cwd="/" Sep 6 00:20:53.098448 kernel: audit: type=1302 audit(1757118053.059:117): item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.104745 kernel: audit: type=1302 audit(1757118053.059:117): item=1 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.104800 kernel: audit: type=1302 audit(1757118053.059:117): item=2 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=1 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=2 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=3 name=(null) inode=14555 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=4 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=5 name=(null) inode=14556 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=6 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=7 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=8 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=9 name=(null) inode=14558 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=10 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=11 name=(null) inode=14559 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=12 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=13 name=(null) inode=14560 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=14 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=15 name=(null) inode=14561 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=16 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=17 name=(null) inode=14562 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=18 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=19 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=20 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=21 name=(null) inode=14564 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=22 name=(null) inode=14563 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=23 name=(null) inode=14565 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=24 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=25 name=(null) inode=14566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=26 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=27 name=(null) inode=14567 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=28 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=29 name=(null) inode=14568 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=30 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=31 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=32 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=33 name=(null) inode=14570 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=34 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=35 name=(null) inode=14571 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=36 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=37 name=(null) inode=14572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=38 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=39 name=(null) inode=14573 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=40 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=41 name=(null) inode=14574 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=42 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=43 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=44 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=45 name=(null) inode=14576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=46 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=47 name=(null) inode=14577 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=48 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=49 name=(null) inode=14578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=50 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=51 name=(null) inode=14579 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=52 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=53 name=(null) inode=14580 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 
audit: PATH item=55 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=56 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=57 name=(null) inode=14582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=58 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=59 name=(null) inode=14583 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=60 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=61 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=62 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=63 name=(null) inode=14585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=64 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=65 name=(null) inode=14586 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=66 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=67 name=(null) inode=14587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=68 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=69 name=(null) inode=14588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=70 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=71 name=(null) inode=14589 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=72 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=73 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=74 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=75 name=(null) inode=14591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=76 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=77 name=(null) inode=14592 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=78 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=79 name=(null) inode=14593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=80 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=81 name=(null) inode=14594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=82 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=83 name=(null) inode=14595 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=84 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=85 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=86 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=87 name=(null) inode=14597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=88 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=89 name=(null) inode=14598 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=90 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=91 name=(null) inode=14599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=92 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=93 name=(null) inode=14600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=94 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=95 name=(null) inode=14601 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=96 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=97 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=98 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=99 name=(null) inode=14603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=100 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=101 name=(null) inode=14604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=102 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=103 name=(null) inode=14605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=104 name=(null) inode=14602 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=105 name=(null) inode=14606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=106 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=107 name=(null) inode=14607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PATH item=109 name=(null) inode=16388 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:53.059000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:20:53.116197 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 6 00:20:53.116261 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 6 00:20:53.119834 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 6 00:20:53.120011 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 6 00:20:53.120170 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 6 00:20:53.123608 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:20:53.230727 kernel: kvm: Nested Virtualization enabled Sep 6 00:20:53.230829 kernel: SVM: kvm: Nested Paging enabled Sep 6 00:20:53.232093 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 6 00:20:53.232142 kernel: SVM: Virtual GIF supported Sep 6 00:20:53.249622 kernel: EDAC MC: Ver: 3.0.0 Sep 6 00:20:53.276042 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:20:53.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.278315 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:20:53.286600 lvm[1106]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:20:53.313536 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:20:53.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.314643 systemd[1]: Reached target cryptsetup.target. Sep 6 00:20:53.316662 systemd[1]: Starting lvm2-activation.service... Sep 6 00:20:53.320916 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:20:53.349050 systemd[1]: Finished lvm2-activation.service. Sep 6 00:20:53.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:53.350078 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:20:53.350994 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:20:53.351016 systemd[1]: Reached target local-fs.target. Sep 6 00:20:53.351901 systemd[1]: Reached target machines.target. Sep 6 00:20:53.353951 systemd[1]: Starting ldconfig.service... Sep 6 00:20:53.363208 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:53.363273 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:53.364514 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:20:53.366548 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:20:53.369396 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:20:53.371882 systemd[1]: Starting systemd-sysext.service... Sep 6 00:20:53.373441 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1111 (bootctl) Sep 6 00:20:53.374457 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:20:53.376202 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:20:53.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.386864 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:20:53.390290 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:20:53.390504 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:20:53.405618 kernel: loop0: detected capacity change from 0 to 221472 Sep 6 00:20:53.419614 systemd-fsck[1121]: fsck.fat 4.2 (2021-01-31) Sep 6 00:20:53.419614 systemd-fsck[1121]: /dev/vda1: 791 files, 120781/258078 clusters Sep 6 00:20:53.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.420964 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:20:53.424329 systemd[1]: Mounting boot.mount... Sep 6 00:20:53.430684 systemd[1]: Mounted boot.mount. Sep 6 00:20:53.746614 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:20:53.748128 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:20:53.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.752404 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:20:53.753430 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:20:53.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.764669 kernel: loop1: detected capacity change from 0 to 221472 Sep 6 00:20:53.771207 (sd-sysext)[1132]: Using extensions 'kubernetes'. 
Sep 6 00:20:53.771669 (sd-sysext)[1132]: Merged extensions into '/usr'. Sep 6 00:20:53.790069 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:53.791989 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:20:53.793228 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:53.794838 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:53.797469 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:53.800064 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:53.802559 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:53.802759 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:53.802898 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:53.806522 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:20:53.808194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:53.808378 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:53.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.810045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:53.810206 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:53.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.811752 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:53.811929 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:53.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.813673 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:53.813826 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:53.815497 systemd[1]: Finished systemd-sysext.service. 
Sep 6 00:20:53.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:53.818792 systemd[1]: Starting ensure-sysext.service... Sep 6 00:20:53.821295 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:20:53.823782 ldconfig[1110]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:20:53.828097 systemd[1]: Reloading. Sep 6 00:20:53.833700 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:20:53.834828 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:20:53.836369 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:20:53.886514 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-09-06T00:20:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:20:53.886548 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-09-06T00:20:53Z" level=info msg="torcx already run" Sep 6 00:20:53.973579 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:20:53.973620 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:20:54.000829 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:20:54.081049 systemd[1]: Finished ldconfig.service. Sep 6 00:20:54.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:54.084117 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 00:20:54.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:54.088469 systemd[1]: Starting audit-rules.service... Sep 6 00:20:54.090894 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:20:54.093698 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:20:54.096946 systemd[1]: Starting systemd-resolved.service... Sep 6 00:20:54.111351 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:20:54.115171 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:20:54.117301 systemd[1]: Finished clean-ca-certificates.service. Sep 6 00:20:54.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:54.123000 audit[1230]: SYSTEM_BOOT pid=1230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:20:54.129999 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:54.132766 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:54.135930 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:54.138000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:20:54.138000 audit[1239]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffea2eb7840 a2=420 a3=0 items=0 ppid=1216 pid=1239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:54.138000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:20:54.139639 augenrules[1239]: No rules Sep 6 00:20:54.138466 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:54.139521 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:54.139744 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:54.139929 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:20:54.144196 systemd[1]: Finished audit-rules.service. Sep 6 00:20:54.145906 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:20:54.147725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:54.147915 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:54.149984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:54.150151 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:54.151987 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:54.152162 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:54.155187 systemd[1]: Finished systemd-update-utmp.service. Sep 6 00:20:54.158052 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:54.160320 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:54.162995 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:54.165376 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:54.166480 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:54.166637 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:54.168895 systemd[1]: Starting systemd-update-done.service... Sep 6 00:20:54.170220 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:20:54.171367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:54.171538 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 6 00:20:54.173297 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:54.173453 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:54.174990 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:54.175318 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:54.176638 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:54.176728 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:54.180436 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:54.182926 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:54.186065 systemd[1]: Starting modprobe@drm.service... Sep 6 00:20:54.189924 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:54.202124 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:54.212383 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:54.212625 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:54.219326 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:20:54.221075 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:20:54.222974 systemd[1]: Finished systemd-update-done.service. Sep 6 00:20:54.225090 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:54.225262 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:54.226893 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:20:54.227097 systemd[1]: Finished modprobe@drm.service. Sep 6 00:20:54.228979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:54.229161 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:54.231027 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:54.231302 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:54.233016 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:54.234459 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:54.235006 systemd[1]: Finished ensure-sysext.service. Sep 6 00:20:54.239767 systemd-resolved[1222]: Positive Trust Anchors: Sep 6 00:20:54.239791 systemd-resolved[1222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:20:54.239828 systemd-resolved[1222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:20:54.249171 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:20:54.250787 systemd[1]: Reached target time-set.target. Sep 6 00:20:54.684653 systemd-timesyncd[1227]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Sep 6 00:20:54.684745 systemd-timesyncd[1227]: Initial clock synchronization to Sat 2025-09-06 00:20:54.684425 UTC. Sep 6 00:20:54.686005 systemd-resolved[1222]: Defaulting to hostname 'linux'. Sep 6 00:20:54.688060 systemd[1]: Started systemd-resolved.service. Sep 6 00:20:54.689197 systemd[1]: Reached target network.target. Sep 6 00:20:54.690122 systemd[1]: Reached target nss-lookup.target. Sep 6 00:20:54.691071 systemd[1]: Reached target sysinit.target. Sep 6 00:20:54.692070 systemd[1]: Started motdgen.path. Sep 6 00:20:54.693100 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:20:54.694667 systemd[1]: Started logrotate.timer. Sep 6 00:20:54.695688 systemd[1]: Started mdadm.timer. Sep 6 00:20:54.696577 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:20:54.697715 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:20:54.697822 systemd[1]: Reached target paths.target. Sep 6 00:20:54.698783 systemd[1]: Reached target timers.target. Sep 6 00:20:54.700291 systemd[1]: Listening on dbus.socket. Sep 6 00:20:54.702808 systemd[1]: Starting docker.socket... Sep 6 00:20:54.705415 systemd[1]: Listening on sshd.socket. Sep 6 00:20:54.706554 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:54.707010 systemd[1]: Listening on docker.socket. Sep 6 00:20:54.708184 systemd[1]: Reached target sockets.target. Sep 6 00:20:54.709107 systemd[1]: Reached target basic.target. Sep 6 00:20:54.710268 systemd[1]: System is tainted: cgroupsv1 Sep 6 00:20:54.710329 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:20:54.710356 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:20:54.711991 systemd[1]: Starting containerd.service... Sep 6 00:20:54.714529 systemd[1]: Starting dbus.service... Sep 6 00:20:54.716806 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:20:54.720921 systemd[1]: Starting extend-filesystems.service... Sep 6 00:20:54.722218 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:20:54.723312 jq[1280]: false Sep 6 00:20:54.723998 systemd[1]: Starting motdgen.service... Sep 6 00:20:54.726658 systemd[1]: Starting prepare-helm.service... Sep 6 00:20:54.732475 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:20:54.737752 extend-filesystems[1281]: Found loop1 Sep 6 00:20:54.737752 extend-filesystems[1281]: Found sr0 Sep 6 00:20:54.737752 extend-filesystems[1281]: Found vda Sep 6 00:20:54.737752 extend-filesystems[1281]: Found vda1 Sep 6 00:20:54.737752 extend-filesystems[1281]: Found vda2 Sep 6 00:20:54.737752 extend-filesystems[1281]: Found vda3 Sep 6 00:20:54.737752 extend-filesystems[1281]: Found usr Sep 6 00:20:54.737752 extend-filesystems[1281]: Found vda4 Sep 6 00:20:54.737752 extend-filesystems[1281]: Found vda6 Sep 6 00:20:54.737752 extend-filesystems[1281]: Found vda7 Sep 6 00:20:54.737752 extend-filesystems[1281]: Found vda9 Sep 6 00:20:54.737752 extend-filesystems[1281]: Checking size of /dev/vda9 Sep 6 00:20:54.735435 systemd[1]: Starting sshd-keygen.service... Sep 6 00:20:54.742043 systemd[1]: Starting systemd-logind.service... 
Sep 6 00:20:54.744236 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:54.744335 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:20:54.752304 systemd[1]: Starting update-engine.service... Sep 6 00:20:54.759247 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:20:54.767287 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:20:54.767719 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:20:54.769534 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:20:54.770194 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 00:20:54.827672 extend-filesystems[1281]: Resized partition /dev/vda9 Sep 6 00:20:54.829325 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:54.830203 tar[1309]: linux-amd64/helm Sep 6 00:20:54.829349 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:54.834570 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:20:54.834887 systemd[1]: Finished motdgen.service. Sep 6 00:20:54.838224 extend-filesystems[1313]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:20:54.848864 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 6 00:20:54.864655 dbus-daemon[1278]: [system] SELinux support is enabled Sep 6 00:20:54.864844 systemd[1]: Started dbus.service. Sep 6 00:20:54.866754 jq[1305]: true Sep 6 00:20:54.867496 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:20:54.872411 update_engine[1303]: I0906 00:20:54.869359 1303 main.cc:92] Flatcar Update Engine starting Sep 6 00:20:54.867522 systemd[1]: Reached target system-config.target. Sep 6 00:20:54.868771 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:20:54.868787 systemd[1]: Reached target user-config.target. Sep 6 00:20:54.869585 systemd-logind[1293]: Watching system buttons on /dev/input/event1 (Power Button) Sep 6 00:20:54.869608 systemd-logind[1293]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 00:20:54.870109 systemd-logind[1293]: New seat seat0. Sep 6 00:20:54.879614 jq[1322]: true Sep 6 00:20:54.881437 systemd[1]: Started update-engine.service. Sep 6 00:20:54.881674 update_engine[1303]: I0906 00:20:54.881639 1303 update_check_scheduler.cc:74] Next update check in 9m56s Sep 6 00:20:54.884761 systemd[1]: Started locksmithd.service. Sep 6 00:20:54.886675 systemd[1]: Started systemd-logind.service. Sep 6 00:20:54.891166 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 6 00:20:54.917737 extend-filesystems[1313]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 00:20:54.917737 extend-filesystems[1313]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 00:20:54.917737 extend-filesystems[1313]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 6 00:20:54.920377 extend-filesystems[1281]: Resized filesystem in /dev/vda9 Sep 6 00:20:54.919652 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Sep 6 00:20:54.920777 env[1312]: time="2025-09-06T00:20:54.918967673Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:20:54.919921 systemd[1]: Finished extend-filesystems.service. Sep 6 00:20:54.929541 bash[1339]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:20:54.931038 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:20:54.944755 locksmithd[1325]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:20:54.950703 env[1312]: time="2025-09-06T00:20:54.950644487Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:20:54.951026 env[1312]: time="2025-09-06T00:20:54.951007258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:54.952895 env[1312]: time="2025-09-06T00:20:54.952825226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:20:54.952895 env[1312]: time="2025-09-06T00:20:54.952887653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:54.953310 env[1312]: time="2025-09-06T00:20:54.953280139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:20:54.953310 env[1312]: time="2025-09-06T00:20:54.953307951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:54.953377 env[1312]: time="2025-09-06T00:20:54.953324993Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:20:54.953377 env[1312]: time="2025-09-06T00:20:54.953339010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:54.953457 env[1312]: time="2025-09-06T00:20:54.953434378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:54.953992 env[1312]: time="2025-09-06T00:20:54.953830131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:54.954240 env[1312]: time="2025-09-06T00:20:54.954205043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:20:54.954240 env[1312]: time="2025-09-06T00:20:54.954234819Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 6 00:20:54.954322 env[1312]: time="2025-09-06T00:20:54.954303228Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:20:54.954357 env[1312]: time="2025-09-06T00:20:54.954321983Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:20:54.961442 env[1312]: time="2025-09-06T00:20:54.961372062Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:20:54.961442 env[1312]: time="2025-09-06T00:20:54.961426834Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:20:54.961442 env[1312]: time="2025-09-06T00:20:54.961439809Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:20:54.961668 env[1312]: time="2025-09-06T00:20:54.961509249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:20:54.961668 env[1312]: time="2025-09-06T00:20:54.961524317Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:20:54.961668 env[1312]: time="2025-09-06T00:20:54.961536560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:20:54.961668 env[1312]: time="2025-09-06T00:20:54.961547340Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:20:54.961668 env[1312]: time="2025-09-06T00:20:54.961560595Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:20:54.961668 env[1312]: time="2025-09-06T00:20:54.961572367Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:20:54.961668 env[1312]: time="2025-09-06T00:20:54.961584991Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:20:54.961668 env[1312]: time="2025-09-06T00:20:54.961597013Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:20:54.961668 env[1312]: time="2025-09-06T00:20:54.961608204Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:20:54.961835 env[1312]: time="2025-09-06T00:20:54.961774416Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:20:54.961888 env[1312]: time="2025-09-06T00:20:54.961859445Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:20:54.962364 env[1312]: time="2025-09-06T00:20:54.962326541Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:20:54.962478 env[1312]: time="2025-09-06T00:20:54.962457467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:20:54.962572 env[1312]: time="2025-09-06T00:20:54.962553006Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:20:54.962714 env[1312]: time="2025-09-06T00:20:54.962693529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 6 00:20:54.962796 env[1312]: time="2025-09-06T00:20:54.962777427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:20:54.962877 env[1312]: time="2025-09-06T00:20:54.962857827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:20:54.962957 env[1312]: time="2025-09-06T00:20:54.962938659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:20:54.963058 env[1312]: time="2025-09-06T00:20:54.963038737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:20:54.963155 env[1312]: time="2025-09-06T00:20:54.963120801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:20:54.963236 env[1312]: time="2025-09-06T00:20:54.963217432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:20:54.963317 env[1312]: time="2025-09-06T00:20:54.963297792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:20:54.963414 env[1312]: time="2025-09-06T00:20:54.963393782Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:20:54.963662 env[1312]: time="2025-09-06T00:20:54.963643300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:20:54.963743 env[1312]: time="2025-09-06T00:20:54.963724352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:20:54.963831 env[1312]: time="2025-09-06T00:20:54.963811546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:20:54.963913 env[1312]: time="2025-09-06T00:20:54.963893579Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:20:54.964001 env[1312]: time="2025-09-06T00:20:54.963977837Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:20:54.964084 env[1312]: time="2025-09-06T00:20:54.964064630Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:20:54.964207 env[1312]: time="2025-09-06T00:20:54.964185898Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:20:54.964331 env[1312]: time="2025-09-06T00:20:54.964312405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 00:20:54.964677 env[1312]: time="2025-09-06T00:20:54.964624811Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:20:54.965383 env[1312]: time="2025-09-06T00:20:54.964842228Z" level=info msg="Connect containerd service" Sep 6 00:20:54.965383 env[1312]: time="2025-09-06T00:20:54.964898995Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:20:54.966017 env[1312]: time="2025-09-06T00:20:54.965996442Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:20:54.966306 env[1312]: time="2025-09-06T00:20:54.966225762Z" level=info msg="Start subscribing containerd event" Sep 6 00:20:54.966561 env[1312]: time="2025-09-06T00:20:54.966367668Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:20:54.966614 env[1312]: time="2025-09-06T00:20:54.966589985Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:20:54.966682 env[1312]: time="2025-09-06T00:20:54.966658484Z" level=info msg="Start recovering state" Sep 6 00:20:54.966763 systemd[1]: Started containerd.service. 
Sep 6 00:20:54.966911 env[1312]: time="2025-09-06T00:20:54.966886421Z" level=info msg="containerd successfully booted in 0.074404s" Sep 6 00:20:54.967053 env[1312]: time="2025-09-06T00:20:54.967034960Z" level=info msg="Start event monitor" Sep 6 00:20:54.967182 env[1312]: time="2025-09-06T00:20:54.967164653Z" level=info msg="Start snapshots syncer" Sep 6 00:20:54.967270 env[1312]: time="2025-09-06T00:20:54.967250584Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:20:54.967364 env[1312]: time="2025-09-06T00:20:54.967340172Z" level=info msg="Start streaming server" Sep 6 00:20:55.117618 systemd-networkd[1075]: eth0: Gained IPv6LL Sep 6 00:20:55.120551 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:20:55.121888 systemd[1]: Reached target network-online.target. Sep 6 00:20:55.124843 systemd[1]: Starting kubelet.service... Sep 6 00:20:55.244090 sshd_keygen[1297]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:20:55.440063 systemd[1]: Finished sshd-keygen.service. Sep 6 00:20:55.443830 systemd[1]: Starting issuegen.service... Sep 6 00:20:55.453684 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:20:55.454018 systemd[1]: Finished issuegen.service. Sep 6 00:20:55.457365 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:20:55.475125 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:20:55.478588 systemd[1]: Started getty@tty1.service. Sep 6 00:20:55.482237 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:20:55.483817 systemd[1]: Reached target getty.target. Sep 6 00:20:55.720566 tar[1309]: linux-amd64/LICENSE Sep 6 00:20:55.720738 tar[1309]: linux-amd64/README.md Sep 6 00:20:55.727957 systemd[1]: Finished prepare-helm.service. Sep 6 00:20:56.379859 systemd[1]: Created slice system-sshd.slice. Sep 6 00:20:56.384421 systemd[1]: Started sshd@0-10.0.0.61:22-10.0.0.1:34284.service. Sep 6 00:20:56.525260 sshd[1376]: Accepted publickey for core from 10.0.0.1 port 34284 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:20:56.536301 sshd[1376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:56.593534 systemd-logind[1293]: New session 1 of user core. Sep 6 00:20:56.594854 systemd[1]: Created slice user-500.slice. Sep 6 00:20:56.599258 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:20:56.640160 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:20:56.649957 systemd[1]: Starting user@500.service... Sep 6 00:20:56.658052 (systemd)[1381]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:56.929420 systemd[1381]: Queued start job for default target default.target. Sep 6 00:20:56.929773 systemd[1381]: Reached target paths.target. Sep 6 00:20:56.929792 systemd[1381]: Reached target sockets.target. Sep 6 00:20:56.929807 systemd[1381]: Reached target timers.target. Sep 6 00:20:56.929819 systemd[1381]: Reached target basic.target. Sep 6 00:20:56.930022 systemd[1]: Started user@500.service. Sep 6 00:20:56.930550 systemd[1381]: Reached target default.target. Sep 6 00:20:56.930637 systemd[1381]: Startup finished in 256ms. Sep 6 00:20:56.969960 systemd[1]: Started session-1.scope. Sep 6 00:20:57.092901 systemd[1]: Started sshd@1-10.0.0.61:22-10.0.0.1:34292.service. 
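
[editor's note] containerd reports just above that it is serving on /run/containerd/containerd.sock (plus the companion .ttrpc socket) before systemd marks containerd.service started. A minimal external liveness probe only needs to confirm that something accepts connections on that socket; the sketch below does exactly that with the Go standard library, taking the socket path from the log (a quick manual check, not how systemd itself tracks readiness).

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path as reported in the containerd "serving..." log lines above.
	const sock = "/run/containerd/containerd.sock"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "containerd socket not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("containerd is accepting connections on", sock)
}
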
Sep 6 00:20:57.192118 sshd[1390]: Accepted publickey for core from 10.0.0.1 port 34292 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:20:57.194858 sshd[1390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:57.225458 systemd[1]: Started session-2.scope. Sep 6 00:20:57.226863 systemd-logind[1293]: New session 2 of user core. Sep 6 00:20:57.307025 sshd[1390]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:57.309798 systemd[1]: Started sshd@2-10.0.0.61:22-10.0.0.1:34306.service. Sep 6 00:20:57.312359 systemd[1]: sshd@1-10.0.0.61:22-10.0.0.1:34292.service: Deactivated successfully. Sep 6 00:20:57.314198 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:20:57.315700 systemd-logind[1293]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:20:57.317596 systemd-logind[1293]: Removed session 2. Sep 6 00:20:57.370619 sshd[1395]: Accepted publickey for core from 10.0.0.1 port 34306 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:20:57.372622 sshd[1395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:57.382597 systemd-logind[1293]: New session 3 of user core. Sep 6 00:20:57.383089 systemd[1]: Started session-3.scope. Sep 6 00:20:57.496305 sshd[1395]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:57.499545 systemd[1]: sshd@2-10.0.0.61:22-10.0.0.1:34306.service: Deactivated successfully. Sep 6 00:20:57.501512 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:20:57.502079 systemd-logind[1293]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:20:57.503477 systemd-logind[1293]: Removed session 3. Sep 6 00:20:57.700684 systemd[1]: Started kubelet.service. Sep 6 00:20:57.703455 systemd[1]: Reached target multi-user.target. Sep 6 00:20:57.706457 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:20:57.715473 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:20:57.715816 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:20:57.720973 systemd[1]: Startup finished in 6.007s (kernel) + 9.298s (userspace) = 15.305s. Sep 6 00:20:58.520730 kubelet[1409]: E0906 00:20:58.520635 1409 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:20:58.522492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:20:58.522698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:21:07.499784 systemd[1]: Started sshd@3-10.0.0.61:22-10.0.0.1:53578.service. Sep 6 00:21:07.549825 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 53578 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:21:07.552332 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:07.558802 systemd-logind[1293]: New session 4 of user core. Sep 6 00:21:07.560243 systemd[1]: Started session-4.scope. Sep 6 00:21:07.626800 sshd[1419]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:07.630422 systemd[1]: Started sshd@4-10.0.0.61:22-10.0.0.1:53582.service. Sep 6 00:21:07.631358 systemd[1]: sshd@3-10.0.0.61:22-10.0.0.1:53578.service: Deactivated successfully. 
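
[editor's note] The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a node that is later set up with kubeadm this file is normally written during kubeadm init/join, so repeated kubelet failures before that point are expected. A minimal sketch of the same pre-flight check, using only the Go standard library and the path taken from the error message above:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Path copied from the kubelet error in the journal above.
	const cfg = "/var/lib/kubelet/config.yaml"

	_, err := os.Stat(cfg)
	switch {
	case errors.Is(err, fs.ErrNotExist):
		fmt.Printf("%s is missing; kubelet will keep exiting until something (typically kubeadm) writes it\n", cfg)
		os.Exit(1)
	case err != nil:
		fmt.Printf("could not stat %s: %v\n", cfg, err)
		os.Exit(1)
	default:
		fmt.Printf("%s is present\n", cfg)
	}
}
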
Sep 6 00:21:07.632764 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:21:07.632778 systemd-logind[1293]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:21:07.634146 systemd-logind[1293]: Removed session 4. Sep 6 00:21:07.678971 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 53582 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:21:07.680725 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:07.686190 systemd-logind[1293]: New session 5 of user core. Sep 6 00:21:07.687422 systemd[1]: Started session-5.scope. Sep 6 00:21:07.742099 sshd[1424]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:07.747551 systemd[1]: Started sshd@5-10.0.0.61:22-10.0.0.1:53586.service. Sep 6 00:21:07.748391 systemd[1]: sshd@4-10.0.0.61:22-10.0.0.1:53582.service: Deactivated successfully. Sep 6 00:21:07.749969 systemd-logind[1293]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:21:07.750093 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:21:07.752344 systemd-logind[1293]: Removed session 5. Sep 6 00:21:07.799233 sshd[1432]: Accepted publickey for core from 10.0.0.1 port 53586 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:21:07.800809 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:07.805589 systemd-logind[1293]: New session 6 of user core. Sep 6 00:21:07.806553 systemd[1]: Started session-6.scope. Sep 6 00:21:07.872191 sshd[1432]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:07.875441 systemd[1]: Started sshd@6-10.0.0.61:22-10.0.0.1:53602.service. Sep 6 00:21:07.875984 systemd[1]: sshd@5-10.0.0.61:22-10.0.0.1:53586.service: Deactivated successfully. Sep 6 00:21:07.877302 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:21:07.877701 systemd-logind[1293]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:21:07.878746 systemd-logind[1293]: Removed session 6. Sep 6 00:21:07.922745 sshd[1439]: Accepted publickey for core from 10.0.0.1 port 53602 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:21:07.924512 sshd[1439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:07.930022 systemd-logind[1293]: New session 7 of user core. Sep 6 00:21:07.932733 systemd[1]: Started session-7.scope. Sep 6 00:21:08.008914 sudo[1444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 6 00:21:08.009288 sudo[1444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:21:08.027607 dbus-daemon[1278]: \xd0]㲋U: received setenforce notice (enforcing=887809680) Sep 6 00:21:08.030737 sudo[1444]: pam_unix(sudo:session): session closed for user root Sep 6 00:21:08.033758 sshd[1439]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:08.037914 systemd[1]: Started sshd@7-10.0.0.61:22-10.0.0.1:53614.service. Sep 6 00:21:08.038952 systemd[1]: sshd@6-10.0.0.61:22-10.0.0.1:53602.service: Deactivated successfully. Sep 6 00:21:08.040842 systemd-logind[1293]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:21:08.040878 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:21:08.042745 systemd-logind[1293]: Removed session 7. 
Sep 6 00:21:08.089937 sshd[1446]: Accepted publickey for core from 10.0.0.1 port 53614 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:21:08.091814 sshd[1446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:08.099075 systemd-logind[1293]: New session 8 of user core. Sep 6 00:21:08.100039 systemd[1]: Started session-8.scope. Sep 6 00:21:08.170655 sudo[1453]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 6 00:21:08.170950 sudo[1453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:21:08.178985 sudo[1453]: pam_unix(sudo:session): session closed for user root Sep 6 00:21:08.186310 sudo[1452]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 6 00:21:08.186594 sudo[1452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:21:08.201326 systemd[1]: Stopping audit-rules.service... Sep 6 00:21:08.202000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 6 00:21:08.203768 auditctl[1456]: No rules Sep 6 00:21:08.204369 systemd[1]: audit-rules.service: Deactivated successfully. Sep 6 00:21:08.204727 kernel: kauditd_printk_skb: 129 callbacks suppressed Sep 6 00:21:08.204773 kernel: audit: type=1305 audit(1757118068.202:137): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 6 00:21:08.204760 systemd[1]: Stopped audit-rules.service. Sep 6 00:21:08.207239 systemd[1]: Starting audit-rules.service... Sep 6 00:21:08.202000 audit[1456]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9c013520 a2=420 a3=0 items=0 ppid=1 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:08.213186 kernel: audit: type=1300 audit(1757118068.202:137): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9c013520 a2=420 a3=0 items=0 ppid=1 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:08.213353 kernel: audit: type=1327 audit(1757118068.202:137): proctitle=2F7362696E2F617564697463746C002D44 Sep 6 00:21:08.202000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 6 00:21:08.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.222877 kernel: audit: type=1131 audit(1757118068.203:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.236185 augenrules[1474]: No rules Sep 6 00:21:08.237440 systemd[1]: Finished audit-rules.service. Sep 6 00:21:08.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:21:08.238550 sudo[1452]: pam_unix(sudo:session): session closed for user root Sep 6 00:21:08.237000 audit[1452]: USER_END pid=1452 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.242715 sshd[1446]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:08.245914 systemd[1]: sshd@7-10.0.0.61:22-10.0.0.1:53614.service: Deactivated successfully. Sep 6 00:21:08.247363 kernel: audit: type=1130 audit(1757118068.236:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.247491 kernel: audit: type=1106 audit(1757118068.237:140): pid=1452 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.247516 kernel: audit: type=1104 audit(1757118068.237:141): pid=1452 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.237000 audit[1452]: CRED_DISP pid=1452 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.249054 systemd-logind[1293]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:21:08.250346 systemd[1]: Started sshd@8-10.0.0.61:22-10.0.0.1:53622.service. Sep 6 00:21:08.250840 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:21:08.251559 systemd-logind[1293]: Removed session 8. Sep 6 00:21:08.242000 audit[1446]: USER_END pid=1446 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:08.258037 kernel: audit: type=1106 audit(1757118068.242:142): pid=1446 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:08.258222 kernel: audit: type=1104 audit(1757118068.243:143): pid=1446 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:08.243000 audit[1446]: CRED_DISP pid=1446 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:08.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.61:22-10.0.0.1:53614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:21:08.266996 kernel: audit: type=1131 audit(1757118068.245:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.61:22-10.0.0.1:53614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.61:22-10.0.0.1:53622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.305000 audit[1481]: USER_ACCT pid=1481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:08.307202 sshd[1481]: Accepted publickey for core from 10.0.0.1 port 53622 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:21:08.307000 audit[1481]: CRED_ACQ pid=1481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:08.307000 audit[1481]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3ed35e50 a2=3 a3=0 items=0 ppid=1 pid=1481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:08.307000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:21:08.309447 sshd[1481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:08.319297 systemd[1]: Started session-9.scope. Sep 6 00:21:08.319342 systemd-logind[1293]: New session 9 of user core. Sep 6 00:21:08.327000 audit[1481]: USER_START pid=1481 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:08.329000 audit[1484]: CRED_ACQ pid=1484 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:08.381000 audit[1485]: USER_ACCT pid=1485 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.381000 audit[1485]: CRED_REFR pid=1485 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.382820 sudo[1485]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:21:08.383032 sudo[1485]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:21:08.384000 audit[1485]: USER_START pid=1485 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.430227 systemd[1]: Starting docker.service... 
Sep 6 00:21:08.545443 env[1496]: time="2025-09-06T00:21:08.544424752Z" level=info msg="Starting up" Sep 6 00:21:08.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.547767 env[1496]: time="2025-09-06T00:21:08.547241473Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:21:08.547767 env[1496]: time="2025-09-06T00:21:08.547277401Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:21:08.547767 env[1496]: time="2025-09-06T00:21:08.547309120Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:21:08.547767 env[1496]: time="2025-09-06T00:21:08.547328236Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:21:08.546335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:21:08.546536 systemd[1]: Stopped kubelet.service. Sep 6 00:21:08.551090 systemd[1]: Starting kubelet.service... Sep 6 00:21:08.552181 env[1496]: time="2025-09-06T00:21:08.552117225Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:21:08.552391 env[1496]: time="2025-09-06T00:21:08.552301761Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:21:08.552507 env[1496]: time="2025-09-06T00:21:08.552478984Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:21:08.552599 env[1496]: time="2025-09-06T00:21:08.552575555Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:21:08.878120 systemd[1]: Started kubelet.service. Sep 6 00:21:08.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:09.603921 kubelet[1515]: E0906 00:21:09.601889 1515 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:21:09.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 00:21:09.607817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:21:09.608008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:21:10.241409 env[1496]: time="2025-09-06T00:21:10.239217738Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 6 00:21:10.241409 env[1496]: time="2025-09-06T00:21:10.239256321Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 6 00:21:10.241409 env[1496]: time="2025-09-06T00:21:10.239661911Z" level=info msg="Loading containers: start." 
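
[editor's note] The two blkio warnings mean dockerd could not find block-IO weight controls on this kernel, so weight-based IO limits will be ignored on this host. A rough way to reproduce that probe is to look for the corresponding control files; the paths below assume a cgroup v1 blkio hierarchy and are illustrative only (on a unified cgroup v2 hierarchy the io controller exposes io.weight instead).

package main

import (
	"fmt"
	"os"
)

func main() {
	// Control files the dockerd warnings refer to; these paths assume a
	// cgroup v1 blkio hierarchy and are illustrative, not exhaustive.
	candidates := []string{
		"/sys/fs/cgroup/blkio/blkio.weight",
		"/sys/fs/cgroup/blkio/blkio.weight_device",
	}

	found := false
	for _, p := range candidates {
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found:", p)
			found = true
		}
	}
	if !found {
		fmt.Println("no blkio weight control files found; weight-based block-IO limits are unavailable")
	}
}
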
Sep 6 00:21:10.336000 audit[1546]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.336000 audit[1546]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd3e923fd0 a2=0 a3=7ffd3e923fbc items=0 ppid=1496 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.336000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 6 00:21:10.339000 audit[1548]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.339000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc53f87ae0 a2=0 a3=7ffc53f87acc items=0 ppid=1496 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.339000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 6 00:21:10.342000 audit[1550]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.342000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc06a4c1e0 a2=0 a3=7ffc06a4c1cc items=0 ppid=1496 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.342000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 6 00:21:10.345000 audit[1552]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.345000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd68180dd0 a2=0 a3=7ffd68180dbc items=0 ppid=1496 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.345000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 6 00:21:10.349000 audit[1554]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.349000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc79150aa0 a2=0 a3=7ffc79150a8c items=0 ppid=1496 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.349000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 6 00:21:10.375000 audit[1559]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 
00:21:10.375000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffde1335cc0 a2=0 a3=7ffde1335cac items=0 ppid=1496 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.375000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 6 00:21:10.599000 audit[1561]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.599000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcae6c03e0 a2=0 a3=7ffcae6c03cc items=0 ppid=1496 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.599000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 6 00:21:10.601000 audit[1563]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.601000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd21a67a80 a2=0 a3=7ffd21a67a6c items=0 ppid=1496 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.601000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 6 00:21:10.603000 audit[1565]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.603000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd75c082a0 a2=0 a3=7ffd75c0828c items=0 ppid=1496 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.603000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 00:21:10.681000 audit[1569]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.681000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffd32dbb90 a2=0 a3=7fffd32dbb7c items=0 ppid=1496 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.681000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 6 00:21:10.687000 audit[1570]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.687000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffce3fe8010 a2=0 a3=7ffce3fe7ffc items=0 ppid=1496 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.687000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 00:21:10.701166 kernel: Initializing XFRM netlink socket Sep 6 00:21:10.738779 env[1496]: time="2025-09-06T00:21:10.738661726Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:21:10.760000 audit[1578]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.760000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc839d3590 a2=0 a3=7ffc839d357c items=0 ppid=1496 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.760000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 6 00:21:10.774000 audit[1581]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.774000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdfbc75820 a2=0 a3=7ffdfbc7580c items=0 ppid=1496 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.774000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 6 00:21:10.778000 audit[1584]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.778000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffd5994050 a2=0 a3=7fffd599403c items=0 ppid=1496 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.778000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 6 00:21:10.782000 audit[1586]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.782000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd050f4d00 a2=0 a3=7ffd050f4cec items=0 ppid=1496 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.782000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 6 00:21:10.785000 audit[1588]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1588 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.785000 audit[1588]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff6717eed0 a2=0 a3=7fff6717eebc items=0 ppid=1496 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.785000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 6 00:21:10.787000 audit[1590]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.787000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fffedc25360 a2=0 a3=7fffedc2534c items=0 ppid=1496 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.787000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 6 00:21:10.790000 audit[1592]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.790000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffdd60e25b0 a2=0 a3=7ffdd60e259c items=0 ppid=1496 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.790000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 6 00:21:10.801000 audit[1595]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.801000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe349c5cf0 a2=0 a3=7ffe349c5cdc items=0 ppid=1496 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.801000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 6 00:21:10.804000 audit[1597]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.804000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff04d31520 a2=0 a3=7fff04d3150c items=0 ppid=1496 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.804000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 6 00:21:10.807000 
audit[1599]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.807000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe970af9e0 a2=0 a3=7ffe970af9cc items=0 ppid=1496 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.807000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 6 00:21:10.811000 audit[1601]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.811000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc29d818f0 a2=0 a3=7ffc29d818dc items=0 ppid=1496 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.811000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 6 00:21:10.812776 systemd-networkd[1075]: docker0: Link UP Sep 6 00:21:10.878000 audit[1605]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.878000 audit[1605]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcc89707b0 a2=0 a3=7ffcc897079c items=0 ppid=1496 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.878000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 6 00:21:10.884000 audit[1606]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:10.884000 audit[1606]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffd55d7fa0 a2=0 a3=7fffd55d7f8c items=0 ppid=1496 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:10.884000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 00:21:10.886318 env[1496]: time="2025-09-06T00:21:10.886258500Z" level=info msg="Loading containers: done." 
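
[editor's note] Each NETFILTER_CFG record in this run is paired with a PROCTITLE record whose proctitle= value is the process argv, hex-encoded with NUL separators; decoding it recovers the exact iptables command dockerd ran (the DOCKER, DOCKER-USER and DOCKER-ISOLATION-STAGE chains and the 172.17.0.0/16 MASQUERADE rule seen above). A small decoder, using the first NETFILTER_CFG proctitle above as its example; the same decoding is what ausearch -i does when it renders these records.

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE value (hex-encoded argv with
// NUL separators) back into a readable command line.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	return strings.Join(args, " "), nil
}

func main() {
	// Value copied from the first NETFILTER_CFG/PROCTITLE pair above.
	const sample = "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"

	cmd, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // /usr/sbin/iptables --wait -t nat -N DOCKER
}
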
Sep 6 00:21:10.920914 env[1496]: time="2025-09-06T00:21:10.920754149Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:21:10.921248 env[1496]: time="2025-09-06T00:21:10.921023524Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:21:10.921248 env[1496]: time="2025-09-06T00:21:10.921168396Z" level=info msg="Daemon has completed initialization" Sep 6 00:21:10.943911 systemd[1]: Started docker.service. Sep 6 00:21:10.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:10.954106 env[1496]: time="2025-09-06T00:21:10.953986039Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:21:12.037956 env[1312]: time="2025-09-06T00:21:12.037891100Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:21:12.735796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2581973276.mount: Deactivated successfully. Sep 6 00:21:14.169650 env[1312]: time="2025-09-06T00:21:14.169579586Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:14.171869 env[1312]: time="2025-09-06T00:21:14.171833822Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:14.174099 env[1312]: time="2025-09-06T00:21:14.174053214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:14.176042 env[1312]: time="2025-09-06T00:21:14.175980297Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:14.176877 env[1312]: time="2025-09-06T00:21:14.176834388Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 6 00:21:14.177494 env[1312]: time="2025-09-06T00:21:14.177449923Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 00:21:15.918073 env[1312]: time="2025-09-06T00:21:15.917990365Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:15.920201 env[1312]: time="2025-09-06T00:21:15.920144745Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:15.922380 env[1312]: time="2025-09-06T00:21:15.922340752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:15.924314 env[1312]: time="2025-09-06T00:21:15.924249752Z" 
level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:15.925119 env[1312]: time="2025-09-06T00:21:15.925070871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 6 00:21:15.925911 env[1312]: time="2025-09-06T00:21:15.925872935Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 00:21:17.673286 env[1312]: time="2025-09-06T00:21:17.673193441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:17.676937 env[1312]: time="2025-09-06T00:21:17.676876116Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:17.678818 env[1312]: time="2025-09-06T00:21:17.678766620Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:17.680394 env[1312]: time="2025-09-06T00:21:17.680368694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:17.681073 env[1312]: time="2025-09-06T00:21:17.681025626Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 6 00:21:17.681805 env[1312]: time="2025-09-06T00:21:17.681758250Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 00:21:19.859728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:21:19.860076 systemd[1]: Stopped kubelet.service. Sep 6 00:21:19.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:19.945824 kernel: kauditd_printk_skb: 88 callbacks suppressed Sep 6 00:21:19.945930 kernel: audit: type=1130 audit(1757118079.858:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:19.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:19.946164 systemd[1]: Starting kubelet.service... Sep 6 00:21:19.953037 kernel: audit: type=1131 audit(1757118079.858:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:21:20.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:20.104099 systemd[1]: Started kubelet.service. Sep 6 00:21:20.109246 kernel: audit: type=1130 audit(1757118080.103:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:20.158010 kubelet[1653]: E0906 00:21:20.157340 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:21:20.160564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:21:20.160770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:21:20.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 00:21:20.165178 kernel: audit: type=1131 audit(1757118080.160:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 00:21:20.269901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231561024.mount: Deactivated successfully. Sep 6 00:21:21.903102 env[1312]: time="2025-09-06T00:21:21.902970512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:21.905642 env[1312]: time="2025-09-06T00:21:21.905567190Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:21.907861 env[1312]: time="2025-09-06T00:21:21.907795999Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:21.909514 env[1312]: time="2025-09-06T00:21:21.909451473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:21.910048 env[1312]: time="2025-09-06T00:21:21.909985665Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 6 00:21:21.911118 env[1312]: time="2025-09-06T00:21:21.911069948Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:21:22.558790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount109501171.mount: Deactivated successfully. 
Sep 6 00:21:24.826082 env[1312]: time="2025-09-06T00:21:24.826010471Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:24.828952 env[1312]: time="2025-09-06T00:21:24.828889659Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:24.831034 env[1312]: time="2025-09-06T00:21:24.830976051Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:24.833034 env[1312]: time="2025-09-06T00:21:24.832950032Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:24.834453 env[1312]: time="2025-09-06T00:21:24.834392858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 00:21:24.835249 env[1312]: time="2025-09-06T00:21:24.835209439Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:21:25.586585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798159763.mount: Deactivated successfully. Sep 6 00:21:25.591707 env[1312]: time="2025-09-06T00:21:25.591644003Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:25.593320 env[1312]: time="2025-09-06T00:21:25.593292785Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:25.594777 env[1312]: time="2025-09-06T00:21:25.594750047Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:25.596076 env[1312]: time="2025-09-06T00:21:25.596025749Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:25.596548 env[1312]: time="2025-09-06T00:21:25.596497584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 00:21:25.597004 env[1312]: time="2025-09-06T00:21:25.596982784Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 00:21:26.114799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010509197.mount: Deactivated successfully. 
Sep 6 00:21:29.376734 env[1312]: time="2025-09-06T00:21:29.376653817Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:29.381750 env[1312]: time="2025-09-06T00:21:29.381678115Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:29.383714 env[1312]: time="2025-09-06T00:21:29.383666026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:29.385392 env[1312]: time="2025-09-06T00:21:29.385369189Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:29.386313 env[1312]: time="2025-09-06T00:21:29.386271599Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 6 00:21:30.197015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 6 00:21:30.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:30.197322 systemd[1]: Stopped kubelet.service. Sep 6 00:21:30.199394 systemd[1]: Starting kubelet.service... Sep 6 00:21:30.203930 kernel: audit: type=1130 audit(1757118090.196:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:30.204119 kernel: audit: type=1131 audit(1757118090.196:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:30.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:30.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:30.309552 systemd[1]: Started kubelet.service. Sep 6 00:21:30.314169 kernel: audit: type=1130 audit(1757118090.308:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:21:30.377192 kubelet[1692]: E0906 00:21:30.377122 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:21:30.379075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:21:30.379239 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:21:30.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 00:21:30.383179 kernel: audit: type=1131 audit(1757118090.378:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 00:21:31.696312 systemd[1]: Stopped kubelet.service. Sep 6 00:21:31.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:31.698730 systemd[1]: Starting kubelet.service... Sep 6 00:21:31.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:31.702401 kernel: audit: type=1130 audit(1757118091.695:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:31.702503 kernel: audit: type=1131 audit(1757118091.695:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:31.722386 systemd[1]: Reloading. Sep 6 00:21:31.792942 /usr/lib/systemd/system-generators/torcx-generator[1728]: time="2025-09-06T00:21:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:21:31.792975 /usr/lib/systemd/system-generators/torcx-generator[1728]: time="2025-09-06T00:21:31Z" level=info msg="torcx already run" Sep 6 00:21:32.341497 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:21:32.341512 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:21:32.358671 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:21:32.435310 systemd[1]: Started kubelet.service. 
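[Editor's note] The kubelet restart loop recorded above (restart counters 2 and 3, each run exiting with status=1) is the usual symptom of a node that has not been initialized yet: in a kubeadm-based cluster /var/lib/kubelet/config.yaml is typically only written by `kubeadm init` or `kubeadm join`, so every earlier start fails the same config-file check and systemd's restart policy schedules another attempt. The Python sketch below only illustrates that failure mode using the path from the log; it is an assumption-labelled stand-in, not kubelet's actual startup code.

# Illustrative sketch of the logged failure mode (not kubelet source):
# the --config file does not exist yet, so the process exits non-zero and
# systemd restarts it until kubeadm writes the file.
import sys
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path taken from the log

def load_kubelet_config(path: Path) -> str:
    try:
        return path.read_text()
    except FileNotFoundError as err:
        # Mirrors the shape of the logged run.go:72 "command failed" error.
        print(f"failed to load kubelet config file, path: {path}, error: {err}",
              file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    load_kubelet_config(KUBELET_CONFIG)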
Sep 6 00:21:32.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:32.436843 systemd[1]: Stopping kubelet.service... Sep 6 00:21:32.437122 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:21:32.437373 systemd[1]: Stopped kubelet.service. Sep 6 00:21:32.438770 systemd[1]: Starting kubelet.service... Sep 6 00:21:32.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:32.442899 kernel: audit: type=1130 audit(1757118092.434:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:32.442956 kernel: audit: type=1131 audit(1757118092.436:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:32.530328 systemd[1]: Started kubelet.service. Sep 6 00:21:32.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:32.534213 kernel: audit: type=1130 audit(1757118092.529:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:32.736175 kubelet[1790]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:21:32.736175 kubelet[1790]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:21:32.736175 kubelet[1790]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:21:32.736684 kubelet[1790]: I0906 00:21:32.736428 1790 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:21:33.014437 kubelet[1790]: I0906 00:21:33.014315 1790 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:21:33.014437 kubelet[1790]: I0906 00:21:33.014350 1790 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:21:33.014675 kubelet[1790]: I0906 00:21:33.014653 1790 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:21:33.030497 kubelet[1790]: E0906 00:21:33.030443 1790 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:21:33.031477 kubelet[1790]: I0906 00:21:33.031454 1790 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:21:33.039184 kubelet[1790]: E0906 00:21:33.039148 1790 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:21:33.039184 kubelet[1790]: I0906 00:21:33.039177 1790 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:21:33.044512 kubelet[1790]: I0906 00:21:33.044483 1790 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:21:33.045273 kubelet[1790]: I0906 00:21:33.045251 1790 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:21:33.045411 kubelet[1790]: I0906 00:21:33.045372 1790 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:21:33.045598 kubelet[1790]: I0906 00:21:33.045405 1790 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:21:33.045708 kubelet[1790]: I0906 00:21:33.045607 1790 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:21:33.045708 kubelet[1790]: I0906 00:21:33.045616 1790 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:21:33.045761 kubelet[1790]: I0906 00:21:33.045724 1790 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:21:33.133866 kubelet[1790]: I0906 00:21:33.133802 1790 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:21:33.133866 kubelet[1790]: I0906 00:21:33.133873 1790 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:21:33.134104 kubelet[1790]: I0906 00:21:33.133938 1790 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:21:33.134104 kubelet[1790]: I0906 00:21:33.133975 1790 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:21:33.156144 kubelet[1790]: W0906 00:21:33.156071 1790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Sep 6 00:21:33.156242 kubelet[1790]: E0906 00:21:33.156180 1790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:21:33.157595 kubelet[1790]: W0906 00:21:33.157554 1790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Sep 6 00:21:33.157672 kubelet[1790]: E0906 00:21:33.157604 1790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:21:33.158810 kubelet[1790]: I0906 00:21:33.158785 1790 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:21:33.159267 kubelet[1790]: I0906 00:21:33.159244 1790 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:21:33.159331 kubelet[1790]: W0906 00:21:33.159318 1790 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:21:33.161658 kubelet[1790]: I0906 00:21:33.161631 1790 server.go:1274] "Started kubelet" Sep 6 00:21:33.161885 kubelet[1790]: I0906 00:21:33.161840 1790 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:21:33.162308 kubelet[1790]: I0906 00:21:33.162228 1790 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:21:33.162539 kubelet[1790]: I0906 00:21:33.162510 1790 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:21:33.162000 audit[1790]: AVC avc: denied { mac_admin } for pid=1790 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:33.162000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:21:33.162000 audit[1790]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ae14d0 a1=c000b6e600 a2=c000ae14a0 a3=25 items=0 ppid=1 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.162000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:21:33.162000 audit[1790]: AVC avc: denied { mac_admin } for pid=1790 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:33.162000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:21:33.162000 audit[1790]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000501400 a1=c000b6e618 a2=c000ae1560 a3=25 items=0 ppid=1 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
00:21:33.162000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:21:33.166599 kubelet[1790]: I0906 00:21:33.163264 1790 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 6 00:21:33.166599 kubelet[1790]: I0906 00:21:33.163304 1790 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 6 00:21:33.166599 kubelet[1790]: I0906 00:21:33.163398 1790 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:21:33.166599 kubelet[1790]: I0906 00:21:33.163769 1790 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:21:33.166599 kubelet[1790]: I0906 00:21:33.164177 1790 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:21:33.166599 kubelet[1790]: I0906 00:21:33.165862 1790 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:21:33.166599 kubelet[1790]: I0906 00:21:33.165979 1790 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:21:33.166599 kubelet[1790]: I0906 00:21:33.166023 1790 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:21:33.166852 kubelet[1790]: W0906 00:21:33.166821 1790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Sep 6 00:21:33.165000 audit[1803]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:33.165000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffda7a69e60 a2=0 a3=7ffda7a69e4c items=0 ppid=1790 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 6 00:21:33.167091 kubelet[1790]: E0906 00:21:33.166863 1790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:21:33.167259 kernel: audit: type=1400 audit(1757118093.162:196): avc: denied { mac_admin } for pid=1790 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:33.167301 kubelet[1790]: E0906 00:21:33.167162 1790 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:33.167301 kubelet[1790]: E0906 00:21:33.167219 
1790 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="200ms" Sep 6 00:21:33.167644 kubelet[1790]: E0906 00:21:33.167624 1790 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:21:33.166000 audit[1804]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:33.166000 audit[1804]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc243fef0 a2=0 a3=7fffc243fedc items=0 ppid=1790 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.166000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 6 00:21:33.168000 audit[1806]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1806 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:33.168000 audit[1806]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc44f0dc30 a2=0 a3=7ffc44f0dc1c items=0 ppid=1790 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.168000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:21:33.172678 kubelet[1790]: I0906 00:21:33.172641 1790 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:21:33.172678 kubelet[1790]: I0906 00:21:33.172663 1790 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:21:33.172817 kubelet[1790]: I0906 00:21:33.172730 1790 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:21:33.172000 audit[1808]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1808 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:33.172000 audit[1808]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd13d69110 a2=0 a3=7ffd13d690fc items=0 ppid=1790 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:21:33.177206 kubelet[1790]: E0906 00:21:33.176244 1790 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.61:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.61:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186289989b4f1807 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:21:33.161601031 +0000 UTC m=+0.466513506,LastTimestamp:2025-09-06 00:21:33.161601031 +0000 UTC m=+0.466513506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 6 00:21:33.185000 audit[1813]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:33.185000 audit[1813]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffeac4d1680 a2=0 a3=7ffeac4d166c items=0 ppid=1790 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 6 00:21:33.189404 kubelet[1790]: I0906 00:21:33.189361 1790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:21:33.189000 audit[1816]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:33.189000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdafdbc410 a2=0 a3=7ffdafdbc3fc items=0 ppid=1790 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 6 00:21:33.190564 kubelet[1790]: I0906 00:21:33.190489 1790 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:21:33.190564 kubelet[1790]: I0906 00:21:33.190514 1790 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:21:33.190564 kubelet[1790]: I0906 00:21:33.190542 1790 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:21:33.190673 kubelet[1790]: E0906 00:21:33.190586 1790 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:21:33.190000 audit[1817]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:33.190000 audit[1817]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb50b5d60 a2=0 a3=7ffeb50b5d4c items=0 ppid=1790 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.190000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 6 00:21:33.191991 kubelet[1790]: W0906 00:21:33.191867 1790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Sep 6 00:21:33.191991 kubelet[1790]: E0906 00:21:33.191919 1790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:21:33.191000 audit[1818]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:33.191000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdefd087d0 a2=0 a3=7ffdefd087bc items=0 ppid=1790 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.191000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 6 00:21:33.192000 audit[1820]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=1820 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:33.192000 audit[1820]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffff6f53be0 a2=0 a3=7ffff6f53bcc items=0 ppid=1790 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 6 00:21:33.193000 audit[1821]: NETFILTER_CFG table=filter:35 family=10 entries=2 op=nft_register_chain pid=1821 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:33.193000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdbd6beb30 a2=0 a3=7ffdbd6beb1c items=0 ppid=1790 pid=1821 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 6 00:21:33.193000 audit[1819]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=1819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:33.193000 audit[1819]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc210da270 a2=0 a3=7ffc210da25c items=0 ppid=1790 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 6 00:21:33.194000 audit[1822]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:33.194000 audit[1822]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe97b03920 a2=0 a3=7ffe97b0390c items=0 ppid=1790 pid=1822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:33.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 6 00:21:33.204884 kubelet[1790]: I0906 00:21:33.204842 1790 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:21:33.204884 kubelet[1790]: I0906 00:21:33.204868 1790 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:21:33.204884 kubelet[1790]: I0906 00:21:33.204883 1790 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:21:33.267442 kubelet[1790]: E0906 00:21:33.267311 1790 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:33.291722 kubelet[1790]: E0906 00:21:33.291649 1790 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:21:33.368037 kubelet[1790]: E0906 00:21:33.368012 1790 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:33.368526 kubelet[1790]: E0906 00:21:33.368483 1790 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="400ms" Sep 6 00:21:33.468945 kubelet[1790]: E0906 00:21:33.468908 1790 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:33.492141 kubelet[1790]: E0906 00:21:33.492081 1790 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:21:33.569741 kubelet[1790]: E0906 00:21:33.569617 1790 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:33.670507 kubelet[1790]: E0906 00:21:33.670450 1790 kubelet_node_status.go:453] "Error getting the current node 
from lister" err="node \"localhost\" not found" Sep 6 00:21:33.769806 kubelet[1790]: E0906 00:21:33.769702 1790 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="800ms" Sep 6 00:21:33.770766 kubelet[1790]: E0906 00:21:33.770683 1790 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:33.871423 kubelet[1790]: E0906 00:21:33.871264 1790 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:33.892523 kubelet[1790]: E0906 00:21:33.892453 1790 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:21:33.972055 kubelet[1790]: E0906 00:21:33.971992 1790 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:34.058957 kubelet[1790]: W0906 00:21:34.058848 1790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Sep 6 00:21:34.059104 kubelet[1790]: E0906 00:21:34.058968 1790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:21:34.072642 kubelet[1790]: E0906 00:21:34.072588 1790 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:34.109291 kubelet[1790]: I0906 00:21:34.109232 1790 policy_none.go:49] "None policy: Start" Sep 6 00:21:34.110364 kubelet[1790]: I0906 00:21:34.110333 1790 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:21:34.110443 kubelet[1790]: I0906 00:21:34.110409 1790 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:21:34.119147 kubelet[1790]: I0906 00:21:34.119093 1790 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:21:34.117000 audit[1790]: AVC avc: denied { mac_admin } for pid=1790 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:34.117000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:21:34.117000 audit[1790]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0010c0e40 a1=c000f7b980 a2=c0010c0e10 a3=25 items=0 ppid=1 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:34.117000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:21:34.119435 kubelet[1790]: I0906 00:21:34.119214 1790 server.go:88] 
"Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 6 00:21:34.119435 kubelet[1790]: I0906 00:21:34.119418 1790 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:21:34.119517 kubelet[1790]: I0906 00:21:34.119440 1790 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:21:34.120503 kubelet[1790]: I0906 00:21:34.120469 1790 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:21:34.121469 kubelet[1790]: E0906 00:21:34.121397 1790 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 6 00:21:34.163242 kubelet[1790]: W0906 00:21:34.163164 1790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Sep 6 00:21:34.163365 kubelet[1790]: E0906 00:21:34.163262 1790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:21:34.222112 kubelet[1790]: I0906 00:21:34.222077 1790 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:21:34.222743 kubelet[1790]: E0906 00:21:34.222677 1790 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Sep 6 00:21:34.425084 kubelet[1790]: I0906 00:21:34.424934 1790 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:21:34.425489 kubelet[1790]: E0906 00:21:34.425455 1790 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Sep 6 00:21:34.571077 kubelet[1790]: E0906 00:21:34.570999 1790 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="1.6s" Sep 6 00:21:34.662017 kubelet[1790]: W0906 00:21:34.661883 1790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Sep 6 00:21:34.662017 kubelet[1790]: E0906 00:21:34.661984 1790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:21:34.704725 kubelet[1790]: W0906 00:21:34.704641 1790 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Sep 6 00:21:34.704881 kubelet[1790]: E0906 00:21:34.704724 1790 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:21:34.775613 kubelet[1790]: I0906 00:21:34.775547 1790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95c668d983be6dbbbb15cf5ec2af5de7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"95c668d983be6dbbbb15cf5ec2af5de7\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:34.775613 kubelet[1790]: I0906 00:21:34.775612 1790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:34.776055 kubelet[1790]: I0906 00:21:34.775646 1790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:34.776055 kubelet[1790]: I0906 00:21:34.775669 1790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:34.776055 kubelet[1790]: I0906 00:21:34.775738 1790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:34.776055 kubelet[1790]: I0906 00:21:34.775773 1790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:34.776055 kubelet[1790]: I0906 00:21:34.775815 1790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95c668d983be6dbbbb15cf5ec2af5de7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"95c668d983be6dbbbb15cf5ec2af5de7\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:34.776194 kubelet[1790]: I0906 00:21:34.775841 1790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/95c668d983be6dbbbb15cf5ec2af5de7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"95c668d983be6dbbbb15cf5ec2af5de7\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:34.776194 kubelet[1790]: I0906 00:21:34.775939 1790 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:34.827848 kubelet[1790]: I0906 00:21:34.827803 1790 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:21:34.828454 kubelet[1790]: E0906 00:21:34.828418 1790 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Sep 6 00:21:34.999744 kubelet[1790]: E0906 00:21:34.999579 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:35.000734 kubelet[1790]: E0906 00:21:35.000701 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:35.000817 env[1312]: time="2025-09-06T00:21:35.000688053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:95c668d983be6dbbbb15cf5ec2af5de7,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:35.001343 env[1312]: time="2025-09-06T00:21:35.001313885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:35.002492 kubelet[1790]: E0906 00:21:35.002462 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:35.002867 env[1312]: time="2025-09-06T00:21:35.002817105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:35.055384 kubelet[1790]: E0906 00:21:35.055287 1790 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:21:35.630888 kubelet[1790]: I0906 00:21:35.630829 1790 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:21:35.631379 kubelet[1790]: E0906 00:21:35.631323 1790 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Sep 6 00:21:35.880495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734921831.mount: Deactivated successfully. 
Sep 6 00:21:35.888262 env[1312]: time="2025-09-06T00:21:35.888119744Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.891110 env[1312]: time="2025-09-06T00:21:35.891070449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.892035 env[1312]: time="2025-09-06T00:21:35.892008495Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.893238 env[1312]: time="2025-09-06T00:21:35.893178021Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.895754 env[1312]: time="2025-09-06T00:21:35.895719367Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.897350 env[1312]: time="2025-09-06T00:21:35.897304474Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.898503 env[1312]: time="2025-09-06T00:21:35.898476083Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.899851 env[1312]: time="2025-09-06T00:21:35.899825351Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.903582 env[1312]: time="2025-09-06T00:21:35.903542165Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.904602 env[1312]: time="2025-09-06T00:21:35.904570943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.905173 env[1312]: time="2025-09-06T00:21:35.905151928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.909315 env[1312]: time="2025-09-06T00:21:35.909281327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:35.928651 env[1312]: time="2025-09-06T00:21:35.928550398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:35.928651 env[1312]: time="2025-09-06T00:21:35.928615102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:35.928651 env[1312]: time="2025-09-06T00:21:35.928628837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:35.929059 env[1312]: time="2025-09-06T00:21:35.929004903Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdfe31a39bc691303b1c6779840e4fc899d2a3564d7baf982dc8fb711ca1acf1 pid=1831 runtime=io.containerd.runc.v2 Sep 6 00:21:35.941416 env[1312]: time="2025-09-06T00:21:35.941309390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:35.941416 env[1312]: time="2025-09-06T00:21:35.941365948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:35.941416 env[1312]: time="2025-09-06T00:21:35.941382589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:35.941925 env[1312]: time="2025-09-06T00:21:35.941854807Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01fb3ca7936c3ed6a31188eef6082263c9ed54412b1f14c64b967f0edb4f0311 pid=1855 runtime=io.containerd.runc.v2 Sep 6 00:21:35.947291 env[1312]: time="2025-09-06T00:21:35.947177727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:35.947291 env[1312]: time="2025-09-06T00:21:35.947224076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:35.947291 env[1312]: time="2025-09-06T00:21:35.947234005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:35.947705 env[1312]: time="2025-09-06T00:21:35.947652091Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee1bb9cbc1365ea0eb580cd6ad9b1d2adced32cb2d4629102350c415acfd1b7e pid=1877 runtime=io.containerd.runc.v2 Sep 6 00:21:35.989716 env[1312]: time="2025-09-06T00:21:35.989673111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdfe31a39bc691303b1c6779840e4fc899d2a3564d7baf982dc8fb711ca1acf1\"" Sep 6 00:21:35.991170 kubelet[1790]: E0906 00:21:35.991140 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:35.993281 env[1312]: time="2025-09-06T00:21:35.993230000Z" level=info msg="CreateContainer within sandbox \"bdfe31a39bc691303b1c6779840e4fc899d2a3564d7baf982dc8fb711ca1acf1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:21:35.993983 env[1312]: time="2025-09-06T00:21:35.993942596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:95c668d983be6dbbbb15cf5ec2af5de7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee1bb9cbc1365ea0eb580cd6ad9b1d2adced32cb2d4629102350c415acfd1b7e\"" Sep 6 00:21:35.994901 kubelet[1790]: E0906 00:21:35.994700 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:35.998849 env[1312]: time="2025-09-06T00:21:35.998801173Z" level=info msg="CreateContainer within sandbox \"ee1bb9cbc1365ea0eb580cd6ad9b1d2adced32cb2d4629102350c415acfd1b7e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:21:36.005087 env[1312]: time="2025-09-06T00:21:36.004585184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"01fb3ca7936c3ed6a31188eef6082263c9ed54412b1f14c64b967f0edb4f0311\"" Sep 6 00:21:36.005428 kubelet[1790]: E0906 00:21:36.005343 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:36.007408 env[1312]: time="2025-09-06T00:21:36.007377103Z" level=info msg="CreateContainer within sandbox \"01fb3ca7936c3ed6a31188eef6082263c9ed54412b1f14c64b967f0edb4f0311\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:21:36.017715 env[1312]: time="2025-09-06T00:21:36.017659673Z" level=info msg="CreateContainer within sandbox \"bdfe31a39bc691303b1c6779840e4fc899d2a3564d7baf982dc8fb711ca1acf1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dbf2deae63ba3cdd6155c270877ab253753177b1bddbd3531e47a997250e0312\"" Sep 6 00:21:36.018262 env[1312]: time="2025-09-06T00:21:36.018236660Z" level=info msg="StartContainer for \"dbf2deae63ba3cdd6155c270877ab253753177b1bddbd3531e47a997250e0312\"" Sep 6 00:21:36.022063 env[1312]: time="2025-09-06T00:21:36.022023961Z" level=info msg="CreateContainer within sandbox \"ee1bb9cbc1365ea0eb580cd6ad9b1d2adced32cb2d4629102350c415acfd1b7e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container 
id \"373e7cb301b64143ef6132b74a4cbdc3dd8fe8864d1aca1f675ba32238ce2589\"" Sep 6 00:21:36.022749 env[1312]: time="2025-09-06T00:21:36.022730405Z" level=info msg="StartContainer for \"373e7cb301b64143ef6132b74a4cbdc3dd8fe8864d1aca1f675ba32238ce2589\"" Sep 6 00:21:36.030862 env[1312]: time="2025-09-06T00:21:36.030816479Z" level=info msg="CreateContainer within sandbox \"01fb3ca7936c3ed6a31188eef6082263c9ed54412b1f14c64b967f0edb4f0311\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"468203d93b144ae34e177c88d0fc7e4fa0bdaf9dd579ae392db2ca7d9bb3a2e7\"" Sep 6 00:21:36.031734 env[1312]: time="2025-09-06T00:21:36.031679700Z" level=info msg="StartContainer for \"468203d93b144ae34e177c88d0fc7e4fa0bdaf9dd579ae392db2ca7d9bb3a2e7\"" Sep 6 00:21:36.094177 env[1312]: time="2025-09-06T00:21:36.091031790Z" level=info msg="StartContainer for \"373e7cb301b64143ef6132b74a4cbdc3dd8fe8864d1aca1f675ba32238ce2589\" returns successfully" Sep 6 00:21:36.101162 env[1312]: time="2025-09-06T00:21:36.099899801Z" level=info msg="StartContainer for \"dbf2deae63ba3cdd6155c270877ab253753177b1bddbd3531e47a997250e0312\" returns successfully" Sep 6 00:21:36.106521 env[1312]: time="2025-09-06T00:21:36.106315358Z" level=info msg="StartContainer for \"468203d93b144ae34e177c88d0fc7e4fa0bdaf9dd579ae392db2ca7d9bb3a2e7\" returns successfully" Sep 6 00:21:36.199775 kubelet[1790]: E0906 00:21:36.199734 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:36.201152 kubelet[1790]: E0906 00:21:36.201117 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:36.202783 kubelet[1790]: E0906 00:21:36.202742 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:37.205002 kubelet[1790]: E0906 00:21:37.204962 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:37.232801 kubelet[1790]: I0906 00:21:37.232771 1790 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:21:37.476412 kubelet[1790]: E0906 00:21:37.476271 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:37.498598 kubelet[1790]: E0906 00:21:37.498557 1790 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 6 00:21:37.593627 kubelet[1790]: I0906 00:21:37.593580 1790 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 6 00:21:37.593627 kubelet[1790]: E0906 00:21:37.593621 1790 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 6 00:21:38.159863 kubelet[1790]: I0906 00:21:38.159795 1790 apiserver.go:52] "Watching apiserver" Sep 6 00:21:38.166314 kubelet[1790]: I0906 00:21:38.166257 1790 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:21:39.715322 kubelet[1790]: E0906 00:21:39.715276 1790 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:39.974711 systemd[1]: Reloading. Sep 6 00:21:40.041575 /usr/lib/systemd/system-generators/torcx-generator[2083]: time="2025-09-06T00:21:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:21:40.041605 /usr/lib/systemd/system-generators/torcx-generator[2083]: time="2025-09-06T00:21:40Z" level=info msg="torcx already run" Sep 6 00:21:40.117575 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:21:40.117594 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:21:40.135223 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:21:40.208707 kubelet[1790]: E0906 00:21:40.208491 1790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:40.215920 systemd[1]: Stopping kubelet.service... Sep 6 00:21:40.237830 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:21:40.238291 systemd[1]: Stopped kubelet.service. Sep 6 00:21:40.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:40.239170 kernel: kauditd_printk_skb: 47 callbacks suppressed Sep 6 00:21:40.239221 kernel: audit: type=1131 audit(1757118100.237:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:40.240729 systemd[1]: Starting kubelet.service... Sep 6 00:21:40.337761 systemd[1]: Started kubelet.service. Sep 6 00:21:40.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:40.342171 kernel: audit: type=1130 audit(1757118100.337:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:40.438892 update_engine[1303]: I0906 00:21:40.438421 1303 update_attempter.cc:509] Updating boot flags... Sep 6 00:21:40.896597 kubelet[2141]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:21:40.896597 kubelet[2141]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 6 00:21:40.896597 kubelet[2141]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:21:40.896961 kubelet[2141]: I0906 00:21:40.896657 2141 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:21:40.907179 kubelet[2141]: I0906 00:21:40.905522 2141 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:21:40.907179 kubelet[2141]: I0906 00:21:40.905542 2141 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:21:40.907179 kubelet[2141]: I0906 00:21:40.905759 2141 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:21:40.907179 kubelet[2141]: I0906 00:21:40.906940 2141 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 00:21:40.909172 kubelet[2141]: I0906 00:21:40.909113 2141 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:21:40.912158 kubelet[2141]: E0906 00:21:40.912098 2141 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:21:40.912158 kubelet[2141]: I0906 00:21:40.912122 2141 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:21:40.928660 kubelet[2141]: I0906 00:21:40.928633 2141 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:21:40.929387 kubelet[2141]: I0906 00:21:40.929367 2141 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:21:40.929500 kubelet[2141]: I0906 00:21:40.929472 2141 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:21:40.929686 kubelet[2141]: I0906 00:21:40.929494 2141 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:21:40.929813 kubelet[2141]: I0906 00:21:40.929689 2141 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:21:40.929813 kubelet[2141]: I0906 00:21:40.929697 2141 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:21:40.929813 kubelet[2141]: I0906 00:21:40.929731 2141 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:21:40.929889 kubelet[2141]: I0906 00:21:40.929840 2141 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:21:40.929889 kubelet[2141]: I0906 00:21:40.929850 2141 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:21:40.929889 kubelet[2141]: I0906 00:21:40.929873 2141 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:21:40.934354 kubelet[2141]: I0906 00:21:40.934324 2141 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:21:40.938060 kubelet[2141]: I0906 00:21:40.938035 2141 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:21:40.938416 kubelet[2141]: I0906 00:21:40.938397 2141 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:21:40.938741 kubelet[2141]: I0906 00:21:40.938723 2141 server.go:1274] "Started kubelet" Sep 6 00:21:40.955051 kernel: audit: type=1400 audit(1757118100.939:213): avc: denied { mac_admin } for pid=2141 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:40.955252 kernel: audit: type=1401 audit(1757118100.939:213): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:21:40.955304 kernel: audit: type=1300 audit(1757118100.939:213): arch=c000003e syscall=188 success=no exit=-22 a0=c0009607e0 a1=c000bfc7f8 a2=c0009607b0 a3=25 items=0 ppid=1 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:40.939000 audit[2141]: AVC avc: denied { mac_admin } for pid=2141 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:40.939000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:21:40.939000 audit[2141]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009607e0 a1=c000bfc7f8 a2=c0009607b0 a3=25 items=0 ppid=1 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.941093 2141 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.941121 2141 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.941158 2141 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.947535 2141 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.948597 2141 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.949654 2141 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.949831 2141 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.950025 2141 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.951859 2141 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.951963 2141 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.952081 2141 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:21:40.955648 kubelet[2141]: E0906 00:21:40.953094 2141 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.953790 2141 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:21:40.955648 kubelet[2141]: I0906 00:21:40.953876 2141 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:21:40.956481 kubelet[2141]: I0906 00:21:40.955731 2141 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:21:40.939000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:21:40.961166 kernel: audit: type=1327 audit(1757118100.939:213): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:21:40.939000 audit[2141]: AVC avc: denied { mac_admin } for pid=2141 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:40.966080 kernel: audit: type=1400 audit(1757118100.939:214): avc: denied { mac_admin } for pid=2141 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:40.966187 kernel: audit: type=1401 audit(1757118100.939:214): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:21:40.939000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:21:40.939000 audit[2141]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ab8320 a1=c000bfc810 a2=c000960870 a3=25 items=0 ppid=1 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:40.969260 kubelet[2141]: I0906 00:21:40.967703 2141 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:21:40.969260 kubelet[2141]: I0906 00:21:40.969186 2141 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:21:40.969260 kubelet[2141]: I0906 00:21:40.969201 2141 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:21:40.969260 kubelet[2141]: I0906 00:21:40.969226 2141 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:21:40.969363 kubelet[2141]: E0906 00:21:40.969274 2141 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:21:40.939000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:21:41.006123 kubelet[2141]: I0906 00:21:41.006075 2141 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:21:41.006123 kubelet[2141]: I0906 00:21:41.006099 2141 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:21:41.006123 kubelet[2141]: I0906 00:21:41.006118 2141 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:21:41.006407 kubelet[2141]: I0906 00:21:41.006384 2141 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:21:41.006469 kubelet[2141]: I0906 00:21:41.006403 2141 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:21:41.006469 kubelet[2141]: I0906 00:21:41.006426 2141 policy_none.go:49] "None policy: Start" Sep 6 00:21:41.007993 kernel: audit: type=1300 audit(1757118100.939:214): arch=c000003e syscall=188 success=no exit=-22 a0=c000ab8320 a1=c000bfc810 a2=c000960870 a3=25 items=0 ppid=1 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:41.008078 kernel: audit: type=1327 audit(1757118100.939:214): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:21:41.008112 kubelet[2141]: I0906 00:21:41.007277 2141 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:21:41.008112 kubelet[2141]: I0906 00:21:41.007333 2141 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:21:41.008112 kubelet[2141]: I0906 00:21:41.007461 2141 state_mem.go:75] "Updated machine memory state" Sep 6 00:21:41.008898 kubelet[2141]: I0906 00:21:41.008865 2141 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:21:41.007000 audit[2141]: AVC avc: denied { mac_admin } for pid=2141 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:41.007000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:21:41.007000 audit[2141]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001076540 a1=c00103af60 a2=c001076510 a3=25 items=0 ppid=1 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:41.007000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:21:41.009103 kubelet[2141]: I0906 00:21:41.008924 2141 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 6 00:21:41.009103 kubelet[2141]: I0906 00:21:41.009076 2141 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:21:41.009192 kubelet[2141]: I0906 00:21:41.009090 2141 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:21:41.012909 kubelet[2141]: I0906 00:21:41.012876 2141 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:21:41.117408 kubelet[2141]: I0906 00:21:41.117363 2141 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:21:41.250485 kubelet[2141]: E0906 00:21:41.250403 2141 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:41.251243 kubelet[2141]: I0906 00:21:41.251212 2141 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 6 00:21:41.251325 kubelet[2141]: I0906 00:21:41.251314 2141 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 6 00:21:41.253505 kubelet[2141]: I0906 00:21:41.253479 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:41.253592 kubelet[2141]: I0906 00:21:41.253514 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95c668d983be6dbbbb15cf5ec2af5de7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"95c668d983be6dbbbb15cf5ec2af5de7\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:41.253592 kubelet[2141]: I0906 00:21:41.253538 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:41.253592 kubelet[2141]: I0906 00:21:41.253563 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:41.253592 kubelet[2141]: I0906 00:21:41.253581 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:41.253679 kubelet[2141]: I0906 00:21:41.253597 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:41.253679 kubelet[2141]: I0906 00:21:41.253612 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95c668d983be6dbbbb15cf5ec2af5de7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"95c668d983be6dbbbb15cf5ec2af5de7\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:41.253679 kubelet[2141]: I0906 00:21:41.253628 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95c668d983be6dbbbb15cf5ec2af5de7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"95c668d983be6dbbbb15cf5ec2af5de7\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:41.253679 kubelet[2141]: I0906 00:21:41.253646 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:41.397986 kubelet[2141]: E0906 00:21:41.397945 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:41.398193 kubelet[2141]: E0906 00:21:41.398001 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:41.551236 kubelet[2141]: E0906 00:21:41.551082 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:41.935426 kubelet[2141]: I0906 00:21:41.935295 2141 apiserver.go:52] "Watching apiserver" Sep 6 00:21:41.952790 kubelet[2141]: I0906 00:21:41.952716 2141 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:21:41.982939 kubelet[2141]: E0906 00:21:41.982890 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:41.984253 kubelet[2141]: E0906 00:21:41.984054 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:41.987833 kubelet[2141]: E0906 00:21:41.987808 2141 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:41.988093 kubelet[2141]: E0906 00:21:41.988078 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 6 00:21:42.002136 kubelet[2141]: I0906 00:21:42.002058 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.002042136 podStartE2EDuration="1.002042136s" podCreationTimestamp="2025-09-06 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:42.001988024 +0000 UTC m=+1.574546841" watchObservedRunningTime="2025-09-06 00:21:42.002042136 +0000 UTC m=+1.574600954" Sep 6 00:21:42.020434 kubelet[2141]: I0906 00:21:42.020394 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.020379057 podStartE2EDuration="1.020379057s" podCreationTimestamp="2025-09-06 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:42.014578597 +0000 UTC m=+1.587137414" watchObservedRunningTime="2025-09-06 00:21:42.020379057 +0000 UTC m=+1.592937874" Sep 6 00:21:42.020623 kubelet[2141]: I0906 00:21:42.020464 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.02046015 podStartE2EDuration="3.02046015s" podCreationTimestamp="2025-09-06 00:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:42.020229082 +0000 UTC m=+1.592787899" watchObservedRunningTime="2025-09-06 00:21:42.02046015 +0000 UTC m=+1.593018967" Sep 6 00:21:42.984451 kubelet[2141]: E0906 00:21:42.984421 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:43.985834 kubelet[2141]: E0906 00:21:43.985797 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:46.207419 kubelet[2141]: I0906 00:21:46.207372 2141 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:21:46.207969 env[1312]: time="2025-09-06T00:21:46.207754777Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 00:21:46.208293 kubelet[2141]: I0906 00:21:46.208112 2141 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:21:46.584893 kubelet[2141]: I0906 00:21:46.584747 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6c2cce67-cad2-4c3f-ab50-29140c6b8a33-kube-proxy\") pod \"kube-proxy-mxfg7\" (UID: \"6c2cce67-cad2-4c3f-ab50-29140c6b8a33\") " pod="kube-system/kube-proxy-mxfg7" Sep 6 00:21:46.584893 kubelet[2141]: I0906 00:21:46.584796 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c2cce67-cad2-4c3f-ab50-29140c6b8a33-xtables-lock\") pod \"kube-proxy-mxfg7\" (UID: \"6c2cce67-cad2-4c3f-ab50-29140c6b8a33\") " pod="kube-system/kube-proxy-mxfg7" Sep 6 00:21:46.584893 kubelet[2141]: I0906 00:21:46.584817 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c2cce67-cad2-4c3f-ab50-29140c6b8a33-lib-modules\") pod \"kube-proxy-mxfg7\" (UID: \"6c2cce67-cad2-4c3f-ab50-29140c6b8a33\") " pod="kube-system/kube-proxy-mxfg7" Sep 6 00:21:46.584893 kubelet[2141]: I0906 00:21:46.584833 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw5wk\" (UniqueName: \"kubernetes.io/projected/6c2cce67-cad2-4c3f-ab50-29140c6b8a33-kube-api-access-gw5wk\") pod \"kube-proxy-mxfg7\" (UID: \"6c2cce67-cad2-4c3f-ab50-29140c6b8a33\") " pod="kube-system/kube-proxy-mxfg7" Sep 6 00:21:46.691881 kubelet[2141]: E0906 00:21:46.691835 2141 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 6 00:21:46.691881 kubelet[2141]: E0906 00:21:46.691879 2141 projected.go:194] Error preparing data for projected volume kube-api-access-gw5wk for pod kube-system/kube-proxy-mxfg7: configmap "kube-root-ca.crt" not found Sep 6 00:21:46.692106 kubelet[2141]: E0906 00:21:46.691952 2141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c2cce67-cad2-4c3f-ab50-29140c6b8a33-kube-api-access-gw5wk podName:6c2cce67-cad2-4c3f-ab50-29140c6b8a33 nodeName:}" failed. No retries permitted until 2025-09-06 00:21:47.191932261 +0000 UTC m=+6.764491078 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gw5wk" (UniqueName: "kubernetes.io/projected/6c2cce67-cad2-4c3f-ab50-29140c6b8a33-kube-api-access-gw5wk") pod "kube-proxy-mxfg7" (UID: "6c2cce67-cad2-4c3f-ab50-29140c6b8a33") : configmap "kube-root-ca.crt" not found Sep 6 00:21:47.290468 kubelet[2141]: I0906 00:21:47.290412 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz82d\" (UniqueName: \"kubernetes.io/projected/67ff6bbc-be09-4791-a4fb-f31afc6ad23b-kube-api-access-rz82d\") pod \"tigera-operator-58fc44c59b-p67z2\" (UID: \"67ff6bbc-be09-4791-a4fb-f31afc6ad23b\") " pod="tigera-operator/tigera-operator-58fc44c59b-p67z2" Sep 6 00:21:47.290468 kubelet[2141]: I0906 00:21:47.290475 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67ff6bbc-be09-4791-a4fb-f31afc6ad23b-var-lib-calico\") pod \"tigera-operator-58fc44c59b-p67z2\" (UID: \"67ff6bbc-be09-4791-a4fb-f31afc6ad23b\") " pod="tigera-operator/tigera-operator-58fc44c59b-p67z2" Sep 6 00:21:47.291104 kubelet[2141]: I0906 00:21:47.291065 2141 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:21:47.476193 kubelet[2141]: E0906 00:21:47.476143 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:47.477177 env[1312]: time="2025-09-06T00:21:47.477119756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxfg7,Uid:6c2cce67-cad2-4c3f-ab50-29140c6b8a33,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:47.492317 env[1312]: time="2025-09-06T00:21:47.492195759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:47.492317 env[1312]: time="2025-09-06T00:21:47.492242437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:47.492317 env[1312]: time="2025-09-06T00:21:47.492258057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:47.492550 env[1312]: time="2025-09-06T00:21:47.492482420Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd28205c66e63d11f6d29afc02a2559fd5c6fff9f41a2d500202f8fad8a7b190 pid=2214 runtime=io.containerd.runc.v2 Sep 6 00:21:47.527878 env[1312]: time="2025-09-06T00:21:47.527835275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxfg7,Uid:6c2cce67-cad2-4c3f-ab50-29140c6b8a33,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd28205c66e63d11f6d29afc02a2559fd5c6fff9f41a2d500202f8fad8a7b190\"" Sep 6 00:21:47.528819 kubelet[2141]: E0906 00:21:47.528557 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:47.531554 env[1312]: time="2025-09-06T00:21:47.530533710Z" level=info msg="CreateContainer within sandbox \"fd28205c66e63d11f6d29afc02a2559fd5c6fff9f41a2d500202f8fad8a7b190\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:21:47.545283 env[1312]: time="2025-09-06T00:21:47.545189590Z" level=info msg="CreateContainer within sandbox \"fd28205c66e63d11f6d29afc02a2559fd5c6fff9f41a2d500202f8fad8a7b190\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43b23f9f7835d6c5685d18ecb4433cdaf4c2a945f0da7cd4d5af3d76222196c2\"" Sep 6 00:21:47.546089 env[1312]: time="2025-09-06T00:21:47.546043011Z" level=info msg="StartContainer for \"43b23f9f7835d6c5685d18ecb4433cdaf4c2a945f0da7cd4d5af3d76222196c2\"" Sep 6 00:21:47.579471 env[1312]: time="2025-09-06T00:21:47.579432068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-p67z2,Uid:67ff6bbc-be09-4791-a4fb-f31afc6ad23b,Namespace:tigera-operator,Attempt:0,}" Sep 6 00:21:47.592727 env[1312]: time="2025-09-06T00:21:47.592663849Z" level=info msg="StartContainer for \"43b23f9f7835d6c5685d18ecb4433cdaf4c2a945f0da7cd4d5af3d76222196c2\" returns successfully" Sep 6 00:21:47.596412 env[1312]: time="2025-09-06T00:21:47.596198152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:47.596412 env[1312]: time="2025-09-06T00:21:47.596250661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:47.596412 env[1312]: time="2025-09-06T00:21:47.596266481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:47.596590 env[1312]: time="2025-09-06T00:21:47.596492037Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21375e529081b68f2022c4ff1865b93aa10097e230cba546a75ef874d92ed77d pid=2293 runtime=io.containerd.runc.v2 Sep 6 00:21:47.705358 env[1312]: time="2025-09-06T00:21:47.704751446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-p67z2,Uid:67ff6bbc-be09-4791-a4fb-f31afc6ad23b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"21375e529081b68f2022c4ff1865b93aa10097e230cba546a75ef874d92ed77d\"" Sep 6 00:21:47.707339 env[1312]: time="2025-09-06T00:21:47.706404787Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 6 00:21:47.762000 audit[2359]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.764813 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 6 00:21:47.764879 kernel: audit: type=1325 audit(1757118107.762:216): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.777167 kernel: audit: type=1325 audit(1757118107.763:217): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.777298 kernel: audit: type=1300 audit(1757118107.763:217): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3a665800 a2=0 a3=7ffe3a6657ec items=0 ppid=2268 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.777315 kernel: audit: type=1327 audit(1757118107.763:217): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 00:21:47.763000 audit[2360]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.763000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3a665800 a2=0 a3=7ffe3a6657ec items=0 ppid=2268 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.763000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 00:21:47.780250 kernel: audit: type=1325 audit(1757118107.765:218): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.765000 audit[2361]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.765000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1d604b80 a2=0 a3=7ffc1d604b6c items=0 ppid=2268 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.785117 kernel: audit: type=1300 audit(1757118107.765:218): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1d604b80 a2=0 
a3=7ffc1d604b6c items=0 ppid=2268 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.765000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 00:21:47.762000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe99f7cb40 a2=0 a3=7ffe99f7cb2c items=0 ppid=2268 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.792431 kernel: audit: type=1327 audit(1757118107.765:218): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 00:21:47.792524 kernel: audit: type=1300 audit(1757118107.762:216): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe99f7cb40 a2=0 a3=7ffe99f7cb2c items=0 ppid=2268 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.792552 kernel: audit: type=1327 audit(1757118107.762:216): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 00:21:47.762000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 00:21:47.794841 kernel: audit: type=1325 audit(1757118107.768:219): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.768000 audit[2362]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.768000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea0766340 a2=0 a3=7ffea076632c items=0 ppid=2268 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.768000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 6 00:21:47.770000 audit[2363]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.770000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda65bb3a0 a2=0 a3=7ffda65bb38c items=0 ppid=2268 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 00:21:47.771000 audit[2364]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.771000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdcb8a5d0 a2=0 a3=7ffcdcb8a5bc items=0 ppid=2268 pid=2364 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.771000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 6 00:21:47.865000 audit[2365]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.865000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff7b1a8550 a2=0 a3=7fff7b1a853c items=0 ppid=2268 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.865000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 6 00:21:47.868000 audit[2367]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2367 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.868000 audit[2367]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe44f2ac70 a2=0 a3=7ffe44f2ac5c items=0 ppid=2268 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.868000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 6 00:21:47.872000 audit[2370]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2370 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.872000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcd1ac8500 a2=0 a3=7ffcd1ac84ec items=0 ppid=2268 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 6 00:21:47.873000 audit[2371]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.873000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe01831f00 a2=0 a3=7ffe01831eec items=0 ppid=2268 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 6 00:21:47.875000 audit[2373]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.875000 audit[2373]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffeaceefea0 a2=0 a3=7ffeaceefe8c items=0 ppid=2268 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.875000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 6 00:21:47.876000 audit[2374]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.876000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfbacbbd0 a2=0 a3=7ffdfbacbbbc items=0 ppid=2268 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 6 00:21:47.879446 kubelet[2141]: E0906 00:21:47.879413 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:47.880000 audit[2376]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2376 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.880000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff1ce3cd40 a2=0 a3=7fff1ce3cd2c items=0 ppid=2268 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 6 00:21:47.885000 audit[2379]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2379 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.885000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe276033a0 a2=0 a3=7ffe2760338c items=0 ppid=2268 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.885000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 6 00:21:47.886000 audit[2380]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.886000 audit[2380]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea9b07330 a2=0 a3=7ffea9b0731c items=0 ppid=2268 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 6 00:21:47.889000 audit[2382]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2382 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.889000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd05054700 a2=0 a3=7ffd050546ec items=0 ppid=2268 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.889000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 6 00:21:47.890000 audit[2383]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.890000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe05ed7250 a2=0 a3=7ffe05ed723c items=0 ppid=2268 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 6 00:21:47.893000 audit[2385]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.893000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca0fa4c20 a2=0 a3=7ffca0fa4c0c items=0 ppid=2268 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 00:21:47.897000 audit[2388]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.897000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf384bec0 a2=0 a3=7ffdf384beac items=0 ppid=2268 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.897000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 00:21:47.901000 audit[2391]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.901000 audit[2391]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe3bea0bf0 a2=0 a3=7ffe3bea0bdc items=0 ppid=2268 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.901000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 6 00:21:47.903000 audit[2392]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.903000 audit[2392]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd8a3d4110 a2=0 a3=7ffd8a3d40fc items=0 ppid=2268 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.903000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 6 00:21:47.905000 audit[2394]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.905000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcd3292e40 a2=0 a3=7ffcd3292e2c items=0 ppid=2268 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.905000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 00:21:47.909000 audit[2397]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.909000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8e387420 a2=0 a3=7ffd8e38740c items=0 ppid=2268 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 00:21:47.910000 audit[2398]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.910000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3f9a5740 a2=0 a3=7ffc3f9a572c items=0 ppid=2268 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 6 00:21:47.913000 audit[2400]: NETFILTER_CFG 
table=nat:62 family=2 entries=1 op=nft_register_rule pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:21:47.913000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe5cb67bb0 a2=0 a3=7ffe5cb67b9c items=0 ppid=2268 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 6 00:21:47.943000 audit[2406]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:47.943000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd65f7250 a2=0 a3=7ffcd65f723c items=0 ppid=2268 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:47.954000 audit[2406]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:47.954000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcd65f7250 a2=0 a3=7ffcd65f723c items=0 ppid=2268 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.954000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:47.956000 audit[2411]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.956000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd87bced30 a2=0 a3=7ffd87bced1c items=0 ppid=2268 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.956000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 6 00:21:47.959000 audit[2413]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.959000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe1e352950 a2=0 a3=7ffe1e35293c items=0 ppid=2268 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.959000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 6 00:21:47.964000 audit[2416]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.964000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffbb61eb10 a2=0 a3=7fffbb61eafc items=0 ppid=2268 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.964000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 6 00:21:47.966000 audit[2417]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.966000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc97d8d630 a2=0 a3=7ffc97d8d61c items=0 ppid=2268 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 6 00:21:47.969000 audit[2419]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2419 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.969000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffce2f56a40 a2=0 a3=7ffce2f56a2c items=0 ppid=2268 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 6 00:21:47.970000 audit[2420]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.970000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1d2dced0 a2=0 a3=7fff1d2dcebc items=0 ppid=2268 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.970000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 6 00:21:47.976000 audit[2422]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.976000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff51445750 a2=0 a3=7fff5144573c 
items=0 ppid=2268 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.976000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 6 00:21:47.981000 audit[2425]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.981000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd454d89b0 a2=0 a3=7ffd454d899c items=0 ppid=2268 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 6 00:21:47.982000 audit[2426]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.982000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb1341ae0 a2=0 a3=7ffdb1341acc items=0 ppid=2268 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 6 00:21:47.984000 audit[2428]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2428 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.984000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc458f5370 a2=0 a3=7ffc458f535c items=0 ppid=2268 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.984000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 6 00:21:47.985000 audit[2429]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.985000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd13d3f540 a2=0 a3=7ffd13d3f52c items=0 ppid=2268 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.985000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 6 00:21:47.988000 audit[2431]: NETFILTER_CFG 
table=filter:76 family=10 entries=1 op=nft_register_rule pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.988000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdb8b7680 a2=0 a3=7ffcdb8b766c items=0 ppid=2268 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.988000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 00:21:47.992000 audit[2434]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.992000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff5ab69e40 a2=0 a3=7fff5ab69e2c items=0 ppid=2268 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.992000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 6 00:21:47.996192 kubelet[2141]: E0906 00:21:47.994774 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:47.996192 kubelet[2141]: E0906 00:21:47.995573 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:47.997000 audit[2437]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.997000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe30daa710 a2=0 a3=7ffe30daa6fc items=0 ppid=2268 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.997000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 6 00:21:47.998000 audit[2438]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:47.998000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb652f330 a2=0 a3=7ffdb652f31c items=0 ppid=2268 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.998000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 6 00:21:48.000000 audit[2440]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:48.000000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff1078dd60 a2=0 a3=7fff1078dd4c items=0 ppid=2268 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:48.000000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 00:21:48.007000 audit[2443]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:48.007000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff90b05a40 a2=0 a3=7fff90b05a2c items=0 ppid=2268 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:48.007000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 00:21:48.008000 audit[2444]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:48.008000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbf23e060 a2=0 a3=7ffcbf23e04c items=0 ppid=2268 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:48.008000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 6 00:21:48.012257 kubelet[2141]: I0906 00:21:48.012191 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mxfg7" podStartSLOduration=2.01217028 podStartE2EDuration="2.01217028s" podCreationTimestamp="2025-09-06 00:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:48.00396177 +0000 UTC m=+7.576520587" watchObservedRunningTime="2025-09-06 00:21:48.01217028 +0000 UTC m=+7.584729127" Sep 6 00:21:48.011000 audit[2446]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:48.011000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd7ecd1ff0 a2=0 a3=7ffd7ecd1fdc items=0 ppid=2268 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:48.011000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 6 00:21:48.012000 audit[2447]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:48.012000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd87ebf700 a2=0 a3=7ffd87ebf6ec items=0 ppid=2268 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:48.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 6 00:21:48.015000 audit[2449]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:48.015000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd1f9a7c10 a2=0 a3=7ffd1f9a7bfc items=0 ppid=2268 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:48.015000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:21:48.018000 audit[2452]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:21:48.018000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd311ca040 a2=0 a3=7ffd311ca02c items=0 ppid=2268 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:48.018000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:21:48.021000 audit[2454]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 6 00:21:48.021000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffcfb71b640 a2=0 a3=7ffcfb71b62c items=0 ppid=2268 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:48.021000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:48.021000 audit[2454]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 6 00:21:48.021000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffcfb71b640 a2=0 a3=7ffcfb71b62c items=0 ppid=2268 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:48.021000 audit: 
PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:49.123283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070399903.mount: Deactivated successfully. Sep 6 00:21:50.734270 env[1312]: time="2025-09-06T00:21:50.734199548Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:50.736220 env[1312]: time="2025-09-06T00:21:50.736166166Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:50.737904 env[1312]: time="2025-09-06T00:21:50.737872954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:50.739416 env[1312]: time="2025-09-06T00:21:50.739386289Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:50.739980 env[1312]: time="2025-09-06T00:21:50.739951424Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 6 00:21:50.742377 env[1312]: time="2025-09-06T00:21:50.742330390Z" level=info msg="CreateContainer within sandbox \"21375e529081b68f2022c4ff1865b93aa10097e230cba546a75ef874d92ed77d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 6 00:21:50.755275 env[1312]: time="2025-09-06T00:21:50.755231507Z" level=info msg="CreateContainer within sandbox \"21375e529081b68f2022c4ff1865b93aa10097e230cba546a75ef874d92ed77d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b3a700f1138ab521ff797911da2b8cb1c2b79aac46b1ae7d2c607439bbf0c72e\"" Sep 6 00:21:50.755622 env[1312]: time="2025-09-06T00:21:50.755583221Z" level=info msg="StartContainer for \"b3a700f1138ab521ff797911da2b8cb1c2b79aac46b1ae7d2c607439bbf0c72e\"" Sep 6 00:21:50.808219 kubelet[2141]: E0906 00:21:50.808180 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:51.137182 env[1312]: time="2025-09-06T00:21:51.137012331Z" level=info msg="StartContainer for \"b3a700f1138ab521ff797911da2b8cb1c2b79aac46b1ae7d2c607439bbf0c72e\" returns successfully" Sep 6 00:21:51.141582 kubelet[2141]: E0906 00:21:51.141558 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:51.587046 kubelet[2141]: E0906 00:21:51.586994 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:51.595210 kubelet[2141]: I0906 00:21:51.595150 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-p67z2" podStartSLOduration=1.560176726 podStartE2EDuration="4.59510566s" podCreationTimestamp="2025-09-06 00:21:47 +0000 UTC" firstStartedPulling="2025-09-06 
00:21:47.705950921 +0000 UTC m=+7.278509738" lastFinishedPulling="2025-09-06 00:21:50.740879855 +0000 UTC m=+10.313438672" observedRunningTime="2025-09-06 00:21:51.159830674 +0000 UTC m=+10.732389491" watchObservedRunningTime="2025-09-06 00:21:51.59510566 +0000 UTC m=+11.167664477" Sep 6 00:21:52.143651 kubelet[2141]: E0906 00:21:52.143587 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:56.212018 sudo[1485]: pam_unix(sudo:session): session closed for user root Sep 6 00:21:56.211000 audit[1485]: USER_END pid=1485 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:56.213635 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 6 00:21:56.213706 kernel: audit: type=1106 audit(1757118116.211:267): pid=1485 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:56.212000 audit[1485]: CRED_DISP pid=1485 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:56.223156 kernel: audit: type=1104 audit(1757118116.212:268): pid=1485 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:56.223934 sshd[1481]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:56.225000 audit[1481]: USER_END pid=1481 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:56.230169 kernel: audit: type=1106 audit(1757118116.225:269): pid=1481 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:56.231579 systemd[1]: sshd@8-10.0.0.61:22-10.0.0.1:53622.service: Deactivated successfully. Sep 6 00:21:56.232787 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:21:56.229000 audit[1481]: CRED_DISP pid=1481 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:56.233234 systemd-logind[1293]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:21:56.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.61:22-10.0.0.1:53622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:56.277222 systemd-logind[1293]: Removed session 9. 
Sep 6 00:21:56.280882 kernel: audit: type=1104 audit(1757118116.229:270): pid=1481 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:21:56.280941 kernel: audit: type=1131 audit(1757118116.231:271): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.61:22-10.0.0.1:53622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:57.340000 audit[2549]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:57.340000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc6c87ed90 a2=0 a3=7ffc6c87ed7c items=0 ppid=2268 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:57.365053 kernel: audit: type=1325 audit(1757118117.340:272): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:57.365117 kernel: audit: type=1300 audit(1757118117.340:272): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc6c87ed90 a2=0 a3=7ffc6c87ed7c items=0 ppid=2268 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:57.365158 kernel: audit: type=1327 audit(1757118117.340:272): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:57.340000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:57.369000 audit[2549]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:57.369000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6c87ed90 a2=0 a3=0 items=0 ppid=2268 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:57.399079 kernel: audit: type=1325 audit(1757118117.369:273): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:57.399369 kernel: audit: type=1300 audit(1757118117.369:273): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6c87ed90 a2=0 a3=0 items=0 ppid=2268 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:57.369000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:57.453000 audit[2551]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:57.453000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 
a1=7fffcd70fae0 a2=0 a3=7fffcd70facc items=0 ppid=2268 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:57.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:57.460000 audit[2551]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:57.460000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffcd70fae0 a2=0 a3=0 items=0 ppid=2268 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:57.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:58.986000 audit[2553]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:58.986000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdaf8e1690 a2=0 a3=7ffdaf8e167c items=0 ppid=2268 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:58.986000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:58.991000 audit[2553]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:58.991000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdaf8e1690 a2=0 a3=0 items=0 ppid=2268 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:58.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:59.431000 audit[2555]: NETFILTER_CFG table=filter:95 family=2 entries=20 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:59.431000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc19d49030 a2=0 a3=7ffc19d4901c items=0 ppid=2268 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:59.431000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:59.440000 audit[2555]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:59.440000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc19d49030 a2=0 a3=0 items=0 ppid=2268 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:59.440000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:59.599819 kubelet[2141]: I0906 00:21:59.599753 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f9f4dea9-8450-4426-9556-7664bcb1f67b-typha-certs\") pod \"calico-typha-68d4fc9869-7r4db\" (UID: \"f9f4dea9-8450-4426-9556-7664bcb1f67b\") " pod="calico-system/calico-typha-68d4fc9869-7r4db" Sep 6 00:21:59.599819 kubelet[2141]: I0906 00:21:59.599797 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f4dea9-8450-4426-9556-7664bcb1f67b-tigera-ca-bundle\") pod \"calico-typha-68d4fc9869-7r4db\" (UID: \"f9f4dea9-8450-4426-9556-7664bcb1f67b\") " pod="calico-system/calico-typha-68d4fc9869-7r4db" Sep 6 00:21:59.599819 kubelet[2141]: I0906 00:21:59.599821 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x484\" (UniqueName: \"kubernetes.io/projected/f9f4dea9-8450-4426-9556-7664bcb1f67b-kube-api-access-7x484\") pod \"calico-typha-68d4fc9869-7r4db\" (UID: \"f9f4dea9-8450-4426-9556-7664bcb1f67b\") " pod="calico-system/calico-typha-68d4fc9869-7r4db" Sep 6 00:21:59.750844 kubelet[2141]: E0906 00:21:59.750803 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:59.751381 env[1312]: time="2025-09-06T00:21:59.751323597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68d4fc9869-7r4db,Uid:f9f4dea9-8450-4426-9556-7664bcb1f67b,Namespace:calico-system,Attempt:0,}" Sep 6 00:21:59.771826 env[1312]: time="2025-09-06T00:21:59.771620285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:59.771826 env[1312]: time="2025-09-06T00:21:59.771682211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:59.771826 env[1312]: time="2025-09-06T00:21:59.771693172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:59.772070 env[1312]: time="2025-09-06T00:21:59.771924127Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cbe3be282b7a381152c56ce4ff3165c65f24a46ef7d9d0acbafc83fb5ed13a87 pid=2565 runtime=io.containerd.runc.v2 Sep 6 00:21:59.823510 env[1312]: time="2025-09-06T00:21:59.821670548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68d4fc9869-7r4db,Uid:f9f4dea9-8450-4426-9556-7664bcb1f67b,Namespace:calico-system,Attempt:0,} returns sandbox id \"cbe3be282b7a381152c56ce4ff3165c65f24a46ef7d9d0acbafc83fb5ed13a87\"" Sep 6 00:21:59.823510 env[1312]: time="2025-09-06T00:21:59.823398148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 6 00:21:59.823821 kubelet[2141]: E0906 00:21:59.822551 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:00.003053 kubelet[2141]: I0906 00:22:00.002875 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/711edb63-7dfc-4839-98f9-d1645ed31ea3-lib-modules\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.003053 kubelet[2141]: I0906 00:22:00.002946 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/711edb63-7dfc-4839-98f9-d1645ed31ea3-tigera-ca-bundle\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.003053 kubelet[2141]: I0906 00:22:00.003031 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/711edb63-7dfc-4839-98f9-d1645ed31ea3-cni-log-dir\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.003318 kubelet[2141]: I0906 00:22:00.003070 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/711edb63-7dfc-4839-98f9-d1645ed31ea3-flexvol-driver-host\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.003318 kubelet[2141]: I0906 00:22:00.003099 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/711edb63-7dfc-4839-98f9-d1645ed31ea3-node-certs\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.003318 kubelet[2141]: I0906 00:22:00.003120 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2btq\" (UniqueName: \"kubernetes.io/projected/711edb63-7dfc-4839-98f9-d1645ed31ea3-kube-api-access-w2btq\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.003318 kubelet[2141]: I0906 00:22:00.003169 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" 
(UniqueName: \"kubernetes.io/host-path/711edb63-7dfc-4839-98f9-d1645ed31ea3-cni-bin-dir\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.003318 kubelet[2141]: I0906 00:22:00.003188 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/711edb63-7dfc-4839-98f9-d1645ed31ea3-policysync\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.003500 kubelet[2141]: I0906 00:22:00.003204 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/711edb63-7dfc-4839-98f9-d1645ed31ea3-xtables-lock\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.003500 kubelet[2141]: I0906 00:22:00.003223 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/711edb63-7dfc-4839-98f9-d1645ed31ea3-var-run-calico\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.003500 kubelet[2141]: I0906 00:22:00.003239 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/711edb63-7dfc-4839-98f9-d1645ed31ea3-cni-net-dir\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.003500 kubelet[2141]: I0906 00:22:00.003295 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/711edb63-7dfc-4839-98f9-d1645ed31ea3-var-lib-calico\") pod \"calico-node-hqs9p\" (UID: \"711edb63-7dfc-4839-98f9-d1645ed31ea3\") " pod="calico-system/calico-node-hqs9p" Sep 6 00:22:00.092901 kubelet[2141]: E0906 00:22:00.092483 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qr48z" podUID="aa5fe117-525e-4a2e-b423-0d13ab8c1f3f" Sep 6 00:22:00.109062 kubelet[2141]: E0906 00:22:00.108948 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.109062 kubelet[2141]: W0906 00:22:00.108978 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.109062 kubelet[2141]: E0906 00:22:00.109005 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:00.116109 kubelet[2141]: E0906 00:22:00.116058 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.116109 kubelet[2141]: W0906 00:22:00.116084 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.116109 kubelet[2141]: E0906 00:22:00.116109 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.150973 env[1312]: time="2025-09-06T00:22:00.150910673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hqs9p,Uid:711edb63-7dfc-4839-98f9-d1645ed31ea3,Namespace:calico-system,Attempt:0,}" Sep 6 00:22:00.178894 env[1312]: time="2025-09-06T00:22:00.178620304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:00.178894 env[1312]: time="2025-09-06T00:22:00.178670128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:00.178894 env[1312]: time="2025-09-06T00:22:00.178684986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:00.179120 env[1312]: time="2025-09-06T00:22:00.179006701Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fcd9365c05baa9fe596929693e8aaa2baa4a703c56e39e8decb5dd9492fc65b pid=2609 runtime=io.containerd.runc.v2 Sep 6 00:22:00.206321 kubelet[2141]: E0906 00:22:00.206138 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.206321 kubelet[2141]: W0906 00:22:00.206167 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.206893 kubelet[2141]: E0906 00:22:00.206189 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.206893 kubelet[2141]: I0906 00:22:00.206799 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aa5fe117-525e-4a2e-b423-0d13ab8c1f3f-registration-dir\") pod \"csi-node-driver-qr48z\" (UID: \"aa5fe117-525e-4a2e-b423-0d13ab8c1f3f\") " pod="calico-system/csi-node-driver-qr48z" Sep 6 00:22:00.207338 kubelet[2141]: E0906 00:22:00.207198 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.207338 kubelet[2141]: W0906 00:22:00.207209 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.207338 kubelet[2141]: E0906 00:22:00.207224 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:00.207338 kubelet[2141]: I0906 00:22:00.207240 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa5fe117-525e-4a2e-b423-0d13ab8c1f3f-kubelet-dir\") pod \"csi-node-driver-qr48z\" (UID: \"aa5fe117-525e-4a2e-b423-0d13ab8c1f3f\") " pod="calico-system/csi-node-driver-qr48z" Sep 6 00:22:00.207630 kubelet[2141]: E0906 00:22:00.207586 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.207683 kubelet[2141]: W0906 00:22:00.207625 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.207683 kubelet[2141]: E0906 00:22:00.207666 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.207735 kubelet[2141]: I0906 00:22:00.207709 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aa5fe117-525e-4a2e-b423-0d13ab8c1f3f-socket-dir\") pod \"csi-node-driver-qr48z\" (UID: \"aa5fe117-525e-4a2e-b423-0d13ab8c1f3f\") " pod="calico-system/csi-node-driver-qr48z" Sep 6 00:22:00.208069 kubelet[2141]: E0906 00:22:00.208052 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.208069 kubelet[2141]: W0906 00:22:00.208069 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.208174 kubelet[2141]: E0906 00:22:00.208085 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.208174 kubelet[2141]: I0906 00:22:00.208105 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/aa5fe117-525e-4a2e-b423-0d13ab8c1f3f-varrun\") pod \"csi-node-driver-qr48z\" (UID: \"aa5fe117-525e-4a2e-b423-0d13ab8c1f3f\") " pod="calico-system/csi-node-driver-qr48z" Sep 6 00:22:00.208395 kubelet[2141]: E0906 00:22:00.208379 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.208395 kubelet[2141]: W0906 00:22:00.208392 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.208537 kubelet[2141]: E0906 00:22:00.208518 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:00.208570 kubelet[2141]: I0906 00:22:00.208548 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmn8g\" (UniqueName: \"kubernetes.io/projected/aa5fe117-525e-4a2e-b423-0d13ab8c1f3f-kube-api-access-jmn8g\") pod \"csi-node-driver-qr48z\" (UID: \"aa5fe117-525e-4a2e-b423-0d13ab8c1f3f\") " pod="calico-system/csi-node-driver-qr48z" Sep 6 00:22:00.208635 kubelet[2141]: E0906 00:22:00.208623 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.208663 kubelet[2141]: W0906 00:22:00.208634 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.208776 kubelet[2141]: E0906 00:22:00.208760 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.208857 kubelet[2141]: E0906 00:22:00.208841 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.208857 kubelet[2141]: W0906 00:22:00.208852 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.209342 kubelet[2141]: E0906 00:22:00.208955 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.209342 kubelet[2141]: E0906 00:22:00.209054 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.209342 kubelet[2141]: W0906 00:22:00.209062 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.209342 kubelet[2141]: E0906 00:22:00.209105 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.209342 kubelet[2141]: E0906 00:22:00.209330 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.209342 kubelet[2141]: W0906 00:22:00.209339 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.209527 kubelet[2141]: E0906 00:22:00.209351 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:00.209616 kubelet[2141]: E0906 00:22:00.209600 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.209616 kubelet[2141]: W0906 00:22:00.209613 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.209670 kubelet[2141]: E0906 00:22:00.209625 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.209879 kubelet[2141]: E0906 00:22:00.209861 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.209879 kubelet[2141]: W0906 00:22:00.209873 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.209958 kubelet[2141]: E0906 00:22:00.209884 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.211337 kubelet[2141]: E0906 00:22:00.211006 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.211337 kubelet[2141]: W0906 00:22:00.211330 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.211429 kubelet[2141]: E0906 00:22:00.211346 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.211650 kubelet[2141]: E0906 00:22:00.211623 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.211650 kubelet[2141]: W0906 00:22:00.211639 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.211650 kubelet[2141]: E0906 00:22:00.211648 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.211957 kubelet[2141]: E0906 00:22:00.211924 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.211957 kubelet[2141]: W0906 00:22:00.211956 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.212048 kubelet[2141]: E0906 00:22:00.211966 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:00.212408 kubelet[2141]: E0906 00:22:00.212336 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.212408 kubelet[2141]: W0906 00:22:00.212402 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.212504 kubelet[2141]: E0906 00:22:00.212427 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.227751 env[1312]: time="2025-09-06T00:22:00.227681960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hqs9p,Uid:711edb63-7dfc-4839-98f9-d1645ed31ea3,Namespace:calico-system,Attempt:0,} returns sandbox id \"4fcd9365c05baa9fe596929693e8aaa2baa4a703c56e39e8decb5dd9492fc65b\"" Sep 6 00:22:00.309875 kubelet[2141]: E0906 00:22:00.309730 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.309875 kubelet[2141]: W0906 00:22:00.309758 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.309875 kubelet[2141]: E0906 00:22:00.309796 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.310112 kubelet[2141]: E0906 00:22:00.310073 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.310112 kubelet[2141]: W0906 00:22:00.310084 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.310112 kubelet[2141]: E0906 00:22:00.310095 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.310368 kubelet[2141]: E0906 00:22:00.310335 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.310368 kubelet[2141]: W0906 00:22:00.310355 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.310368 kubelet[2141]: E0906 00:22:00.310370 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:00.310787 kubelet[2141]: E0906 00:22:00.310765 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.310870 kubelet[2141]: W0906 00:22:00.310817 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.310870 kubelet[2141]: E0906 00:22:00.310833 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.311261 kubelet[2141]: E0906 00:22:00.311223 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.311261 kubelet[2141]: W0906 00:22:00.311254 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.311526 kubelet[2141]: E0906 00:22:00.311290 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.311775 kubelet[2141]: E0906 00:22:00.311732 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.311775 kubelet[2141]: W0906 00:22:00.311752 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.312018 kubelet[2141]: E0906 00:22:00.311923 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.312156 kubelet[2141]: E0906 00:22:00.312110 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.312156 kubelet[2141]: W0906 00:22:00.312147 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.312273 kubelet[2141]: E0906 00:22:00.312192 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.312484 kubelet[2141]: E0906 00:22:00.312446 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.312484 kubelet[2141]: W0906 00:22:00.312464 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.312664 kubelet[2141]: E0906 00:22:00.312512 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:00.313170 kubelet[2141]: E0906 00:22:00.313145 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.313170 kubelet[2141]: W0906 00:22:00.313163 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.313296 kubelet[2141]: E0906 00:22:00.313201 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.313423 kubelet[2141]: E0906 00:22:00.313402 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.313423 kubelet[2141]: W0906 00:22:00.313417 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.313533 kubelet[2141]: E0906 00:22:00.313497 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.313674 kubelet[2141]: E0906 00:22:00.313652 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.313674 kubelet[2141]: W0906 00:22:00.313670 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.313810 kubelet[2141]: E0906 00:22:00.313703 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.313869 kubelet[2141]: E0906 00:22:00.313852 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.313869 kubelet[2141]: W0906 00:22:00.313862 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.313943 kubelet[2141]: E0906 00:22:00.313932 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.314060 kubelet[2141]: E0906 00:22:00.314041 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.314060 kubelet[2141]: W0906 00:22:00.314055 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.314156 kubelet[2141]: E0906 00:22:00.314088 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:00.314371 kubelet[2141]: E0906 00:22:00.314344 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.314371 kubelet[2141]: W0906 00:22:00.314357 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.314371 kubelet[2141]: E0906 00:22:00.314373 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.314678 kubelet[2141]: E0906 00:22:00.314656 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.314772 kubelet[2141]: W0906 00:22:00.314670 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.314772 kubelet[2141]: E0906 00:22:00.314753 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.314900 kubelet[2141]: E0906 00:22:00.314877 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.314900 kubelet[2141]: W0906 00:22:00.314894 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.315004 kubelet[2141]: E0906 00:22:00.314933 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.315112 kubelet[2141]: E0906 00:22:00.315098 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.315112 kubelet[2141]: W0906 00:22:00.315107 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.315308 kubelet[2141]: E0906 00:22:00.315186 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.315308 kubelet[2141]: E0906 00:22:00.315292 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.315308 kubelet[2141]: W0906 00:22:00.315300 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.315439 kubelet[2141]: E0906 00:22:00.315332 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:00.315439 kubelet[2141]: E0906 00:22:00.315438 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.315559 kubelet[2141]: W0906 00:22:00.315445 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.315559 kubelet[2141]: E0906 00:22:00.315502 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.315693 kubelet[2141]: E0906 00:22:00.315675 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.315693 kubelet[2141]: W0906 00:22:00.315690 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.315785 kubelet[2141]: E0906 00:22:00.315706 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.315958 kubelet[2141]: E0906 00:22:00.315930 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.315958 kubelet[2141]: W0906 00:22:00.315947 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.316049 kubelet[2141]: E0906 00:22:00.315966 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.316263 kubelet[2141]: E0906 00:22:00.316246 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.316263 kubelet[2141]: W0906 00:22:00.316259 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.316358 kubelet[2141]: E0906 00:22:00.316276 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:00.316529 kubelet[2141]: E0906 00:22:00.316510 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:00.316529 kubelet[2141]: W0906 00:22:00.316524 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:00.316625 kubelet[2141]: E0906 00:22:00.316541 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 6 00:22:00.316776 kubelet[2141]: E0906 00:22:00.316759 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:22:00.316776 kubelet[2141]: W0906 00:22:00.316771 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:22:00.316875 kubelet[2141]: E0906 00:22:00.316788 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:22:00.317027 kubelet[2141]: E0906 00:22:00.317005 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:22:00.317027 kubelet[2141]: W0906 00:22:00.317021 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:22:00.317158 kubelet[2141]: E0906 00:22:00.317033 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:22:00.326097 kubelet[2141]: E0906 00:22:00.326062 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:22:00.326097 kubelet[2141]: W0906 00:22:00.326084 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:22:00.326097 kubelet[2141]: E0906 00:22:00.326103 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
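The burst of kubelet messages above is one failure repeating: on each plugin probe the kubelet finds the FlexVolume driver directory nodeagent~uds, tries to run its driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, the binary does not exist, so the call produces no output, and parsing that empty output as JSON fails with "unexpected end of JSON input". The Go sketch below only illustrates that call-and-parse pattern; it is not kubelet's driver-call.go, and the bare command name "uds" is a stand-in for the missing driver path.

// Minimal sketch, assuming only that a FlexVolume driver is an executable
// that is run with a subcommand ("init" during probing) and whose stdout is
// parsed as JSON. It reproduces the two error strings seen in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// The real kubelet invokes the full driver path shown in the log; a bare,
	// nonexistent name is used here so the example stays self-contained and
	// still triggers "executable file not found in $PATH".
	out, err := exec.Command("uds", "init").CombinedOutput()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, string(out))
	}

	// The captured output is empty, so unmarshalling it fails with the same
	// "unexpected end of JSON input" message kubelet logs.
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("failed to unmarshal output for command init:", err)
	}
}

On a machine with no uds executable in $PATH this prints both messages, in the same order the kubelet logs them.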
Sep 6 00:22:00.451000 audit[2686]: NETFILTER_CFG table=filter:97 family=2 entries=22 op=nft_register_rule pid=2686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 00:22:00.451000 audit[2686]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff65332460 a2=0 a3=7fff6533244c items=0 ppid=2268 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:22:00.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 00:22:00.458000 audit[2686]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 00:22:00.458000 audit[2686]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff65332460 a2=0 a3=0 items=0 ppid=2268 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:22:00.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 00:22:00.712228 systemd[1]: run-containerd-runc-k8s.io-cbe3be282b7a381152c56ce4ff3165c65f24a46ef7d9d0acbafc83fb5ed13a87-runc.awhrQS.mount: Deactivated successfully.
Sep 6 00:22:01.295883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1514044450.mount: Deactivated successfully.
Sep 6 00:22:01.970302 kubelet[2141]: E0906 00:22:01.970217 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qr48z" podUID="aa5fe117-525e-4a2e-b423-0d13ab8c1f3f"
Sep 6 00:22:02.192203 env[1312]: time="2025-09-06T00:22:02.192154834Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:02.194620 env[1312]: time="2025-09-06T00:22:02.194586906Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:02.196400 env[1312]: time="2025-09-06T00:22:02.196367053Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:02.197831 env[1312]: time="2025-09-06T00:22:02.197806469Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:02.198246 env[1312]: time="2025-09-06T00:22:02.198202724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 6 00:22:02.201590 env[1312]: time="2025-09-06T00:22:02.201555487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
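The audit records at 00:22:00.451 and 00:22:00.458 above show iptables-restore (running as the xtables-nft-multi multicall binary, per the exe= field) registering filter and nat rules. The PROCTITLE field is the caller's argv, hex-encoded with NUL bytes between arguments; decoded, both records give the command line iptables-restore -w 5 -W 100000 --noflush --counters. The stand-alone Go snippet below is only a decoding helper, not part of any tool referenced in this log.

// Decode the hex-encoded PROCTITLE value from the audit records above.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}

	// argv entries are separated by NUL bytes in the audit record.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Prints: iptables-restore -w 5 -W 100000 --noflush --counters
}

The same decoding applies to the identical PROCTITLE value in the nat rule group.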
Sep 6 00:22:02.219573 env[1312]: time="2025-09-06T00:22:02.219480905Z" level=info msg="CreateContainer within sandbox \"cbe3be282b7a381152c56ce4ff3165c65f24a46ef7d9d0acbafc83fb5ed13a87\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 6 00:22:02.570065 env[1312]: time="2025-09-06T00:22:02.569994182Z" level=info msg="CreateContainer within sandbox \"cbe3be282b7a381152c56ce4ff3165c65f24a46ef7d9d0acbafc83fb5ed13a87\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f5c6304823ea6bb73b128363ca8dcf532da7f4730502c3233c3145d63f2a90ad\""
Sep 6 00:22:02.571735 env[1312]: time="2025-09-06T00:22:02.571673058Z" level=info msg="StartContainer for \"f5c6304823ea6bb73b128363ca8dcf532da7f4730502c3233c3145d63f2a90ad\""
Sep 6 00:22:02.677335 env[1312]: time="2025-09-06T00:22:02.677259102Z" level=info msg="StartContainer for \"f5c6304823ea6bb73b128363ca8dcf532da7f4730502c3233c3145d63f2a90ad\" returns successfully"
Sep 6 00:22:03.166635 kubelet[2141]: E0906 00:22:03.166577 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:03.231455 kubelet[2141]: E0906 00:22:03.231401 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:22:03.231455 kubelet[2141]: W0906 00:22:03.231435 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:22:03.231455 kubelet[2141]: E0906 00:22:03.231458 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:22:03.231963 kubelet[2141]: E0906 00:22:03.231946 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:22:03.231963 kubelet[2141]: W0906 00:22:03.231958 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:22:03.232056 kubelet[2141]: E0906 00:22:03.231967 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 00:22:03.232199 kubelet[2141]: E0906 00:22:03.232186 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 00:22:03.232199 kubelet[2141]: W0906 00:22:03.232195 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 00:22:03.232199 kubelet[2141]: E0906 00:22:03.232202 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:03.232340 kubelet[2141]: E0906 00:22:03.232334 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.232368 kubelet[2141]: W0906 00:22:03.232341 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.232368 kubelet[2141]: E0906 00:22:03.232348 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.232567 kubelet[2141]: E0906 00:22:03.232534 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.232567 kubelet[2141]: W0906 00:22:03.232544 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.232567 kubelet[2141]: E0906 00:22:03.232551 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.232694 kubelet[2141]: E0906 00:22:03.232672 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.232694 kubelet[2141]: W0906 00:22:03.232684 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.232694 kubelet[2141]: E0906 00:22:03.232690 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.232827 kubelet[2141]: E0906 00:22:03.232806 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.232827 kubelet[2141]: W0906 00:22:03.232818 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.232827 kubelet[2141]: E0906 00:22:03.232825 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.233042 kubelet[2141]: E0906 00:22:03.233022 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.233042 kubelet[2141]: W0906 00:22:03.233032 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.233042 kubelet[2141]: E0906 00:22:03.233040 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:03.233301 kubelet[2141]: E0906 00:22:03.233282 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.233301 kubelet[2141]: W0906 00:22:03.233297 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.233379 kubelet[2141]: E0906 00:22:03.233307 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.233547 kubelet[2141]: E0906 00:22:03.233518 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.233547 kubelet[2141]: W0906 00:22:03.233527 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.233547 kubelet[2141]: E0906 00:22:03.233535 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.233730 kubelet[2141]: E0906 00:22:03.233703 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.233730 kubelet[2141]: W0906 00:22:03.233715 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.233730 kubelet[2141]: E0906 00:22:03.233725 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.233988 kubelet[2141]: E0906 00:22:03.233935 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.233988 kubelet[2141]: W0906 00:22:03.233945 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.233988 kubelet[2141]: E0906 00:22:03.233955 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.234161 kubelet[2141]: E0906 00:22:03.234138 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.234161 kubelet[2141]: W0906 00:22:03.234148 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.234161 kubelet[2141]: E0906 00:22:03.234156 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:03.234304 kubelet[2141]: E0906 00:22:03.234295 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.234304 kubelet[2141]: W0906 00:22:03.234302 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.234370 kubelet[2141]: E0906 00:22:03.234311 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.234515 kubelet[2141]: E0906 00:22:03.234498 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.234515 kubelet[2141]: W0906 00:22:03.234511 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.234592 kubelet[2141]: E0906 00:22:03.234520 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.234770 kubelet[2141]: E0906 00:22:03.234752 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.234770 kubelet[2141]: W0906 00:22:03.234763 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.234770 kubelet[2141]: E0906 00:22:03.234772 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.235004 kubelet[2141]: E0906 00:22:03.234987 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.235004 kubelet[2141]: W0906 00:22:03.234997 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.235159 kubelet[2141]: E0906 00:22:03.235009 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.235261 kubelet[2141]: E0906 00:22:03.235199 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.235261 kubelet[2141]: W0906 00:22:03.235206 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.235261 kubelet[2141]: E0906 00:22:03.235218 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:03.235540 kubelet[2141]: E0906 00:22:03.235502 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.235540 kubelet[2141]: W0906 00:22:03.235530 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.235646 kubelet[2141]: E0906 00:22:03.235568 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.235802 kubelet[2141]: E0906 00:22:03.235787 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.235802 kubelet[2141]: W0906 00:22:03.235798 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.235893 kubelet[2141]: E0906 00:22:03.235806 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.236122 kubelet[2141]: E0906 00:22:03.236104 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.236122 kubelet[2141]: W0906 00:22:03.236117 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.236122 kubelet[2141]: E0906 00:22:03.236144 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.236453 kubelet[2141]: E0906 00:22:03.236433 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.236453 kubelet[2141]: W0906 00:22:03.236445 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.236542 kubelet[2141]: E0906 00:22:03.236500 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.236689 kubelet[2141]: E0906 00:22:03.236674 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.236689 kubelet[2141]: W0906 00:22:03.236685 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.236806 kubelet[2141]: E0906 00:22:03.236700 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:03.236976 kubelet[2141]: E0906 00:22:03.236962 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.236976 kubelet[2141]: W0906 00:22:03.236974 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.237096 kubelet[2141]: E0906 00:22:03.237072 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.237230 kubelet[2141]: E0906 00:22:03.237217 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.237230 kubelet[2141]: W0906 00:22:03.237228 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.237230 kubelet[2141]: E0906 00:22:03.237242 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.237431 kubelet[2141]: E0906 00:22:03.237407 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.237431 kubelet[2141]: W0906 00:22:03.237426 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.237503 kubelet[2141]: E0906 00:22:03.237440 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.237622 kubelet[2141]: E0906 00:22:03.237609 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.237622 kubelet[2141]: W0906 00:22:03.237617 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.237699 kubelet[2141]: E0906 00:22:03.237628 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.237951 kubelet[2141]: E0906 00:22:03.237806 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.237951 kubelet[2141]: W0906 00:22:03.237826 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.237951 kubelet[2141]: E0906 00:22:03.237841 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:03.238086 kubelet[2141]: E0906 00:22:03.238038 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.238086 kubelet[2141]: W0906 00:22:03.238049 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.238086 kubelet[2141]: E0906 00:22:03.238064 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.238256 kubelet[2141]: E0906 00:22:03.238240 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.238256 kubelet[2141]: W0906 00:22:03.238252 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.238319 kubelet[2141]: E0906 00:22:03.238262 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.238450 kubelet[2141]: E0906 00:22:03.238436 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.238450 kubelet[2141]: W0906 00:22:03.238446 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.238533 kubelet[2141]: E0906 00:22:03.238458 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.238684 kubelet[2141]: E0906 00:22:03.238667 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.238684 kubelet[2141]: W0906 00:22:03.238677 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.238773 kubelet[2141]: E0906 00:22:03.238691 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:03.238858 kubelet[2141]: E0906 00:22:03.238845 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:03.238858 kubelet[2141]: W0906 00:22:03.238853 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:03.238926 kubelet[2141]: E0906 00:22:03.238861 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:03.971313 kubelet[2141]: E0906 00:22:03.971256 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qr48z" podUID="aa5fe117-525e-4a2e-b423-0d13ab8c1f3f" Sep 6 00:22:04.169529 kubelet[2141]: I0906 00:22:04.169471 2141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:22:04.169956 kubelet[2141]: E0906 00:22:04.169883 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:04.242998 kubelet[2141]: E0906 00:22:04.242870 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.242998 kubelet[2141]: W0906 00:22:04.242902 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.242998 kubelet[2141]: E0906 00:22:04.242929 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.243261 kubelet[2141]: E0906 00:22:04.243118 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.243261 kubelet[2141]: W0906 00:22:04.243160 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.243261 kubelet[2141]: E0906 00:22:04.243172 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.243461 kubelet[2141]: E0906 00:22:04.243421 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.243461 kubelet[2141]: W0906 00:22:04.243450 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.243595 kubelet[2141]: E0906 00:22:04.243474 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.243752 kubelet[2141]: E0906 00:22:04.243728 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.243752 kubelet[2141]: W0906 00:22:04.243743 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.243805 kubelet[2141]: E0906 00:22:04.243754 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:04.243980 kubelet[2141]: E0906 00:22:04.243967 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.243980 kubelet[2141]: W0906 00:22:04.243978 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.244070 kubelet[2141]: E0906 00:22:04.243988 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.244161 kubelet[2141]: E0906 00:22:04.244149 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.244197 kubelet[2141]: W0906 00:22:04.244160 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.244197 kubelet[2141]: E0906 00:22:04.244174 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.244365 kubelet[2141]: E0906 00:22:04.244351 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.244400 kubelet[2141]: W0906 00:22:04.244365 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.244400 kubelet[2141]: E0906 00:22:04.244379 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.244584 kubelet[2141]: E0906 00:22:04.244569 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.244609 kubelet[2141]: W0906 00:22:04.244584 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.244609 kubelet[2141]: E0906 00:22:04.244599 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.244817 kubelet[2141]: E0906 00:22:04.244804 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.244845 kubelet[2141]: W0906 00:22:04.244817 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.244845 kubelet[2141]: E0906 00:22:04.244828 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:04.245004 kubelet[2141]: E0906 00:22:04.244991 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.245034 kubelet[2141]: W0906 00:22:04.245004 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.245034 kubelet[2141]: E0906 00:22:04.245015 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.245211 kubelet[2141]: E0906 00:22:04.245198 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.245244 kubelet[2141]: W0906 00:22:04.245211 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.245244 kubelet[2141]: E0906 00:22:04.245221 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.245410 kubelet[2141]: E0906 00:22:04.245386 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.245442 kubelet[2141]: W0906 00:22:04.245410 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.245442 kubelet[2141]: E0906 00:22:04.245420 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.245638 kubelet[2141]: E0906 00:22:04.245622 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.245665 kubelet[2141]: W0906 00:22:04.245638 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.245665 kubelet[2141]: E0906 00:22:04.245649 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.245855 kubelet[2141]: E0906 00:22:04.245838 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.245855 kubelet[2141]: W0906 00:22:04.245851 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.245941 kubelet[2141]: E0906 00:22:04.245863 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:04.246087 kubelet[2141]: E0906 00:22:04.246066 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.246087 kubelet[2141]: W0906 00:22:04.246085 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.246183 kubelet[2141]: E0906 00:22:04.246101 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.343219 kubelet[2141]: E0906 00:22:04.343182 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.343219 kubelet[2141]: W0906 00:22:04.343212 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.343474 kubelet[2141]: E0906 00:22:04.343239 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.343524 kubelet[2141]: E0906 00:22:04.343474 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.343524 kubelet[2141]: W0906 00:22:04.343485 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.343524 kubelet[2141]: E0906 00:22:04.343496 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.343724 kubelet[2141]: E0906 00:22:04.343699 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.343724 kubelet[2141]: W0906 00:22:04.343716 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.343827 kubelet[2141]: E0906 00:22:04.343734 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.344271 kubelet[2141]: E0906 00:22:04.344208 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.344271 kubelet[2141]: W0906 00:22:04.344238 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.344271 kubelet[2141]: E0906 00:22:04.344258 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:04.344542 kubelet[2141]: E0906 00:22:04.344506 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.344542 kubelet[2141]: W0906 00:22:04.344520 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.344542 kubelet[2141]: E0906 00:22:04.344533 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.344710 kubelet[2141]: E0906 00:22:04.344693 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.344710 kubelet[2141]: W0906 00:22:04.344703 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.344839 kubelet[2141]: E0906 00:22:04.344714 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.344918 kubelet[2141]: E0906 00:22:04.344896 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.344918 kubelet[2141]: W0906 00:22:04.344910 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.345031 kubelet[2141]: E0906 00:22:04.344987 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.345419 kubelet[2141]: E0906 00:22:04.345379 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.345419 kubelet[2141]: W0906 00:22:04.345407 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.345525 kubelet[2141]: E0906 00:22:04.345475 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.345620 kubelet[2141]: E0906 00:22:04.345602 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.345620 kubelet[2141]: W0906 00:22:04.345613 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.345736 kubelet[2141]: E0906 00:22:04.345648 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:04.345792 kubelet[2141]: E0906 00:22:04.345768 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.345832 kubelet[2141]: W0906 00:22:04.345793 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.345832 kubelet[2141]: E0906 00:22:04.345807 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.346020 kubelet[2141]: E0906 00:22:04.345994 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.346020 kubelet[2141]: W0906 00:22:04.346007 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.346165 kubelet[2141]: E0906 00:22:04.346026 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.346217 kubelet[2141]: E0906 00:22:04.346203 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.346217 kubelet[2141]: W0906 00:22:04.346214 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.346294 kubelet[2141]: E0906 00:22:04.346229 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.346485 kubelet[2141]: E0906 00:22:04.346467 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.346485 kubelet[2141]: W0906 00:22:04.346479 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.346576 kubelet[2141]: E0906 00:22:04.346502 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.346700 kubelet[2141]: E0906 00:22:04.346673 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.346768 kubelet[2141]: W0906 00:22:04.346699 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.346768 kubelet[2141]: E0906 00:22:04.346714 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:04.346925 kubelet[2141]: E0906 00:22:04.346909 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.346925 kubelet[2141]: W0906 00:22:04.346920 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.347040 kubelet[2141]: E0906 00:22:04.346932 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.347158 kubelet[2141]: E0906 00:22:04.347107 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.347233 kubelet[2141]: W0906 00:22:04.347124 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.347233 kubelet[2141]: E0906 00:22:04.347195 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.347421 kubelet[2141]: E0906 00:22:04.347398 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.347421 kubelet[2141]: W0906 00:22:04.347408 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.347421 kubelet[2141]: E0906 00:22:04.347420 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:22:04.347591 kubelet[2141]: E0906 00:22:04.347578 2141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:22:04.347591 kubelet[2141]: W0906 00:22:04.347587 2141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:22:04.347663 kubelet[2141]: E0906 00:22:04.347595 2141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:22:04.402675 env[1312]: time="2025-09-06T00:22:04.402626339Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:04.404745 env[1312]: time="2025-09-06T00:22:04.404710556Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:04.406427 env[1312]: time="2025-09-06T00:22:04.406352902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:04.407950 env[1312]: time="2025-09-06T00:22:04.407906852Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:04.408444 env[1312]: time="2025-09-06T00:22:04.408402294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 6 00:22:04.412697 env[1312]: time="2025-09-06T00:22:04.412663662Z" level=info msg="CreateContainer within sandbox \"4fcd9365c05baa9fe596929693e8aaa2baa4a703c56e39e8decb5dd9492fc65b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 6 00:22:04.437312 env[1312]: time="2025-09-06T00:22:04.437236072Z" level=info msg="CreateContainer within sandbox \"4fcd9365c05baa9fe596929693e8aaa2baa4a703c56e39e8decb5dd9492fc65b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dfaf18c7773ade8828b4264c22af79462d7ff74dc8fa54fd92334d2ee45580cb\"" Sep 6 00:22:04.437999 env[1312]: time="2025-09-06T00:22:04.437936418Z" level=info msg="StartContainer for \"dfaf18c7773ade8828b4264c22af79462d7ff74dc8fa54fd92334d2ee45580cb\"" Sep 6 00:22:04.506623 env[1312]: time="2025-09-06T00:22:04.506509018Z" level=info msg="StartContainer for \"dfaf18c7773ade8828b4264c22af79462d7ff74dc8fa54fd92334d2ee45580cb\" returns successfully" Sep 6 00:22:04.551263 env[1312]: time="2025-09-06T00:22:04.551179949Z" level=info msg="shim disconnected" id=dfaf18c7773ade8828b4264c22af79462d7ff74dc8fa54fd92334d2ee45580cb Sep 6 00:22:04.551263 env[1312]: time="2025-09-06T00:22:04.551249430Z" level=warning msg="cleaning up after shim disconnected" id=dfaf18c7773ade8828b4264c22af79462d7ff74dc8fa54fd92334d2ee45580cb namespace=k8s.io Sep 6 00:22:04.551263 env[1312]: time="2025-09-06T00:22:04.551261372Z" level=info msg="cleaning up dead shim" Sep 6 00:22:04.559073 env[1312]: time="2025-09-06T00:22:04.558989295Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2844 runtime=io.containerd.runc.v2\n" Sep 6 00:22:05.173600 env[1312]: time="2025-09-06T00:22:05.173559154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 6 00:22:05.255172 kubelet[2141]: I0906 00:22:05.254408 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68d4fc9869-7r4db" podStartSLOduration=3.877944959 podStartE2EDuration="6.254387044s" podCreationTimestamp="2025-09-06 00:21:59 +0000 UTC" 
firstStartedPulling="2025-09-06 00:21:59.823047829 +0000 UTC m=+19.395606646" lastFinishedPulling="2025-09-06 00:22:02.199489914 +0000 UTC m=+21.772048731" observedRunningTime="2025-09-06 00:22:03.213813526 +0000 UTC m=+22.786372343" watchObservedRunningTime="2025-09-06 00:22:05.254387044 +0000 UTC m=+24.826945861" Sep 6 00:22:05.426001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfaf18c7773ade8828b4264c22af79462d7ff74dc8fa54fd92334d2ee45580cb-rootfs.mount: Deactivated successfully. Sep 6 00:22:05.970024 kubelet[2141]: E0906 00:22:05.969949 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qr48z" podUID="aa5fe117-525e-4a2e-b423-0d13ab8c1f3f" Sep 6 00:22:07.970264 kubelet[2141]: E0906 00:22:07.969752 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qr48z" podUID="aa5fe117-525e-4a2e-b423-0d13ab8c1f3f" Sep 6 00:22:09.543304 env[1312]: time="2025-09-06T00:22:09.543211620Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:09.545320 env[1312]: time="2025-09-06T00:22:09.545246502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:09.546787 env[1312]: time="2025-09-06T00:22:09.546758512Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:09.548158 env[1312]: time="2025-09-06T00:22:09.548077088Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:09.548627 env[1312]: time="2025-09-06T00:22:09.548583780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 6 00:22:09.551289 env[1312]: time="2025-09-06T00:22:09.550891123Z" level=info msg="CreateContainer within sandbox \"4fcd9365c05baa9fe596929693e8aaa2baa4a703c56e39e8decb5dd9492fc65b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 6 00:22:09.565841 env[1312]: time="2025-09-06T00:22:09.565783774Z" level=info msg="CreateContainer within sandbox \"4fcd9365c05baa9fe596929693e8aaa2baa4a703c56e39e8decb5dd9492fc65b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ed76954655c8b203f255bcbfb4bbd75d53f178eb3f018982d617b5fa706af80d\"" Sep 6 00:22:09.566465 env[1312]: time="2025-09-06T00:22:09.566344798Z" level=info msg="StartContainer for \"ed76954655c8b203f255bcbfb4bbd75d53f178eb3f018982d617b5fa706af80d\"" Sep 6 00:22:09.970451 kubelet[2141]: E0906 00:22:09.970364 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qr48z" podUID="aa5fe117-525e-4a2e-b423-0d13ab8c1f3f" Sep 6 00:22:10.202879 env[1312]: time="2025-09-06T00:22:10.202812489Z" level=info msg="StartContainer for \"ed76954655c8b203f255bcbfb4bbd75d53f178eb3f018982d617b5fa706af80d\" returns successfully" Sep 6 00:22:11.038060 env[1312]: time="2025-09-06T00:22:11.037978180Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:22:11.058148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed76954655c8b203f255bcbfb4bbd75d53f178eb3f018982d617b5fa706af80d-rootfs.mount: Deactivated successfully. Sep 6 00:22:11.062333 env[1312]: time="2025-09-06T00:22:11.062265425Z" level=info msg="shim disconnected" id=ed76954655c8b203f255bcbfb4bbd75d53f178eb3f018982d617b5fa706af80d Sep 6 00:22:11.062333 env[1312]: time="2025-09-06T00:22:11.062332662Z" level=warning msg="cleaning up after shim disconnected" id=ed76954655c8b203f255bcbfb4bbd75d53f178eb3f018982d617b5fa706af80d namespace=k8s.io Sep 6 00:22:11.062333 env[1312]: time="2025-09-06T00:22:11.062342080Z" level=info msg="cleaning up dead shim" Sep 6 00:22:11.070598 env[1312]: time="2025-09-06T00:22:11.070559820Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2919 runtime=io.containerd.runc.v2\n" Sep 6 00:22:11.140783 kubelet[2141]: I0906 00:22:11.140742 2141 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:22:11.211198 env[1312]: time="2025-09-06T00:22:11.209935033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 6 00:22:11.294475 kubelet[2141]: I0906 00:22:11.293992 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d236a4c-f0ec-424c-baa8-2089b5f219ec-config-volume\") pod \"coredns-7c65d6cfc9-krllp\" (UID: \"5d236a4c-f0ec-424c-baa8-2089b5f219ec\") " pod="kube-system/coredns-7c65d6cfc9-krllp" Sep 6 00:22:11.294475 kubelet[2141]: I0906 00:22:11.294060 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6cca75d-1a19-48b7-bf46-1e5cf7e72c19-config\") pod \"goldmane-7988f88666-lvkqq\" (UID: \"a6cca75d-1a19-48b7-bf46-1e5cf7e72c19\") " pod="calico-system/goldmane-7988f88666-lvkqq" Sep 6 00:22:11.294475 kubelet[2141]: I0906 00:22:11.294091 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzzwm\" (UniqueName: \"kubernetes.io/projected/a6cca75d-1a19-48b7-bf46-1e5cf7e72c19-kube-api-access-tzzwm\") pod \"goldmane-7988f88666-lvkqq\" (UID: \"a6cca75d-1a19-48b7-bf46-1e5cf7e72c19\") " pod="calico-system/goldmane-7988f88666-lvkqq" Sep 6 00:22:11.294475 kubelet[2141]: I0906 00:22:11.294123 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr2ql\" (UniqueName: \"kubernetes.io/projected/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-kube-api-access-vr2ql\") pod \"whisker-7969cf68c8-xfwnf\" (UID: \"9ed7a149-33b2-4e31-9b85-7bdfa92bc378\") " pod="calico-system/whisker-7969cf68c8-xfwnf" Sep 6 00:22:11.294475 kubelet[2141]: I0906 
00:22:11.294173 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rznt\" (UniqueName: \"kubernetes.io/projected/23a905d3-2b9b-4e8e-907e-242236a689bc-kube-api-access-7rznt\") pod \"coredns-7c65d6cfc9-dqfgt\" (UID: \"23a905d3-2b9b-4e8e-907e-242236a689bc\") " pod="kube-system/coredns-7c65d6cfc9-dqfgt" Sep 6 00:22:11.294741 kubelet[2141]: I0906 00:22:11.294195 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pznhr\" (UniqueName: \"kubernetes.io/projected/2997af2f-3793-4ebb-a625-6dd9b47d29e8-kube-api-access-pznhr\") pod \"calico-apiserver-7f95dfcdc5-xw9st\" (UID: \"2997af2f-3793-4ebb-a625-6dd9b47d29e8\") " pod="calico-apiserver/calico-apiserver-7f95dfcdc5-xw9st" Sep 6 00:22:11.294741 kubelet[2141]: I0906 00:22:11.294221 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-whisker-ca-bundle\") pod \"whisker-7969cf68c8-xfwnf\" (UID: \"9ed7a149-33b2-4e31-9b85-7bdfa92bc378\") " pod="calico-system/whisker-7969cf68c8-xfwnf" Sep 6 00:22:11.294741 kubelet[2141]: I0906 00:22:11.294254 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a905d3-2b9b-4e8e-907e-242236a689bc-config-volume\") pod \"coredns-7c65d6cfc9-dqfgt\" (UID: \"23a905d3-2b9b-4e8e-907e-242236a689bc\") " pod="kube-system/coredns-7c65d6cfc9-dqfgt" Sep 6 00:22:11.294741 kubelet[2141]: I0906 00:22:11.294276 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d47e55db-f531-4fdd-892c-a105be81339f-calico-apiserver-certs\") pod \"calico-apiserver-7f95dfcdc5-lkdpx\" (UID: \"d47e55db-f531-4fdd-892c-a105be81339f\") " pod="calico-apiserver/calico-apiserver-7f95dfcdc5-lkdpx" Sep 6 00:22:11.294741 kubelet[2141]: I0906 00:22:11.294335 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6cca75d-1a19-48b7-bf46-1e5cf7e72c19-goldmane-ca-bundle\") pod \"goldmane-7988f88666-lvkqq\" (UID: \"a6cca75d-1a19-48b7-bf46-1e5cf7e72c19\") " pod="calico-system/goldmane-7988f88666-lvkqq" Sep 6 00:22:11.294873 kubelet[2141]: I0906 00:22:11.294361 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tjzp\" (UniqueName: \"kubernetes.io/projected/c19353dd-4b41-4b6f-9132-f91a5ef28107-kube-api-access-8tjzp\") pod \"calico-kube-controllers-644bf98f67-gf7cj\" (UID: \"c19353dd-4b41-4b6f-9132-f91a5ef28107\") " pod="calico-system/calico-kube-controllers-644bf98f67-gf7cj" Sep 6 00:22:11.294873 kubelet[2141]: I0906 00:22:11.294383 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a6cca75d-1a19-48b7-bf46-1e5cf7e72c19-goldmane-key-pair\") pod \"goldmane-7988f88666-lvkqq\" (UID: \"a6cca75d-1a19-48b7-bf46-1e5cf7e72c19\") " pod="calico-system/goldmane-7988f88666-lvkqq" Sep 6 00:22:11.294873 kubelet[2141]: I0906 00:22:11.294406 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/2997af2f-3793-4ebb-a625-6dd9b47d29e8-calico-apiserver-certs\") pod \"calico-apiserver-7f95dfcdc5-xw9st\" (UID: \"2997af2f-3793-4ebb-a625-6dd9b47d29e8\") " pod="calico-apiserver/calico-apiserver-7f95dfcdc5-xw9st" Sep 6 00:22:11.294873 kubelet[2141]: I0906 00:22:11.294426 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-whisker-backend-key-pair\") pod \"whisker-7969cf68c8-xfwnf\" (UID: \"9ed7a149-33b2-4e31-9b85-7bdfa92bc378\") " pod="calico-system/whisker-7969cf68c8-xfwnf" Sep 6 00:22:11.294873 kubelet[2141]: I0906 00:22:11.294495 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c19353dd-4b41-4b6f-9132-f91a5ef28107-tigera-ca-bundle\") pod \"calico-kube-controllers-644bf98f67-gf7cj\" (UID: \"c19353dd-4b41-4b6f-9132-f91a5ef28107\") " pod="calico-system/calico-kube-controllers-644bf98f67-gf7cj" Sep 6 00:22:11.295005 kubelet[2141]: I0906 00:22:11.294520 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhtnq\" (UniqueName: \"kubernetes.io/projected/5d236a4c-f0ec-424c-baa8-2089b5f219ec-kube-api-access-mhtnq\") pod \"coredns-7c65d6cfc9-krllp\" (UID: \"5d236a4c-f0ec-424c-baa8-2089b5f219ec\") " pod="kube-system/coredns-7c65d6cfc9-krllp" Sep 6 00:22:11.295005 kubelet[2141]: I0906 00:22:11.294540 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgw48\" (UniqueName: \"kubernetes.io/projected/d47e55db-f531-4fdd-892c-a105be81339f-kube-api-access-mgw48\") pod \"calico-apiserver-7f95dfcdc5-lkdpx\" (UID: \"d47e55db-f531-4fdd-892c-a105be81339f\") " pod="calico-apiserver/calico-apiserver-7f95dfcdc5-lkdpx" Sep 6 00:22:11.470460 env[1312]: time="2025-09-06T00:22:11.470412716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-644bf98f67-gf7cj,Uid:c19353dd-4b41-4b6f-9132-f91a5ef28107,Namespace:calico-system,Attempt:0,}" Sep 6 00:22:11.482849 kubelet[2141]: E0906 00:22:11.482806 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:11.484397 env[1312]: time="2025-09-06T00:22:11.483371028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dqfgt,Uid:23a905d3-2b9b-4e8e-907e-242236a689bc,Namespace:kube-system,Attempt:0,}" Sep 6 00:22:11.490118 env[1312]: time="2025-09-06T00:22:11.490056622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-lvkqq,Uid:a6cca75d-1a19-48b7-bf46-1e5cf7e72c19,Namespace:calico-system,Attempt:0,}" Sep 6 00:22:11.496191 kubelet[2141]: E0906 00:22:11.494033 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:11.496439 env[1312]: time="2025-09-06T00:22:11.494693469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-krllp,Uid:5d236a4c-f0ec-424c-baa8-2089b5f219ec,Namespace:kube-system,Attempt:0,}" Sep 6 00:22:11.496439 env[1312]: time="2025-09-06T00:22:11.495895396Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7f95dfcdc5-lkdpx,Uid:d47e55db-f531-4fdd-892c-a105be81339f,Namespace:calico-apiserver,Attempt:0,}" Sep 6 00:22:11.497890 env[1312]: time="2025-09-06T00:22:11.497843404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7969cf68c8-xfwnf,Uid:9ed7a149-33b2-4e31-9b85-7bdfa92bc378,Namespace:calico-system,Attempt:0,}" Sep 6 00:22:11.497969 env[1312]: time="2025-09-06T00:22:11.497923263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f95dfcdc5-xw9st,Uid:2997af2f-3793-4ebb-a625-6dd9b47d29e8,Namespace:calico-apiserver,Attempt:0,}" Sep 6 00:22:11.561759 env[1312]: time="2025-09-06T00:22:11.561581358Z" level=error msg="Failed to destroy network for sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.562114 env[1312]: time="2025-09-06T00:22:11.562061078Z" level=error msg="encountered an error cleaning up failed sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.562404 env[1312]: time="2025-09-06T00:22:11.562180934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-644bf98f67-gf7cj,Uid:c19353dd-4b41-4b6f-9132-f91a5ef28107,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.562773 kubelet[2141]: E0906 00:22:11.562708 2141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.562877 kubelet[2141]: E0906 00:22:11.562797 2141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-644bf98f67-gf7cj" Sep 6 00:22:11.562877 kubelet[2141]: E0906 00:22:11.562818 2141 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-644bf98f67-gf7cj" Sep 6 00:22:11.562938 kubelet[2141]: E0906 00:22:11.562864 2141 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-644bf98f67-gf7cj_calico-system(c19353dd-4b41-4b6f-9132-f91a5ef28107)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-644bf98f67-gf7cj_calico-system(c19353dd-4b41-4b6f-9132-f91a5ef28107)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-644bf98f67-gf7cj" podUID="c19353dd-4b41-4b6f-9132-f91a5ef28107" Sep 6 00:22:11.698158 env[1312]: time="2025-09-06T00:22:11.698079026Z" level=error msg="Failed to destroy network for sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.698817 env[1312]: time="2025-09-06T00:22:11.698767219Z" level=error msg="encountered an error cleaning up failed sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.699023 env[1312]: time="2025-09-06T00:22:11.698823454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-krllp,Uid:5d236a4c-f0ec-424c-baa8-2089b5f219ec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.699106 kubelet[2141]: E0906 00:22:11.699022 2141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.699106 kubelet[2141]: E0906 00:22:11.699083 2141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-krllp" Sep 6 00:22:11.699106 kubelet[2141]: E0906 00:22:11.699103 2141 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-krllp" Sep 6 00:22:11.699287 
kubelet[2141]: E0906 00:22:11.699189 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-krllp_kube-system(5d236a4c-f0ec-424c-baa8-2089b5f219ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-krllp_kube-system(5d236a4c-f0ec-424c-baa8-2089b5f219ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-krllp" podUID="5d236a4c-f0ec-424c-baa8-2089b5f219ec" Sep 6 00:22:11.722810 env[1312]: time="2025-09-06T00:22:11.722727340Z" level=error msg="Failed to destroy network for sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.723189 env[1312]: time="2025-09-06T00:22:11.723152157Z" level=error msg="encountered an error cleaning up failed sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.723296 env[1312]: time="2025-09-06T00:22:11.723206390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dqfgt,Uid:23a905d3-2b9b-4e8e-907e-242236a689bc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.723550 kubelet[2141]: E0906 00:22:11.723494 2141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.723653 kubelet[2141]: E0906 00:22:11.723583 2141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dqfgt" Sep 6 00:22:11.723653 kubelet[2141]: E0906 00:22:11.723612 2141 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dqfgt" Sep 6 
00:22:11.723838 kubelet[2141]: E0906 00:22:11.723664 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dqfgt_kube-system(23a905d3-2b9b-4e8e-907e-242236a689bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dqfgt_kube-system(23a905d3-2b9b-4e8e-907e-242236a689bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dqfgt" podUID="23a905d3-2b9b-4e8e-907e-242236a689bc" Sep 6 00:22:11.725035 env[1312]: time="2025-09-06T00:22:11.724998896Z" level=error msg="Failed to destroy network for sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.725386 env[1312]: time="2025-09-06T00:22:11.725349725Z" level=error msg="encountered an error cleaning up failed sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.725477 env[1312]: time="2025-09-06T00:22:11.725390050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-lvkqq,Uid:a6cca75d-1a19-48b7-bf46-1e5cf7e72c19,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.725636 kubelet[2141]: E0906 00:22:11.725585 2141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.725699 kubelet[2141]: E0906 00:22:11.725660 2141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-lvkqq" Sep 6 00:22:11.725699 kubelet[2141]: E0906 00:22:11.725687 2141 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7988f88666-lvkqq" Sep 6 00:22:11.725755 kubelet[2141]: E0906 00:22:11.725731 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-lvkqq_calico-system(a6cca75d-1a19-48b7-bf46-1e5cf7e72c19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-lvkqq_calico-system(a6cca75d-1a19-48b7-bf46-1e5cf7e72c19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-lvkqq" podUID="a6cca75d-1a19-48b7-bf46-1e5cf7e72c19" Sep 6 00:22:11.730587 env[1312]: time="2025-09-06T00:22:11.730505526Z" level=error msg="Failed to destroy network for sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.731306 env[1312]: time="2025-09-06T00:22:11.731243622Z" level=error msg="encountered an error cleaning up failed sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.731505 env[1312]: time="2025-09-06T00:22:11.731319364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f95dfcdc5-xw9st,Uid:2997af2f-3793-4ebb-a625-6dd9b47d29e8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.731574 env[1312]: time="2025-09-06T00:22:11.731500553Z" level=error msg="Failed to destroy network for sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.731665 kubelet[2141]: E0906 00:22:11.731626 2141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.731730 kubelet[2141]: E0906 00:22:11.731696 2141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f95dfcdc5-xw9st" Sep 6 
00:22:11.731758 kubelet[2141]: E0906 00:22:11.731723 2141 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f95dfcdc5-xw9st" Sep 6 00:22:11.731818 kubelet[2141]: E0906 00:22:11.731786 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f95dfcdc5-xw9st_calico-apiserver(2997af2f-3793-4ebb-a625-6dd9b47d29e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f95dfcdc5-xw9st_calico-apiserver(2997af2f-3793-4ebb-a625-6dd9b47d29e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f95dfcdc5-xw9st" podUID="2997af2f-3793-4ebb-a625-6dd9b47d29e8" Sep 6 00:22:11.732725 env[1312]: time="2025-09-06T00:22:11.732218562Z" level=error msg="encountered an error cleaning up failed sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.732725 env[1312]: time="2025-09-06T00:22:11.732286941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f95dfcdc5-lkdpx,Uid:d47e55db-f531-4fdd-892c-a105be81339f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.732857 kubelet[2141]: E0906 00:22:11.732539 2141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.732857 kubelet[2141]: E0906 00:22:11.732628 2141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f95dfcdc5-lkdpx" Sep 6 00:22:11.732857 kubelet[2141]: E0906 00:22:11.732654 2141 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f95dfcdc5-lkdpx" Sep 6 00:22:11.732957 kubelet[2141]: E0906 00:22:11.732708 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f95dfcdc5-lkdpx_calico-apiserver(d47e55db-f531-4fdd-892c-a105be81339f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f95dfcdc5-lkdpx_calico-apiserver(d47e55db-f531-4fdd-892c-a105be81339f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f95dfcdc5-lkdpx" podUID="d47e55db-f531-4fdd-892c-a105be81339f" Sep 6 00:22:11.756240 env[1312]: time="2025-09-06T00:22:11.756120864Z" level=error msg="Failed to destroy network for sandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.756642 env[1312]: time="2025-09-06T00:22:11.756596508Z" level=error msg="encountered an error cleaning up failed sandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.756685 env[1312]: time="2025-09-06T00:22:11.756666780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7969cf68c8-xfwnf,Uid:9ed7a149-33b2-4e31-9b85-7bdfa92bc378,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.756971 kubelet[2141]: E0906 00:22:11.756924 2141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:11.757057 kubelet[2141]: E0906 00:22:11.757001 2141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7969cf68c8-xfwnf" Sep 6 00:22:11.757057 kubelet[2141]: E0906 00:22:11.757028 2141 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7969cf68c8-xfwnf" Sep 6 00:22:11.757117 kubelet[2141]: E0906 00:22:11.757089 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7969cf68c8-xfwnf_calico-system(9ed7a149-33b2-4e31-9b85-7bdfa92bc378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7969cf68c8-xfwnf_calico-system(9ed7a149-33b2-4e31-9b85-7bdfa92bc378)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7969cf68c8-xfwnf" podUID="9ed7a149-33b2-4e31-9b85-7bdfa92bc378" Sep 6 00:22:11.972389 env[1312]: time="2025-09-06T00:22:11.972297331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qr48z,Uid:aa5fe117-525e-4a2e-b423-0d13ab8c1f3f,Namespace:calico-system,Attempt:0,}" Sep 6 00:22:12.024597 env[1312]: time="2025-09-06T00:22:12.024524998Z" level=error msg="Failed to destroy network for sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.024870 env[1312]: time="2025-09-06T00:22:12.024840129Z" level=error msg="encountered an error cleaning up failed sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.024917 env[1312]: time="2025-09-06T00:22:12.024884051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qr48z,Uid:aa5fe117-525e-4a2e-b423-0d13ab8c1f3f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.025204 kubelet[2141]: E0906 00:22:12.025159 2141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.025297 kubelet[2141]: E0906 00:22:12.025237 2141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-qr48z" Sep 6 00:22:12.025297 kubelet[2141]: E0906 00:22:12.025260 2141 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qr48z" Sep 6 00:22:12.025374 kubelet[2141]: E0906 00:22:12.025313 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qr48z_calico-system(aa5fe117-525e-4a2e-b423-0d13ab8c1f3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qr48z_calico-system(aa5fe117-525e-4a2e-b423-0d13ab8c1f3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qr48z" podUID="aa5fe117-525e-4a2e-b423-0d13ab8c1f3f" Sep 6 00:22:12.211167 kubelet[2141]: I0906 00:22:12.211107 2141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:12.211866 env[1312]: time="2025-09-06T00:22:12.211835167Z" level=info msg="StopPodSandbox for \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\"" Sep 6 00:22:12.212161 kubelet[2141]: I0906 00:22:12.212141 2141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:12.213321 env[1312]: time="2025-09-06T00:22:12.213295509Z" level=info msg="StopPodSandbox for \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\"" Sep 6 00:22:12.214184 kubelet[2141]: I0906 00:22:12.214164 2141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:12.214594 env[1312]: time="2025-09-06T00:22:12.214570874Z" level=info msg="StopPodSandbox for \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\"" Sep 6 00:22:12.215281 kubelet[2141]: I0906 00:22:12.215093 2141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:12.215916 env[1312]: time="2025-09-06T00:22:12.215872978Z" level=info msg="StopPodSandbox for \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\"" Sep 6 00:22:12.217911 kubelet[2141]: I0906 00:22:12.217323 2141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:12.218024 env[1312]: time="2025-09-06T00:22:12.217803884Z" level=info msg="StopPodSandbox for \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\"" Sep 6 00:22:12.219600 kubelet[2141]: I0906 00:22:12.219549 2141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:12.220323 env[1312]: 
time="2025-09-06T00:22:12.220266899Z" level=info msg="StopPodSandbox for \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\"" Sep 6 00:22:12.220923 kubelet[2141]: I0906 00:22:12.220898 2141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:12.221302 env[1312]: time="2025-09-06T00:22:12.221271935Z" level=info msg="StopPodSandbox for \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\"" Sep 6 00:22:12.222262 kubelet[2141]: I0906 00:22:12.222242 2141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:12.223174 env[1312]: time="2025-09-06T00:22:12.223042490Z" level=info msg="StopPodSandbox for \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\"" Sep 6 00:22:12.259567 env[1312]: time="2025-09-06T00:22:12.259487889Z" level=error msg="StopPodSandbox for \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\" failed" error="failed to destroy network for sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.259851 kubelet[2141]: E0906 00:22:12.259781 2141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:12.259925 kubelet[2141]: E0906 00:22:12.259877 2141 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c"} Sep 6 00:22:12.259980 kubelet[2141]: E0906 00:22:12.259951 2141 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23a905d3-2b9b-4e8e-907e-242236a689bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:22:12.260086 kubelet[2141]: E0906 00:22:12.259989 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23a905d3-2b9b-4e8e-907e-242236a689bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dqfgt" podUID="23a905d3-2b9b-4e8e-907e-242236a689bc" Sep 6 00:22:12.263540 env[1312]: time="2025-09-06T00:22:12.263472730Z" level=error msg="StopPodSandbox for \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\" failed" error="failed 
to destroy network for sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.264093 kubelet[2141]: E0906 00:22:12.264032 2141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:12.264232 kubelet[2141]: E0906 00:22:12.264106 2141 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e"} Sep 6 00:22:12.264232 kubelet[2141]: E0906 00:22:12.264158 2141 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6cca75d-1a19-48b7-bf46-1e5cf7e72c19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:22:12.264232 kubelet[2141]: E0906 00:22:12.264183 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6cca75d-1a19-48b7-bf46-1e5cf7e72c19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-lvkqq" podUID="a6cca75d-1a19-48b7-bf46-1e5cf7e72c19" Sep 6 00:22:12.267626 env[1312]: time="2025-09-06T00:22:12.267532483Z" level=error msg="StopPodSandbox for \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\" failed" error="failed to destroy network for sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.267858 kubelet[2141]: E0906 00:22:12.267814 2141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:12.267858 kubelet[2141]: E0906 00:22:12.267852 2141 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a"} Sep 6 00:22:12.268552 kubelet[2141]: E0906 00:22:12.267875 2141 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d47e55db-f531-4fdd-892c-a105be81339f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:22:12.268552 kubelet[2141]: E0906 00:22:12.267893 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d47e55db-f531-4fdd-892c-a105be81339f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f95dfcdc5-lkdpx" podUID="d47e55db-f531-4fdd-892c-a105be81339f" Sep 6 00:22:12.275196 env[1312]: time="2025-09-06T00:22:12.275113126Z" level=error msg="StopPodSandbox for \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\" failed" error="failed to destroy network for sandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.275709 kubelet[2141]: E0906 00:22:12.275645 2141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:12.275793 kubelet[2141]: E0906 00:22:12.275719 2141 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a"} Sep 6 00:22:12.275793 kubelet[2141]: E0906 00:22:12.275768 2141 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ed7a149-33b2-4e31-9b85-7bdfa92bc378\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:22:12.275912 kubelet[2141]: E0906 00:22:12.275796 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ed7a149-33b2-4e31-9b85-7bdfa92bc378\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7969cf68c8-xfwnf" 
podUID="9ed7a149-33b2-4e31-9b85-7bdfa92bc378" Sep 6 00:22:12.290604 env[1312]: time="2025-09-06T00:22:12.290536906Z" level=error msg="StopPodSandbox for \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\" failed" error="failed to destroy network for sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.291281 kubelet[2141]: E0906 00:22:12.291072 2141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:12.291281 kubelet[2141]: E0906 00:22:12.291160 2141 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229"} Sep 6 00:22:12.291281 kubelet[2141]: E0906 00:22:12.291207 2141 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa5fe117-525e-4a2e-b423-0d13ab8c1f3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:22:12.291281 kubelet[2141]: E0906 00:22:12.291247 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa5fe117-525e-4a2e-b423-0d13ab8c1f3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qr48z" podUID="aa5fe117-525e-4a2e-b423-0d13ab8c1f3f" Sep 6 00:22:12.292368 env[1312]: time="2025-09-06T00:22:12.292328340Z" level=error msg="StopPodSandbox for \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\" failed" error="failed to destroy network for sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.292697 kubelet[2141]: E0906 00:22:12.292570 2141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:12.292697 kubelet[2141]: E0906 00:22:12.292602 2141 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf"} Sep 6 00:22:12.292697 kubelet[2141]: E0906 00:22:12.292632 2141 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c19353dd-4b41-4b6f-9132-f91a5ef28107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:22:12.292697 kubelet[2141]: E0906 00:22:12.292655 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c19353dd-4b41-4b6f-9132-f91a5ef28107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-644bf98f67-gf7cj" podUID="c19353dd-4b41-4b6f-9132-f91a5ef28107" Sep 6 00:22:12.295953 env[1312]: time="2025-09-06T00:22:12.295882793Z" level=error msg="StopPodSandbox for \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\" failed" error="failed to destroy network for sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.296179 kubelet[2141]: E0906 00:22:12.296155 2141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:12.296271 kubelet[2141]: E0906 00:22:12.296182 2141 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63"} Sep 6 00:22:12.296271 kubelet[2141]: E0906 00:22:12.296202 2141 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d236a4c-f0ec-424c-baa8-2089b5f219ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:22:12.296271 kubelet[2141]: E0906 00:22:12.296229 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d236a4c-f0ec-424c-baa8-2089b5f219ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-krllp" podUID="5d236a4c-f0ec-424c-baa8-2089b5f219ec" Sep 6 00:22:12.306162 env[1312]: time="2025-09-06T00:22:12.306094367Z" level=error msg="StopPodSandbox for \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\" failed" error="failed to destroy network for sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:22:12.306350 kubelet[2141]: E0906 00:22:12.306321 2141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:12.306436 kubelet[2141]: E0906 00:22:12.306358 2141 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8"} Sep 6 00:22:12.306436 kubelet[2141]: E0906 00:22:12.306384 2141 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2997af2f-3793-4ebb-a625-6dd9b47d29e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:22:12.306436 kubelet[2141]: E0906 00:22:12.306402 2141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2997af2f-3793-4ebb-a625-6dd9b47d29e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f95dfcdc5-xw9st" podUID="2997af2f-3793-4ebb-a625-6dd9b47d29e8" Sep 6 00:22:14.181771 kubelet[2141]: I0906 00:22:14.181706 2141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:22:14.182366 kubelet[2141]: E0906 00:22:14.182261 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:14.232047 kubelet[2141]: E0906 00:22:14.231998 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:14.267000 audit[3353]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:14.272193 kernel: kauditd_printk_skb: 25 
callbacks suppressed Sep 6 00:22:14.272356 kernel: audit: type=1325 audit(1757118134.267:282): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:14.267000 audit[3353]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff42477f10 a2=0 a3=7fff42477efc items=0 ppid=2268 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:14.277799 kernel: audit: type=1300 audit(1757118134.267:282): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff42477f10 a2=0 a3=7fff42477efc items=0 ppid=2268 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:14.267000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:14.281157 kernel: audit: type=1327 audit(1757118134.267:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:14.281000 audit[3353]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:14.281000 audit[3353]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff42477f10 a2=0 a3=7fff42477efc items=0 ppid=2268 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:14.293307 kernel: audit: type=1325 audit(1757118134.281:283): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:14.293419 kernel: audit: type=1300 audit(1757118134.281:283): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff42477f10 a2=0 a3=7fff42477efc items=0 ppid=2268 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:14.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:14.295691 kernel: audit: type=1327 audit(1757118134.281:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:18.895453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3807693438.mount: Deactivated successfully. 
Sep 6 00:22:20.763162 env[1312]: time="2025-09-06T00:22:20.763051587Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:20.765792 env[1312]: time="2025-09-06T00:22:20.765718432Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:20.767705 env[1312]: time="2025-09-06T00:22:20.767646219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:20.769693 env[1312]: time="2025-09-06T00:22:20.769642456Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:20.770093 env[1312]: time="2025-09-06T00:22:20.770055972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 6 00:22:20.777820 env[1312]: time="2025-09-06T00:22:20.777771250Z" level=info msg="CreateContainer within sandbox \"4fcd9365c05baa9fe596929693e8aaa2baa4a703c56e39e8decb5dd9492fc65b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 6 00:22:20.796334 env[1312]: time="2025-09-06T00:22:20.796270575Z" level=info msg="CreateContainer within sandbox \"4fcd9365c05baa9fe596929693e8aaa2baa4a703c56e39e8decb5dd9492fc65b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b15837a3de5bb68d274bace22f328c94a285a782b8e23e37db209c62f53af590\"" Sep 6 00:22:20.796954 env[1312]: time="2025-09-06T00:22:20.796920364Z" level=info msg="StartContainer for \"b15837a3de5bb68d274bace22f328c94a285a782b8e23e37db209c62f53af590\"" Sep 6 00:22:20.974590 env[1312]: time="2025-09-06T00:22:20.973687524Z" level=info msg="StartContainer for \"b15837a3de5bb68d274bace22f328c94a285a782b8e23e37db209c62f53af590\" returns successfully" Sep 6 00:22:21.027490 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 6 00:22:21.027663 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 6 00:22:21.110567 env[1312]: time="2025-09-06T00:22:21.110512381Z" level=info msg="StopPodSandbox for \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\"" Sep 6 00:22:21.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.61:22-10.0.0.1:42108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.177573 systemd[1]: Started sshd@9-10.0.0.61:22-10.0.0.1:42108.service. Sep 6 00:22:21.182168 kernel: audit: type=1130 audit(1757118141.176:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.61:22-10.0.0.1:42108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:21.227000 audit[3438]: USER_ACCT pid=3438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.229264 sshd[3438]: Accepted publickey for core from 10.0.0.1 port 42108 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:22:21.231000 audit[3438]: CRED_ACQ pid=3438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.233605 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:21.236700 kernel: audit: type=1101 audit(1757118141.227:285): pid=3438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.236761 kernel: audit: type=1103 audit(1757118141.231:286): pid=3438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.236781 kernel: audit: type=1006 audit(1757118141.231:287): pid=3438 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 6 00:22:21.238992 systemd[1]: Started session-10.scope. Sep 6 00:22:21.231000 audit[3438]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfbe415c0 a2=3 a3=0 items=0 ppid=1 pid=3438 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:21.239352 systemd-logind[1293]: New session 10 of user core. 
Sep 6 00:22:21.249452 kernel: audit: type=1300 audit(1757118141.231:287): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfbe415c0 a2=3 a3=0 items=0 ppid=1 pid=3438 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:21.249537 kernel: audit: type=1327 audit(1757118141.231:287): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:21.249557 kernel: audit: type=1105 audit(1757118141.243:288): pid=3438 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.231000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:21.243000 audit[3438]: USER_START pid=3438 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.253659 kernel: audit: type=1103 audit(1757118141.244:289): pid=3443 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.244000 audit[3443]: CRED_ACQ pid=3443 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.162 [INFO][3422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.162 [INFO][3422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" iface="eth0" netns="/var/run/netns/cni-8ae7a9b2-880e-fc81-7cc1-9060e5d43061" Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.163 [INFO][3422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" iface="eth0" netns="/var/run/netns/cni-8ae7a9b2-880e-fc81-7cc1-9060e5d43061" Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.163 [INFO][3422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" iface="eth0" netns="/var/run/netns/cni-8ae7a9b2-880e-fc81-7cc1-9060e5d43061" Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.163 [INFO][3422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.163 [INFO][3422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.248 [INFO][3433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" HandleID="k8s-pod-network.bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Workload="localhost-k8s-whisker--7969cf68c8--xfwnf-eth0" Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.249 [INFO][3433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.250 [INFO][3433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.261 [WARNING][3433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" HandleID="k8s-pod-network.bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Workload="localhost-k8s-whisker--7969cf68c8--xfwnf-eth0" Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.261 [INFO][3433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" HandleID="k8s-pod-network.bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Workload="localhost-k8s-whisker--7969cf68c8--xfwnf-eth0" Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.266 [INFO][3433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:21.268957 env[1312]: 2025-09-06 00:22:21.267 [INFO][3422] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:21.269431 env[1312]: time="2025-09-06T00:22:21.269160386Z" level=info msg="TearDown network for sandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\" successfully" Sep 6 00:22:21.269431 env[1312]: time="2025-09-06T00:22:21.269194089Z" level=info msg="StopPodSandbox for \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\" returns successfully" Sep 6 00:22:21.358705 kubelet[2141]: I0906 00:22:21.358647 2141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr2ql\" (UniqueName: \"kubernetes.io/projected/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-kube-api-access-vr2ql\") pod \"9ed7a149-33b2-4e31-9b85-7bdfa92bc378\" (UID: \"9ed7a149-33b2-4e31-9b85-7bdfa92bc378\") " Sep 6 00:22:21.359248 kubelet[2141]: I0906 00:22:21.358716 2141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-whisker-backend-key-pair\") pod \"9ed7a149-33b2-4e31-9b85-7bdfa92bc378\" (UID: \"9ed7a149-33b2-4e31-9b85-7bdfa92bc378\") " Sep 6 00:22:21.359248 kubelet[2141]: I0906 00:22:21.358744 2141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-whisker-ca-bundle\") pod \"9ed7a149-33b2-4e31-9b85-7bdfa92bc378\" (UID: \"9ed7a149-33b2-4e31-9b85-7bdfa92bc378\") " Sep 6 00:22:21.361217 kubelet[2141]: I0906 00:22:21.359565 2141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9ed7a149-33b2-4e31-9b85-7bdfa92bc378" (UID: "9ed7a149-33b2-4e31-9b85-7bdfa92bc378"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:22:21.362242 kubelet[2141]: I0906 00:22:21.362211 2141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-kube-api-access-vr2ql" (OuterVolumeSpecName: "kube-api-access-vr2ql") pod "9ed7a149-33b2-4e31-9b85-7bdfa92bc378" (UID: "9ed7a149-33b2-4e31-9b85-7bdfa92bc378"). InnerVolumeSpecName "kube-api-access-vr2ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:22:21.362462 kubelet[2141]: I0906 00:22:21.362444 2141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9ed7a149-33b2-4e31-9b85-7bdfa92bc378" (UID: "9ed7a149-33b2-4e31-9b85-7bdfa92bc378"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:22:21.388711 sshd[3438]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:21.388000 audit[3438]: USER_END pid=3438 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.391681 systemd[1]: sshd@9-10.0.0.61:22-10.0.0.1:42108.service: Deactivated successfully. Sep 6 00:22:21.392750 systemd-logind[1293]: Session 10 logged out. Waiting for processes to exit. 
Sep 6 00:22:21.393113 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:22:21.393865 systemd-logind[1293]: Removed session 10. Sep 6 00:22:21.388000 audit[3438]: CRED_DISP pid=3438 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.397572 kernel: audit: type=1106 audit(1757118141.388:290): pid=3438 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.397631 kernel: audit: type=1104 audit(1757118141.388:291): pid=3438 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:21.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.61:22-10.0.0.1:42108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.459490 kubelet[2141]: I0906 00:22:21.459436 2141 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:21.459490 kubelet[2141]: I0906 00:22:21.459477 2141 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:21.459490 kubelet[2141]: I0906 00:22:21.459485 2141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr2ql\" (UniqueName: \"kubernetes.io/projected/9ed7a149-33b2-4e31-9b85-7bdfa92bc378-kube-api-access-vr2ql\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:21.777653 systemd[1]: run-netns-cni\x2d8ae7a9b2\x2d880e\x2dfc81\x2d7cc1\x2d9060e5d43061.mount: Deactivated successfully. Sep 6 00:22:21.777891 systemd[1]: var-lib-kubelet-pods-9ed7a149\x2d33b2\x2d4e31\x2d9b85\x2d7bdfa92bc378-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvr2ql.mount: Deactivated successfully. Sep 6 00:22:21.778062 systemd[1]: var-lib-kubelet-pods-9ed7a149\x2d33b2\x2d4e31\x2d9b85\x2d7bdfa92bc378-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 6 00:22:22.270592 kubelet[2141]: I0906 00:22:22.269986 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hqs9p" podStartSLOduration=2.728533769 podStartE2EDuration="23.269958503s" podCreationTimestamp="2025-09-06 00:21:59 +0000 UTC" firstStartedPulling="2025-09-06 00:22:00.229323005 +0000 UTC m=+19.801881823" lastFinishedPulling="2025-09-06 00:22:20.77074774 +0000 UTC m=+40.343306557" observedRunningTime="2025-09-06 00:22:21.265869561 +0000 UTC m=+40.838428378" watchObservedRunningTime="2025-09-06 00:22:22.269958503 +0000 UTC m=+41.842517320" Sep 6 00:22:22.392000 audit[3552]: AVC avc: denied { write } for pid=3552 comm="tee" name="fd" dev="proc" ino=26813 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:22:22.392000 audit[3552]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff537377da a2=241 a3=1b6 items=1 ppid=3522 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.392000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 6 00:22:22.392000 audit: PATH item=0 name="/dev/fd/63" inode=24005 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.392000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:22:22.393000 audit[3546]: AVC avc: denied { write } for pid=3546 comm="tee" name="fd" dev="proc" ino=24008 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:22:22.393000 audit[3546]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff533867e9 a2=241 a3=1b6 items=1 ppid=3520 pid=3546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.393000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 6 00:22:22.393000 audit: PATH item=0 name="/dev/fd/63" inode=25903 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.393000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:22:22.398000 audit[3565]: AVC avc: denied { write } for pid=3565 comm="tee" name="fd" dev="proc" ino=26817 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:22:22.398000 audit[3565]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff0caf07ea a2=241 a3=1b6 items=1 ppid=3519 pid=3565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.398000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 6 00:22:22.398000 audit: PATH item=0 name="/dev/fd/63" inode=25916 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 
00:22:22.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:22:22.421000 audit[3569]: AVC avc: denied { write } for pid=3569 comm="tee" name="fd" dev="proc" ino=26825 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:22:22.421000 audit[3569]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe4b68e7e9 a2=241 a3=1b6 items=1 ppid=3523 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.421000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 6 00:22:22.421000 audit: PATH item=0 name="/dev/fd/63" inode=25917 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.421000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:22:22.426000 audit[3595]: AVC avc: denied { write } for pid=3595 comm="tee" name="fd" dev="proc" ino=24021 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:22:22.426000 audit[3595]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce2a777eb a2=241 a3=1b6 items=1 ppid=3527 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.426000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 6 00:22:22.426000 audit: PATH item=0 name="/dev/fd/63" inode=24018 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.426000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:22:22.429000 audit[3583]: AVC avc: denied { write } for pid=3583 comm="tee" name="fd" dev="proc" ino=25926 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:22:22.429000 audit[3583]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcbb6287d9 a2=241 a3=1b6 items=1 ppid=3536 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.429000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 6 00:22:22.429000 audit: PATH item=0 name="/dev/fd/63" inode=25003 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.429000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:22:22.431000 audit[3602]: AVC avc: denied { write } for pid=3602 comm="tee" name="fd" dev="proc" ino=26835 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:22:22.431000 
audit[3602]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd863357e9 a2=241 a3=1b6 items=1 ppid=3529 pid=3602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.431000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 6 00:22:22.431000 audit: PATH item=0 name="/dev/fd/63" inode=25928 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.431000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:22:22.466344 kubelet[2141]: I0906 00:22:22.466281 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14e423a7-4216-479f-98d7-6c85b8db1f03-whisker-ca-bundle\") pod \"whisker-67d6f7c79c-bfcfv\" (UID: \"14e423a7-4216-479f-98d7-6c85b8db1f03\") " pod="calico-system/whisker-67d6f7c79c-bfcfv" Sep 6 00:22:22.466344 kubelet[2141]: I0906 00:22:22.466338 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp4kl\" (UniqueName: \"kubernetes.io/projected/14e423a7-4216-479f-98d7-6c85b8db1f03-kube-api-access-tp4kl\") pod \"whisker-67d6f7c79c-bfcfv\" (UID: \"14e423a7-4216-479f-98d7-6c85b8db1f03\") " pod="calico-system/whisker-67d6f7c79c-bfcfv" Sep 6 00:22:22.466344 kubelet[2141]: I0906 00:22:22.466359 2141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/14e423a7-4216-479f-98d7-6c85b8db1f03-whisker-backend-key-pair\") pod \"whisker-67d6f7c79c-bfcfv\" (UID: \"14e423a7-4216-479f-98d7-6c85b8db1f03\") " pod="calico-system/whisker-67d6f7c79c-bfcfv" Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 
audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit: BPF prog-id=10 op=LOAD Sep 6 00:22:22.584000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc6c1f1f0 a2=98 a3=1fffffffffffffff items=0 ppid=3524 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.584000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:22:22.584000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit: BPF prog-id=11 op=LOAD Sep 6 00:22:22.584000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc6c1f0d0 a2=94 a3=3 items=0 ppid=3524 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.584000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:22:22.584000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { bpf } for pid=3636 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit: BPF prog-id=12 op=LOAD Sep 6 00:22:22.584000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc6c1f110 a2=94 a3=7fffc6c1f2f0 items=0 ppid=3524 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.584000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:22:22.584000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:22:22.584000 audit[3636]: AVC avc: denied { perfmon } for pid=3636 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.584000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fffc6c1f1e0 a2=50 a3=a000000085 items=0 ppid=3524 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
00:22:22.584000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit: BPF prog-id=13 op=LOAD Sep 6 00:22:22.587000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe357f07a0 a2=98 a3=3 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.587000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.587000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit: BPF prog-id=14 op=LOAD Sep 6 00:22:22.587000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe357f0590 a2=94 a3=54428f items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.587000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.587000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.587000 audit: BPF prog-id=15 op=LOAD Sep 6 00:22:22.587000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe357f05c0 a2=94 a3=2 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.587000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.587000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:22:22.614186 env[1312]: time="2025-09-06T00:22:22.614091628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d6f7c79c-bfcfv,Uid:14e423a7-4216-479f-98d7-6c85b8db1f03,Namespace:calico-system,Attempt:0,}" Sep 6 00:22:22.700000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.700000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.700000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.700000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.700000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.700000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.700000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.700000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.700000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.700000 audit: BPF prog-id=16 op=LOAD Sep 6 00:22:22.700000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe357f0480 a2=94 a3=1 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.700000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.701000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:22:22.701000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.701000 audit[3637]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe357f0550 a2=50 a3=7ffe357f0630 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.701000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.709000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.709000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe357f0490 a2=28 a3=0 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.709000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.709000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.709000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe357f04c0 a2=28 a3=0 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.709000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe357f03d0 a2=28 a3=0 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe357f04e0 a2=28 a3=0 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe357f04c0 a2=28 a3=0 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe357f04b0 a2=28 a3=0 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe357f04e0 a2=28 a3=0 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe357f04c0 a2=28 a3=0 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe357f04e0 a2=28 a3=0 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe357f04b0 a2=28 a3=0 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 
a1=7ffe357f0520 a2=28 a3=0 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe357f02d0 a2=50 a3=1 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit: BPF prog-id=17 op=LOAD Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe357f02d0 a2=94 a3=5 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe357f0380 a2=50 a3=1 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe357f04a0 a2=4 a3=38 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.710000 audit[3637]: AVC avc: denied { confidentiality } for pid=3637 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown 
permissive=0 Sep 6 00:22:22.710000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe357f04f0 a2=94 a3=6 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.710000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { confidentiality } for pid=3637 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:22:22.711000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe357efca0 a2=94 a3=88 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.711000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.711000 audit[3637]: AVC avc: denied { confidentiality } for pid=3637 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:22:22.711000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe357efca0 a2=94 a3=88 items=0 ppid=3524 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.711000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:22:22.729000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.729000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.729000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.729000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.729000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 00:22:22.729000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.729000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.729000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.729000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.729000 audit: BPF prog-id=18 op=LOAD Sep 6 00:22:22.729000 audit[3664]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8311c510 a2=98 a3=1999999999999999 items=0 ppid=3524 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.729000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:22:22.730000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:22:22.730000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.730000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.730000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.730000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.730000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.730000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.730000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.730000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.730000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 6 00:22:22.730000 audit: BPF prog-id=19 op=LOAD Sep 6 00:22:22.730000 audit[3664]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8311c3f0 a2=94 a3=ffff items=0 ppid=3524 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.730000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:22:22.731000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:22:22.731000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.731000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.731000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.731000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.731000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.731000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.731000 audit[3664]: AVC avc: denied { perfmon } for pid=3664 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.731000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.731000 audit[3664]: AVC avc: denied { bpf } for pid=3664 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.731000 audit: BPF prog-id=20 op=LOAD Sep 6 00:22:22.731000 audit[3664]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8311c430 a2=94 a3=7ffc8311c610 items=0 ppid=3524 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.731000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:22:22.731000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:22:22.763984 systemd-networkd[1075]: cali8ec3ae4bf60: Link UP Sep 6 00:22:22.766387 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): cali8ec3ae4bf60: link becomes ready Sep 6 00:22:22.766220 systemd-networkd[1075]: cali8ec3ae4bf60: Gained carrier Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.681 [INFO][3639] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0 whisker-67d6f7c79c- calico-system 14e423a7-4216-479f-98d7-6c85b8db1f03 952 0 2025-09-06 00:22:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67d6f7c79c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-67d6f7c79c-bfcfv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8ec3ae4bf60 [] [] }} ContainerID="4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" Namespace="calico-system" Pod="whisker-67d6f7c79c-bfcfv" WorkloadEndpoint="localhost-k8s-whisker--67d6f7c79c--bfcfv-" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.681 [INFO][3639] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" Namespace="calico-system" Pod="whisker-67d6f7c79c-bfcfv" WorkloadEndpoint="localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.715 [INFO][3653] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" HandleID="k8s-pod-network.4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" Workload="localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.715 [INFO][3653] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" HandleID="k8s-pod-network.4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" Workload="localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-67d6f7c79c-bfcfv", "timestamp":"2025-09-06 00:22:22.715283561 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.715 [INFO][3653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.715 [INFO][3653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
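The PROCTITLE fields in the bpftool audit records earlier in this sequence are the audited process's argv, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch to make them readable (the helper name decode_proctitle is illustrative only, not part of auditd or bpftool):

    def decode_proctitle(hex_proctitle: str) -> str:
        # PROCTITLE carries the raw argv of the audited process, hex-encoded;
        # individual arguments are separated by NUL (0x00) bytes.
        raw = bytes.fromhex(hex_proctitle)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

    # The proctitle logged for pid 3637 above decodes to: bpftool map list --json
    print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))

Applied to the pid 3636 records, the same decoding yields a "bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0" invocation, which is why each of those audit bursts is followed by a BPF prog-id LOAD/UNLOAD pair.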
Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.715 [INFO][3653] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.727 [INFO][3653] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" host="localhost" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.732 [INFO][3653] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.735 [INFO][3653] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.737 [INFO][3653] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.739 [INFO][3653] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.739 [INFO][3653] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" host="localhost" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.740 [INFO][3653] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.744 [INFO][3653] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" host="localhost" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.751 [INFO][3653] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" host="localhost" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.751 [INFO][3653] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" host="localhost" Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.752 [INFO][3653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
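The IPAM lines around this point show Calico claiming 192.168.88.129 from the host-affine block 192.168.88.128/26; the assignment result below reports it as 192.168.88.129/26 while the workload endpoint spec records it as 192.168.88.129/32. A quick, purely illustrative check with Python's standard ipaddress module confirms the claimed address sits inside that /26 block of 64 addresses:

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")   # host-affine IPAM block from the log
    addr = ipaddress.ip_address("192.168.88.129")       # address claimed for the whisker pod

    print(addr in block)        # True: the claimed IP belongs to the host's block
    print(block.num_addresses)  # 64 addresses per /26 block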
Sep 6 00:22:22.782933 env[1312]: 2025-09-06 00:22:22.752 [INFO][3653] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" HandleID="k8s-pod-network.4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" Workload="localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0" Sep 6 00:22:22.783571 env[1312]: 2025-09-06 00:22:22.755 [INFO][3639] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" Namespace="calico-system" Pod="whisker-67d6f7c79c-bfcfv" WorkloadEndpoint="localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0", GenerateName:"whisker-67d6f7c79c-", Namespace:"calico-system", SelfLink:"", UID:"14e423a7-4216-479f-98d7-6c85b8db1f03", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 22, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67d6f7c79c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-67d6f7c79c-bfcfv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8ec3ae4bf60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:22.783571 env[1312]: 2025-09-06 00:22:22.755 [INFO][3639] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" Namespace="calico-system" Pod="whisker-67d6f7c79c-bfcfv" WorkloadEndpoint="localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0" Sep 6 00:22:22.783571 env[1312]: 2025-09-06 00:22:22.755 [INFO][3639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ec3ae4bf60 ContainerID="4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" Namespace="calico-system" Pod="whisker-67d6f7c79c-bfcfv" WorkloadEndpoint="localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0" Sep 6 00:22:22.783571 env[1312]: 2025-09-06 00:22:22.764 [INFO][3639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" Namespace="calico-system" Pod="whisker-67d6f7c79c-bfcfv" WorkloadEndpoint="localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0" Sep 6 00:22:22.783571 env[1312]: 2025-09-06 00:22:22.766 [INFO][3639] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" Namespace="calico-system" Pod="whisker-67d6f7c79c-bfcfv" WorkloadEndpoint="localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0", GenerateName:"whisker-67d6f7c79c-", Namespace:"calico-system", SelfLink:"", UID:"14e423a7-4216-479f-98d7-6c85b8db1f03", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 22, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67d6f7c79c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb", Pod:"whisker-67d6f7c79c-bfcfv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8ec3ae4bf60", MAC:"26:85:94:b3:cd:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:22.783571 env[1312]: 2025-09-06 00:22:22.778 [INFO][3639] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb" Namespace="calico-system" Pod="whisker-67d6f7c79c-bfcfv" WorkloadEndpoint="localhost-k8s-whisker--67d6f7c79c--bfcfv-eth0" Sep 6 00:22:22.800482 env[1312]: time="2025-09-06T00:22:22.800295072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:22.800482 env[1312]: time="2025-09-06T00:22:22.800341439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:22.800482 env[1312]: time="2025-09-06T00:22:22.800351458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:22.800824 env[1312]: time="2025-09-06T00:22:22.800762109Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb pid=3692 runtime=io.containerd.runc.v2 Sep 6 00:22:22.801976 systemd-networkd[1075]: vxlan.calico: Link UP Sep 6 00:22:22.801984 systemd-networkd[1075]: vxlan.calico: Gained carrier Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit: BPF prog-id=21 op=LOAD Sep 6 00:22:22.831000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb5e58510 a2=98 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.831000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.831000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 
audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.831000 audit: BPF prog-id=22 op=LOAD Sep 6 00:22:22.831000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb5e58320 a2=94 a3=54428f items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.831000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit: BPF prog-id=23 op=LOAD Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb5e58350 a2=94 a3=2 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb5e58220 a2=28 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb5e58250 a2=28 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb5e58160 a2=28 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb5e58270 a2=28 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb5e58250 a2=28 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb5e58240 a2=28 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb5e58270 a2=28 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb5e58250 a2=28 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb5e58270 a2=28 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb5e58240 a2=28 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb5e582b0 a2=28 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { perfmon } for 
pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.832000 audit: BPF prog-id=24 op=LOAD Sep 6 00:22:22.832000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb5e58120 a2=94 a3=0 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.832000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.832000 audit: BPF prog-id=24 op=UNLOAD Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffeb5e58110 a2=50 a3=2800 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.833000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffeb5e58110 a2=50 a3=2800 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.833000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit: BPF prog-id=25 op=LOAD Sep 6 00:22:22.833000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb5e57930 a2=94 a3=2 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.833000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.833000 audit: BPF prog-id=25 op=UNLOAD Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.833000 audit: BPF prog-id=26 op=LOAD Sep 6 00:22:22.833000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb5e57a30 a2=94 a3=30 items=0 ppid=3524 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.833000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:22:22.846259 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit: BPF prog-id=27 op=LOAD Sep 6 00:22:22.845000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb05bd050 a2=98 a3=0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.845000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.845000 audit: BPF prog-id=27 op=UNLOAD Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit: BPF prog-id=28 op=LOAD Sep 6 00:22:22.845000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdb05bce40 a2=94 a3=54428f items=0 ppid=3524 
pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.845000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.845000 audit: BPF prog-id=28 op=UNLOAD Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.845000 audit: BPF prog-id=29 op=LOAD Sep 6 00:22:22.845000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdb05bce70 a2=94 a3=2 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.845000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.846000 audit: BPF prog-id=29 op=UNLOAD Sep 6 00:22:22.883348 env[1312]: time="2025-09-06T00:22:22.883272147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d6f7c79c-bfcfv,Uid:14e423a7-4216-479f-98d7-6c85b8db1f03,Namespace:calico-system,Attempt:0,} returns sandbox id \"4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb\"" Sep 6 00:22:22.885257 env[1312]: time="2025-09-06T00:22:22.885216715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 6 00:22:22.964000 audit[3732]: 
AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.964000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.964000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.964000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.964000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.964000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.964000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.964000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.964000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.964000 audit: BPF prog-id=30 op=LOAD Sep 6 00:22:22.964000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdb05bcd30 a2=94 a3=1 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.964000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.965000 audit: BPF prog-id=30 op=UNLOAD Sep 6 00:22:22.965000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.965000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffdb05bce00 a2=50 a3=7ffdb05bcee0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.965000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.972886 env[1312]: time="2025-09-06T00:22:22.972805392Z" level=info msg="StopPodSandbox for \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\"" Sep 6 00:22:22.973310 env[1312]: 
time="2025-09-06T00:22:22.973231683Z" level=info msg="StopPodSandbox for \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\"" Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdb05bcd40 a2=28 a3=0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdb05bcd70 a2=28 a3=0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdb05bcc80 a2=28 a3=0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdb05bcd90 a2=28 a3=0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdb05bcd70 a2=28 a3=0 
items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdb05bcd60 a2=28 a3=0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdb05bcd90 a2=28 a3=0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdb05bcd70 a2=28 a3=0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdb05bcd90 a2=28 a3=0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC 
avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdb05bcd60 a2=28 a3=0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdb05bcdd0 a2=28 a3=0 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdb05bcb80 a2=50 a3=1 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.973000 audit: BPF prog-id=31 op=LOAD Sep 6 00:22:22.973000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb05bcb80 a2=94 a3=5 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.974000 audit: BPF prog-id=31 op=UNLOAD Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdb05bcc30 a2=50 a3=1 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.974000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffdb05bcd50 a2=4 a3=38 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.974000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { confidentiality } for pid=3732 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:22:22.974000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdb05bcda0 a2=94 a3=6 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.974000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC 
avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { confidentiality } for pid=3732 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:22:22.974000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdb05bc550 a2=94 a3=88 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.974000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.974000 audit[3732]: AVC avc: denied { confidentiality } for pid=3732 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:22:22.974000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdb05bc550 a2=94 a3=88 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.974000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.975000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.975000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdb05bdf80 a2=10 a3=208 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.975000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.975000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.975000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdb05bde20 a2=10 a3=3 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.975000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.975000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:22:22.975000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdb05bddc0 a2=10 a3=3 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.975000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.975000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
00:22:22.975000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdb05bddc0 a2=10 a3=7 items=0 ppid=3524 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.975000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:22:22.980248 kubelet[2141]: I0906 00:22:22.975480 2141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed7a149-33b2-4e31-9b85-7bdfa92bc378" path="/var/lib/kubelet/pods/9ed7a149-33b2-4e31-9b85-7bdfa92bc378/volumes" Sep 6 00:22:22.983000 audit: BPF prog-id=26 op=UNLOAD Sep 6 00:22:22.983000 audit[1069]: SYSCALL arch=c000003e syscall=232 success=yes exit=1 a0=a a1=55e5d5e32150 a2=46 a3=ffffffff items=0 ppid=1 pid=1069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-udevd" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.983000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-udevd" Sep 6 00:22:23.065000 audit[3822]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3822 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:23.066000 audit[3823]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3823 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:23.066000 audit[3823]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff386f8a80 a2=0 a3=7fff386f8a6c items=0 ppid=3524 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:23.066000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:23.065000 audit[3822]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffc6be5310 a2=0 a3=7fffc6be52fc items=0 ppid=3524 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:23.065000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:23.071000 audit[3825]: NETFILTER_CFG table=filter:103 family=2 entries=39 op=nft_register_chain pid=3825 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:23.071000 audit[3825]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffc6e170000 a2=0 a3=7ffc6e16ffec items=0 ppid=3524 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:23.071000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:23.077000 audit[3821]: NETFILTER_CFG table=raw:104 family=2 entries=21 
op=nft_register_chain pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:23.077000 audit[3821]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffb6c71360 a2=0 a3=7fffb6c7134c items=0 ppid=3524 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:23.077000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.030 [INFO][3772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.032 [INFO][3772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" iface="eth0" netns="/var/run/netns/cni-ce702f47-18b7-cb7d-b35c-f3e43844e83e" Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.032 [INFO][3772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" iface="eth0" netns="/var/run/netns/cni-ce702f47-18b7-cb7d-b35c-f3e43844e83e" Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.032 [INFO][3772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" iface="eth0" netns="/var/run/netns/cni-ce702f47-18b7-cb7d-b35c-f3e43844e83e" Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.032 [INFO][3772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.032 [INFO][3772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.071 [INFO][3800] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" HandleID="k8s-pod-network.fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.072 [INFO][3800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.072 [INFO][3800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.079 [WARNING][3800] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" HandleID="k8s-pod-network.fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.079 [INFO][3800] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" HandleID="k8s-pod-network.fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.086 [INFO][3800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:23.095814 env[1312]: 2025-09-06 00:22:23.092 [INFO][3772] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:23.100710 env[1312]: time="2025-09-06T00:22:23.100200893Z" level=info msg="TearDown network for sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\" successfully" Sep 6 00:22:23.100710 env[1312]: time="2025-09-06T00:22:23.100258931Z" level=info msg="StopPodSandbox for \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\" returns successfully" Sep 6 00:22:23.100830 kubelet[2141]: E0906 00:22:23.100634 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:23.101991 systemd[1]: run-netns-cni\x2dce702f47\x2d18b7\x2dcb7d\x2db35c\x2df3e43844e83e.mount: Deactivated successfully. Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.043 [INFO][3773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.044 [INFO][3773] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" iface="eth0" netns="/var/run/netns/cni-8210ccb7-9009-52a3-9383-4046aa1c3684" Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.044 [INFO][3773] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" iface="eth0" netns="/var/run/netns/cni-8210ccb7-9009-52a3-9383-4046aa1c3684" Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.044 [INFO][3773] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" iface="eth0" netns="/var/run/netns/cni-8210ccb7-9009-52a3-9383-4046aa1c3684" Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.044 [INFO][3773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.044 [INFO][3773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.077 [INFO][3812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" HandleID="k8s-pod-network.1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.077 [INFO][3812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.087 [INFO][3812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.092 [WARNING][3812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" HandleID="k8s-pod-network.1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.092 [INFO][3812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" HandleID="k8s-pod-network.1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.095 [INFO][3812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:23.108582 env[1312]: 2025-09-06 00:22:23.098 [INFO][3773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:23.112499 env[1312]: time="2025-09-06T00:22:23.108805166Z" level=info msg="TearDown network for sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\" successfully" Sep 6 00:22:23.112499 env[1312]: time="2025-09-06T00:22:23.108855180Z" level=info msg="StopPodSandbox for \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\" returns successfully" Sep 6 00:22:23.112499 env[1312]: time="2025-09-06T00:22:23.108858977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dqfgt,Uid:23a905d3-2b9b-4e8e-907e-242236a689bc,Namespace:kube-system,Attempt:1,}" Sep 6 00:22:23.112499 env[1312]: time="2025-09-06T00:22:23.109791097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qr48z,Uid:aa5fe117-525e-4a2e-b423-0d13ab8c1f3f,Namespace:calico-system,Attempt:1,}" Sep 6 00:22:23.111087 systemd[1]: run-netns-cni\x2d8210ccb7\x2d9009\x2d52a3\x2d9383\x2d4046aa1c3684.mount: Deactivated successfully. 
Sep 6 00:22:23.114000 audit[3834]: NETFILTER_CFG table=filter:105 family=2 entries=59 op=nft_register_chain pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:23.114000 audit[3834]: SYSCALL arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7ffe5a350fb0 a2=0 a3=7ffe5a350f9c items=0 ppid=3524 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:23.114000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:23.244880 systemd-networkd[1075]: cali538c9d24ba2: Link UP Sep 6 00:22:23.246491 systemd-networkd[1075]: cali538c9d24ba2: Gained carrier Sep 6 00:22:23.247403 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali538c9d24ba2: link becomes ready Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.170 [INFO][3840] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0 coredns-7c65d6cfc9- kube-system 23a905d3-2b9b-4e8e-907e-242236a689bc 965 0 2025-09-06 00:21:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-dqfgt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali538c9d24ba2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqfgt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqfgt-" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.171 [INFO][3840] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqfgt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.204 [INFO][3868] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" HandleID="k8s-pod-network.8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.204 [INFO][3868] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" HandleID="k8s-pod-network.8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000351700), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-dqfgt", "timestamp":"2025-09-06 00:22:23.204632414 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.204 [INFO][3868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
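The SYSCALL and NETFILTER_CFG records in this excerpt are flat key=value pairs (quoted where the value may contain spaces, e.g. comm="iptables-nft-re"; family=2 is AF_INET). A small parsing sketch under that assumption — the field names come straight from the records above, the function name is illustrative:

```python
import re

# Matches key=value pairs in an audit record, e.g.:
#   table=filter:105 family=2 entries=59 op=nft_register_chain comm="iptables-nft-re"
FIELD_RE = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_audit_fields(record: str) -> dict[str, str]:
    """Return the key=value fields of one audit record, with quotes stripped."""
    return {key: value.strip('"') for key, value in FIELD_RE.findall(record)}

if __name__ == "__main__":
    line = ('table=filter:105 family=2 entries=59 op=nft_register_chain pid=3834 '
            'subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"')
    fields = parse_audit_fields(line)
    print(fields["table"], fields["entries"], fields["op"], fields["comm"])
    # filter:105 59 nft_register_chain iptables-nft-re
```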
Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.204 [INFO][3868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.207 [INFO][3868] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.214 [INFO][3868] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" host="localhost" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.219 [INFO][3868] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.223 [INFO][3868] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.225 [INFO][3868] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.228 [INFO][3868] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.228 [INFO][3868] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" host="localhost" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.230 [INFO][3868] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463 Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.235 [INFO][3868] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" host="localhost" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.240 [INFO][3868] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" host="localhost" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.240 [INFO][3868] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" host="localhost" Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.240 [INFO][3868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
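The IPAM trace above resolves the coredns pod's address from the host-affine block 192.168.88.128/26 (64 addresses, hosts .129–.190), claiming 192.168.88.130; later in this log the csi-node-driver and calico-kube-controllers pods receive .131 and .132 from the same block. A toy illustration of that bookkeeping with Python's ipaddress module — a simplified first-free allocator, not Calico's actual IPAM logic:

```python
import ipaddress

# Host-affine block taken from the log; .130/.131/.132 are handed out in this excerpt.
BLOCK = ipaddress.ip_network("192.168.88.128/26")

def first_free(block: ipaddress.IPv4Network,
               allocated: set[ipaddress.IPv4Address]) -> ipaddress.IPv4Address:
    """Return the first unallocated host address in the block (toy allocator)."""
    for addr in block.hosts():          # .129 .. .190 for this /26
        if addr not in allocated:
            return addr
    raise RuntimeError(f"block {block} is exhausted")

if __name__ == "__main__":
    print(BLOCK.num_addresses)          # 64
    in_use = {ipaddress.ip_address("192.168.88.129"),
              ipaddress.ip_address("192.168.88.130")}
    print(first_free(BLOCK, in_use))    # 192.168.88.131
```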
Sep 6 00:22:23.263581 env[1312]: 2025-09-06 00:22:23.240 [INFO][3868] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" HandleID="k8s-pod-network.8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:23.264426 env[1312]: 2025-09-06 00:22:23.242 [INFO][3840] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqfgt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"23a905d3-2b9b-4e8e-907e-242236a689bc", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-dqfgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali538c9d24ba2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:23.264426 env[1312]: 2025-09-06 00:22:23.242 [INFO][3840] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqfgt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:23.264426 env[1312]: 2025-09-06 00:22:23.242 [INFO][3840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali538c9d24ba2 ContainerID="8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqfgt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:23.264426 env[1312]: 2025-09-06 00:22:23.246 [INFO][3840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqfgt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:23.264426 env[1312]: 2025-09-06 00:22:23.247 [INFO][3840] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqfgt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"23a905d3-2b9b-4e8e-907e-242236a689bc", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463", Pod:"coredns-7c65d6cfc9-dqfgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali538c9d24ba2", MAC:"a6:6b:72:8d:09:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:23.264426 env[1312]: 2025-09-06 00:22:23.257 [INFO][3840] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dqfgt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:23.276672 env[1312]: time="2025-09-06T00:22:23.276597982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:23.276908 env[1312]: time="2025-09-06T00:22:23.276648106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:23.276908 env[1312]: time="2025-09-06T00:22:23.276658174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:23.277073 env[1312]: time="2025-09-06T00:22:23.276929764Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463 pid=3900 runtime=io.containerd.runc.v2 Sep 6 00:22:23.276000 audit[3901]: NETFILTER_CFG table=filter:106 family=2 entries=42 op=nft_register_chain pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:23.276000 audit[3901]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffca27a4010 a2=0 a3=7ffca27a3ffc items=0 ppid=3524 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:23.276000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:23.299393 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:22:23.324810 env[1312]: time="2025-09-06T00:22:23.324764374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dqfgt,Uid:23a905d3-2b9b-4e8e-907e-242236a689bc,Namespace:kube-system,Attempt:1,} returns sandbox id \"8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463\"" Sep 6 00:22:23.326063 kubelet[2141]: E0906 00:22:23.325975 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:23.328004 env[1312]: time="2025-09-06T00:22:23.327969137Z" level=info msg="CreateContainer within sandbox \"8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:22:23.391804 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali09999983a2e: link becomes ready Sep 6 00:22:23.387662 systemd-networkd[1075]: cali09999983a2e: Link UP Sep 6 00:22:23.390632 systemd-networkd[1075]: cali09999983a2e: Gained carrier Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.181 [INFO][3846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qr48z-eth0 csi-node-driver- calico-system aa5fe117-525e-4a2e-b423-0d13ab8c1f3f 966 0 2025-09-06 00:22:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qr48z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali09999983a2e [] [] }} ContainerID="2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" Namespace="calico-system" Pod="csi-node-driver-qr48z" WorkloadEndpoint="localhost-k8s-csi--node--driver--qr48z-" Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.181 [INFO][3846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" Namespace="calico-system" Pod="csi-node-driver-qr48z" WorkloadEndpoint="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 
00:22:23.507538 env[1312]: 2025-09-06 00:22:23.206 [INFO][3875] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" HandleID="k8s-pod-network.2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.206 [INFO][3875] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" HandleID="k8s-pod-network.2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qr48z", "timestamp":"2025-09-06 00:22:23.206278002 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.206 [INFO][3875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.240 [INFO][3875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.240 [INFO][3875] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.316 [INFO][3875] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" host="localhost" Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.324 [INFO][3875] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.329 [INFO][3875] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.332 [INFO][3875] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.340 [INFO][3875] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.340 [INFO][3875] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" host="localhost" Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.344 [INFO][3875] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.359 [INFO][3875] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" host="localhost" Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.382 [INFO][3875] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" host="localhost" Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.382 [INFO][3875] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] 
handle="k8s-pod-network.2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" host="localhost" Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.382 [INFO][3875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:23.507538 env[1312]: 2025-09-06 00:22:23.382 [INFO][3875] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" HandleID="k8s-pod-network.2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:23.508437 env[1312]: 2025-09-06 00:22:23.385 [INFO][3846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" Namespace="calico-system" Pod="csi-node-driver-qr48z" WorkloadEndpoint="localhost-k8s-csi--node--driver--qr48z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qr48z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa5fe117-525e-4a2e-b423-0d13ab8c1f3f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qr48z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09999983a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:23.508437 env[1312]: 2025-09-06 00:22:23.385 [INFO][3846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" Namespace="calico-system" Pod="csi-node-driver-qr48z" WorkloadEndpoint="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:23.508437 env[1312]: 2025-09-06 00:22:23.385 [INFO][3846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09999983a2e ContainerID="2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" Namespace="calico-system" Pod="csi-node-driver-qr48z" WorkloadEndpoint="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:23.508437 env[1312]: 2025-09-06 00:22:23.391 [INFO][3846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" Namespace="calico-system" Pod="csi-node-driver-qr48z" WorkloadEndpoint="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:23.508437 env[1312]: 2025-09-06 00:22:23.396 [INFO][3846] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" Namespace="calico-system" Pod="csi-node-driver-qr48z" WorkloadEndpoint="localhost-k8s-csi--node--driver--qr48z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qr48z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa5fe117-525e-4a2e-b423-0d13ab8c1f3f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b", Pod:"csi-node-driver-qr48z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09999983a2e", MAC:"56:fc:57:bd:98:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:23.508437 env[1312]: 2025-09-06 00:22:23.504 [INFO][3846] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b" Namespace="calico-system" Pod="csi-node-driver-qr48z" WorkloadEndpoint="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:23.512419 env[1312]: time="2025-09-06T00:22:23.512364282Z" level=info msg="CreateContainer within sandbox \"8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"186ef16d1118cb5fbf4380b368dbcf225d19f5278b240d22845bbbfd27091c8f\"" Sep 6 00:22:23.513257 env[1312]: time="2025-09-06T00:22:23.513217032Z" level=info msg="StartContainer for \"186ef16d1118cb5fbf4380b368dbcf225d19f5278b240d22845bbbfd27091c8f\"" Sep 6 00:22:23.515000 audit[3946]: NETFILTER_CFG table=filter:107 family=2 entries=40 op=nft_register_chain pid=3946 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:23.515000 audit[3946]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffc8566b400 a2=0 a3=7ffc8566b3ec items=0 ppid=3524 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:23.515000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:23.521905 env[1312]: time="2025-09-06T00:22:23.521810004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:23.521905 env[1312]: time="2025-09-06T00:22:23.521859547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:23.521905 env[1312]: time="2025-09-06T00:22:23.521872061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:23.522406 env[1312]: time="2025-09-06T00:22:23.522344918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b pid=3961 runtime=io.containerd.runc.v2 Sep 6 00:22:23.547269 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:22:23.567717 env[1312]: time="2025-09-06T00:22:23.567669159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qr48z,Uid:aa5fe117-525e-4a2e-b423-0d13ab8c1f3f,Namespace:calico-system,Attempt:1,} returns sandbox id \"2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b\"" Sep 6 00:22:23.576618 env[1312]: time="2025-09-06T00:22:23.576578966Z" level=info msg="StartContainer for \"186ef16d1118cb5fbf4380b368dbcf225d19f5278b240d22845bbbfd27091c8f\" returns successfully" Sep 6 00:22:23.971157 env[1312]: time="2025-09-06T00:22:23.971102728Z" level=info msg="StopPodSandbox for \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\"" Sep 6 00:22:23.971692 env[1312]: time="2025-09-06T00:22:23.971104491Z" level=info msg="StopPodSandbox for \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\"" Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.014 [INFO][4046] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.014 [INFO][4046] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" iface="eth0" netns="/var/run/netns/cni-d7045ff4-9f16-0341-0bf7-dcf83944671b" Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.014 [INFO][4046] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" iface="eth0" netns="/var/run/netns/cni-d7045ff4-9f16-0341-0bf7-dcf83944671b" Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.014 [INFO][4046] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" iface="eth0" netns="/var/run/netns/cni-d7045ff4-9f16-0341-0bf7-dcf83944671b" Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.014 [INFO][4046] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.014 [INFO][4046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.045 [INFO][4062] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" HandleID="k8s-pod-network.2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.045 [INFO][4062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.045 [INFO][4062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.051 [WARNING][4062] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" HandleID="k8s-pod-network.2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.051 [INFO][4062] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" HandleID="k8s-pod-network.2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.060 [INFO][4062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:24.065015 env[1312]: 2025-09-06 00:22:24.062 [INFO][4046] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:24.068420 env[1312]: time="2025-09-06T00:22:24.068345938Z" level=info msg="TearDown network for sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\" successfully" Sep 6 00:22:24.068688 env[1312]: time="2025-09-06T00:22:24.068652673Z" level=info msg="StopPodSandbox for \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\" returns successfully" Sep 6 00:22:24.069758 env[1312]: time="2025-09-06T00:22:24.069698084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-644bf98f67-gf7cj,Uid:c19353dd-4b41-4b6f-9132-f91a5ef28107,Namespace:calico-system,Attempt:1,}" Sep 6 00:22:24.071110 systemd[1]: run-netns-cni\x2dd7045ff4\x2d9f16\x2d0341\x2d0bf7\x2ddcf83944671b.mount: Deactivated successfully. Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.037 [INFO][4047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.038 [INFO][4047] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" iface="eth0" netns="/var/run/netns/cni-121705b7-2253-5116-8f4a-b40dcc107c12" Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.038 [INFO][4047] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" iface="eth0" netns="/var/run/netns/cni-121705b7-2253-5116-8f4a-b40dcc107c12" Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.038 [INFO][4047] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" iface="eth0" netns="/var/run/netns/cni-121705b7-2253-5116-8f4a-b40dcc107c12" Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.038 [INFO][4047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.038 [INFO][4047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.065 [INFO][4069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" HandleID="k8s-pod-network.8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.066 [INFO][4069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.066 [INFO][4069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.073 [WARNING][4069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" HandleID="k8s-pod-network.8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.073 [INFO][4069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" HandleID="k8s-pod-network.8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.075 [INFO][4069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:24.080004 env[1312]: 2025-09-06 00:22:24.078 [INFO][4047] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:24.081389 env[1312]: time="2025-09-06T00:22:24.081343734Z" level=info msg="TearDown network for sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\" successfully" Sep 6 00:22:24.081389 env[1312]: time="2025-09-06T00:22:24.081381685Z" level=info msg="StopPodSandbox for \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\" returns successfully" Sep 6 00:22:24.082464 env[1312]: time="2025-09-06T00:22:24.082434750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f95dfcdc5-lkdpx,Uid:d47e55db-f531-4fdd-892c-a105be81339f,Namespace:calico-apiserver,Attempt:1,}" Sep 6 00:22:24.083650 systemd[1]: run-netns-cni\x2d121705b7\x2d2253\x2d5116\x2d8f4a\x2db40dcc107c12.mount: Deactivated successfully. Sep 6 00:22:24.142292 systemd-networkd[1075]: vxlan.calico: Gained IPv6LL Sep 6 00:22:24.239480 systemd-networkd[1075]: calic1fc48d9b58: Link UP Sep 6 00:22:24.244484 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:22:24.244551 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic1fc48d9b58: link becomes ready Sep 6 00:22:24.244758 systemd-networkd[1075]: calic1fc48d9b58: Gained carrier Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.165 [INFO][4090] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0 calico-kube-controllers-644bf98f67- calico-system c19353dd-4b41-4b6f-9132-f91a5ef28107 985 0 2025-09-06 00:22:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:644bf98f67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-644bf98f67-gf7cj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic1fc48d9b58 [] [] }} ContainerID="9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" Namespace="calico-system" Pod="calico-kube-controllers-644bf98f67-gf7cj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.165 [INFO][4090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" Namespace="calico-system" Pod="calico-kube-controllers-644bf98f67-gf7cj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.197 [INFO][4109] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" HandleID="k8s-pod-network.9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.197 [INFO][4109] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" HandleID="k8s-pod-network.9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c8fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-644bf98f67-gf7cj", "timestamp":"2025-09-06 00:22:24.197166369 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.197 [INFO][4109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.197 [INFO][4109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.197 [INFO][4109] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.204 [INFO][4109] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" host="localhost" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.212 [INFO][4109] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.218 [INFO][4109] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.220 [INFO][4109] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.222 [INFO][4109] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.222 [INFO][4109] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" host="localhost" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.223 [INFO][4109] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3 Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.227 [INFO][4109] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" host="localhost" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.233 [INFO][4109] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" host="localhost" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.233 [INFO][4109] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" host="localhost" Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.233 [INFO][4109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:22:24.288241 env[1312]: 2025-09-06 00:22:24.233 [INFO][4109] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" HandleID="k8s-pod-network.9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:24.288955 env[1312]: 2025-09-06 00:22:24.236 [INFO][4090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" Namespace="calico-system" Pod="calico-kube-controllers-644bf98f67-gf7cj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0", GenerateName:"calico-kube-controllers-644bf98f67-", Namespace:"calico-system", SelfLink:"", UID:"c19353dd-4b41-4b6f-9132-f91a5ef28107", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"644bf98f67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-644bf98f67-gf7cj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1fc48d9b58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:24.288955 env[1312]: 2025-09-06 00:22:24.236 [INFO][4090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" Namespace="calico-system" Pod="calico-kube-controllers-644bf98f67-gf7cj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:24.288955 env[1312]: 2025-09-06 00:22:24.236 [INFO][4090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1fc48d9b58 ContainerID="9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" Namespace="calico-system" Pod="calico-kube-controllers-644bf98f67-gf7cj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:24.288955 env[1312]: 2025-09-06 00:22:24.245 [INFO][4090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" Namespace="calico-system" Pod="calico-kube-controllers-644bf98f67-gf7cj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:24.288955 env[1312]: 2025-09-06 00:22:24.248 [INFO][4090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" Namespace="calico-system" Pod="calico-kube-controllers-644bf98f67-gf7cj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0", GenerateName:"calico-kube-controllers-644bf98f67-", Namespace:"calico-system", SelfLink:"", UID:"c19353dd-4b41-4b6f-9132-f91a5ef28107", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"644bf98f67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3", Pod:"calico-kube-controllers-644bf98f67-gf7cj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1fc48d9b58", MAC:"ee:aa:34:7f:c2:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:24.288955 env[1312]: 2025-09-06 00:22:24.283 [INFO][4090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3" Namespace="calico-system" Pod="calico-kube-controllers-644bf98f67-gf7cj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:24.291439 kubelet[2141]: E0906 00:22:24.290682 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:24.325467 kubelet[2141]: I0906 00:22:24.325384 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dqfgt" podStartSLOduration=37.325358802 podStartE2EDuration="37.325358802s" podCreationTimestamp="2025-09-06 00:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:22:24.305500214 +0000 UTC m=+43.878059031" watchObservedRunningTime="2025-09-06 00:22:24.325358802 +0000 UTC m=+43.897917619" Sep 6 00:22:24.332044 env[1312]: time="2025-09-06T00:22:24.331984544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:24.332243 env[1312]: time="2025-09-06T00:22:24.332217901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:24.332992 env[1312]: time="2025-09-06T00:22:24.332940046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:24.333377 env[1312]: time="2025-09-06T00:22:24.333347129Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3 pid=4141 runtime=io.containerd.runc.v2 Sep 6 00:22:24.332000 audit[4138]: NETFILTER_CFG table=filter:108 family=2 entries=44 op=nft_register_chain pid=4138 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:24.332000 audit[4138]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffd2718de00 a2=0 a3=7ffd2718ddec items=0 ppid=3524 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:24.332000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:24.344000 audit[4153]: NETFILTER_CFG table=filter:109 family=2 entries=20 op=nft_register_rule pid=4153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:24.344000 audit[4153]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf307e4e0 a2=0 a3=7ffcf307e4cc items=0 ppid=2268 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:24.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:24.349000 audit[4153]: NETFILTER_CFG table=nat:110 family=2 entries=14 op=nft_register_rule pid=4153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:24.349000 audit[4153]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcf307e4e0 a2=0 a3=0 items=0 ppid=2268 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:24.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:24.361000 audit[4170]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=4170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:24.361000 audit[4170]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe0df59c50 a2=0 a3=7ffe0df59c3c items=0 ppid=2268 pid=4170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:24.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:24.366000 audit[4170]: NETFILTER_CFG table=nat:112 family=2 entries=35 op=nft_register_chain pid=4170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:24.366000 audit[4170]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe0df59c50 a2=0 a3=7ffe0df59c3c items=0 ppid=2268 pid=4170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:24.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:24.368962 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:22:24.372101 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali89f10fbc7a4: link becomes ready Sep 6 00:22:24.374305 systemd-networkd[1075]: cali89f10fbc7a4: Link UP Sep 6 00:22:24.374497 systemd-networkd[1075]: cali89f10fbc7a4: Gained carrier Sep 6 00:22:24.401625 env[1312]: time="2025-09-06T00:22:24.401569869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-644bf98f67-gf7cj,Uid:c19353dd-4b41-4b6f-9132-f91a5ef28107,Namespace:calico-system,Attempt:1,} returns sandbox id \"9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3\"" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.164 [INFO][4079] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0 calico-apiserver-7f95dfcdc5- calico-apiserver d47e55db-f531-4fdd-892c-a105be81339f 986 0 2025-09-06 00:21:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f95dfcdc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f95dfcdc5-lkdpx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89f10fbc7a4 [] [] }} ContainerID="55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-lkdpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.164 [INFO][4079] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-lkdpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.198 [INFO][4110] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" HandleID="k8s-pod-network.55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.198 [INFO][4110] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" HandleID="k8s-pod-network.55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f95dfcdc5-lkdpx", "timestamp":"2025-09-06 00:22:24.198315374 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 
00:22:24.408509 env[1312]: 2025-09-06 00:22:24.198 [INFO][4110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.234 [INFO][4110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.234 [INFO][4110] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.309 [INFO][4110] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" host="localhost" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.326 [INFO][4110] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.333 [INFO][4110] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.336 [INFO][4110] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.339 [INFO][4110] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.339 [INFO][4110] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" host="localhost" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.342 [INFO][4110] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.351 [INFO][4110] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" host="localhost" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.364 [INFO][4110] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" host="localhost" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.365 [INFO][4110] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" host="localhost" Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.365 [INFO][4110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
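The ipam lines above trace one fixed sequence: take the host-wide IPAM lock, confirm the host's affinity for block 192.168.88.128/26, load the block, claim one free address (here 192.168.88.133), write the block back, and release the lock. The sketch below only illustrates that sequence against a simplified in-memory block; it is not Calico's ipam.go, and the names block, autoAssign, and hostLock are invented for the example.

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // block models a single IPAM block such as 192.168.88.128/26.
    type block struct {
        cidr      netip.Prefix
        allocated map[netip.Addr]string // addr -> handle, e.g. "k8s-pod-network.<containerID>"
    }

    var (
        hostLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log
        blk      = block{
            cidr:      netip.MustParsePrefix("192.168.88.128/26"),
            allocated: map[netip.Addr]string{},
        }
    )

    // autoAssign mirrors the logged flow: lock, walk the affine block, claim the
    // first free address, unlock. (A real allocator would also skip reserved
    // addresses such as the block's network address.)
    func autoAssign(handle string) (netip.Addr, error) {
        hostLock.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock."
        defer hostLock.Unlock() // "Released host-wide IPAM lock."

        for a := blk.cidr.Addr(); blk.cidr.Contains(a); a = a.Next() {
            if _, used := blk.allocated[a]; used {
                continue
            }
            blk.allocated[a] = handle // "Writing block in order to claim IPs"
            return a, nil             // "Successfully claimed IPs: [.../26]"
        }
        return netip.Addr{}, fmt.Errorf("block %s exhausted", blk.cidr)
    }

    func main() {
        ip, err := autoAssign("k8s-pod-network.55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("assigned", ip)
    }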
Sep 6 00:22:24.408509 env[1312]: 2025-09-06 00:22:24.365 [INFO][4110] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" HandleID="k8s-pod-network.55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:24.409257 env[1312]: 2025-09-06 00:22:24.367 [INFO][4079] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-lkdpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0", GenerateName:"calico-apiserver-7f95dfcdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d47e55db-f531-4fdd-892c-a105be81339f", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f95dfcdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f95dfcdc5-lkdpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89f10fbc7a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:24.409257 env[1312]: 2025-09-06 00:22:24.367 [INFO][4079] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-lkdpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:24.409257 env[1312]: 2025-09-06 00:22:24.368 [INFO][4079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89f10fbc7a4 ContainerID="55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-lkdpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:24.409257 env[1312]: 2025-09-06 00:22:24.371 [INFO][4079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-lkdpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:24.409257 env[1312]: 2025-09-06 00:22:24.371 [INFO][4079] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" 
Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-lkdpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0", GenerateName:"calico-apiserver-7f95dfcdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d47e55db-f531-4fdd-892c-a105be81339f", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f95dfcdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab", Pod:"calico-apiserver-7f95dfcdc5-lkdpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89f10fbc7a4", MAC:"9a:1a:bf:0c:67:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:24.409257 env[1312]: 2025-09-06 00:22:24.405 [INFO][4079] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-lkdpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:24.418000 audit[4189]: NETFILTER_CFG table=filter:113 family=2 entries=68 op=nft_register_chain pid=4189 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:24.418000 audit[4189]: SYSCALL arch=c000003e syscall=46 success=yes exit=34624 a0=3 a1=7ffcd50bc390 a2=0 a3=7ffcd50bc37c items=0 ppid=3524 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:24.418000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:24.438310 env[1312]: time="2025-09-06T00:22:24.438023703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:24.438310 env[1312]: time="2025-09-06T00:22:24.438088935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:24.438310 env[1312]: time="2025-09-06T00:22:24.438119412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:24.438937 env[1312]: time="2025-09-06T00:22:24.438329436Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab pid=4199 runtime=io.containerd.runc.v2 Sep 6 00:22:24.463055 systemd-networkd[1075]: cali8ec3ae4bf60: Gained IPv6LL Sep 6 00:22:24.467046 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:22:24.503240 env[1312]: time="2025-09-06T00:22:24.501838892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f95dfcdc5-lkdpx,Uid:d47e55db-f531-4fdd-892c-a105be81339f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab\"" Sep 6 00:22:24.909392 systemd-networkd[1075]: cali09999983a2e: Gained IPv6LL Sep 6 00:22:24.951671 env[1312]: time="2025-09-06T00:22:24.951593687Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:24.983161 env[1312]: time="2025-09-06T00:22:24.983104578Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:25.044879 env[1312]: time="2025-09-06T00:22:25.044816679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:25.049564 env[1312]: time="2025-09-06T00:22:25.049522337Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:25.050032 env[1312]: time="2025-09-06T00:22:25.050003980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 6 00:22:25.051252 env[1312]: time="2025-09-06T00:22:25.051189525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 6 00:22:25.052285 env[1312]: time="2025-09-06T00:22:25.052239906Z" level=info msg="CreateContainer within sandbox \"4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 6 00:22:25.067951 env[1312]: time="2025-09-06T00:22:25.067887311Z" level=info msg="CreateContainer within sandbox \"4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5bb0b5b0a1a8479a9bc80c4d4031e17a1f7d3df16bcd4d719293ccdc588893af\"" Sep 6 00:22:25.068876 env[1312]: time="2025-09-06T00:22:25.068833316Z" level=info msg="StartContainer for \"5bb0b5b0a1a8479a9bc80c4d4031e17a1f7d3df16bcd4d719293ccdc588893af\"" Sep 6 00:22:25.128111 env[1312]: time="2025-09-06T00:22:25.128043351Z" level=info msg="StartContainer for \"5bb0b5b0a1a8479a9bc80c4d4031e17a1f7d3df16bcd4d719293ccdc588893af\" returns successfully" Sep 6 00:22:25.293457 systemd-networkd[1075]: cali538c9d24ba2: Gained IPv6LL Sep 6 00:22:25.313276 kubelet[2141]: E0906 00:22:25.313241 2141 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:25.421332 systemd-networkd[1075]: cali89f10fbc7a4: Gained IPv6LL Sep 6 00:22:25.933334 systemd-networkd[1075]: calic1fc48d9b58: Gained IPv6LL Sep 6 00:22:25.970868 env[1312]: time="2025-09-06T00:22:25.970820588Z" level=info msg="StopPodSandbox for \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\"" Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.011 [INFO][4281] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.011 [INFO][4281] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" iface="eth0" netns="/var/run/netns/cni-da9d5bd4-8b1e-af11-2102-81c8e1627ccd" Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.012 [INFO][4281] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" iface="eth0" netns="/var/run/netns/cni-da9d5bd4-8b1e-af11-2102-81c8e1627ccd" Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.012 [INFO][4281] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" iface="eth0" netns="/var/run/netns/cni-da9d5bd4-8b1e-af11-2102-81c8e1627ccd" Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.012 [INFO][4281] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.012 [INFO][4281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.029 [INFO][4289] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" HandleID="k8s-pod-network.ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.029 [INFO][4289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.029 [INFO][4289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.038 [WARNING][4289] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" HandleID="k8s-pod-network.ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.038 [INFO][4289] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" HandleID="k8s-pod-network.ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.039 [INFO][4289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:22:26.042517 env[1312]: 2025-09-06 00:22:26.040 [INFO][4281] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:26.045856 systemd[1]: run-netns-cni\x2dda9d5bd4\x2d8b1e\x2daf11\x2d2102\x2d81c8e1627ccd.mount: Deactivated successfully. Sep 6 00:22:26.046808 env[1312]: time="2025-09-06T00:22:26.046759155Z" level=info msg="TearDown network for sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\" successfully" Sep 6 00:22:26.046906 env[1312]: time="2025-09-06T00:22:26.046807496Z" level=info msg="StopPodSandbox for \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\" returns successfully" Sep 6 00:22:26.047261 kubelet[2141]: E0906 00:22:26.047234 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:26.047660 env[1312]: time="2025-09-06T00:22:26.047631592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-krllp,Uid:5d236a4c-f0ec-424c-baa8-2089b5f219ec,Namespace:kube-system,Attempt:1,}" Sep 6 00:22:26.206263 systemd-networkd[1075]: calidb01ba6d2d8: Link UP Sep 6 00:22:26.209257 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:22:26.209397 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidb01ba6d2d8: link becomes ready Sep 6 00:22:26.209551 systemd-networkd[1075]: calidb01ba6d2d8: Gained carrier Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.133 [INFO][4297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--krllp-eth0 coredns-7c65d6cfc9- kube-system 5d236a4c-f0ec-424c-baa8-2089b5f219ec 1014 0 2025-09-06 00:21:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-krllp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidb01ba6d2d8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-krllp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--krllp-" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.133 [INFO][4297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-krllp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.159 [INFO][4312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" HandleID="k8s-pod-network.7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.159 [INFO][4312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" HandleID="k8s-pod-network.7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-krllp", "timestamp":"2025-09-06 00:22:26.159459215 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.159 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.159 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.159 [INFO][4312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.168 [INFO][4312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" host="localhost" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.175 [INFO][4312] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.179 [INFO][4312] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.184 [INFO][4312] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.188 [INFO][4312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.188 [INFO][4312] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" host="localhost" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.190 [INFO][4312] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4 Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.193 [INFO][4312] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" host="localhost" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.200 [INFO][4312] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" host="localhost" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.200 [INFO][4312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" host="localhost" Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.200 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:22:26.221946 env[1312]: 2025-09-06 00:22:26.200 [INFO][4312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" HandleID="k8s-pod-network.7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:26.223018 env[1312]: 2025-09-06 00:22:26.203 [INFO][4297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-krllp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--krllp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5d236a4c-f0ec-424c-baa8-2089b5f219ec", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-krllp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb01ba6d2d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:26.223018 env[1312]: 2025-09-06 00:22:26.203 [INFO][4297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-krllp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:26.223018 env[1312]: 2025-09-06 00:22:26.203 [INFO][4297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb01ba6d2d8 ContainerID="7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-krllp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:26.223018 env[1312]: 2025-09-06 00:22:26.209 [INFO][4297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-krllp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:26.223018 env[1312]: 2025-09-06 00:22:26.210 
[INFO][4297] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-krllp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--krllp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5d236a4c-f0ec-424c-baa8-2089b5f219ec", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4", Pod:"coredns-7c65d6cfc9-krllp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb01ba6d2d8", MAC:"12:fa:43:05:34:72", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:26.223018 env[1312]: 2025-09-06 00:22:26.219 [INFO][4297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-krllp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:26.235000 audit[4335]: NETFILTER_CFG table=filter:114 family=2 entries=44 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:26.238428 kernel: kauditd_printk_skb: 587 callbacks suppressed Sep 6 00:22:26.238487 kernel: audit: type=1325 audit(1757118146.235:411): table=filter:114 family=2 entries=44 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:26.239185 env[1312]: time="2025-09-06T00:22:26.239087251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:26.239381 env[1312]: time="2025-09-06T00:22:26.239340465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:26.239516 env[1312]: time="2025-09-06T00:22:26.239486860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:26.239985 env[1312]: time="2025-09-06T00:22:26.239950651Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4 pid=4338 runtime=io.containerd.runc.v2 Sep 6 00:22:26.235000 audit[4335]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7ffe952fde20 a2=0 a3=7ffe952fde0c items=0 ppid=3524 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:26.249512 kernel: audit: type=1300 audit(1757118146.235:411): arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7ffe952fde20 a2=0 a3=7ffe952fde0c items=0 ppid=3524 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:26.249658 kernel: audit: type=1327 audit(1757118146.235:411): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:26.235000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:26.270844 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:22:26.294450 env[1312]: time="2025-09-06T00:22:26.294392135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-krllp,Uid:5d236a4c-f0ec-424c-baa8-2089b5f219ec,Namespace:kube-system,Attempt:1,} returns sandbox id \"7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4\"" Sep 6 00:22:26.295301 kubelet[2141]: E0906 00:22:26.295273 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:26.298338 env[1312]: time="2025-09-06T00:22:26.298294526Z" level=info msg="CreateContainer within sandbox \"7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:22:26.317212 env[1312]: time="2025-09-06T00:22:26.317159979Z" level=info msg="CreateContainer within sandbox \"7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3aeace04b22d5f97a409dd6f23d2da1022a7347a7296c484191828ce70278cfe\"" Sep 6 00:22:26.317593 kubelet[2141]: E0906 00:22:26.317562 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:26.318385 env[1312]: time="2025-09-06T00:22:26.318351464Z" level=info msg="StartContainer for \"3aeace04b22d5f97a409dd6f23d2da1022a7347a7296c484191828ce70278cfe\"" Sep 6 00:22:26.368666 env[1312]: time="2025-09-06T00:22:26.368616043Z" level=info msg="StartContainer for \"3aeace04b22d5f97a409dd6f23d2da1022a7347a7296c484191828ce70278cfe\" returns successfully" Sep 6 00:22:26.391943 systemd[1]: Started sshd@10-10.0.0.61:22-10.0.0.1:42116.service. 
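The kubelet error repeated throughout this section ("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") indicates the host resolv.conf lists more nameservers than kubelet will pass through to pods; the applied line keeps three entries, consistent with the usual limit of three. The snippet below is only a sketch of that capping behavior under those assumptions, not kubelet's dns.go, and the fourth server is hypothetical.

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // assumed limit; matches the three servers kept in the log

    // capNameservers keeps the first maxNameservers entries and reports what was dropped.
    func capNameservers(servers []string) (applied, omitted []string) {
        if len(servers) <= maxNameservers {
            return servers, nil
        }
        return servers[:maxNameservers], servers[maxNameservers:]
    }

    func main() {
        // Hypothetical resolv.conf contents; only the first three survive.
        servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
        applied, omitted := capNameservers(servers)
        fmt.Println("applied nameserver line is:", strings.Join(applied, " "))
        fmt.Println("omitted:", omitted)
    }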
Sep 6 00:22:26.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.61:22-10.0.0.1:42116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:26.399229 kernel: audit: type=1130 audit(1757118146.390:412): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.61:22-10.0.0.1:42116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:26.446000 audit[4407]: USER_ACCT pid=4407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:26.449395 sshd[4407]: Accepted publickey for core from 10.0.0.1 port 42116 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:22:26.451203 sshd[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:26.449000 audit[4407]: CRED_ACQ pid=4407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:26.461652 kernel: audit: type=1101 audit(1757118146.446:413): pid=4407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:26.461797 kernel: audit: type=1103 audit(1757118146.449:414): pid=4407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:26.461819 kernel: audit: type=1006 audit(1757118146.449:415): pid=4407 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Sep 6 00:22:26.461844 kernel: audit: type=1300 audit(1757118146.449:415): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe168420a0 a2=3 a3=0 items=0 ppid=1 pid=4407 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:26.449000 audit[4407]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe168420a0 a2=3 a3=0 items=0 ppid=1 pid=4407 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:26.457705 systemd-logind[1293]: New session 11 of user core. Sep 6 00:22:26.462360 kernel: audit: type=1327 audit(1757118146.449:415): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:26.449000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:26.458043 systemd[1]: Started session-11.scope. 
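Each audit SYSCALL record in this section is followed by a PROCTITLE record whose value is the process's argv hex-encoded with NUL separators: 737368643A20636F7265205B707269765D above decodes to "sshd: core [priv]", and the earlier iptables entries decode to commands such as "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000" and "iptables-restore -w 5 -W 100000 --noflush --counters". A small standard-library decoder:

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle turns an audit PROCTITLE hex value back into the command
    // line; argv entries are NUL-separated in the raw bytes, shown here as spaces.
    func decodeProctitle(h string) (string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return "", err
        }
        return strings.ReplaceAll(string(raw), "\x00", " "), nil
    }

    func main() {
        for _, h := range []string{
            "737368643A20636F7265205B707269765D", // from the sshd login above
            "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
        } {
            cmd, err := decodeProctitle(h)
            if err != nil {
                fmt.Println("bad hex:", err)
                continue
            }
            fmt.Println(cmd)
        }
    }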
Sep 6 00:22:26.465000 audit[4407]: USER_START pid=4407 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:26.466000 audit[4412]: CRED_ACQ pid=4412 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:26.474165 kernel: audit: type=1105 audit(1757118146.465:416): pid=4407 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:26.789304 sshd[4407]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:26.789000 audit[4407]: USER_END pid=4407 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:26.789000 audit[4407]: CRED_DISP pid=4407 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:26.792072 systemd[1]: sshd@10-10.0.0.61:22-10.0.0.1:42116.service: Deactivated successfully. Sep 6 00:22:26.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.61:22-10.0.0.1:42116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:26.793258 systemd-logind[1293]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:22:26.793355 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:22:26.794171 systemd-logind[1293]: Removed session 11. Sep 6 00:22:26.972676 env[1312]: time="2025-09-06T00:22:26.972026538Z" level=info msg="StopPodSandbox for \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\"" Sep 6 00:22:26.972676 env[1312]: time="2025-09-06T00:22:26.972566160Z" level=info msg="StopPodSandbox for \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\"" Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.102 [INFO][4455] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.102 [INFO][4455] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" iface="eth0" netns="/var/run/netns/cni-dacd28f3-01b1-d0cd-dde0-b94099ba87a5" Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.102 [INFO][4455] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" iface="eth0" netns="/var/run/netns/cni-dacd28f3-01b1-d0cd-dde0-b94099ba87a5" Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.103 [INFO][4455] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" iface="eth0" netns="/var/run/netns/cni-dacd28f3-01b1-d0cd-dde0-b94099ba87a5" Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.103 [INFO][4455] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.103 [INFO][4455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.126 [INFO][4466] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" HandleID="k8s-pod-network.48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.126 [INFO][4466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.126 [INFO][4466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.133 [WARNING][4466] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" HandleID="k8s-pod-network.48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.133 [INFO][4466] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" HandleID="k8s-pod-network.48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.135 [INFO][4466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:27.143222 env[1312]: 2025-09-06 00:22:27.137 [INFO][4455] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:27.144361 env[1312]: time="2025-09-06T00:22:27.144313377Z" level=info msg="TearDown network for sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\" successfully" Sep 6 00:22:27.144520 env[1312]: time="2025-09-06T00:22:27.144497171Z" level=info msg="StopPodSandbox for \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\" returns successfully" Sep 6 00:22:27.145072 systemd[1]: run-netns-cni\x2ddacd28f3\x2d01b1\x2dd0cd\x2ddde0\x2db94099ba87a5.mount: Deactivated successfully. Sep 6 00:22:27.147570 env[1312]: time="2025-09-06T00:22:27.147525512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-lvkqq,Uid:a6cca75d-1a19-48b7-bf46-1e5cf7e72c19,Namespace:calico-system,Attempt:1,}" Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.101 [INFO][4443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.101 [INFO][4443] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" iface="eth0" netns="/var/run/netns/cni-b4134a5a-ae6f-2c5d-da2c-1f16b748d36b" Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.101 [INFO][4443] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" iface="eth0" netns="/var/run/netns/cni-b4134a5a-ae6f-2c5d-da2c-1f16b748d36b" Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.102 [INFO][4443] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" iface="eth0" netns="/var/run/netns/cni-b4134a5a-ae6f-2c5d-da2c-1f16b748d36b" Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.102 [INFO][4443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.102 [INFO][4443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.132 [INFO][4464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" HandleID="k8s-pod-network.7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.133 [INFO][4464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.135 [INFO][4464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.146 [WARNING][4464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" HandleID="k8s-pod-network.7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.146 [INFO][4464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" HandleID="k8s-pod-network.7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.148 [INFO][4464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:27.152102 env[1312]: 2025-09-06 00:22:27.150 [INFO][4443] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:27.152908 env[1312]: time="2025-09-06T00:22:27.152875509Z" level=info msg="TearDown network for sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\" successfully" Sep 6 00:22:27.152997 env[1312]: time="2025-09-06T00:22:27.152974965Z" level=info msg="StopPodSandbox for \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\" returns successfully" Sep 6 00:22:27.153735 env[1312]: time="2025-09-06T00:22:27.153702911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f95dfcdc5-xw9st,Uid:2997af2f-3793-4ebb-a625-6dd9b47d29e8,Namespace:calico-apiserver,Attempt:1,}" Sep 6 00:22:27.155544 systemd[1]: run-netns-cni\x2db4134a5a\x2dae6f\x2d2c5d\x2dda2c\x2d1f16b748d36b.mount: Deactivated successfully. Sep 6 00:22:27.158233 env[1312]: time="2025-09-06T00:22:27.158175752Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:27.163143 env[1312]: time="2025-09-06T00:22:27.163079681Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:27.166945 env[1312]: time="2025-09-06T00:22:27.166877186Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:27.230331 env[1312]: time="2025-09-06T00:22:27.230230415Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:27.231117 env[1312]: time="2025-09-06T00:22:27.231085199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 6 00:22:27.232723 env[1312]: time="2025-09-06T00:22:27.232676904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 6 00:22:27.234283 env[1312]: time="2025-09-06T00:22:27.234247681Z" level=info msg="CreateContainer within sandbox \"2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 6 00:22:27.322154 kubelet[2141]: E0906 00:22:27.322087 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:27.479630 kubelet[2141]: I0906 00:22:27.479542 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-krllp" podStartSLOduration=40.479516651 podStartE2EDuration="40.479516651s" podCreationTimestamp="2025-09-06 00:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:22:27.461004904 +0000 UTC m=+47.033563721" watchObservedRunningTime="2025-09-06 00:22:27.479516651 +0000 UTC m=+47.052075478" Sep 6 00:22:27.478000 audit[4481]: NETFILTER_CFG table=filter:115 family=2 entries=14 op=nft_register_rule pid=4481 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:27.478000 
audit[4481]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcf3e1db70 a2=0 a3=7ffcf3e1db5c items=0 ppid=2268 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:27.478000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:27.488165 env[1312]: time="2025-09-06T00:22:27.488058245Z" level=info msg="CreateContainer within sandbox \"2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"df44ea04d58d322c0ad64e54615317bf01efb0851eb2ea94487a27d6edc64387\"" Sep 6 00:22:27.487000 audit[4481]: NETFILTER_CFG table=nat:116 family=2 entries=44 op=nft_register_rule pid=4481 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:27.487000 audit[4481]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcf3e1db70 a2=0 a3=7ffcf3e1db5c items=0 ppid=2268 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:27.487000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:27.489361 env[1312]: time="2025-09-06T00:22:27.489315944Z" level=info msg="StartContainer for \"df44ea04d58d322c0ad64e54615317bf01efb0851eb2ea94487a27d6edc64387\"" Sep 6 00:22:27.503000 audit[4519]: NETFILTER_CFG table=filter:117 family=2 entries=14 op=nft_register_rule pid=4519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:27.503000 audit[4519]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9cf94a40 a2=0 a3=7ffd9cf94a2c items=0 ppid=2268 pid=4519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:27.503000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:27.530000 audit[4519]: NETFILTER_CFG table=nat:118 family=2 entries=56 op=nft_register_chain pid=4519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:27.530000 audit[4519]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd9cf94a40 a2=0 a3=7ffd9cf94a2c items=0 ppid=2268 pid=4519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:27.530000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:27.585811 env[1312]: time="2025-09-06T00:22:27.585741917Z" level=info msg="StartContainer for \"df44ea04d58d322c0ad64e54615317bf01efb0851eb2ea94487a27d6edc64387\" returns successfully" Sep 6 00:22:27.621611 systemd-networkd[1075]: cali275aee5089f: Link UP Sep 6 00:22:27.624101 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:22:27.624226 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali275aee5089f: link becomes ready Sep 6 00:22:27.624438 systemd-networkd[1075]: 
cali275aee5089f: Gained carrier Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.524 [INFO][4482] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--lvkqq-eth0 goldmane-7988f88666- calico-system a6cca75d-1a19-48b7-bf46-1e5cf7e72c19 1034 0 2025-09-06 00:21:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-lvkqq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali275aee5089f [] [] }} ContainerID="52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" Namespace="calico-system" Pod="goldmane-7988f88666-lvkqq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lvkqq-" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.524 [INFO][4482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" Namespace="calico-system" Pod="goldmane-7988f88666-lvkqq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.565 [INFO][4539] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" HandleID="k8s-pod-network.52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.565 [INFO][4539] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" HandleID="k8s-pod-network.52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-lvkqq", "timestamp":"2025-09-06 00:22:27.56518343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.565 [INFO][4539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.565 [INFO][4539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.565 [INFO][4539] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.574 [INFO][4539] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" host="localhost" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.593 [INFO][4539] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.598 [INFO][4539] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.599 [INFO][4539] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.602 [INFO][4539] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.602 [INFO][4539] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" host="localhost" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.603 [INFO][4539] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5 Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.607 [INFO][4539] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" host="localhost" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.614 [INFO][4539] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" host="localhost" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.614 [INFO][4539] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" host="localhost" Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.615 [INFO][4539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:22:27.636071 env[1312]: 2025-09-06 00:22:27.615 [INFO][4539] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" HandleID="k8s-pod-network.52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:27.636977 env[1312]: 2025-09-06 00:22:27.617 [INFO][4482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" Namespace="calico-system" Pod="goldmane-7988f88666-lvkqq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--lvkqq-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a6cca75d-1a19-48b7-bf46-1e5cf7e72c19", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-lvkqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali275aee5089f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:27.636977 env[1312]: 2025-09-06 00:22:27.617 [INFO][4482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" Namespace="calico-system" Pod="goldmane-7988f88666-lvkqq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:27.636977 env[1312]: 2025-09-06 00:22:27.617 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali275aee5089f ContainerID="52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" Namespace="calico-system" Pod="goldmane-7988f88666-lvkqq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:27.636977 env[1312]: 2025-09-06 00:22:27.624 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" Namespace="calico-system" Pod="goldmane-7988f88666-lvkqq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:27.636977 env[1312]: 2025-09-06 00:22:27.624 [INFO][4482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" Namespace="calico-system" Pod="goldmane-7988f88666-lvkqq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--lvkqq-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a6cca75d-1a19-48b7-bf46-1e5cf7e72c19", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5", Pod:"goldmane-7988f88666-lvkqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali275aee5089f", MAC:"5a:94:cd:3a:53:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:27.636977 env[1312]: 2025-09-06 00:22:27.633 [INFO][4482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5" Namespace="calico-system" Pod="goldmane-7988f88666-lvkqq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:27.651115 env[1312]: time="2025-09-06T00:22:27.650924959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:27.651370 env[1312]: time="2025-09-06T00:22:27.651217618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:27.651370 env[1312]: time="2025-09-06T00:22:27.651298741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:27.651787 env[1312]: time="2025-09-06T00:22:27.651713819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5 pid=4585 runtime=io.containerd.runc.v2 Sep 6 00:22:27.663000 audit[4602]: NETFILTER_CFG table=filter:119 family=2 entries=60 op=nft_register_chain pid=4602 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:27.663000 audit[4602]: SYSCALL arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7ffe40da8440 a2=0 a3=7ffe40da842c items=0 ppid=3524 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:27.663000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:27.686807 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:22:27.722966 env[1312]: time="2025-09-06T00:22:27.722608407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-lvkqq,Uid:a6cca75d-1a19-48b7-bf46-1e5cf7e72c19,Namespace:calico-system,Attempt:1,} returns sandbox id \"52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5\"" Sep 6 00:22:27.887581 systemd-networkd[1075]: calid1bd8f2b29c: Link UP Sep 6 00:22:27.890414 systemd-networkd[1075]: calid1bd8f2b29c: Gained carrier Sep 6 00:22:27.891232 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid1bd8f2b29c: link becomes ready Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.544 [INFO][4497] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0 calico-apiserver-7f95dfcdc5- calico-apiserver 2997af2f-3793-4ebb-a625-6dd9b47d29e8 1035 0 2025-09-06 00:21:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f95dfcdc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f95dfcdc5-xw9st eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid1bd8f2b29c [] [] }} ContainerID="4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-xw9st" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.544 [INFO][4497] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-xw9st" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.593 [INFO][4547] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" HandleID="k8s-pod-network.4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 
00:22:27.594 [INFO][4547] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" HandleID="k8s-pod-network.4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f95dfcdc5-xw9st", "timestamp":"2025-09-06 00:22:27.593641086 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.594 [INFO][4547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.615 [INFO][4547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.615 [INFO][4547] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.675 [INFO][4547] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" host="localhost" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.684 [INFO][4547] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.701 [INFO][4547] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.703 [INFO][4547] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.705 [INFO][4547] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.705 [INFO][4547] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" host="localhost" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.707 [INFO][4547] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7 Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.860 [INFO][4547] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" host="localhost" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.882 [INFO][4547] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" host="localhost" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.882 [INFO][4547] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" host="localhost" Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.882 [INFO][4547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:22:27.905068 env[1312]: 2025-09-06 00:22:27.882 [INFO][4547] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" HandleID="k8s-pod-network.4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:27.905946 env[1312]: 2025-09-06 00:22:27.884 [INFO][4497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-xw9st" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0", GenerateName:"calico-apiserver-7f95dfcdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2997af2f-3793-4ebb-a625-6dd9b47d29e8", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f95dfcdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f95dfcdc5-xw9st", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid1bd8f2b29c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:27.905946 env[1312]: 2025-09-06 00:22:27.884 [INFO][4497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-xw9st" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:27.905946 env[1312]: 2025-09-06 00:22:27.884 [INFO][4497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1bd8f2b29c ContainerID="4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-xw9st" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:27.905946 env[1312]: 2025-09-06 00:22:27.891 [INFO][4497] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-xw9st" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:27.905946 env[1312]: 2025-09-06 00:22:27.891 [INFO][4497] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-xw9st" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0", GenerateName:"calico-apiserver-7f95dfcdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2997af2f-3793-4ebb-a625-6dd9b47d29e8", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f95dfcdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7", Pod:"calico-apiserver-7f95dfcdc5-xw9st", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid1bd8f2b29c", MAC:"66:55:d9:71:cf:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:27.905946 env[1312]: 2025-09-06 00:22:27.902 [INFO][4497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7" Namespace="calico-apiserver" Pod="calico-apiserver-7f95dfcdc5-xw9st" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:27.920484 env[1312]: time="2025-09-06T00:22:27.919974502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:27.920484 env[1312]: time="2025-09-06T00:22:27.920019666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:27.920484 env[1312]: time="2025-09-06T00:22:27.920029455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:27.920484 env[1312]: time="2025-09-06T00:22:27.920254827Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7 pid=4636 runtime=io.containerd.runc.v2 Sep 6 00:22:27.934000 audit[4647]: NETFILTER_CFG table=filter:120 family=2 entries=63 op=nft_register_chain pid=4647 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:22:27.934000 audit[4647]: SYSCALL arch=c000003e syscall=46 success=yes exit=30664 a0=3 a1=7ffd3e0673e0 a2=0 a3=7ffd3e0673cc items=0 ppid=3524 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:27.934000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:22:27.948652 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:22:27.974522 env[1312]: time="2025-09-06T00:22:27.974435309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f95dfcdc5-xw9st,Uid:2997af2f-3793-4ebb-a625-6dd9b47d29e8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7\"" Sep 6 00:22:28.109380 systemd-networkd[1075]: calidb01ba6d2d8: Gained IPv6LL Sep 6 00:22:28.331196 kubelet[2141]: E0906 00:22:28.331159 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:28.941412 systemd-networkd[1075]: cali275aee5089f: Gained IPv6LL Sep 6 00:22:29.069310 systemd-networkd[1075]: calid1bd8f2b29c: Gained IPv6LL Sep 6 00:22:29.333773 kubelet[2141]: E0906 00:22:29.333734 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:30.844798 env[1312]: time="2025-09-06T00:22:30.844717946Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:30.848862 env[1312]: time="2025-09-06T00:22:30.848819430Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:30.851184 env[1312]: time="2025-09-06T00:22:30.851157545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:30.852861 env[1312]: time="2025-09-06T00:22:30.852780080Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:30.853324 env[1312]: time="2025-09-06T00:22:30.853282912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference 
\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 6 00:22:30.854788 env[1312]: time="2025-09-06T00:22:30.854745165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 00:22:30.865959 env[1312]: time="2025-09-06T00:22:30.865893387Z" level=info msg="CreateContainer within sandbox \"9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 6 00:22:30.881684 env[1312]: time="2025-09-06T00:22:30.881619326Z" level=info msg="CreateContainer within sandbox \"9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a46d61b3c1bc936b24eb82086ca92296bcb6dd93ee9c403e758e5cd029e84f4a\"" Sep 6 00:22:30.882290 env[1312]: time="2025-09-06T00:22:30.882250831Z" level=info msg="StartContainer for \"a46d61b3c1bc936b24eb82086ca92296bcb6dd93ee9c403e758e5cd029e84f4a\"" Sep 6 00:22:31.452556 env[1312]: time="2025-09-06T00:22:31.452486205Z" level=info msg="StartContainer for \"a46d61b3c1bc936b24eb82086ca92296bcb6dd93ee9c403e758e5cd029e84f4a\" returns successfully" Sep 6 00:22:31.793553 systemd[1]: Started sshd@11-10.0.0.61:22-10.0.0.1:58496.service. Sep 6 00:22:31.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.61:22-10.0.0.1:58496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:31.794925 kernel: kauditd_printk_skb: 22 callbacks suppressed Sep 6 00:22:31.795101 kernel: audit: type=1130 audit(1757118151.792:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.61:22-10.0.0.1:58496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:31.842000 audit[4745]: USER_ACCT pid=4745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:31.843677 sshd[4745]: Accepted publickey for core from 10.0.0.1 port 58496 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:22:31.850166 kernel: audit: type=1101 audit(1757118151.842:428): pid=4745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:31.870154 kernel: audit: type=1103 audit(1757118151.849:429): pid=4745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:31.870201 kernel: audit: type=1006 audit(1757118151.849:430): pid=4745 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Sep 6 00:22:31.870224 kernel: audit: type=1300 audit(1757118151.849:430): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd65303750 a2=3 a3=0 items=0 ppid=1 pid=4745 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:31.870243 kernel: audit: type=1327 audit(1757118151.849:430): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:31.870265 kernel: audit: type=1105 audit(1757118151.866:431): pid=4745 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:31.849000 audit[4745]: CRED_ACQ pid=4745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:31.849000 audit[4745]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd65303750 a2=3 a3=0 items=0 ppid=1 pid=4745 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:31.849000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:31.866000 audit[4745]: USER_START pid=4745 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:31.850714 sshd[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:31.855188 systemd-logind[1293]: New session 12 of user core. Sep 6 00:22:31.855868 systemd[1]: Started session-12.scope. 
Sep 6 00:22:31.868000 audit[4748]: CRED_ACQ pid=4748 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:31.877451 kernel: audit: type=1103 audit(1757118151.868:432): pid=4748 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:32.098505 sshd[4745]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:32.098000 audit[4745]: USER_END pid=4745 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:32.100946 systemd[1]: sshd@11-10.0.0.61:22-10.0.0.1:58496.service: Deactivated successfully. Sep 6 00:22:32.102103 systemd-logind[1293]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:22:32.102200 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:22:32.103082 systemd-logind[1293]: Removed session 12. Sep 6 00:22:32.098000 audit[4745]: CRED_DISP pid=4745 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:32.107322 kernel: audit: type=1106 audit(1757118152.098:433): pid=4745 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:32.107427 kernel: audit: type=1104 audit(1757118152.098:434): pid=4745 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:32.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.61:22-10.0.0.1:58496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:32.509593 kubelet[2141]: I0906 00:22:32.509518 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-644bf98f67-gf7cj" podStartSLOduration=26.058013234 podStartE2EDuration="32.509494693s" podCreationTimestamp="2025-09-06 00:22:00 +0000 UTC" firstStartedPulling="2025-09-06 00:22:24.403059944 +0000 UTC m=+43.975618761" lastFinishedPulling="2025-09-06 00:22:30.854541403 +0000 UTC m=+50.427100220" observedRunningTime="2025-09-06 00:22:31.753998406 +0000 UTC m=+51.326557223" watchObservedRunningTime="2025-09-06 00:22:32.509494693 +0000 UTC m=+52.082053510" Sep 6 00:22:34.745975 env[1312]: time="2025-09-06T00:22:34.745904430Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:34.748465 env[1312]: time="2025-09-06T00:22:34.748413612Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:34.750411 env[1312]: time="2025-09-06T00:22:34.750360077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:34.752037 env[1312]: time="2025-09-06T00:22:34.751998398Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:34.752614 env[1312]: time="2025-09-06T00:22:34.752570141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 6 00:22:34.753598 env[1312]: time="2025-09-06T00:22:34.753559500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 6 00:22:34.754854 env[1312]: time="2025-09-06T00:22:34.754819710Z" level=info msg="CreateContainer within sandbox \"55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 00:22:34.768678 env[1312]: time="2025-09-06T00:22:34.768624314Z" level=info msg="CreateContainer within sandbox \"55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0e8ccbdc3f7713368cf2e162adc733d43cc9f3a4f077ccb8ea081bc46ad97e60\"" Sep 6 00:22:34.769230 env[1312]: time="2025-09-06T00:22:34.769182991Z" level=info msg="StartContainer for \"0e8ccbdc3f7713368cf2e162adc733d43cc9f3a4f077ccb8ea081bc46ad97e60\"" Sep 6 00:22:34.888624 env[1312]: time="2025-09-06T00:22:34.888556597Z" level=info msg="StartContainer for \"0e8ccbdc3f7713368cf2e162adc733d43cc9f3a4f077ccb8ea081bc46ad97e60\" returns successfully" Sep 6 00:22:35.860000 audit[4831]: NETFILTER_CFG table=filter:121 family=2 entries=14 op=nft_register_rule pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:35.860000 audit[4831]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff40e1afd0 a2=0 a3=7fff40e1afbc items=0 ppid=2268 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:35.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:35.864000 audit[4831]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:35.864000 audit[4831]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff40e1afd0 a2=0 a3=7fff40e1afbc items=0 ppid=2268 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:35.864000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:36.469415 kubelet[2141]: I0906 00:22:36.469367 2141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:22:37.102326 systemd[1]: Started sshd@12-10.0.0.61:22-10.0.0.1:58508.service. Sep 6 00:22:37.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.61:22-10.0.0.1:58508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:37.104251 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 00:22:37.104341 kernel: audit: type=1130 audit(1757118157.101:438): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.61:22-10.0.0.1:58508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:37.149000 audit[4833]: USER_ACCT pid=4833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.151234 sshd[4833]: Accepted publickey for core from 10.0.0.1 port 58508 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:22:37.155866 sshd[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:37.151000 audit[4833]: CRED_ACQ pid=4833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.162461 systemd-logind[1293]: New session 13 of user core. Sep 6 00:22:37.179676 kernel: audit: type=1101 audit(1757118157.149:439): pid=4833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.179724 kernel: audit: type=1103 audit(1757118157.151:440): pid=4833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.179745 kernel: audit: type=1006 audit(1757118157.151:441): pid=4833 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 6 00:22:37.163211 systemd[1]: Started session-13.scope. 
Sep 6 00:22:37.180819 kernel: audit: type=1300 audit(1757118157.151:441): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9697b450 a2=3 a3=0 items=0 ppid=1 pid=4833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:37.151000 audit[4833]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9697b450 a2=3 a3=0 items=0 ppid=1 pid=4833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:37.184628 kernel: audit: type=1327 audit(1757118157.151:441): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:37.151000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:37.185912 kernel: audit: type=1105 audit(1757118157.183:442): pid=4833 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.183000 audit[4833]: USER_START pid=4833 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.184000 audit[4836]: CRED_ACQ pid=4836 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.193355 kernel: audit: type=1103 audit(1757118157.184:443): pid=4836 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.580760 sshd[4833]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:37.580000 audit[4833]: USER_END pid=4833 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.583485 systemd[1]: sshd@12-10.0.0.61:22-10.0.0.1:58508.service: Deactivated successfully. Sep 6 00:22:37.584886 systemd-logind[1293]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:22:37.584925 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:22:37.586201 systemd-logind[1293]: Removed session 13. 
Sep 6 00:22:37.581000 audit[4833]: CRED_DISP pid=4833 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.601092 kernel: audit: type=1106 audit(1757118157.580:444): pid=4833 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.601194 kernel: audit: type=1104 audit(1757118157.581:445): pid=4833 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:37.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.61:22-10.0.0.1:58508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:39.726812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027654951.mount: Deactivated successfully. Sep 6 00:22:40.423511 env[1312]: time="2025-09-06T00:22:40.423441775Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:40.427242 env[1312]: time="2025-09-06T00:22:40.427201357Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:40.430062 env[1312]: time="2025-09-06T00:22:40.430009231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:40.431999 env[1312]: time="2025-09-06T00:22:40.431958105Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:40.432635 env[1312]: time="2025-09-06T00:22:40.432580009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 6 00:22:40.433778 env[1312]: time="2025-09-06T00:22:40.433746090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 6 00:22:40.434811 env[1312]: time="2025-09-06T00:22:40.434753756Z" level=info msg="CreateContainer within sandbox \"4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 6 00:22:40.450701 env[1312]: time="2025-09-06T00:22:40.450607030Z" level=info msg="CreateContainer within sandbox \"4102fc92b6b38af5d391a027c3bd1e4a5105a675547ae23f5b1baf090e9bbdfb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0ce92fb558d1be83b8676464336a08e4df8e51f476eb597cf256e558f313d913\"" Sep 6 00:22:40.452058 env[1312]: time="2025-09-06T00:22:40.451998923Z" level=info msg="StartContainer for 
\"0ce92fb558d1be83b8676464336a08e4df8e51f476eb597cf256e558f313d913\"" Sep 6 00:22:40.749192 env[1312]: time="2025-09-06T00:22:40.749088444Z" level=info msg="StartContainer for \"0ce92fb558d1be83b8676464336a08e4df8e51f476eb597cf256e558f313d913\" returns successfully" Sep 6 00:22:40.947161 env[1312]: time="2025-09-06T00:22:40.947104105Z" level=info msg="StopPodSandbox for \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\"" Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:40.986 [WARNING][4896] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0", GenerateName:"calico-kube-controllers-644bf98f67-", Namespace:"calico-system", SelfLink:"", UID:"c19353dd-4b41-4b6f-9132-f91a5ef28107", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"644bf98f67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3", Pod:"calico-kube-controllers-644bf98f67-gf7cj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1fc48d9b58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:40.986 [INFO][4896] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:40.986 [INFO][4896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" iface="eth0" netns="" Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:40.986 [INFO][4896] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:40.986 [INFO][4896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:41.020 [INFO][4907] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" HandleID="k8s-pod-network.2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:41.020 [INFO][4907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:41.020 [INFO][4907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:41.027 [WARNING][4907] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" HandleID="k8s-pod-network.2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:41.027 [INFO][4907] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" HandleID="k8s-pod-network.2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:41.028 [INFO][4907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:41.032675 env[1312]: 2025-09-06 00:22:41.030 [INFO][4896] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:41.032675 env[1312]: time="2025-09-06T00:22:41.032621155Z" level=info msg="TearDown network for sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\" successfully" Sep 6 00:22:41.032675 env[1312]: time="2025-09-06T00:22:41.032656001Z" level=info msg="StopPodSandbox for \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\" returns successfully" Sep 6 00:22:41.033778 env[1312]: time="2025-09-06T00:22:41.033726676Z" level=info msg="RemovePodSandbox for \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\"" Sep 6 00:22:41.033855 env[1312]: time="2025-09-06T00:22:41.033779178Z" level=info msg="Forcibly stopping sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\"" Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.069 [WARNING][4924] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0", GenerateName:"calico-kube-controllers-644bf98f67-", Namespace:"calico-system", SelfLink:"", UID:"c19353dd-4b41-4b6f-9132-f91a5ef28107", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"644bf98f67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bcafb4b7c11c43d5e3ae088ec0bb823df9f685069e11c04ef147195b65d4db3", Pod:"calico-kube-controllers-644bf98f67-gf7cj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1fc48d9b58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.069 [INFO][4924] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.069 [INFO][4924] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" iface="eth0" netns="" Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.070 [INFO][4924] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.070 [INFO][4924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.090 [INFO][4934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" HandleID="k8s-pod-network.2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.090 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.090 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.096 [WARNING][4934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" HandleID="k8s-pod-network.2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.096 [INFO][4934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" HandleID="k8s-pod-network.2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Workload="localhost-k8s-calico--kube--controllers--644bf98f67--gf7cj-eth0" Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.097 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:41.101317 env[1312]: 2025-09-06 00:22:41.099 [INFO][4924] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf" Sep 6 00:22:41.101833 env[1312]: time="2025-09-06T00:22:41.101339237Z" level=info msg="TearDown network for sandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\" successfully" Sep 6 00:22:41.198332 env[1312]: time="2025-09-06T00:22:41.198241187Z" level=info msg="RemovePodSandbox \"2c9ae0509b98ccc2a8b583cb2c1de7d05893571fdbccad81216708d4194d8abf\" returns successfully" Sep 6 00:22:41.199112 env[1312]: time="2025-09-06T00:22:41.199042345Z" level=info msg="StopPodSandbox for \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\"" Sep 6 00:22:41.263649 env[1312]: 2025-09-06 00:22:41.232 [WARNING][4953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qr48z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa5fe117-525e-4a2e-b423-0d13ab8c1f3f", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b", Pod:"csi-node-driver-qr48z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09999983a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:41.263649 env[1312]: 2025-09-06 00:22:41.232 [INFO][4953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:41.263649 
env[1312]: 2025-09-06 00:22:41.232 [INFO][4953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" iface="eth0" netns="" Sep 6 00:22:41.263649 env[1312]: 2025-09-06 00:22:41.232 [INFO][4953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:41.263649 env[1312]: 2025-09-06 00:22:41.232 [INFO][4953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:41.263649 env[1312]: 2025-09-06 00:22:41.253 [INFO][4961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" HandleID="k8s-pod-network.1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:41.263649 env[1312]: 2025-09-06 00:22:41.253 [INFO][4961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.263649 env[1312]: 2025-09-06 00:22:41.253 [INFO][4961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.263649 env[1312]: 2025-09-06 00:22:41.259 [WARNING][4961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" HandleID="k8s-pod-network.1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:41.263649 env[1312]: 2025-09-06 00:22:41.259 [INFO][4961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" HandleID="k8s-pod-network.1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:41.263649 env[1312]: 2025-09-06 00:22:41.260 [INFO][4961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:41.263649 env[1312]: 2025-09-06 00:22:41.262 [INFO][4953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:41.264160 env[1312]: time="2025-09-06T00:22:41.263684243Z" level=info msg="TearDown network for sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\" successfully" Sep 6 00:22:41.264160 env[1312]: time="2025-09-06T00:22:41.263729370Z" level=info msg="StopPodSandbox for \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\" returns successfully" Sep 6 00:22:41.264444 env[1312]: time="2025-09-06T00:22:41.264393004Z" level=info msg="RemovePodSandbox for \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\"" Sep 6 00:22:41.264665 env[1312]: time="2025-09-06T00:22:41.264438331Z" level=info msg="Forcibly stopping sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\"" Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.297 [WARNING][4978] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qr48z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa5fe117-525e-4a2e-b423-0d13ab8c1f3f", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b", Pod:"csi-node-driver-qr48z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09999983a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.297 [INFO][4978] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.297 [INFO][4978] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" iface="eth0" netns="" Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.297 [INFO][4978] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.297 [INFO][4978] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.322 [INFO][4986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" HandleID="k8s-pod-network.1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.322 [INFO][4986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.322 [INFO][4986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.329 [WARNING][4986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" HandleID="k8s-pod-network.1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.329 [INFO][4986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" HandleID="k8s-pod-network.1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Workload="localhost-k8s-csi--node--driver--qr48z-eth0" Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.330 [INFO][4986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:41.334749 env[1312]: 2025-09-06 00:22:41.332 [INFO][4978] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229" Sep 6 00:22:41.334749 env[1312]: time="2025-09-06T00:22:41.334682113Z" level=info msg="TearDown network for sandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\" successfully" Sep 6 00:22:41.340004 env[1312]: time="2025-09-06T00:22:41.339961419Z" level=info msg="RemovePodSandbox \"1d97729d0ee4a061f5bc2c29757758a471df5a31b026d48f513bef9af6f13229\" returns successfully" Sep 6 00:22:41.340738 env[1312]: time="2025-09-06T00:22:41.340700608Z" level=info msg="StopPodSandbox for \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\"" Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.375 [WARNING][5003] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--krllp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5d236a4c-f0ec-424c-baa8-2089b5f219ec", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4", Pod:"coredns-7c65d6cfc9-krllp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb01ba6d2d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.376 [INFO][5003] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.376 [INFO][5003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" iface="eth0" netns="" Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.376 [INFO][5003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.376 [INFO][5003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.398 [INFO][5012] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" HandleID="k8s-pod-network.ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.398 [INFO][5012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.399 [INFO][5012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.404 [WARNING][5012] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" HandleID="k8s-pod-network.ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.404 [INFO][5012] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" HandleID="k8s-pod-network.ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.406 [INFO][5012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:41.409721 env[1312]: 2025-09-06 00:22:41.408 [INFO][5003] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:41.410351 env[1312]: time="2025-09-06T00:22:41.409870548Z" level=info msg="TearDown network for sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\" successfully" Sep 6 00:22:41.410351 env[1312]: time="2025-09-06T00:22:41.409901748Z" level=info msg="StopPodSandbox for \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\" returns successfully" Sep 6 00:22:41.410421 env[1312]: time="2025-09-06T00:22:41.410379134Z" level=info msg="RemovePodSandbox for \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\"" Sep 6 00:22:41.410468 env[1312]: time="2025-09-06T00:22:41.410432547Z" level=info msg="Forcibly stopping sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\"" Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.439 [WARNING][5029] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--krllp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5d236a4c-f0ec-424c-baa8-2089b5f219ec", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7153d8fd4059077a73ad5166075bd3bb2eb4e71a4e1d034de4c302a7d5b249e4", Pod:"coredns-7c65d6cfc9-krllp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb01ba6d2d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.439 [INFO][5029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.439 [INFO][5029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" iface="eth0" netns="" Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.439 [INFO][5029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.439 [INFO][5029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.460 [INFO][5038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" HandleID="k8s-pod-network.ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.460 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.460 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.467 [WARNING][5038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" HandleID="k8s-pod-network.ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.468 [INFO][5038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" HandleID="k8s-pod-network.ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Workload="localhost-k8s-coredns--7c65d6cfc9--krllp-eth0" Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.469 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:41.472802 env[1312]: 2025-09-06 00:22:41.471 [INFO][5029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63" Sep 6 00:22:41.473513 env[1312]: time="2025-09-06T00:22:41.472819004Z" level=info msg="TearDown network for sandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\" successfully" Sep 6 00:22:41.476410 env[1312]: time="2025-09-06T00:22:41.476349573Z" level=info msg="RemovePodSandbox \"ee101ca900c3a2404c915fceb1fc9e7a74890261af5a511af1346c9bec4f7f63\" returns successfully" Sep 6 00:22:41.476825 env[1312]: time="2025-09-06T00:22:41.476794818Z" level=info msg="StopPodSandbox for \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\"" Sep 6 00:22:41.512660 kubelet[2141]: I0906 00:22:41.512585 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f95dfcdc5-lkdpx" podStartSLOduration=34.264658605 podStartE2EDuration="44.512563976s" podCreationTimestamp="2025-09-06 00:21:57 +0000 UTC" firstStartedPulling="2025-09-06 00:22:24.505512575 +0000 UTC m=+44.078071392" lastFinishedPulling="2025-09-06 00:22:34.753417946 +0000 UTC m=+54.325976763" observedRunningTime="2025-09-06 00:22:35.627250904 +0000 UTC m=+55.199809721" watchObservedRunningTime="2025-09-06 00:22:41.512563976 +0000 UTC m=+61.085122793" Sep 6 00:22:41.513264 kubelet[2141]: I0906 00:22:41.512687 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-67d6f7c79c-bfcfv" podStartSLOduration=1.963914571 podStartE2EDuration="19.512682264s" podCreationTimestamp="2025-09-06 00:22:22 +0000 UTC" firstStartedPulling="2025-09-06 00:22:22.884780036 +0000 UTC m=+42.457338843" lastFinishedPulling="2025-09-06 00:22:40.433547699 +0000 UTC m=+60.006106536" observedRunningTime="2025-09-06 00:22:41.512450218 +0000 UTC m=+61.085009035" watchObservedRunningTime="2025-09-06 00:22:41.512682264 +0000 UTC m=+61.085241081" Sep 6 00:22:41.529000 audit[5080]: NETFILTER_CFG table=filter:123 family=2 entries=13 op=nft_register_rule pid=5080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:41.529000 audit[5080]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe91bb05a0 a2=0 a3=7ffe91bb058c items=0 ppid=2268 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:41.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:41.534000 audit[5080]: NETFILTER_CFG table=nat:124 family=2 entries=27 op=nft_register_chain pid=5080 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:41.534000 audit[5080]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe91bb05a0 a2=0 a3=7ffe91bb058c items=0 ppid=2268 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:41.534000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.557 [WARNING][5066] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" WorkloadEndpoint="localhost-k8s-whisker--7969cf68c8--xfwnf-eth0" Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.557 [INFO][5066] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.557 [INFO][5066] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" iface="eth0" netns="" Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.558 [INFO][5066] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.558 [INFO][5066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.578 [INFO][5087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" HandleID="k8s-pod-network.bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Workload="localhost-k8s-whisker--7969cf68c8--xfwnf-eth0" Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.578 [INFO][5087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.578 [INFO][5087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.586 [WARNING][5087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" HandleID="k8s-pod-network.bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Workload="localhost-k8s-whisker--7969cf68c8--xfwnf-eth0" Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.586 [INFO][5087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" HandleID="k8s-pod-network.bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Workload="localhost-k8s-whisker--7969cf68c8--xfwnf-eth0" Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.588 [INFO][5087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:41.591688 env[1312]: 2025-09-06 00:22:41.589 [INFO][5066] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:41.591688 env[1312]: time="2025-09-06T00:22:41.591602603Z" level=info msg="TearDown network for sandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\" successfully" Sep 6 00:22:41.591688 env[1312]: time="2025-09-06T00:22:41.591644794Z" level=info msg="StopPodSandbox for \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\" returns successfully" Sep 6 00:22:41.592362 env[1312]: time="2025-09-06T00:22:41.592320230Z" level=info msg="RemovePodSandbox for \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\"" Sep 6 00:22:41.592409 env[1312]: time="2025-09-06T00:22:41.592368553Z" level=info msg="Forcibly stopping sandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\"" Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.630 [WARNING][5105] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" WorkloadEndpoint="localhost-k8s-whisker--7969cf68c8--xfwnf-eth0" Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.631 [INFO][5105] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.631 [INFO][5105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" iface="eth0" netns="" Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.631 [INFO][5105] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.631 [INFO][5105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.651 [INFO][5114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" HandleID="k8s-pod-network.bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Workload="localhost-k8s-whisker--7969cf68c8--xfwnf-eth0" Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.651 [INFO][5114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.651 [INFO][5114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.658 [WARNING][5114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" HandleID="k8s-pod-network.bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Workload="localhost-k8s-whisker--7969cf68c8--xfwnf-eth0" Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.658 [INFO][5114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" HandleID="k8s-pod-network.bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Workload="localhost-k8s-whisker--7969cf68c8--xfwnf-eth0" Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.660 [INFO][5114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:22:41.664772 env[1312]: 2025-09-06 00:22:41.663 [INFO][5105] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a" Sep 6 00:22:41.665258 env[1312]: time="2025-09-06T00:22:41.664812138Z" level=info msg="TearDown network for sandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\" successfully" Sep 6 00:22:41.668740 env[1312]: time="2025-09-06T00:22:41.668693139Z" level=info msg="RemovePodSandbox \"bf4af62506a7d75dedca6b956ef2b7759236912c61a613729157451949d9560a\" returns successfully" Sep 6 00:22:41.669419 env[1312]: time="2025-09-06T00:22:41.669378685Z" level=info msg="StopPodSandbox for \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\"" Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.703 [WARNING][5132] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--lvkqq-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a6cca75d-1a19-48b7-bf46-1e5cf7e72c19", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5", Pod:"goldmane-7988f88666-lvkqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali275aee5089f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.704 [INFO][5132] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.704 [INFO][5132] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" iface="eth0" netns="" Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.704 [INFO][5132] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.704 [INFO][5132] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.724 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" HandleID="k8s-pod-network.48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.725 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.725 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.730 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" HandleID="k8s-pod-network.48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.730 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" HandleID="k8s-pod-network.48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.731 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:41.735438 env[1312]: 2025-09-06 00:22:41.733 [INFO][5132] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:41.735922 env[1312]: time="2025-09-06T00:22:41.735483431Z" level=info msg="TearDown network for sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\" successfully" Sep 6 00:22:41.735922 env[1312]: time="2025-09-06T00:22:41.735523679Z" level=info msg="StopPodSandbox for \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\" returns successfully" Sep 6 00:22:41.736123 env[1312]: time="2025-09-06T00:22:41.736079776Z" level=info msg="RemovePodSandbox for \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\"" Sep 6 00:22:41.736202 env[1312]: time="2025-09-06T00:22:41.736143589Z" level=info msg="Forcibly stopping sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\"" Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.769 [WARNING][5157] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--lvkqq-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a6cca75d-1a19-48b7-bf46-1e5cf7e72c19", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5", Pod:"goldmane-7988f88666-lvkqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali275aee5089f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.769 [INFO][5157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.770 [INFO][5157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" iface="eth0" netns="" Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.770 [INFO][5157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.770 [INFO][5157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.790 [INFO][5165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" HandleID="k8s-pod-network.48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.790 [INFO][5165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.790 [INFO][5165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.796 [WARNING][5165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" HandleID="k8s-pod-network.48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.796 [INFO][5165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" HandleID="k8s-pod-network.48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Workload="localhost-k8s-goldmane--7988f88666--lvkqq-eth0" Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.797 [INFO][5165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:41.802442 env[1312]: 2025-09-06 00:22:41.799 [INFO][5157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e" Sep 6 00:22:41.802997 env[1312]: time="2025-09-06T00:22:41.802474008Z" level=info msg="TearDown network for sandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\" successfully" Sep 6 00:22:41.807331 env[1312]: time="2025-09-06T00:22:41.807083327Z" level=info msg="RemovePodSandbox \"48d97eb98c931aeff419e578359b19e89653df574dca767d4ca9bd8c9c95bf4e\" returns successfully" Sep 6 00:22:41.807926 env[1312]: time="2025-09-06T00:22:41.807821173Z" level=info msg="StopPodSandbox for \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\"" Sep 6 00:22:41.874601 env[1312]: 2025-09-06 00:22:41.842 [WARNING][5182] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0", GenerateName:"calico-apiserver-7f95dfcdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d47e55db-f531-4fdd-892c-a105be81339f", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f95dfcdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab", Pod:"calico-apiserver-7f95dfcdc5-lkdpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89f10fbc7a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:41.874601 env[1312]: 2025-09-06 00:22:41.842 [INFO][5182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:41.874601 env[1312]: 2025-09-06 
00:22:41.842 [INFO][5182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" iface="eth0" netns="" Sep 6 00:22:41.874601 env[1312]: 2025-09-06 00:22:41.842 [INFO][5182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:41.874601 env[1312]: 2025-09-06 00:22:41.842 [INFO][5182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:41.874601 env[1312]: 2025-09-06 00:22:41.861 [INFO][5190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" HandleID="k8s-pod-network.8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:41.874601 env[1312]: 2025-09-06 00:22:41.861 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.874601 env[1312]: 2025-09-06 00:22:41.861 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.874601 env[1312]: 2025-09-06 00:22:41.868 [WARNING][5190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" HandleID="k8s-pod-network.8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:41.874601 env[1312]: 2025-09-06 00:22:41.868 [INFO][5190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" HandleID="k8s-pod-network.8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:41.874601 env[1312]: 2025-09-06 00:22:41.870 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:41.874601 env[1312]: 2025-09-06 00:22:41.872 [INFO][5182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:41.874601 env[1312]: time="2025-09-06T00:22:41.874538005Z" level=info msg="TearDown network for sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\" successfully" Sep 6 00:22:41.874601 env[1312]: time="2025-09-06T00:22:41.874572942Z" level=info msg="StopPodSandbox for \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\" returns successfully" Sep 6 00:22:41.875408 env[1312]: time="2025-09-06T00:22:41.875219163Z" level=info msg="RemovePodSandbox for \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\"" Sep 6 00:22:41.875408 env[1312]: time="2025-09-06T00:22:41.875256444Z" level=info msg="Forcibly stopping sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\"" Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.909 [WARNING][5211] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0", GenerateName:"calico-apiserver-7f95dfcdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d47e55db-f531-4fdd-892c-a105be81339f", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f95dfcdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55c7e614b93e9b77f341bfd45eabfe21843c33ea82c6ce0724fa2c13e0be3bab", Pod:"calico-apiserver-7f95dfcdc5-lkdpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89f10fbc7a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.909 [INFO][5211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.909 [INFO][5211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" iface="eth0" netns="" Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.909 [INFO][5211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.909 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.967 [INFO][5220] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" HandleID="k8s-pod-network.8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.968 [INFO][5220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.968 [INFO][5220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.973 [WARNING][5220] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" HandleID="k8s-pod-network.8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.973 [INFO][5220] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" HandleID="k8s-pod-network.8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--lkdpx-eth0" Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.975 [INFO][5220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:41.979985 env[1312]: 2025-09-06 00:22:41.978 [INFO][5211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a" Sep 6 00:22:41.980491 env[1312]: time="2025-09-06T00:22:41.980022882Z" level=info msg="TearDown network for sandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\" successfully" Sep 6 00:22:41.983964 env[1312]: time="2025-09-06T00:22:41.983925264Z" level=info msg="RemovePodSandbox \"8320628b621dea8b3a5a6b5e2de01d14516f806f55bd3fb4f1c96f34339e6a6a\" returns successfully" Sep 6 00:22:41.984489 env[1312]: time="2025-09-06T00:22:41.984444320Z" level=info msg="StopPodSandbox for \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\"" Sep 6 00:22:42.055159 env[1312]: 2025-09-06 00:22:42.017 [WARNING][5236] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0", GenerateName:"calico-apiserver-7f95dfcdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2997af2f-3793-4ebb-a625-6dd9b47d29e8", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f95dfcdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7", Pod:"calico-apiserver-7f95dfcdc5-xw9st", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid1bd8f2b29c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:42.055159 env[1312]: 2025-09-06 00:22:42.017 [INFO][5236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:42.055159 
env[1312]: 2025-09-06 00:22:42.017 [INFO][5236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" iface="eth0" netns="" Sep 6 00:22:42.055159 env[1312]: 2025-09-06 00:22:42.017 [INFO][5236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:42.055159 env[1312]: 2025-09-06 00:22:42.017 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:42.055159 env[1312]: 2025-09-06 00:22:42.043 [INFO][5245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" HandleID="k8s-pod-network.7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:42.055159 env[1312]: 2025-09-06 00:22:42.043 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:42.055159 env[1312]: 2025-09-06 00:22:42.043 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:42.055159 env[1312]: 2025-09-06 00:22:42.049 [WARNING][5245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" HandleID="k8s-pod-network.7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:42.055159 env[1312]: 2025-09-06 00:22:42.049 [INFO][5245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" HandleID="k8s-pod-network.7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:42.055159 env[1312]: 2025-09-06 00:22:42.051 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:42.055159 env[1312]: 2025-09-06 00:22:42.053 [INFO][5236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:42.056428 env[1312]: time="2025-09-06T00:22:42.055181806Z" level=info msg="TearDown network for sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\" successfully" Sep 6 00:22:42.056428 env[1312]: time="2025-09-06T00:22:42.055215350Z" level=info msg="StopPodSandbox for \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\" returns successfully" Sep 6 00:22:42.056428 env[1312]: time="2025-09-06T00:22:42.055685212Z" level=info msg="RemovePodSandbox for \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\"" Sep 6 00:22:42.056428 env[1312]: time="2025-09-06T00:22:42.055716281Z" level=info msg="Forcibly stopping sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\"" Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.101 [WARNING][5263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0", GenerateName:"calico-apiserver-7f95dfcdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2997af2f-3793-4ebb-a625-6dd9b47d29e8", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f95dfcdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7", Pod:"calico-apiserver-7f95dfcdc5-xw9st", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid1bd8f2b29c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.102 [INFO][5263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.102 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" iface="eth0" netns="" Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.102 [INFO][5263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.102 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.132 [INFO][5272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" HandleID="k8s-pod-network.7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.132 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.132 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.138 [WARNING][5272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" HandleID="k8s-pod-network.7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.138 [INFO][5272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" HandleID="k8s-pod-network.7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Workload="localhost-k8s-calico--apiserver--7f95dfcdc5--xw9st-eth0" Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.140 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:42.143917 env[1312]: 2025-09-06 00:22:42.141 [INFO][5263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8" Sep 6 00:22:42.143917 env[1312]: time="2025-09-06T00:22:42.143846386Z" level=info msg="TearDown network for sandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\" successfully" Sep 6 00:22:42.148264 env[1312]: time="2025-09-06T00:22:42.148181514Z" level=info msg="RemovePodSandbox \"7108f8374eaf5ee35976fa866fed5953ec5a556253d323892c4923ef9198afd8\" returns successfully" Sep 6 00:22:42.148887 env[1312]: time="2025-09-06T00:22:42.148838834Z" level=info msg="StopPodSandbox for \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\"" Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.185 [WARNING][5289] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"23a905d3-2b9b-4e8e-907e-242236a689bc", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463", Pod:"coredns-7c65d6cfc9-dqfgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali538c9d24ba2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.185 [INFO][5289] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.185 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" iface="eth0" netns="" Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.185 [INFO][5289] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.185 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.205 [INFO][5299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" HandleID="k8s-pod-network.fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.205 [INFO][5299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.205 [INFO][5299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.211 [WARNING][5299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" HandleID="k8s-pod-network.fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.211 [INFO][5299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" HandleID="k8s-pod-network.fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.213 [INFO][5299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:42.218332 env[1312]: 2025-09-06 00:22:42.216 [INFO][5289] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:42.218845 env[1312]: time="2025-09-06T00:22:42.218370917Z" level=info msg="TearDown network for sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\" successfully" Sep 6 00:22:42.218845 env[1312]: time="2025-09-06T00:22:42.218413649Z" level=info msg="StopPodSandbox for \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\" returns successfully" Sep 6 00:22:42.219058 env[1312]: time="2025-09-06T00:22:42.219002138Z" level=info msg="RemovePodSandbox for \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\"" Sep 6 00:22:42.219108 env[1312]: time="2025-09-06T00:22:42.219069357Z" level=info msg="Forcibly stopping sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\"" Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.256 [WARNING][5317] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"23a905d3-2b9b-4e8e-907e-242236a689bc", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a8b0a967c1bdaf1a0d5739d2da352f5a8f7585c8ea329de247be830943ea463", Pod:"coredns-7c65d6cfc9-dqfgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali538c9d24ba2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.257 [INFO][5317] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.257 [INFO][5317] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" iface="eth0" netns="" Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.257 [INFO][5317] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.257 [INFO][5317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.281 [INFO][5325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" HandleID="k8s-pod-network.fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.281 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.281 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.305 [WARNING][5325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" HandleID="k8s-pod-network.fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.305 [INFO][5325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" HandleID="k8s-pod-network.fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Workload="localhost-k8s-coredns--7c65d6cfc9--dqfgt-eth0" Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.326 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:22:42.329330 env[1312]: 2025-09-06 00:22:42.327 [INFO][5317] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c" Sep 6 00:22:42.329837 env[1312]: time="2025-09-06T00:22:42.329362815Z" level=info msg="TearDown network for sandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\" successfully" Sep 6 00:22:42.468743 env[1312]: time="2025-09-06T00:22:42.468681307Z" level=info msg="RemovePodSandbox \"fa177c0d1caf1c6b25ecf007348dc91a9a049126e0c5d82044054d756189fa2c\" returns successfully" Sep 6 00:22:42.511262 env[1312]: time="2025-09-06T00:22:42.511231413Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:42.584321 systemd[1]: Started sshd@13-10.0.0.61:22-10.0.0.1:38292.service. Sep 6 00:22:42.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.61:22-10.0.0.1:38292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.585556 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 00:22:42.585605 kernel: audit: type=1130 audit(1757118162.583:449): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.61:22-10.0.0.1:38292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:42.604619 env[1312]: time="2025-09-06T00:22:42.604566906Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:42.625111 env[1312]: time="2025-09-06T00:22:42.625070593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:42.628000 audit[5333]: USER_ACCT pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.629489 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 38292 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:22:42.640148 sshd[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:42.647313 env[1312]: time="2025-09-06T00:22:42.647284623Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:42.648202 env[1312]: time="2025-09-06T00:22:42.648173709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 6 00:22:42.630000 audit[5333]: CRED_ACQ pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.650971 systemd-logind[1293]: New session 14 of user core. Sep 6 00:22:42.651961 systemd[1]: Started session-14.scope. 
Sep 6 00:22:42.653308 env[1312]: time="2025-09-06T00:22:42.653271459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 6 00:22:42.653491 kernel: audit: type=1101 audit(1757118162.628:450): pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.653543 kernel: audit: type=1103 audit(1757118162.630:451): pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.654623 env[1312]: time="2025-09-06T00:22:42.654580640Z" level=info msg="CreateContainer within sandbox \"2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 6 00:22:42.630000 audit[5333]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd29658da0 a2=3 a3=0 items=0 ppid=1 pid=5333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:42.666521 kernel: audit: type=1006 audit(1757118162.630:452): pid=5333 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Sep 6 00:22:42.666617 kernel: audit: type=1300 audit(1757118162.630:452): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd29658da0 a2=3 a3=0 items=0 ppid=1 pid=5333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:42.666637 kernel: audit: type=1327 audit(1757118162.630:452): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:42.630000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:42.658000 audit[5333]: USER_START pid=5333 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.674171 env[1312]: time="2025-09-06T00:22:42.674096383Z" level=info msg="CreateContainer within sandbox \"2428315845dece4d7c207bbd3408470c48d9b80140a03b924e9f9f70ef84177b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7dbac5ba3bcde45dadff952c9a9ab7ee243ec3f30ac68445c2ea00870462933d\"" Sep 6 00:22:42.674301 kernel: audit: type=1105 audit(1757118162.658:453): pid=5333 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.674357 kernel: audit: type=1103 audit(1757118162.659:454): pid=5336 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.659000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.676049 env[1312]: time="2025-09-06T00:22:42.675998803Z" level=info msg="StartContainer for \"7dbac5ba3bcde45dadff952c9a9ab7ee243ec3f30ac68445c2ea00870462933d\"" Sep 6 00:22:42.728914 env[1312]: time="2025-09-06T00:22:42.728728580Z" level=info msg="StartContainer for \"7dbac5ba3bcde45dadff952c9a9ab7ee243ec3f30ac68445c2ea00870462933d\" returns successfully" Sep 6 00:22:42.893975 sshd[5333]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:42.897229 systemd[1]: Started sshd@14-10.0.0.61:22-10.0.0.1:38296.service. Sep 6 00:22:42.897783 systemd[1]: sshd@13-10.0.0.61:22-10.0.0.1:38292.service: Deactivated successfully. Sep 6 00:22:42.894000 audit[5333]: USER_END pid=5333 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.900411 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:22:42.900905 systemd-logind[1293]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:22:42.901994 systemd-logind[1293]: Removed session 14. Sep 6 00:22:42.894000 audit[5333]: CRED_DISP pid=5333 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.906224 kernel: audit: type=1106 audit(1757118162.894:455): pid=5333 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.906654 kernel: audit: type=1104 audit(1757118162.894:456): pid=5333 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.61:22-10.0.0.1:38296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:42.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.61:22-10.0.0.1:38292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:42.945000 audit[5383]: USER_ACCT pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.947325 sshd[5383]: Accepted publickey for core from 10.0.0.1 port 38296 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:22:42.946000 audit[5383]: CRED_ACQ pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.947000 audit[5383]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3b8a41f0 a2=3 a3=0 items=0 ppid=1 pid=5383 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:42.947000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:42.948576 sshd[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:42.952655 systemd-logind[1293]: New session 15 of user core. Sep 6 00:22:42.953549 systemd[1]: Started session-15.scope. Sep 6 00:22:42.957000 audit[5383]: USER_START pid=5383 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:42.958000 audit[5388]: CRED_ACQ pid=5388 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:43.056156 kubelet[2141]: I0906 00:22:43.056007 2141 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 6 00:22:43.056156 kubelet[2141]: I0906 00:22:43.056048 2141 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 6 00:22:43.124440 sshd[5383]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:43.125000 audit[5383]: USER_END pid=5383 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:43.125000 audit[5383]: CRED_DISP pid=5383 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:43.133293 systemd[1]: Started sshd@15-10.0.0.61:22-10.0.0.1:38308.service. Sep 6 00:22:43.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.61:22-10.0.0.1:38308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:43.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.61:22-10.0.0.1:38296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:43.134961 systemd[1]: sshd@14-10.0.0.61:22-10.0.0.1:38296.service: Deactivated successfully. Sep 6 00:22:43.136880 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:22:43.138630 systemd-logind[1293]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:22:43.140422 systemd-logind[1293]: Removed session 15. Sep 6 00:22:43.183000 audit[5402]: USER_ACCT pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:43.184838 sshd[5402]: Accepted publickey for core from 10.0.0.1 port 38308 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:22:43.184000 audit[5402]: CRED_ACQ pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:43.184000 audit[5402]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd066444a0 a2=3 a3=0 items=0 ppid=1 pid=5402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:43.184000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:43.185990 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:43.189742 systemd-logind[1293]: New session 16 of user core. Sep 6 00:22:43.190516 systemd[1]: Started session-16.scope. Sep 6 00:22:43.193000 audit[5402]: USER_START pid=5402 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:43.194000 audit[5406]: CRED_ACQ pid=5406 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:43.335998 sshd[5402]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:43.336000 audit[5402]: USER_END pid=5402 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:43.336000 audit[5402]: CRED_DISP pid=5402 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:43.338811 systemd[1]: sshd@15-10.0.0.61:22-10.0.0.1:38308.service: Deactivated successfully. 
Sep 6 00:22:43.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.61:22-10.0.0.1:38308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:43.340252 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:22:43.340839 systemd-logind[1293]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:22:43.342164 systemd-logind[1293]: Removed session 16. Sep 6 00:22:43.520466 kubelet[2141]: I0906 00:22:43.520394 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qr48z" podStartSLOduration=24.439629948 podStartE2EDuration="43.520371529s" podCreationTimestamp="2025-09-06 00:22:00 +0000 UTC" firstStartedPulling="2025-09-06 00:22:23.56947114 +0000 UTC m=+43.142029967" lastFinishedPulling="2025-09-06 00:22:42.650212731 +0000 UTC m=+62.222771548" observedRunningTime="2025-09-06 00:22:43.518872305 +0000 UTC m=+63.091431152" watchObservedRunningTime="2025-09-06 00:22:43.520371529 +0000 UTC m=+63.092930346" Sep 6 00:22:45.191423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191056866.mount: Deactivated successfully. Sep 6 00:22:46.163890 env[1312]: time="2025-09-06T00:22:46.163815020Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.165795 env[1312]: time="2025-09-06T00:22:46.165758189Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.167699 env[1312]: time="2025-09-06T00:22:46.167638388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.169381 env[1312]: time="2025-09-06T00:22:46.169353731Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.169949 env[1312]: time="2025-09-06T00:22:46.169913383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 6 00:22:46.171098 env[1312]: time="2025-09-06T00:22:46.171066850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 00:22:46.172274 env[1312]: time="2025-09-06T00:22:46.172249645Z" level=info msg="CreateContainer within sandbox \"52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 6 00:22:46.183176 env[1312]: time="2025-09-06T00:22:46.183098632Z" level=info msg="CreateContainer within sandbox \"52d3c996d776364582505e7e98415cbe22245f2c9bbd2eebb2958c17775f4bf5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"25e426a3a593816ed81e8be08d4d2cf0ea3c8978d5b386773de21bc1878ff600\"" Sep 6 00:22:46.184002 env[1312]: time="2025-09-06T00:22:46.183982984Z" level=info msg="StartContainer for \"25e426a3a593816ed81e8be08d4d2cf0ea3c8978d5b386773de21bc1878ff600\"" Sep 6 00:22:46.259277 env[1312]: time="2025-09-06T00:22:46.259232555Z" level=info msg="StartContainer 
for \"25e426a3a593816ed81e8be08d4d2cf0ea3c8978d5b386773de21bc1878ff600\" returns successfully" Sep 6 00:22:46.535000 audit[5453]: NETFILTER_CFG table=filter:125 family=2 entries=12 op=nft_register_rule pid=5453 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:46.535000 audit[5453]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff37d46b90 a2=0 a3=7fff37d46b7c items=0 ppid=2268 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:46.535000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:46.541175 env[1312]: time="2025-09-06T00:22:46.541077532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.541000 audit[5453]: NETFILTER_CFG table=nat:126 family=2 entries=22 op=nft_register_rule pid=5453 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:46.541000 audit[5453]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff37d46b90 a2=0 a3=7fff37d46b7c items=0 ppid=2268 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:46.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:46.543259 env[1312]: time="2025-09-06T00:22:46.543208151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.544913 env[1312]: time="2025-09-06T00:22:46.544860724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.546185 env[1312]: time="2025-09-06T00:22:46.546158659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.546589 env[1312]: time="2025-09-06T00:22:46.546563454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 6 00:22:46.548822 env[1312]: time="2025-09-06T00:22:46.548771620Z" level=info msg="CreateContainer within sandbox \"4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 00:22:46.558823 env[1312]: time="2025-09-06T00:22:46.558762684Z" level=info msg="CreateContainer within sandbox \"4668d37b800fc57f3a2af4c4bde1f54463d102f4adb31e1d27ceb2d5b7b621a7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eda86c60a86ca5c8209759f7603cb1021d1b5dc3ddadb2673a7be066ac81ba2b\"" Sep 6 00:22:46.559871 env[1312]: time="2025-09-06T00:22:46.559311055Z" level=info msg="StartContainer for 
\"eda86c60a86ca5c8209759f7603cb1021d1b5dc3ddadb2673a7be066ac81ba2b\"" Sep 6 00:22:46.619407 env[1312]: time="2025-09-06T00:22:46.619351692Z" level=info msg="StartContainer for \"eda86c60a86ca5c8209759f7603cb1021d1b5dc3ddadb2673a7be066ac81ba2b\" returns successfully" Sep 6 00:22:47.534628 kubelet[2141]: I0906 00:22:47.534122 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-lvkqq" podStartSLOduration=30.087050415 podStartE2EDuration="48.534093567s" podCreationTimestamp="2025-09-06 00:21:59 +0000 UTC" firstStartedPulling="2025-09-06 00:22:27.723827033 +0000 UTC m=+47.296385850" lastFinishedPulling="2025-09-06 00:22:46.170870185 +0000 UTC m=+65.743429002" observedRunningTime="2025-09-06 00:22:46.5260099 +0000 UTC m=+66.098568717" watchObservedRunningTime="2025-09-06 00:22:47.534093567 +0000 UTC m=+67.106652384" Sep 6 00:22:47.551000 audit[5515]: NETFILTER_CFG table=filter:127 family=2 entries=12 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:47.551000 audit[5515]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd256fab60 a2=0 a3=7ffd256fab4c items=0 ppid=2268 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:47.551000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:47.556000 audit[5515]: NETFILTER_CFG table=nat:128 family=2 entries=22 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:47.556000 audit[5515]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd256fab60 a2=0 a3=7ffd256fab4c items=0 ppid=2268 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:47.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:48.213638 kubelet[2141]: I0906 00:22:48.213542 2141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f95dfcdc5-xw9st" podStartSLOduration=32.641548816 podStartE2EDuration="51.213519449s" podCreationTimestamp="2025-09-06 00:21:57 +0000 UTC" firstStartedPulling="2025-09-06 00:22:27.975483815 +0000 UTC m=+47.548042633" lastFinishedPulling="2025-09-06 00:22:46.547454449 +0000 UTC m=+66.120013266" observedRunningTime="2025-09-06 00:22:47.534599665 +0000 UTC m=+67.107158492" watchObservedRunningTime="2025-09-06 00:22:48.213519449 +0000 UTC m=+67.786078266" Sep 6 00:22:48.223000 audit[5526]: NETFILTER_CFG table=filter:129 family=2 entries=11 op=nft_register_rule pid=5526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:48.225640 kernel: kauditd_printk_skb: 35 callbacks suppressed Sep 6 00:22:48.225808 kernel: audit: type=1325 audit(1757118168.223:480): table=filter:129 family=2 entries=11 op=nft_register_rule pid=5526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:48.223000 audit[5526]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd5f0a7380 a2=0 a3=7ffd5f0a736c items=0 ppid=2268 pid=5526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:48.234122 kernel: audit: type=1300 audit(1757118168.223:480): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd5f0a7380 a2=0 a3=7ffd5f0a736c items=0 ppid=2268 pid=5526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:48.234211 kernel: audit: type=1327 audit(1757118168.223:480): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:48.223000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:48.235000 audit[5526]: NETFILTER_CFG table=nat:130 family=2 entries=29 op=nft_register_chain pid=5526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:48.238559 kernel: audit: type=1325 audit(1757118168.235:481): table=nat:130 family=2 entries=29 op=nft_register_chain pid=5526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:48.238606 kernel: audit: type=1300 audit(1757118168.235:481): arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffd5f0a7380 a2=0 a3=7ffd5f0a736c items=0 ppid=2268 pid=5526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:48.235000 audit[5526]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffd5f0a7380 a2=0 a3=7ffd5f0a736c items=0 ppid=2268 pid=5526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:48.242958 kernel: audit: type=1327 audit(1757118168.235:481): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:48.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:48.338895 systemd[1]: Started sshd@16-10.0.0.61:22-10.0.0.1:38316.service. Sep 6 00:22:48.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.61:22-10.0.0.1:38316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:48.344174 kernel: audit: type=1130 audit(1757118168.338:482): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.61:22-10.0.0.1:38316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:48.384000 audit[5527]: USER_ACCT pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:48.384737 sshd[5527]: Accepted publickey for core from 10.0.0.1 port 38316 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:22:48.386837 sshd[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:48.386000 audit[5527]: CRED_ACQ pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:48.392079 kernel: audit: type=1101 audit(1757118168.384:483): pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:48.392180 kernel: audit: type=1103 audit(1757118168.386:484): pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:48.392208 kernel: audit: type=1006 audit(1757118168.386:485): pid=5527 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Sep 6 00:22:48.391753 systemd-logind[1293]: New session 17 of user core. Sep 6 00:22:48.393072 systemd[1]: Started session-17.scope. Sep 6 00:22:48.386000 audit[5527]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd75de8de0 a2=3 a3=0 items=0 ppid=1 pid=5527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:48.386000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:48.397000 audit[5527]: USER_START pid=5527 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:48.398000 audit[5530]: CRED_ACQ pid=5530 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:48.543060 systemd[1]: run-containerd-runc-k8s.io-25e426a3a593816ed81e8be08d4d2cf0ea3c8978d5b386773de21bc1878ff600-runc.e32RHA.mount: Deactivated successfully. 
Sep 6 00:22:48.750583 sshd[5527]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:48.751000 audit[5527]: USER_END pid=5527 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:48.751000 audit[5527]: CRED_DISP pid=5527 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:48.753282 systemd[1]: sshd@16-10.0.0.61:22-10.0.0.1:38316.service: Deactivated successfully. Sep 6 00:22:48.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.61:22-10.0.0.1:38316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:48.754469 systemd-logind[1293]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:22:48.754538 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:22:48.755320 systemd-logind[1293]: Removed session 17. Sep 6 00:22:53.753668 systemd[1]: Started sshd@17-10.0.0.61:22-10.0.0.1:58748.service. Sep 6 00:22:53.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.61:22-10.0.0.1:58748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:53.754733 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 00:22:53.754823 kernel: audit: type=1130 audit(1757118173.752:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.61:22-10.0.0.1:58748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:53.800000 audit[5589]: USER_ACCT pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.802372 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 58748 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:22:53.804000 audit[5589]: CRED_ACQ pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.806221 kernel: audit: type=1101 audit(1757118173.800:492): pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.806258 kernel: audit: type=1103 audit(1757118173.804:493): pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.806597 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:53.811233 systemd-logind[1293]: New session 18 of user core. 
Sep 6 00:22:53.812044 kernel: audit: type=1006 audit(1757118173.804:494): pid=5589 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Sep 6 00:22:53.812093 kernel: audit: type=1300 audit(1757118173.804:494): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc16b3b460 a2=3 a3=0 items=0 ppid=1 pid=5589 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:53.804000 audit[5589]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc16b3b460 a2=3 a3=0 items=0 ppid=1 pid=5589 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:53.812519 systemd[1]: Started session-18.scope. Sep 6 00:22:53.817102 kernel: audit: type=1327 audit(1757118173.804:494): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:53.804000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:53.816000 audit[5589]: USER_START pid=5589 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.818000 audit[5592]: CRED_ACQ pid=5592 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.825489 kernel: audit: type=1105 audit(1757118173.816:495): pid=5589 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.825556 kernel: audit: type=1103 audit(1757118173.818:496): pid=5592 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.960792 sshd[5589]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:53.960000 audit[5589]: USER_END pid=5589 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.963533 systemd[1]: sshd@17-10.0.0.61:22-10.0.0.1:58748.service: Deactivated successfully. Sep 6 00:22:53.964376 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:22:53.960000 audit[5589]: CRED_DISP pid=5589 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.968746 systemd-logind[1293]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:22:53.969460 systemd-logind[1293]: Removed session 18. 
Sep 6 00:22:53.970548 kernel: audit: type=1106 audit(1757118173.960:497): pid=5589 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.970592 kernel: audit: type=1104 audit(1757118173.960:498): pid=5589 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:53.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.61:22-10.0.0.1:58748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:58.971157 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:22:58.971359 kernel: audit: type=1130 audit(1757118178.963:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.61:22-10.0.0.1:58752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:58.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.61:22-10.0.0.1:58752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:58.964799 systemd[1]: Started sshd@18-10.0.0.61:22-10.0.0.1:58752.service. Sep 6 00:22:59.007000 audit[5603]: USER_ACCT pid=5603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.008934 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 58752 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:22:59.011000 audit[5603]: CRED_ACQ pid=5603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.013439 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:59.016859 kernel: audit: type=1101 audit(1757118179.007:501): pid=5603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.016949 kernel: audit: type=1103 audit(1757118179.011:502): pid=5603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.017016 kernel: audit: type=1006 audit(1757118179.011:503): pid=5603 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Sep 6 00:22:59.017599 systemd-logind[1293]: New session 19 of user core. Sep 6 00:22:59.018695 systemd[1]: Started session-19.scope. 
Sep 6 00:22:59.011000 audit[5603]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3c23b030 a2=3 a3=0 items=0 ppid=1 pid=5603 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:59.023604 kernel: audit: type=1300 audit(1757118179.011:503): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3c23b030 a2=3 a3=0 items=0 ppid=1 pid=5603 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:59.011000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:59.025191 kernel: audit: type=1327 audit(1757118179.011:503): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:59.025264 kernel: audit: type=1105 audit(1757118179.023:504): pid=5603 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.023000 audit[5603]: USER_START pid=5603 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.024000 audit[5606]: CRED_ACQ pid=5606 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.033397 kernel: audit: type=1103 audit(1757118179.024:505): pid=5606 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.134352 sshd[5603]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:59.134000 audit[5603]: USER_END pid=5603 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.136489 systemd[1]: sshd@18-10.0.0.61:22-10.0.0.1:58752.service: Deactivated successfully. Sep 6 00:22:59.137955 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:22:59.137993 systemd-logind[1293]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:22:59.139148 systemd-logind[1293]: Removed session 19. 
Sep 6 00:22:59.144171 kernel: audit: type=1106 audit(1757118179.134:506): pid=5603 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.144270 kernel: audit: type=1104 audit(1757118179.134:507): pid=5603 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.134000 audit[5603]: CRED_DISP pid=5603 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:22:59.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.61:22-10.0.0.1:58752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:59.974667 kubelet[2141]: E0906 00:22:59.974590 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:04.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.61:22-10.0.0.1:45800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:04.138212 systemd[1]: Started sshd@19-10.0.0.61:22-10.0.0.1:45800.service. Sep 6 00:23:04.139725 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:23:04.139795 kernel: audit: type=1130 audit(1757118184.137:509): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.61:22-10.0.0.1:45800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:04.187000 audit[5623]: USER_ACCT pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.189560 sshd[5623]: Accepted publickey for core from 10.0.0.1 port 45800 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:23:04.191002 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:04.189000 audit[5623]: CRED_ACQ pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.195183 systemd-logind[1293]: New session 20 of user core. Sep 6 00:23:04.196466 systemd[1]: Started session-20.scope. 
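The kubelet dns.go:153 "Nameserver limits exceeded" errors above (and recurring below) are logged because the resolver configuration kubelet builds for pods honors at most three nameserver entries; when the node's resolv.conf lists more, the extras are dropped and the message reports the three that were kept (here 1.1.1.1, 1.0.0.1 and 8.8.8.8). A minimal sketch of the check, purely for illustration (the helper name, the three-entry limit constant and the /etc/resolv.conf path are assumptions, not taken from this log):

    from pathlib import Path

    MAX_NS = 3  # nameserver limit applied when building the pod resolv.conf

    def nameservers(path="/etc/resolv.conf"):
        out = []
        for line in Path(path).read_text().splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                out.append(fields[1])
        return out

    ns = nameservers()
    if len(ns) > MAX_NS:
        print("applied:", ns[:MAX_NS], "omitted:", ns[MAX_NS:])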
Sep 6 00:23:04.198590 kernel: audit: type=1101 audit(1757118184.187:510): pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.198646 kernel: audit: type=1103 audit(1757118184.189:511): pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.201387 kernel: audit: type=1006 audit(1757118184.189:512): pid=5623 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Sep 6 00:23:04.201624 kernel: audit: type=1300 audit(1757118184.189:512): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe897cbd40 a2=3 a3=0 items=0 ppid=1 pid=5623 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:04.189000 audit[5623]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe897cbd40 a2=3 a3=0 items=0 ppid=1 pid=5623 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:04.189000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:23:04.207161 kernel: audit: type=1327 audit(1757118184.189:512): proctitle=737368643A20636F7265205B707269765D Sep 6 00:23:04.207207 kernel: audit: type=1105 audit(1757118184.200:513): pid=5623 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.200000 audit[5623]: USER_START pid=5623 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.202000 audit[5626]: CRED_ACQ pid=5626 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.216571 kernel: audit: type=1103 audit(1757118184.202:514): pid=5626 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.501574 sshd[5623]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:04.503437 systemd[1]: Started sshd@20-10.0.0.61:22-10.0.0.1:45810.service. Sep 6 00:23:04.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.61:22-10.0.0.1:45810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:23:04.508188 kernel: audit: type=1130 audit(1757118184.502:515): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.61:22-10.0.0.1:45810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:04.511000 audit[5623]: USER_END pid=5623 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.514497 systemd[1]: sshd@19-10.0.0.61:22-10.0.0.1:45800.service: Deactivated successfully. Sep 6 00:23:04.515367 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:23:04.511000 audit[5623]: CRED_DISP pid=5623 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.61:22-10.0.0.1:45800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:04.518174 kernel: audit: type=1106 audit(1757118184.511:516): pid=5623 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.518383 systemd-logind[1293]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:23:04.519859 systemd-logind[1293]: Removed session 20. Sep 6 00:23:04.552000 audit[5635]: USER_ACCT pid=5635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.554019 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 45810 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:23:04.554000 audit[5635]: CRED_ACQ pid=5635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.554000 audit[5635]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff64091410 a2=3 a3=0 items=0 ppid=1 pid=5635 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:04.554000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:23:04.555977 sshd[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:04.561729 systemd-logind[1293]: New session 21 of user core. Sep 6 00:23:04.562881 systemd[1]: Started session-21.scope. 
Sep 6 00:23:04.568000 audit[5635]: USER_START pid=5635 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.569000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.872476 sshd[5635]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:04.874000 audit[5635]: USER_END pid=5635 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.874000 audit[5635]: CRED_DISP pid=5635 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.61:22-10.0.0.1:45814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:04.875919 systemd[1]: Started sshd@21-10.0.0.61:22-10.0.0.1:45814.service. Sep 6 00:23:04.876771 systemd[1]: sshd@20-10.0.0.61:22-10.0.0.1:45810.service: Deactivated successfully. Sep 6 00:23:04.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.61:22-10.0.0.1:45810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:04.878278 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:23:04.878325 systemd-logind[1293]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:23:04.879485 systemd-logind[1293]: Removed session 21. Sep 6 00:23:04.922000 audit[5648]: USER_ACCT pid=5648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.923933 sshd[5648]: Accepted publickey for core from 10.0.0.1 port 45814 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:23:04.923000 audit[5648]: CRED_ACQ pid=5648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.923000 audit[5648]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7333d5b0 a2=3 a3=0 items=0 ppid=1 pid=5648 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:04.923000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:23:04.925356 sshd[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:04.929077 systemd-logind[1293]: New session 22 of user core. 
Sep 6 00:23:04.929864 systemd[1]: Started session-22.scope. Sep 6 00:23:04.932000 audit[5648]: USER_START pid=5648 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.934000 audit[5653]: CRED_ACQ pid=5653 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:04.970479 kubelet[2141]: E0906 00:23:04.970437 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:06.769000 audit[5686]: NETFILTER_CFG table=filter:131 family=2 entries=22 op=nft_register_rule pid=5686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:06.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.61:22-10.0.0.1:45822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:06.769000 audit[5686]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffdbaec8a70 a2=0 a3=7ffdbaec8a5c items=0 ppid=2268 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:06.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:06.770000 audit[5648]: USER_END pid=5648 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:06.770000 audit[5648]: CRED_DISP pid=5648 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:06.771580 systemd[1]: Started sshd@22-10.0.0.61:22-10.0.0.1:45822.service. Sep 6 00:23:06.770580 sshd[5648]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:06.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.61:22-10.0.0.1:45814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:06.774602 systemd[1]: sshd@21-10.0.0.61:22-10.0.0.1:45814.service: Deactivated successfully. Sep 6 00:23:06.776064 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:23:06.777033 systemd-logind[1293]: Session 22 logged out. Waiting for processes to exit. 
Sep 6 00:23:06.775000 audit[5686]: NETFILTER_CFG table=nat:132 family=2 entries=24 op=nft_register_rule pid=5686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:06.775000 audit[5686]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffdbaec8a70 a2=0 a3=0 items=0 ppid=2268 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:06.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:06.779935 systemd-logind[1293]: Removed session 22. Sep 6 00:23:06.793000 audit[5692]: NETFILTER_CFG table=filter:133 family=2 entries=34 op=nft_register_rule pid=5692 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:06.793000 audit[5692]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffe2e750ca0 a2=0 a3=7ffe2e750c8c items=0 ppid=2268 pid=5692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:06.793000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:06.799000 audit[5692]: NETFILTER_CFG table=nat:134 family=2 entries=24 op=nft_register_rule pid=5692 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:06.799000 audit[5692]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffe2e750ca0 a2=0 a3=0 items=0 ppid=2268 pid=5692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:06.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:06.822000 audit[5687]: USER_ACCT pid=5687 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:06.823968 sshd[5687]: Accepted publickey for core from 10.0.0.1 port 45822 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:23:06.823000 audit[5687]: CRED_ACQ pid=5687 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:06.823000 audit[5687]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6af319d0 a2=3 a3=0 items=0 ppid=1 pid=5687 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:06.823000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:23:06.825091 sshd[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:06.828860 systemd-logind[1293]: New session 23 of user core. Sep 6 00:23:06.829612 systemd[1]: Started session-23.scope. 
Sep 6 00:23:06.833000 audit[5687]: USER_START pid=5687 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:06.834000 audit[5694]: CRED_ACQ pid=5694 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:07.363691 sshd[5687]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:07.364000 audit[5687]: USER_END pid=5687 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:07.366728 systemd[1]: Started sshd@23-10.0.0.61:22-10.0.0.1:45824.service. Sep 6 00:23:07.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.61:22-10.0.0.1:45824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:07.367000 audit[5687]: CRED_DISP pid=5687 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:07.370751 systemd-logind[1293]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:23:07.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.61:22-10.0.0.1:45822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:07.371909 systemd[1]: sshd@22-10.0.0.61:22-10.0.0.1:45822.service: Deactivated successfully. Sep 6 00:23:07.373210 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:23:07.375044 systemd-logind[1293]: Removed session 23. Sep 6 00:23:07.413000 audit[5701]: USER_ACCT pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:07.414743 sshd[5701]: Accepted publickey for core from 10.0.0.1 port 45824 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:23:07.414000 audit[5701]: CRED_ACQ pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:07.414000 audit[5701]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff43e3bfd0 a2=3 a3=0 items=0 ppid=1 pid=5701 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:07.414000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:23:07.416186 sshd[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:07.421575 systemd[1]: Started session-24.scope. 
Sep 6 00:23:07.421874 systemd-logind[1293]: New session 24 of user core. Sep 6 00:23:07.426000 audit[5701]: USER_START pid=5701 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:07.427000 audit[5706]: CRED_ACQ pid=5706 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:07.545562 kubelet[2141]: I0906 00:23:07.545493 2141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:23:07.555009 sshd[5701]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:07.555000 audit[5701]: USER_END pid=5701 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:07.555000 audit[5701]: CRED_DISP pid=5701 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:07.558384 systemd[1]: sshd@23-10.0.0.61:22-10.0.0.1:45824.service: Deactivated successfully. Sep 6 00:23:07.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.61:22-10.0.0.1:45824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:07.570734 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:23:07.571456 systemd-logind[1293]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:23:07.572348 systemd-logind[1293]: Removed session 24. 
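Each SSH login in this log (sessions 17 through 24 so far) produces the same PAM/audit sequence: USER_ACCT and CRED_ACQ when the key is accepted, USER_START when the session scope starts, then USER_END and CRED_DISP when it closes, all carrying the same ses= number. A minimal sketch, assuming the journal has been saved as plain text with one record per line (the file name passed on the command line is an assumption), that pairs opens and closes by session number:

    import re
    import sys

    # Pair PAM session open/close audit records by their ses= field.
    pattern = re.compile(r"audit\[\d+\]: (USER_START|USER_END)\b.*\bses=(\d+)")
    opened = {}
    with open(sys.argv[1]) as log:
        for line in log:
            m = pattern.search(line)
            if not m:
                continue
            kind, ses = m.group(1), m.group(2)
            stamp = " ".join(line.split()[:3])  # e.g. "Sep 6 00:22:53.816000"
            if kind == "USER_START":
                opened[ses] = stamp
            else:
                print(f"session {ses}: {opened.pop(ses, '?')} -> {stamp}")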
Sep 6 00:23:07.815000 audit[5718]: NETFILTER_CFG table=filter:135 family=2 entries=34 op=nft_register_rule pid=5718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:07.815000 audit[5718]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7fff3efce990 a2=0 a3=7fff3efce97c items=0 ppid=2268 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:07.815000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:07.822000 audit[5718]: NETFILTER_CFG table=nat:136 family=2 entries=36 op=nft_register_chain pid=5718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:07.822000 audit[5718]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7fff3efce990 a2=0 a3=7fff3efce97c items=0 ppid=2268 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:07.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:10.973024 kubelet[2141]: E0906 00:23:10.972974 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:11.493441 systemd[1]: run-containerd-runc-k8s.io-a46d61b3c1bc936b24eb82086ca92296bcb6dd93ee9c403e758e5cd029e84f4a-runc.nbwEIf.mount: Deactivated successfully. 
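The PROCTITLE fields in the audit records above carry the acting process's command line hex-encoded, with NUL bytes separating the arguments: the value logged for pid 5718 decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", and the value logged alongside the sshd sessions decodes to "sshd: core [priv]". A minimal decoding sketch (the function name is an assumption for illustration):

    # Decode an audit PROCTITLE value: hex string, NUL-separated argv.
    def decode_proctitle(hexstr: str) -> str:
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode(errors="replace")

    # Value logged by pid 5718 above:
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters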
Sep 6 00:23:11.922000 audit[5764]: NETFILTER_CFG table=filter:137 family=2 entries=33 op=nft_register_rule pid=5764 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:11.924817 kernel: kauditd_printk_skb: 63 callbacks suppressed Sep 6 00:23:11.924899 kernel: audit: type=1325 audit(1757118191.922:560): table=filter:137 family=2 entries=33 op=nft_register_rule pid=5764 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:11.922000 audit[5764]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fff73057cc0 a2=0 a3=7fff73057cac items=0 ppid=2268 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:11.934985 kernel: audit: type=1300 audit(1757118191.922:560): arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fff73057cc0 a2=0 a3=7fff73057cac items=0 ppid=2268 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:11.935235 kernel: audit: type=1327 audit(1757118191.922:560): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:11.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:11.939000 audit[5764]: NETFILTER_CFG table=nat:138 family=2 entries=31 op=nft_register_chain pid=5764 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:11.939000 audit[5764]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fff73057cc0 a2=0 a3=7fff73057cac items=0 ppid=2268 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:11.963187 kernel: audit: type=1325 audit(1757118191.939:561): table=nat:138 family=2 entries=31 op=nft_register_chain pid=5764 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:11.963385 kernel: audit: type=1300 audit(1757118191.939:561): arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fff73057cc0 a2=0 a3=7fff73057cac items=0 ppid=2268 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:11.963431 kernel: audit: type=1327 audit(1757118191.939:561): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:11.939000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:12.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.61:22-10.0.0.1:33294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:12.581754 kernel: audit: type=1130 audit(1757118192.559:562): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.61:22-10.0.0.1:33294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:23:12.560720 systemd[1]: Started sshd@24-10.0.0.61:22-10.0.0.1:33294.service. Sep 6 00:23:12.625000 audit[5765]: USER_ACCT pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:12.628798 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 33294 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:23:12.634601 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:12.632000 audit[5765]: CRED_ACQ pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:12.661009 systemd[1]: Started session-25.scope. Sep 6 00:23:12.662730 kernel: audit: type=1101 audit(1757118192.625:563): pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:12.662964 kernel: audit: type=1103 audit(1757118192.632:564): pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:12.663023 kernel: audit: type=1006 audit(1757118192.633:565): pid=5765 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Sep 6 00:23:12.633000 audit[5765]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9f9bc180 a2=3 a3=0 items=0 ppid=1 pid=5765 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:12.633000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:23:12.664941 systemd-logind[1293]: New session 25 of user core. 
Sep 6 00:23:12.675000 audit[5765]: USER_START pid=5765 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:12.677000 audit[5768]: CRED_ACQ pid=5768 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:12.888796 sshd[5765]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:12.891000 audit[5765]: USER_END pid=5765 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:12.891000 audit[5765]: CRED_DISP pid=5765 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:12.894629 systemd[1]: sshd@24-10.0.0.61:22-10.0.0.1:33294.service: Deactivated successfully. Sep 6 00:23:12.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.61:22-10.0.0.1:33294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:12.896307 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:23:12.896945 systemd-logind[1293]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:23:12.898454 systemd-logind[1293]: Removed session 25. 
Sep 6 00:23:12.973099 kubelet[2141]: E0906 00:23:12.973043 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:14.342000 audit[5780]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:14.342000 audit[5780]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffcb67fbe00 a2=0 a3=7ffcb67fbdec items=0 ppid=2268 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:14.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:14.358000 audit[5780]: NETFILTER_CFG table=nat:140 family=2 entries=110 op=nft_register_chain pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:23:14.358000 audit[5780]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffcb67fbe00 a2=0 a3=7ffcb67fbdec items=0 ppid=2268 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:14.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:23:17.905241 kernel: kauditd_printk_skb: 13 callbacks suppressed Sep 6 00:23:17.905445 kernel: audit: type=1130 audit(1757118197.899:573): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.61:22-10.0.0.1:33306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:17.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.61:22-10.0.0.1:33306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:17.901834 systemd[1]: Started sshd@25-10.0.0.61:22-10.0.0.1:33306.service. 
Sep 6 00:23:18.032360 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 33306 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:23:18.031000 audit[5784]: USER_ACCT pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.034770 sshd[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:18.033000 audit[5784]: CRED_ACQ pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.043015 kernel: audit: type=1101 audit(1757118198.031:574): pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.043189 kernel: audit: type=1103 audit(1757118198.033:575): pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.046159 kernel: audit: type=1006 audit(1757118198.033:576): pid=5784 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Sep 6 00:23:18.052077 kernel: audit: type=1300 audit(1757118198.033:576): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdca848f10 a2=3 a3=0 items=0 ppid=1 pid=5784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:18.033000 audit[5784]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdca848f10 a2=3 a3=0 items=0 ppid=1 pid=5784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:18.044064 systemd[1]: Started session-26.scope. Sep 6 00:23:18.045287 systemd-logind[1293]: New session 26 of user core. 
Sep 6 00:23:18.033000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:23:18.050000 audit[5784]: USER_START pid=5784 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.062325 kernel: audit: type=1327 audit(1757118198.033:576): proctitle=737368643A20636F7265205B707269765D Sep 6 00:23:18.062527 kernel: audit: type=1105 audit(1757118198.050:577): pid=5784 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.062564 kernel: audit: type=1103 audit(1757118198.052:578): pid=5787 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.052000 audit[5787]: CRED_ACQ pid=5787 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.217583 sshd[5784]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:18.221000 audit[5784]: USER_END pid=5784 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.224633 systemd[1]: sshd@25-10.0.0.61:22-10.0.0.1:33306.service: Deactivated successfully. Sep 6 00:23:18.226786 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 00:23:18.227738 systemd-logind[1293]: Session 26 logged out. Waiting for processes to exit. Sep 6 00:23:18.221000 audit[5784]: CRED_DISP pid=5784 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.229093 systemd-logind[1293]: Removed session 26. Sep 6 00:23:18.233028 kernel: audit: type=1106 audit(1757118198.221:579): pid=5784 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.233194 kernel: audit: type=1104 audit(1757118198.221:580): pid=5784 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:18.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.61:22-10.0.0.1:33306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:23.224882 systemd[1]: Started sshd@26-10.0.0.61:22-10.0.0.1:60492.service. 
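Most audit events above appear twice: once as a named userspace record (USER_ACCT, CRED_ACQ, ...) and once echoed by kauditd as "kernel: audit: type=NNNN" with the same audit(...) id, so the numeric-to-name mapping can be read directly off this log (for example type=1130 pairs with the SERVICE_START record stamped :491). A small lookup collecting the types observed here; the dictionary and helper names are illustrative, and the name for type 1006 is an assumption since this log never echoes it with a userspace name:

    # Audit record types observed in this log, matched by shared audit(...) ids.
    AUDIT_TYPES = {
        1101: "USER_ACCT",      # PAM accounting check
        1103: "CRED_ACQ",       # PAM credential acquisition
        1104: "CRED_DISP",      # PAM credential disposal
        1105: "USER_START",     # PAM session open
        1106: "USER_END",       # PAM session close
        1130: "SERVICE_START",  # systemd unit started
        1300: "SYSCALL",        # syscall record attached to the event
        1325: "NETFILTER_CFG",  # nft/iptables ruleset change
        1327: "PROCTITLE",      # hex-encoded command line of the acting process
        1006: "LOGIN",          # assumption: kernel login (auid/ses) record
    }

    def name_of(audit_type: int) -> str:
        return AUDIT_TYPES.get(audit_type, f"type={audit_type}")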
Sep 6 00:23:23.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.61:22-10.0.0.1:60492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:23.227181 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:23:23.227277 kernel: audit: type=1130 audit(1757118203.223:582): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.61:22-10.0.0.1:60492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:23:23.329000 audit[5819]: USER_ACCT pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:23.332330 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 60492 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:23:23.333009 sshd[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:23.349087 kernel: audit: type=1101 audit(1757118203.329:583): pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:23.349283 kernel: audit: type=1103 audit(1757118203.331:584): pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:23.349326 kernel: audit: type=1006 audit(1757118203.331:585): pid=5819 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Sep 6 00:23:23.349352 kernel: audit: type=1300 audit(1757118203.331:585): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebcf0d7c0 a2=3 a3=0 items=0 ppid=1 pid=5819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:23.349376 kernel: audit: type=1327 audit(1757118203.331:585): proctitle=737368643A20636F7265205B707269765D Sep 6 00:23:23.331000 audit[5819]: CRED_ACQ pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 6 00:23:23.331000 audit[5819]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebcf0d7c0 a2=3 a3=0 items=0 ppid=1 pid=5819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:23:23.331000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:23:23.345539 systemd[1]: Started session-27.scope. Sep 6 00:23:23.350247 systemd-logind[1293]: New session 27 of user core. 
Sep 6 00:23:23.356000 audit[5819]: USER_START pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:23.359000 audit[5822]: CRED_ACQ pid=5822 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:23.388284 kernel: audit: type=1105 audit(1757118203.356:586): pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:23.388401 kernel: audit: type=1103 audit(1757118203.359:587): pid=5822 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:23.981516 sshd[5819]: pam_unix(sshd:session): session closed for user core
Sep 6 00:23:23.983000 audit[5819]: USER_END pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:23.986078 systemd[1]: sshd@26-10.0.0.61:22-10.0.0.1:60492.service: Deactivated successfully.
Sep 6 00:23:23.987588 systemd[1]: session-27.scope: Deactivated successfully.
Sep 6 00:23:23.988413 systemd-logind[1293]: Session 27 logged out. Waiting for processes to exit.
Sep 6 00:23:23.989943 systemd-logind[1293]: Removed session 27.
Sep 6 00:23:23.983000 audit[5819]: CRED_DISP pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:23.995691 kernel: audit: type=1106 audit(1757118203.983:588): pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:23.995914 kernel: audit: type=1104 audit(1757118203.983:589): pid=5819 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:23.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.61:22-10.0.0.1:60492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:23:28.765510 systemd[1]: Started sshd@27-10.0.0.61:22-10.0.0.1:60498.service.
Sep 6 00:23:28.770465 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 6 00:23:28.770562 kernel: audit: type=1130 audit(1757118208.764:591): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.61:22-10.0.0.1:60498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:23:28.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.61:22-10.0.0.1:60498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:23:28.816000 audit[5833]: USER_ACCT pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.818586 sshd[5833]: Accepted publickey for core from 10.0.0.1 port 60498 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:23:28.819367 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:23:28.817000 audit[5833]: CRED_ACQ pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.825672 systemd-logind[1293]: New session 28 of user core.
Sep 6 00:23:28.826761 kernel: audit: type=1101 audit(1757118208.816:592): pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.826829 kernel: audit: type=1103 audit(1757118208.817:593): pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.826869 kernel: audit: type=1006 audit(1757118208.817:594): pid=5833 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Sep 6 00:23:28.826016 systemd[1]: Started session-28.scope.
Sep 6 00:23:28.817000 audit[5833]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe726e8f20 a2=3 a3=0 items=0 ppid=1 pid=5833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:23:28.834950 kernel: audit: type=1300 audit(1757118208.817:594): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe726e8f20 a2=3 a3=0 items=0 ppid=1 pid=5833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:23:28.817000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 6 00:23:28.832000 audit[5833]: USER_START pid=5833 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.843979 kernel: audit: type=1327 audit(1757118208.817:594): proctitle=737368643A20636F7265205B707269765D
Sep 6 00:23:28.844187 kernel: audit: type=1105 audit(1757118208.832:595): pid=5833 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.844230 kernel: audit: type=1103 audit(1757118208.834:596): pid=5837 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.834000 audit[5837]: CRED_ACQ pid=5837 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.971483 kubelet[2141]: E0906 00:23:28.971430 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:23:28.982043 sshd[5833]: pam_unix(sshd:session): session closed for user core
Sep 6 00:23:28.982000 audit[5833]: USER_END pid=5833 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.985614 systemd[1]: sshd@27-10.0.0.61:22-10.0.0.1:60498.service: Deactivated successfully.
Sep 6 00:23:28.986862 systemd[1]: session-28.scope: Deactivated successfully.
Sep 6 00:23:28.987828 systemd-logind[1293]: Session 28 logged out. Waiting for processes to exit.
Sep 6 00:23:28.982000 audit[5833]: CRED_DISP pid=5833 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.989089 systemd-logind[1293]: Removed session 28.
Sep 6 00:23:28.995036 kernel: audit: type=1106 audit(1757118208.982:597): pid=5833 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.995163 kernel: audit: type=1104 audit(1757118208.982:598): pid=5833 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 6 00:23:28.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.61:22-10.0.0.1:60498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'