Sep 13 00:42:26.826609 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025 Sep 13 00:42:26.826638 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:42:26.826648 kernel: BIOS-provided physical RAM map: Sep 13 00:42:26.826653 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 13 00:42:26.826658 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 13 00:42:26.826664 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 13 00:42:26.826670 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 13 00:42:26.826676 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 13 00:42:26.826681 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 13 00:42:26.826688 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 13 00:42:26.826693 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 13 00:42:26.826699 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Sep 13 00:42:26.826704 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 13 00:42:26.826710 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 13 00:42:26.826716 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 13 00:42:26.826723 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 13 00:42:26.826729 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 13 
00:42:26.826735 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 00:42:26.826741 kernel: NX (Execute Disable) protection: active Sep 13 00:42:26.826747 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Sep 13 00:42:26.826752 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Sep 13 00:42:26.826758 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Sep 13 00:42:26.826764 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Sep 13 00:42:26.826770 kernel: extended physical RAM map: Sep 13 00:42:26.826775 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 13 00:42:26.826783 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 13 00:42:26.826788 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 13 00:42:26.826794 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 13 00:42:26.826800 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 13 00:42:26.826806 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 13 00:42:26.826812 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 13 00:42:26.826818 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Sep 13 00:42:26.826824 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Sep 13 00:42:26.826830 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable Sep 13 00:42:26.826835 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Sep 13 00:42:26.826841 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Sep 13 00:42:26.826848 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Sep 13 00:42:26.826854 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 13 00:42:26.826860 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 13 00:42:26.826866 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 13 00:42:26.826874 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 13 00:42:26.826881 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 13 00:42:26.826887 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 00:42:26.826894 kernel: efi: EFI v2.70 by EDK II Sep 13 00:42:26.826900 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Sep 13 00:42:26.826907 kernel: random: crng init done Sep 13 00:42:26.826913 kernel: SMBIOS 2.8 present. Sep 13 00:42:26.826919 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Sep 13 00:42:26.826926 kernel: Hypervisor detected: KVM Sep 13 00:42:26.826932 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 00:42:26.826938 kernel: kvm-clock: cpu 0, msr 5219f001, primary cpu clock Sep 13 00:42:26.826944 kernel: kvm-clock: using sched offset of 3910777967 cycles Sep 13 00:42:26.826952 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 00:42:26.826959 kernel: tsc: Detected 2794.748 MHz processor Sep 13 00:42:26.826966 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:42:26.826972 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:42:26.826979 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 13 00:42:26.826985 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:42:26.826992 kernel: Using GB pages for direct mapping Sep 13 00:42:26.826998 kernel: Secure boot disabled Sep 13 00:42:26.827005 kernel: ACPI: Early table checksum verification disabled Sep 13 00:42:26.827012 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 13 00:42:26.827019 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 13 00:42:26.827025 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:42:26.827032 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:42:26.827038 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 13 00:42:26.827044 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:42:26.827051 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:42:26.827057 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:42:26.827064 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:42:26.827071 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 13 00:42:26.827078 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 13 00:42:26.827084 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 13 00:42:26.827091 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 13 00:42:26.827097 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 13 00:42:26.827103 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 13 00:42:26.827110 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 13 00:42:26.827116 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 13 00:42:26.827122 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 13 00:42:26.827130 kernel: No NUMA configuration found Sep 13 00:42:26.827136 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 13 00:42:26.827143 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 13 
00:42:26.827149 kernel: Zone ranges: Sep 13 00:42:26.827156 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:42:26.827162 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 13 00:42:26.827169 kernel: Normal empty Sep 13 00:42:26.827175 kernel: Movable zone start for each node Sep 13 00:42:26.827182 kernel: Early memory node ranges Sep 13 00:42:26.827189 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 13 00:42:26.827196 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 13 00:42:26.827202 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 13 00:42:26.827208 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 13 00:42:26.827215 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 13 00:42:26.827221 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 13 00:42:26.827227 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 13 00:42:26.827234 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:42:26.827240 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 13 00:42:26.827247 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 13 00:42:26.827254 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:42:26.827260 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 13 00:42:26.827267 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 13 00:42:26.827273 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 13 00:42:26.827280 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 13 00:42:26.827286 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 00:42:26.827293 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:42:26.827299 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 13 00:42:26.827306 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 00:42:26.827313 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:42:26.827320 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 00:42:26.827326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 00:42:26.827333 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:42:26.827339 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 13 00:42:26.827345 kernel: TSC deadline timer available Sep 13 00:42:26.827352 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 13 00:42:26.827358 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 13 00:42:26.827364 kernel: kvm-guest: setup PV sched yield Sep 13 00:42:26.827372 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 13 00:42:26.827388 kernel: Booting paravirtualized kernel on KVM Sep 13 00:42:26.827400 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:42:26.827408 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Sep 13 00:42:26.827415 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Sep 13 00:42:26.827421 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Sep 13 00:42:26.827428 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 13 00:42:26.827434 kernel: kvm-guest: setup async PF for cpu 0 Sep 13 00:42:26.827441 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Sep 13 00:42:26.827448 kernel: kvm-guest: PV spinlocks enabled Sep 13 00:42:26.827455 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 00:42:26.827461 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Sep 13 00:42:26.827469 kernel: Policy zone: DMA32 Sep 13 00:42:26.827477 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:42:26.827484 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:42:26.827491 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:42:26.827499 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:42:26.827506 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:42:26.827513 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 169308K reserved, 0K cma-reserved) Sep 13 00:42:26.827520 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 13 00:42:26.827527 kernel: ftrace: allocating 34614 entries in 136 pages Sep 13 00:42:26.827534 kernel: ftrace: allocated 136 pages with 2 groups Sep 13 00:42:26.827540 kernel: rcu: Hierarchical RCU implementation. Sep 13 00:42:26.827548 kernel: rcu: RCU event tracing is enabled. Sep 13 00:42:26.827555 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 13 00:42:26.827563 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:42:26.827569 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:42:26.827576 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 13 00:42:26.827583 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 13 00:42:26.827590 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 13 00:42:26.827597 kernel: Console: colour dummy device 80x25 Sep 13 00:42:26.827603 kernel: printk: console [ttyS0] enabled Sep 13 00:42:26.827610 kernel: ACPI: Core revision 20210730 Sep 13 00:42:26.827627 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 13 00:42:26.827635 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:42:26.827642 kernel: x2apic enabled Sep 13 00:42:26.827649 kernel: Switched APIC routing to physical x2apic. Sep 13 00:42:26.827655 kernel: kvm-guest: setup PV IPIs Sep 13 00:42:26.827662 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 13 00:42:26.827669 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 13 00:42:26.827676 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 13 00:42:26.827683 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 13 00:42:26.827690 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 13 00:42:26.827698 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 13 00:42:26.827705 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:42:26.827712 kernel: Spectre V2 : Mitigation: Retpolines Sep 13 00:42:26.827719 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:42:26.827726 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 13 00:42:26.827732 kernel: active return thunk: retbleed_return_thunk Sep 13 00:42:26.827739 kernel: RETBleed: Mitigation: untrained return thunk Sep 13 00:42:26.827746 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 00:42:26.827753 kernel: Speculative Store Bypass: Mitigation: Speculative Store 
Bypass disabled via prctl and seccomp Sep 13 00:42:26.827762 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:42:26.827768 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:42:26.827775 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:42:26.827782 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:42:26.827789 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 13 00:42:26.827796 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:42:26.827802 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:42:26.827809 kernel: LSM: Security Framework initializing Sep 13 00:42:26.827816 kernel: SELinux: Initializing. Sep 13 00:42:26.827826 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:42:26.827834 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:42:26.827842 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 13 00:42:26.827850 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 13 00:42:26.827857 kernel: ... version: 0 Sep 13 00:42:26.827863 kernel: ... bit width: 48 Sep 13 00:42:26.827870 kernel: ... generic registers: 6 Sep 13 00:42:26.827877 kernel: ... value mask: 0000ffffffffffff Sep 13 00:42:26.827883 kernel: ... max period: 00007fffffffffff Sep 13 00:42:26.827891 kernel: ... fixed-purpose events: 0 Sep 13 00:42:26.827898 kernel: ... event mask: 000000000000003f Sep 13 00:42:26.827905 kernel: signal: max sigframe size: 1776 Sep 13 00:42:26.827911 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:42:26.827918 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:42:26.827925 kernel: x86: Booting SMP configuration: Sep 13 00:42:26.827931 kernel: .... 
node #0, CPUs: #1 Sep 13 00:42:26.827938 kernel: kvm-clock: cpu 1, msr 5219f041, secondary cpu clock Sep 13 00:42:26.827945 kernel: kvm-guest: setup async PF for cpu 1 Sep 13 00:42:26.827953 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Sep 13 00:42:26.827959 kernel: #2 Sep 13 00:42:26.827967 kernel: kvm-clock: cpu 2, msr 5219f081, secondary cpu clock Sep 13 00:42:26.827973 kernel: kvm-guest: setup async PF for cpu 2 Sep 13 00:42:26.827980 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Sep 13 00:42:26.827987 kernel: #3 Sep 13 00:42:26.827993 kernel: kvm-clock: cpu 3, msr 5219f0c1, secondary cpu clock Sep 13 00:42:26.828000 kernel: kvm-guest: setup async PF for cpu 3 Sep 13 00:42:26.828007 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Sep 13 00:42:26.828013 kernel: smp: Brought up 1 node, 4 CPUs Sep 13 00:42:26.828021 kernel: smpboot: Max logical packages: 1 Sep 13 00:42:26.828028 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 13 00:42:26.828035 kernel: devtmpfs: initialized Sep 13 00:42:26.828041 kernel: x86/mm: Memory block size: 128MB Sep 13 00:42:26.828048 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 13 00:42:26.828055 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 13 00:42:26.828062 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 13 00:42:26.828069 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 13 00:42:26.828076 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 13 00:42:26.828084 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:42:26.828091 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 13 00:42:26.828097 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:42:26.828104 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family Sep 13 00:42:26.828111 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:42:26.828118 kernel: audit: type=2000 audit(1757724146.041:1): state=initialized audit_enabled=0 res=1 Sep 13 00:42:26.828124 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:42:26.828131 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:42:26.828139 kernel: cpuidle: using governor menu Sep 13 00:42:26.828146 kernel: ACPI: bus type PCI registered Sep 13 00:42:26.828153 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:42:26.828159 kernel: dca service started, version 1.12.1 Sep 13 00:42:26.828166 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 13 00:42:26.828173 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Sep 13 00:42:26.828180 kernel: PCI: Using configuration type 1 for base access Sep 13 00:42:26.828187 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 13 00:42:26.828193 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:42:26.828201 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:42:26.828208 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:42:26.828215 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:42:26.828221 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:42:26.828228 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 13 00:42:26.828235 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 13 00:42:26.828242 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 13 00:42:26.828248 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:42:26.828255 kernel: ACPI: Interpreter enabled Sep 13 00:42:26.828262 kernel: ACPI: PM: (supports S0 S3 S5) Sep 13 00:42:26.828270 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:42:26.828277 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:42:26.828283 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 13 00:42:26.828290 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:42:26.828406 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:42:26.828477 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 13 00:42:26.828545 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 13 00:42:26.828557 kernel: PCI host bridge to bus 0000:00 Sep 13 00:42:26.828640 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:42:26.828706 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:42:26.828766 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:42:26.828827 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 13 00:42:26.828886 kernel: pci_bus 0000:00: root bus resource [mem 
0xc0000000-0xfebfffff window] Sep 13 00:42:26.828947 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Sep 13 00:42:26.829010 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:42:26.829089 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 13 00:42:26.829166 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 13 00:42:26.829237 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 13 00:42:26.829305 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 13 00:42:26.829374 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 13 00:42:26.829455 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 13 00:42:26.829522 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:42:26.829603 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 13 00:42:26.829689 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 13 00:42:26.829760 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 13 00:42:26.829829 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 13 00:42:26.829904 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 13 00:42:26.829977 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 13 00:42:26.830044 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 13 00:42:26.830114 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 13 00:42:26.830200 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 13 00:42:26.830293 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 13 00:42:26.830362 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 13 00:42:26.830441 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 13 00:42:26.830514 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 13 00:42:26.830587 
kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 13 00:42:26.830677 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 13 00:42:26.831556 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 13 00:42:26.831648 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 13 00:42:26.831718 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 13 00:42:26.831791 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 13 00:42:26.831862 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 13 00:42:26.831871 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 00:42:26.831878 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 00:42:26.831885 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:42:26.831892 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 00:42:26.831899 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 13 00:42:26.831906 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 13 00:42:26.831913 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 13 00:42:26.831922 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 13 00:42:26.831928 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 13 00:42:26.831935 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 13 00:42:26.831942 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 13 00:42:26.831949 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 13 00:42:26.831955 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 13 00:42:26.831962 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 13 00:42:26.831968 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 13 00:42:26.831975 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 13 00:42:26.831983 kernel: iommu: Default domain type: Translated Sep 13 
00:42:26.831990 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:42:26.832057 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 13 00:42:26.832122 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:42:26.832187 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 13 00:42:26.832197 kernel: vgaarb: loaded Sep 13 00:42:26.832203 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:42:26.832211 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 00:42:26.832220 kernel: PTP clock support registered Sep 13 00:42:26.832226 kernel: Registered efivars operations Sep 13 00:42:26.832233 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:42:26.832240 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:42:26.832247 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 13 00:42:26.832253 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 13 00:42:26.832260 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Sep 13 00:42:26.832267 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Sep 13 00:42:26.832273 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 13 00:42:26.832280 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 13 00:42:26.832288 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 13 00:42:26.832295 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 13 00:42:26.832302 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 00:42:26.832309 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:42:26.832316 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:42:26.832323 kernel: pnp: PnP ACPI init Sep 13 00:42:26.832406 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 13 00:42:26.832418 kernel: pnp: PnP ACPI: found 6 devices Sep 13 00:42:26.832425 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:42:26.832432 kernel: NET: Registered PF_INET protocol family Sep 13 00:42:26.832439 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:42:26.832446 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:42:26.832453 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:42:26.832460 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:42:26.832467 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 13 00:42:26.832473 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:42:26.832482 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:42:26.832489 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:42:26.832495 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:42:26.832502 kernel: NET: Registered PF_XDP protocol family Sep 13 00:42:26.832572 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 13 00:42:26.832672 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 13 00:42:26.832733 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:42:26.832792 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:42:26.832854 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:42:26.832912 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 13 00:42:26.832970 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 13 00:42:26.833028 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Sep 13 00:42:26.833037 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:42:26.833044 kernel: Initialise system trusted keyrings Sep 13 00:42:26.833051 
kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 13 00:42:26.833058 kernel: Key type asymmetric registered Sep 13 00:42:26.833065 kernel: Asymmetric key parser 'x509' registered Sep 13 00:42:26.833074 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 13 00:42:26.833081 kernel: io scheduler mq-deadline registered Sep 13 00:42:26.833098 kernel: io scheduler kyber registered Sep 13 00:42:26.833106 kernel: io scheduler bfq registered Sep 13 00:42:26.833113 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:42:26.833121 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 13 00:42:26.833128 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 13 00:42:26.833135 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 13 00:42:26.833142 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:42:26.833150 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:42:26.833158 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 00:42:26.833165 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:42:26.833172 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:42:26.833244 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 13 00:42:26.833255 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:42:26.833326 kernel: rtc_cmos 00:04: registered as rtc0 Sep 13 00:42:26.833412 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:42:26 UTC (1757724146) Sep 13 00:42:26.833479 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 13 00:42:26.833489 kernel: efifb: probing for efifb Sep 13 00:42:26.833496 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 13 00:42:26.833503 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 13 00:42:26.833510 kernel: efifb: scrolling: redraw Sep 13 00:42:26.833517 kernel: efifb: Truecolor: 
size=8:8:8:8, shift=24:16:8:0 Sep 13 00:42:26.833524 kernel: Console: switching to colour frame buffer device 160x50 Sep 13 00:42:26.833532 kernel: fb0: EFI VGA frame buffer device Sep 13 00:42:26.833539 kernel: pstore: Registered efi as persistent store backend Sep 13 00:42:26.833548 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:42:26.833555 kernel: Segment Routing with IPv6 Sep 13 00:42:26.833562 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:42:26.833572 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:42:26.833579 kernel: Key type dns_resolver registered Sep 13 00:42:26.833587 kernel: IPI shorthand broadcast: enabled Sep 13 00:42:26.833594 kernel: sched_clock: Marking stable (453569357, 122239975)->(621238634, -45429302) Sep 13 00:42:26.833601 kernel: registered taskstats version 1 Sep 13 00:42:26.833608 kernel: Loading compiled-in X.509 certificates Sep 13 00:42:26.833626 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37' Sep 13 00:42:26.833634 kernel: Key type .fscrypt registered Sep 13 00:42:26.833641 kernel: Key type fscrypt-provisioning registered Sep 13 00:42:26.833649 kernel: pstore: Using crash dump compression: deflate Sep 13 00:42:26.833656 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 13 00:42:26.833665 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:42:26.833672 kernel: ima: No architecture policies found Sep 13 00:42:26.833679 kernel: clk: Disabling unused clocks Sep 13 00:42:26.833686 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 13 00:42:26.833693 kernel: Write protecting the kernel read-only data: 28672k Sep 13 00:42:26.833701 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 13 00:42:26.833708 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 13 00:42:26.833715 kernel: Run /init as init process Sep 13 00:42:26.833722 kernel: with arguments: Sep 13 00:42:26.833730 kernel: /init Sep 13 00:42:26.833737 kernel: with environment: Sep 13 00:42:26.833744 kernel: HOME=/ Sep 13 00:42:26.833751 kernel: TERM=linux Sep 13 00:42:26.833758 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:42:26.833767 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:42:26.833776 systemd[1]: Detected virtualization kvm. Sep 13 00:42:26.833784 systemd[1]: Detected architecture x86-64. Sep 13 00:42:26.833793 systemd[1]: Running in initrd. Sep 13 00:42:26.833800 systemd[1]: No hostname configured, using default hostname. Sep 13 00:42:26.833808 systemd[1]: Hostname set to <localhost>. Sep 13 00:42:26.833816 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:42:26.833823 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:42:26.833831 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:42:26.833838 systemd[1]: Reached target cryptsetup.target. Sep 13 00:42:26.833845 systemd[1]: Reached target paths.target. Sep 13 00:42:26.833854 systemd[1]: Reached target slices.target. 
Sep 13 00:42:26.833862 systemd[1]: Reached target swap.target. Sep 13 00:42:26.833869 systemd[1]: Reached target timers.target. Sep 13 00:42:26.833877 systemd[1]: Listening on iscsid.socket. Sep 13 00:42:26.833884 systemd[1]: Listening on iscsiuio.socket. Sep 13 00:42:26.833892 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:42:26.833899 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:42:26.833907 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:42:26.833917 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:42:26.833926 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:42:26.833936 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:42:26.833945 systemd[1]: Reached target sockets.target. Sep 13 00:42:26.833955 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:42:26.833963 systemd[1]: Finished network-cleanup.service. Sep 13 00:42:26.833970 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:42:26.833978 systemd[1]: Starting systemd-journald.service... Sep 13 00:42:26.833986 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:42:26.833995 systemd[1]: Starting systemd-resolved.service... Sep 13 00:42:26.834003 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 00:42:26.834010 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:42:26.834018 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:42:26.834025 kernel: audit: type=1130 audit(1757724146.826:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.834033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:42:26.834043 systemd-journald[197]: Journal started Sep 13 00:42:26.834082 systemd-journald[197]: Runtime Journal (/run/log/journal/9a990410d0e04e30a98a5806c541f016) is 6.0M, max 48.4M, 42.4M free. 
Sep 13 00:42:26.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.835890 systemd[1]: Started systemd-journald.service. Sep 13 00:42:26.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.836292 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 00:42:26.839958 kernel: audit: type=1130 audit(1757724146.835:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.841804 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 00:42:26.842115 systemd-modules-load[198]: Inserted module 'overlay' Sep 13 00:42:26.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.847638 kernel: audit: type=1130 audit(1757724146.840:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.847853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:42:26.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.853688 kernel: audit: type=1130 audit(1757724146.848:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:26.857192 systemd-resolved[199]: Positive Trust Anchors: Sep 13 00:42:26.857205 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:42:26.857231 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:42:26.859357 systemd-resolved[199]: Defaulting to hostname 'linux'. Sep 13 00:42:26.871249 kernel: audit: type=1130 audit(1757724146.866:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.860030 systemd[1]: Started systemd-resolved.service. Sep 13 00:42:26.876447 kernel: audit: type=1130 audit(1757724146.870:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.869506 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:42:26.871291 systemd[1]: Reached target nss-lookup.target. Sep 13 00:42:26.875473 systemd[1]: Starting dracut-cmdline.service... 
Sep 13 00:42:26.883654 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:42:26.885007 dracut-cmdline[216]: dracut-dracut-053 Sep 13 00:42:26.887069 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:42:26.893035 systemd-modules-load[198]: Inserted module 'br_netfilter' Sep 13 00:42:26.894156 kernel: Bridge firewalling registered Sep 13 00:42:26.911655 kernel: SCSI subsystem initialized Sep 13 00:42:26.922654 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:42:26.922680 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:42:26.922694 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:42:26.926741 systemd-modules-load[198]: Inserted module 'dm_multipath' Sep 13 00:42:26.927468 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:42:26.933111 kernel: audit: type=1130 audit(1757724146.927:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.929150 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:42:26.938400 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 00:42:26.942642 kernel: audit: type=1130 audit(1757724146.938:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:26.948644 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:42:26.964649 kernel: iscsi: registered transport (tcp) Sep 13 00:42:26.985651 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:42:26.985669 kernel: QLogic iSCSI HBA Driver Sep 13 00:42:27.011590 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:42:27.016856 kernel: audit: type=1130 audit(1757724147.011:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:27.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:27.013357 systemd[1]: Starting dracut-pre-udev.service... 
Sep 13 00:42:27.058645 kernel: raid6: avx2x4 gen() 30530 MB/s Sep 13 00:42:27.075639 kernel: raid6: avx2x4 xor() 8192 MB/s Sep 13 00:42:27.092637 kernel: raid6: avx2x2 gen() 32324 MB/s Sep 13 00:42:27.109636 kernel: raid6: avx2x2 xor() 19209 MB/s Sep 13 00:42:27.126643 kernel: raid6: avx2x1 gen() 26106 MB/s Sep 13 00:42:27.143638 kernel: raid6: avx2x1 xor() 15170 MB/s Sep 13 00:42:27.160637 kernel: raid6: sse2x4 gen() 14686 MB/s Sep 13 00:42:27.177635 kernel: raid6: sse2x4 xor() 7505 MB/s Sep 13 00:42:27.194642 kernel: raid6: sse2x2 gen() 16255 MB/s Sep 13 00:42:27.211640 kernel: raid6: sse2x2 xor() 9823 MB/s Sep 13 00:42:27.228640 kernel: raid6: sse2x1 gen() 12231 MB/s Sep 13 00:42:27.245966 kernel: raid6: sse2x1 xor() 7765 MB/s Sep 13 00:42:27.245983 kernel: raid6: using algorithm avx2x2 gen() 32324 MB/s Sep 13 00:42:27.245992 kernel: raid6: .... xor() 19209 MB/s, rmw enabled Sep 13 00:42:27.246649 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:42:27.258638 kernel: xor: automatically using best checksumming function avx Sep 13 00:42:27.346664 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:42:27.353662 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:42:27.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:27.354000 audit: BPF prog-id=7 op=LOAD Sep 13 00:42:27.354000 audit: BPF prog-id=8 op=LOAD Sep 13 00:42:27.355512 systemd[1]: Starting systemd-udevd.service... Sep 13 00:42:27.367216 systemd-udevd[400]: Using default interface naming scheme 'v252'. Sep 13 00:42:27.370937 systemd[1]: Started systemd-udevd.service. Sep 13 00:42:27.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:27.371693 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:42:27.381483 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Sep 13 00:42:27.403700 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:42:27.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:27.404574 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:42:27.437347 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:42:27.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:27.462644 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:42:27.468241 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:42:27.468258 kernel: GPT:9289727 != 19775487 Sep 13 00:42:27.468267 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:42:27.468276 kernel: GPT:9289727 != 19775487 Sep 13 00:42:27.468284 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:42:27.468292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:42:27.473636 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:42:27.477641 kernel: libata version 3.00 loaded. Sep 13 00:42:27.484644 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:42:27.515696 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:42:27.515714 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:42:27.515803 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:42:27.515874 kernel: scsi host0: ahci Sep 13 00:42:27.515959 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 13 00:42:27.515969 kernel: AES CTR mode by8 optimization enabled Sep 13 00:42:27.515978 kernel: scsi host1: ahci Sep 13 00:42:27.516062 kernel: scsi host2: ahci Sep 13 00:42:27.516149 kernel: scsi host3: ahci Sep 13 00:42:27.516228 kernel: scsi host4: ahci Sep 13 00:42:27.516306 kernel: scsi host5: ahci Sep 13 00:42:27.516395 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 13 00:42:27.516405 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 13 00:42:27.516414 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (452) Sep 13 00:42:27.516422 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 13 00:42:27.516433 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 13 00:42:27.516442 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 13 00:42:27.516451 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 13 00:42:27.500473 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:42:27.501704 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:42:27.508143 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:42:27.525599 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:42:27.530506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:42:27.532849 systemd[1]: Starting disk-uuid.service... Sep 13 00:42:27.539351 disk-uuid[528]: Primary Header is updated. Sep 13 00:42:27.539351 disk-uuid[528]: Secondary Entries is updated. Sep 13 00:42:27.539351 disk-uuid[528]: Secondary Header is updated. 
Sep 13 00:42:27.542638 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:42:27.546659 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:42:27.549652 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:42:27.824205 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:27.824293 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:27.824305 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:27.825651 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:27.826657 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:42:27.827644 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:27.828656 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:42:27.830072 kernel: ata3.00: applying bridge limits Sep 13 00:42:27.830095 kernel: ata3.00: configured for UDMA/100 Sep 13 00:42:27.830654 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:42:27.862657 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:42:27.879154 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:42:27.879169 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:42:28.549287 disk-uuid[529]: The operation has completed successfully. Sep 13 00:42:28.550338 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:42:28.570030 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:42:28.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.570106 systemd[1]: Finished disk-uuid.service. Sep 13 00:42:28.578144 systemd[1]: Starting verity-setup.service... 
Sep 13 00:42:28.590650 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:42:28.609282 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:42:28.612736 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:42:28.614687 systemd[1]: Finished verity-setup.service. Sep 13 00:42:28.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.670644 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:42:28.670795 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:42:28.672512 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:42:28.674793 systemd[1]: Starting ignition-setup.service... Sep 13 00:42:28.677170 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:42:28.683667 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:42:28.683693 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:42:28.683703 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:42:28.691724 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:42:28.701420 systemd[1]: Finished ignition-setup.service. Sep 13 00:42:28.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.703030 systemd[1]: Starting ignition-fetch-offline.service... 
Sep 13 00:42:28.738156 ignition[648]: Ignition 2.14.0 Sep 13 00:42:28.738167 ignition[648]: Stage: fetch-offline Sep 13 00:42:28.738239 ignition[648]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:28.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.738247 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:42:28.739891 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:42:28.738340 ignition[648]: parsed url from cmdline: "" Sep 13 00:42:28.738342 ignition[648]: no config URL provided Sep 13 00:42:28.738347 ignition[648]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:42:28.738353 ignition[648]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:42:28.738368 ignition[648]: op(1): [started] loading QEMU firmware config module Sep 13 00:42:28.738372 ignition[648]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:42:28.744407 ignition[648]: op(1): [finished] loading QEMU firmware config module Sep 13 00:42:28.748000 audit: BPF prog-id=9 op=LOAD Sep 13 00:42:28.749729 systemd[1]: Starting systemd-networkd.service... Sep 13 00:42:28.785841 ignition[648]: parsing config with SHA512: c30e127ef5f9f9d69e8c4bd787899fabcecd83e357e7980fd3b9005c725c6642821fb53a100a9f9a21dd1cdedfbee88a2b83d5eaf475a3cf70c6e13f93b91eaf Sep 13 00:42:28.793215 unknown[648]: fetched base config from "system" Sep 13 00:42:28.793231 unknown[648]: fetched user config from "qemu" Sep 13 00:42:28.795227 ignition[648]: fetch-offline: fetch-offline passed Sep 13 00:42:28.796063 ignition[648]: Ignition finished successfully Sep 13 00:42:28.797556 systemd[1]: Finished ignition-fetch-offline.service. 
Sep 13 00:42:28.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.802599 systemd-networkd[721]: lo: Link UP Sep 13 00:42:28.802606 systemd-networkd[721]: lo: Gained carrier Sep 13 00:42:28.802972 systemd-networkd[721]: Enumeration completed Sep 13 00:42:28.803160 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:42:28.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.803758 systemd[1]: Started systemd-networkd.service. Sep 13 00:42:28.803992 systemd-networkd[721]: eth0: Link UP Sep 13 00:42:28.803994 systemd-networkd[721]: eth0: Gained carrier Sep 13 00:42:28.808447 systemd[1]: Reached target network.target. Sep 13 00:42:28.809954 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:42:28.813915 systemd[1]: Starting ignition-kargs.service... Sep 13 00:42:28.816089 systemd[1]: Starting iscsiuio.service... Sep 13 00:42:28.820536 systemd[1]: Started iscsiuio.service. Sep 13 00:42:28.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.821566 systemd[1]: Starting iscsid.service... 
Sep 13 00:42:28.822741 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:42:28.825154 iscsid[732]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:42:28.825154 iscsid[732]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:42:28.825154 iscsid[732]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:42:28.825154 iscsid[732]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:42:28.825154 iscsid[732]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:42:28.825154 iscsid[732]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:42:28.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.826445 systemd[1]: Started iscsid.service. Sep 13 00:42:28.830295 ignition[723]: Ignition 2.14.0 Sep 13 00:42:28.828223 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:42:28.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:28.830303 ignition[723]: Stage: kargs Sep 13 00:42:28.834513 systemd[1]: Finished ignition-kargs.service. Sep 13 00:42:28.830422 ignition[723]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:28.840722 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:42:28.830434 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:42:28.843364 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:42:28.831650 ignition[723]: kargs: kargs passed Sep 13 00:42:28.844859 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:42:28.831693 ignition[723]: Ignition finished successfully Sep 13 00:42:28.846463 systemd[1]: Reached target remote-fs.target. Sep 13 00:42:28.852871 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:42:28.854902 systemd[1]: Starting ignition-disks.service... Sep 13 00:42:28.860128 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:42:28.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.863016 ignition[742]: Ignition 2.14.0 Sep 13 00:42:28.863024 ignition[742]: Stage: disks Sep 13 00:42:28.863113 ignition[742]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:28.863120 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:42:28.864316 ignition[742]: disks: disks passed Sep 13 00:42:28.864368 ignition[742]: Ignition finished successfully Sep 13 00:42:28.867995 systemd[1]: Finished ignition-disks.service. Sep 13 00:42:28.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.868491 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:42:28.869959 systemd[1]: Reached target local-fs-pre.target. 
Sep 13 00:42:28.871340 systemd[1]: Reached target local-fs.target. Sep 13 00:42:28.873741 systemd[1]: Reached target sysinit.target. Sep 13 00:42:28.874103 systemd[1]: Reached target basic.target. Sep 13 00:42:28.876380 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:42:28.887192 systemd-resolved[199]: Detected conflict on linux IN A 10.0.0.27 Sep 13 00:42:28.887207 systemd-resolved[199]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Sep 13 00:42:28.890523 systemd-fsck[754]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:42:28.895853 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:42:28.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.898750 systemd[1]: Mounting sysroot.mount... Sep 13 00:42:28.905661 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:42:28.905738 systemd[1]: Mounted sysroot.mount. Sep 13 00:42:28.907401 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:42:28.910064 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:42:28.911795 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:42:28.911841 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:42:28.913310 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:42:28.917516 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:42:28.919362 systemd[1]: Starting initrd-setup-root.service... 
Sep 13 00:42:28.923063 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:42:28.927013 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:42:28.930413 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:42:28.933823 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:42:28.957227 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:42:28.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.958143 systemd[1]: Starting ignition-mount.service... Sep 13 00:42:28.960121 systemd[1]: Starting sysroot-boot.service... Sep 13 00:42:28.963668 bash[805]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:42:28.970380 ignition[806]: INFO : Ignition 2.14.0 Sep 13 00:42:28.970380 ignition[806]: INFO : Stage: mount Sep 13 00:42:28.971903 ignition[806]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:28.971903 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:42:28.971903 ignition[806]: INFO : mount: mount passed Sep 13 00:42:28.971903 ignition[806]: INFO : Ignition finished successfully Sep 13 00:42:28.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:28.971991 systemd[1]: Finished ignition-mount.service. Sep 13 00:42:28.982739 systemd[1]: Finished sysroot-boot.service. Sep 13 00:42:28.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:29.621258 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Sep 13 00:42:29.628364 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Sep 13 00:42:29.628391 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:42:29.628401 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:42:29.629124 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:42:29.632998 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:42:29.634362 systemd[1]: Starting ignition-files.service... Sep 13 00:42:29.647081 ignition[835]: INFO : Ignition 2.14.0 Sep 13 00:42:29.647081 ignition[835]: INFO : Stage: files Sep 13 00:42:29.648754 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:29.648754 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:42:29.648754 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:42:29.652458 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:42:29.652458 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:42:29.652458 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:42:29.652458 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:42:29.652458 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:42:29.652066 unknown[835]: wrote ssh authorized keys file for user: core Sep 13 00:42:29.660306 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:42:29.660306 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:42:29.660306 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing 
file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:42:29.660306 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:42:29.713448 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:42:29.904786 systemd-networkd[721]: eth0: Gained IPv6LL Sep 13 00:42:29.933428 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:42:29.935733 ignition[835]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:42:29.935733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:42:30.294436 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 13 00:42:30.692478 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:42:30.692478 ignition[835]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(e): op(f): 
[started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:42:30.696636 ignition[835]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:42:30.730538 ignition[835]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:42:30.732222 ignition[835]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:42:30.733733 ignition[835]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:42:30.735454 ignition[835]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Sep 13 00:42:30.735454 ignition[835]: INFO : files: files passed Sep 13 00:42:30.737848 ignition[835]: INFO : Ignition finished successfully Sep 13 00:42:30.739639 systemd[1]: Finished ignition-files.service. Sep 13 00:42:30.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.740845 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:42:30.741333 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:42:30.742914 systemd[1]: Starting ignition-quench.service... Sep 13 00:42:30.746259 initrd-setup-root-after-ignition[860]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 13 00:42:30.747731 initrd-setup-root-after-ignition[862]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:42:30.747905 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:42:30.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.750290 systemd[1]: Reached target ignition-complete.target. Sep 13 00:42:30.751500 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:42:30.756179 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:42:30.756282 systemd[1]: Finished ignition-quench.service. Sep 13 00:42:30.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:30.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.763243 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:42:30.763326 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:42:30.764985 systemd[1]: Reached target initrd-fs.target. Sep 13 00:42:30.765535 systemd[1]: Reached target initrd.target. Sep 13 00:42:30.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.767693 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:42:30.768280 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:42:30.777219 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:42:30.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.778170 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:42:30.786496 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:42:30.786978 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:42:30.788421 systemd[1]: Stopped target timers.target. Sep 13 00:42:30.788882 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:42:30.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:30.788970 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:42:30.791279 systemd[1]: Stopped target initrd.target. Sep 13 00:42:30.792973 systemd[1]: Stopped target basic.target. Sep 13 00:42:30.794165 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:42:30.795480 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:42:30.796887 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:42:30.798392 systemd[1]: Stopped target remote-fs.target. Sep 13 00:42:30.799934 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:42:30.800249 systemd[1]: Stopped target sysinit.target. Sep 13 00:42:30.802926 systemd[1]: Stopped target local-fs.target. Sep 13 00:42:30.804169 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:42:30.805492 systemd[1]: Stopped target swap.target. Sep 13 00:42:30.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.806847 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:42:30.806933 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:42:30.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.808272 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:42:30.809516 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:42:30.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.809599 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:42:30.810039 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Sep 13 00:42:30.810121 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:42:30.812480 systemd[1]: Stopped target paths.target. Sep 13 00:42:30.812864 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:42:30.816674 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:42:30.817235 systemd[1]: Stopped target slices.target. Sep 13 00:42:30.819193 systemd[1]: Stopped target sockets.target. Sep 13 00:42:30.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.820964 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:42:30.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.821030 systemd[1]: Closed iscsid.socket. Sep 13 00:42:30.822291 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:42:30.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.822381 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:42:30.823545 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:42:30.823640 systemd[1]: Stopped ignition-files.service. Sep 13 00:42:30.825762 systemd[1]: Stopping ignition-mount.service... Sep 13 00:42:30.826904 systemd[1]: Stopping iscsiuio.service... Sep 13 00:42:30.828104 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:42:30.828225 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:42:30.830745 systemd[1]: Stopping sysroot-boot.service... 
Sep 13 00:42:30.834209 ignition[876]: INFO : Ignition 2.14.0 Sep 13 00:42:30.834209 ignition[876]: INFO : Stage: umount Sep 13 00:42:30.834209 ignition[876]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:30.834209 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:42:30.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.831695 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:42:30.841929 ignition[876]: INFO : umount: umount passed Sep 13 00:42:30.841929 ignition[876]: INFO : Ignition finished successfully Sep 13 00:42:30.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.831916 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:42:30.834904 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:42:30.835004 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:42:30.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.841494 systemd[1]: iscsiuio.service: Deactivated successfully. 
Sep 13 00:42:30.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.841572 systemd[1]: Stopped iscsiuio.service. Sep 13 00:42:30.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.843250 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:42:30.843340 systemd[1]: Stopped ignition-mount.service. Sep 13 00:42:30.844405 systemd[1]: Stopped target network.target. Sep 13 00:42:30.845628 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:42:30.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.845658 systemd[1]: Closed iscsiuio.socket. Sep 13 00:42:30.846984 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:42:30.847017 systemd[1]: Stopped ignition-disks.service. Sep 13 00:42:30.848495 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:42:30.848525 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:42:30.849894 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:42:30.849925 systemd[1]: Stopped ignition-setup.service. Sep 13 00:42:30.850449 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:42:30.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:42:30.852740 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:42:30.855488 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:42:30.855559 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:42:30.862840 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:42:30.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.862941 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:42:30.865683 systemd-networkd[721]: eth0: DHCPv6 lease lost Sep 13 00:42:30.867368 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:42:30.867467 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:42:30.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.869808 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:42:30.873000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:42:30.869925 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:42:30.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.871188 systemd[1]: Stopping network-cleanup.service... Sep 13 00:42:30.872926 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:42:30.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.880000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:42:30.872969 systemd[1]: Stopped parse-ip-for-networkd.service. 
Sep 13 00:42:30.873335 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:42:30.873364 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:42:30.878565 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:42:30.878596 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:42:30.880380 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:42:30.885011 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:42:30.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.888193 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:42:30.888285 systemd[1]: Stopped network-cleanup.service. Sep 13 00:42:30.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.890668 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:42:30.890763 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:42:30.893041 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:42:30.893072 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:42:30.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.893560 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:42:30.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.893582 systemd[1]: Closed systemd-udevd-kernel.socket. 
Sep 13 00:42:30.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.895906 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:42:30.895937 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:42:30.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.896506 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:42:30.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.896534 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:42:30.896820 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:42:30.896846 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:42:30.897695 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:42:30.901434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:42:30.901473 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:42:30.902306 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:42:30.902370 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:42:30.920952 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:42:30.993660 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 13 00:42:30.993754 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:42:30.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.995597 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:42:30.996902 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:42:30.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.996938 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:42:30.999326 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:42:31.007233 systemd[1]: Switching root. Sep 13 00:42:31.008000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:42:31.008000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:42:31.008000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:42:31.010000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:42:31.010000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:42:31.026372 iscsid[732]: iscsid shutting down. Sep 13 00:42:31.027124 systemd-journald[197]: Received SIGTERM from PID 1 (n/a). Sep 13 00:42:31.027180 systemd-journald[197]: Journal stopped Sep 13 00:42:33.900861 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:42:33.900922 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 13 00:42:33.900943 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:42:33.900961 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:42:33.900975 kernel: SELinux: policy capability open_perms=1 Sep 13 00:42:33.900989 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:42:33.901002 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:42:33.901016 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:42:33.901034 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:42:33.901047 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:42:33.901062 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:42:33.901078 systemd[1]: Successfully loaded SELinux policy in 40.538ms. Sep 13 00:42:33.901104 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.295ms. Sep 13 00:42:33.901120 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:42:33.901136 systemd[1]: Detected virtualization kvm. Sep 13 00:42:33.901150 systemd[1]: Detected architecture x86-64. Sep 13 00:42:33.901164 systemd[1]: Detected first boot. Sep 13 00:42:33.901193 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:42:33.901208 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Sep 13 00:42:33.901221 kernel: kauditd_printk_skb: 71 callbacks suppressed Sep 13 00:42:33.901242 kernel: audit: type=1400 audit(1757724151.377:82): avc: denied { associate } for pid=927 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 00:42:33.901257 kernel: audit: type=1300 audit(1757724151.377:82): arch=c000003e syscall=188 success=yes exit=0 a0=c0001916c4 a1=c00002cb40 a2=c00002aa40 a3=32 items=0 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:33.901275 kernel: audit: type=1327 audit(1757724151.377:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:42:33.901290 kernel: audit: type=1400 audit(1757724151.378:83): avc: denied { associate } for pid=927 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 00:42:33.901303 kernel: audit: type=1300 audit(1757724151.378:83): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001917a9 a2=1ed a3=0 items=2 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:33.901316 kernel: audit: type=1307 audit(1757724151.378:83): cwd="/" Sep 13 00:42:33.901328 kernel: audit: type=1302 audit(1757724151.378:83): 
item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:33.901341 kernel: audit: type=1302 audit(1757724151.378:83): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:33.901354 kernel: audit: type=1327 audit(1757724151.378:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:42:33.901365 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:42:33.901375 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:42:33.901386 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:42:33.901398 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:42:33.901413 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:42:33.901426 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 13 00:42:33.901436 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:42:33.901447 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:42:33.901460 systemd[1]: Created slice system-getty.slice. Sep 13 00:42:33.901470 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:42:33.901480 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Sep 13 00:42:33.901491 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:42:33.901504 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:42:33.901515 systemd[1]: Created slice user.slice.
Sep 13 00:42:33.901526 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:42:33.901536 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:42:33.901545 systemd[1]: Set up automount boot.automount.
Sep 13 00:42:33.901555 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:42:33.901565 systemd[1]: Reached target integritysetup.target.
Sep 13 00:42:33.901575 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:42:33.901585 systemd[1]: Reached target remote-fs.target.
Sep 13 00:42:33.901596 systemd[1]: Reached target slices.target.
Sep 13 00:42:33.901608 systemd[1]: Reached target swap.target.
Sep 13 00:42:33.901632 systemd[1]: Reached target torcx.target.
Sep 13 00:42:33.901648 systemd[1]: Reached target veritysetup.target.
Sep 13 00:42:33.901672 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:42:33.901686 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:42:33.901699 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:42:33.901713 kernel: audit: type=1400 audit(1757724153.811:84): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:42:33.901725 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:42:33.901737 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:42:33.901748 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:42:33.901761 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:42:33.901771 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:42:33.901781 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:42:33.901791 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:42:33.901801 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:42:33.901811 systemd[1]: Mounting media.mount...
Sep 13 00:42:33.901822 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:42:33.901831 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:42:33.901841 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:42:33.901853 systemd[1]: Mounting tmp.mount...
Sep 13 00:42:33.901863 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:42:33.901873 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:42:33.901884 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:42:33.901894 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:42:33.901903 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:42:33.901913 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:42:33.901923 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:42:33.901934 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:42:33.901946 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:42:33.901956 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:42:33.901966 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 00:42:33.901976 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 00:42:33.901988 systemd[1]: Starting systemd-journald.service...
Sep 13 00:42:33.902013 kernel: fuse: init (API version 7.34)
Sep 13 00:42:33.902027 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:42:33.902040 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:42:33.902054 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:42:33.902070 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:42:33.902084 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:42:33.902097 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:42:33.902109 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:42:33.902122 systemd-journald[1030]: Journal started
Sep 13 00:42:33.902162 systemd-journald[1030]: Runtime Journal (/run/log/journal/9a990410d0e04e30a98a5806c541f016) is 6.0M, max 48.4M, 42.4M free.
Sep 13 00:42:33.811000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:42:33.811000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 13 00:42:33.895000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:42:33.895000 audit[1030]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe5e8e6700 a2=4000 a3=7ffe5e8e679c items=0 ppid=1 pid=1030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:42:33.895000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:42:33.904764 systemd[1]: Started systemd-journald.service.
Sep 13 00:42:33.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.906170 systemd[1]: Mounted media.mount.
Sep 13 00:42:33.907021 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:42:33.907644 kernel: loop: module loaded
Sep 13 00:42:33.908442 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:42:33.909342 systemd[1]: Mounted tmp.mount.
Sep 13 00:42:33.910381 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 00:42:33.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.911585 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:42:33.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.912585 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:42:33.912764 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:42:33.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.913803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:42:33.913973 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:42:33.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.915051 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:42:33.915284 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:42:33.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.916392 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:42:33.916590 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:42:33.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.917790 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:42:33.917970 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:42:33.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.919060 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:42:33.919316 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:42:33.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.920784 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:42:33.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.922129 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:42:33.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.923477 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:42:33.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.924799 systemd[1]: Reached target network-pre.target.
Sep 13 00:42:33.926874 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 00:42:33.928730 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 00:42:33.929551 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:42:33.931371 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 00:42:33.933500 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 00:42:33.934614 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:42:33.935726 systemd[1]: Starting systemd-random-seed.service...
Sep 13 00:42:33.936753 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:42:33.937535 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:42:33.939467 systemd-journald[1030]: Time spent on flushing to /var/log/journal/9a990410d0e04e30a98a5806c541f016 is 23.771ms for 1099 entries.
Sep 13 00:42:33.939467 systemd-journald[1030]: System Journal (/var/log/journal/9a990410d0e04e30a98a5806c541f016) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:42:33.982428 systemd-journald[1030]: Received client request to flush runtime journal.
Sep 13 00:42:33.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.939219 systemd[1]: Starting systemd-sysusers.service...
Sep 13 00:42:33.943105 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 00:42:33.944769 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 00:42:33.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.946768 systemd[1]: Finished systemd-random-seed.service.
Sep 13 00:42:33.948028 systemd[1]: Reached target first-boot-complete.target.
Sep 13 00:42:33.955323 systemd[1]: Finished systemd-sysusers.service.
Sep 13 00:42:33.956727 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:42:33.959021 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:42:33.972265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:42:33.981181 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:42:33.983190 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 00:42:33.985012 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 00:42:33.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:33.988411 udevadm[1068]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:42:34.342113 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 00:42:34.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:34.344079 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:42:34.359288 systemd-udevd[1071]: Using default interface naming scheme 'v252'.
Sep 13 00:42:34.370973 systemd[1]: Started systemd-udevd.service.
Sep 13 00:42:34.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:34.374286 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:42:34.378678 systemd[1]: Starting systemd-userdbd.service...
Sep 13 00:42:34.411781 systemd[1]: Started systemd-userdbd.service.
Sep 13 00:42:34.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:34.418886 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:42:34.423372 systemd[1]: Found device dev-ttyS0.device.
Sep 13 00:42:34.444649 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:42:34.450644 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:42:34.461077 systemd-networkd[1082]: lo: Link UP
Sep 13 00:42:34.461084 systemd-networkd[1082]: lo: Gained carrier
Sep 13 00:42:34.461494 systemd-networkd[1082]: Enumeration completed
Sep 13 00:42:34.461606 systemd-networkd[1082]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:42:34.461653 systemd[1]: Started systemd-networkd.service.
Sep 13 00:42:34.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:34.463483 systemd-networkd[1082]: eth0: Link UP
Sep 13 00:42:34.463489 systemd-networkd[1082]: eth0: Gained carrier
Sep 13 00:42:34.475775 systemd-networkd[1082]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:42:34.460000 audit[1079]: AVC avc: denied { confidentiality } for pid=1079 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 00:42:34.460000 audit[1079]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55943d8d1d10 a1=338ec a2=7fdfd6bbcbc5 a3=5 items=110 ppid=1071 pid=1079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:42:34.460000 audit: CWD cwd="/"
Sep 13 00:42:34.460000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=1 name=(null) inode=11862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=2 name=(null) inode=11862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=3 name=(null) inode=11863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=4 name=(null) inode=11862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=5 name=(null) inode=11864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=6 name=(null) inode=11862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=7 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=8 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=9 name=(null) inode=11866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=10 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=11 name=(null) inode=11867 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=12 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=13 name=(null) inode=11868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=14 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=15 name=(null) inode=11869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=16 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=17 name=(null) inode=11870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=18 name=(null) inode=11862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=19 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=20 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=21 name=(null) inode=11872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=22 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=23 name=(null) inode=11873 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=24 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=25 name=(null) inode=11874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=26 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=27 name=(null) inode=11875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=28 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=29 name=(null) inode=11876 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=30 name=(null) inode=11862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=31 name=(null) inode=11877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=32 name=(null) inode=11877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=33 name=(null) inode=11878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=34 name=(null) inode=11877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=35 name=(null) inode=11879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=36 name=(null) inode=11877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=37 name=(null) inode=11880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=38 name=(null) inode=11877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=39 name=(null) inode=11881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=40 name=(null) inode=11877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=41 name=(null) inode=11882 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=42 name=(null) inode=11862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=43 name=(null) inode=11883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=44 name=(null) inode=11883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=45 name=(null) inode=11884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=46 name=(null) inode=11883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=47 name=(null) inode=11885 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=48 name=(null) inode=11883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=49 name=(null) inode=11886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=50 name=(null) inode=11883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=51 name=(null) inode=11887 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=52 name=(null) inode=11883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=53 name=(null) inode=11888 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=55 name=(null) inode=11889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=56 name=(null) inode=11889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=57 name=(null) inode=11890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=58 name=(null) inode=11889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=59 name=(null) inode=11891 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=60 name=(null) inode=11889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=61 name=(null) inode=11892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=62 name=(null) inode=11892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=63 name=(null) inode=11893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=64 name=(null) inode=11892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=65 name=(null) inode=11894 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=66 name=(null) inode=11892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=67 name=(null) inode=11895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=68 name=(null) inode=11892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=69 name=(null) inode=11896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=70 name=(null) inode=11892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=71 name=(null) inode=11897 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=72 name=(null) inode=11889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=73 name=(null) inode=11898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=74 name=(null) inode=11898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=75 name=(null) inode=11899 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=76 name=(null) inode=11898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=77 name=(null) inode=11900 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=78 name=(null) inode=11898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=79 name=(null) inode=11901 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=80 name=(null) inode=11898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=81 name=(null) inode=11902 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=82 name=(null) inode=11898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=83 name=(null) inode=11903 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=84 name=(null) inode=11889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=85 name=(null) inode=11904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:34.460000 audit: PATH item=86 name=(null) inode=11904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=87 name=(null) inode=11905 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=88 name=(null) inode=11904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=89 name=(null) inode=11906 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=90 name=(null) inode=11904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=91 name=(null) inode=11907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=92 name=(null) inode=11904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=93 name=(null) inode=11908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=94 name=(null) inode=11904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=95 name=(null) inode=11909 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=96 name=(null) inode=11889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=97 name=(null) inode=11910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=98 name=(null) inode=11910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=99 name=(null) inode=11911 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=100 name=(null) inode=11910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=101 name=(null) inode=11912 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=102 name=(null) inode=11910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=103 name=(null) inode=11913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=104 name=(null) inode=11910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=105 name=(null) inode=11914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=106 name=(null) inode=11910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=107 name=(null) inode=11915 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PATH item=109 name=(null) inode=11916 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:34.460000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:42:34.494667 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:42:34.498101 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 13 00:42:34.500230 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:42:34.500371 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:42:34.500507 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:42:34.509637 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:42:34.546055 kernel: kvm: Nested Virtualization enabled Sep 13 00:42:34.546126 kernel: SVM: kvm: Nested Paging enabled Sep 13 00:42:34.546164 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 13 00:42:34.546179 kernel: SVM: Virtual GIF supported Sep 13 
00:42:34.561638 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:42:34.586044 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:42:34.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.588135 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:42:34.594589 lvm[1109]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:42:34.617459 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:42:34.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.618811 systemd[1]: Reached target cryptsetup.target. Sep 13 00:42:34.621378 systemd[1]: Starting lvm2-activation.service... Sep 13 00:42:34.624366 lvm[1111]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:42:34.645508 systemd[1]: Finished lvm2-activation.service. Sep 13 00:42:34.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.646702 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:42:34.647777 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:42:34.647812 systemd[1]: Reached target local-fs.target. Sep 13 00:42:34.648799 systemd[1]: Reached target machines.target. Sep 13 00:42:34.651257 systemd[1]: Starting ldconfig.service... Sep 13 00:42:34.652641 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:42:34.652702 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:42:34.653948 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:42:34.656373 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:42:34.659441 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:42:34.661846 systemd[1]: Starting systemd-sysext.service... Sep 13 00:42:34.663272 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1114 (bootctl) Sep 13 00:42:34.664415 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:42:34.668955 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:42:34.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.672006 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:42:34.676741 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:42:34.676932 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:42:34.686650 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:42:34.705550 systemd-fsck[1123]: fsck.fat 4.2 (2021-01-31) Sep 13 00:42:34.705550 systemd-fsck[1123]: /dev/vda1: 791 files, 120781/258078 clusters Sep 13 00:42:34.707551 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:42:34.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.710796 systemd[1]: Mounting boot.mount... 
Sep 13 00:42:34.869786 systemd[1]: Mounted boot.mount. Sep 13 00:42:34.879636 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:42:34.880330 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:42:34.881516 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:42:34.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.882920 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:42:34.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.894634 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:42:34.897820 (sd-sysext)[1135]: Using extensions 'kubernetes'. Sep 13 00:42:34.898116 (sd-sysext)[1135]: Merged extensions into '/usr'. Sep 13 00:42:34.911096 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:42:34.912314 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:42:34.913364 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:42:34.914648 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:42:34.916609 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:42:34.918955 systemd[1]: Starting modprobe@loop.service... Sep 13 00:42:34.919815 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:42:34.919928 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 13 00:42:34.920032 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:42:34.922495 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:42:34.923967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:42:34.924108 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:42:34.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.925401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:42:34.925521 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:42:34.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.926804 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:42:34.926923 systemd[1]: Finished modprobe@loop.service. Sep 13 00:42:34.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:34.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.928077 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:42:34.928173 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:42:34.929043 systemd[1]: Finished systemd-sysext.service. Sep 13 00:42:34.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.931646 ldconfig[1113]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:42:34.931043 systemd[1]: Starting ensure-sysext.service... Sep 13 00:42:34.932910 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:42:34.937471 systemd[1]: Finished ldconfig.service. Sep 13 00:42:34.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:34.938461 systemd[1]: Reloading. Sep 13 00:42:34.940832 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:42:34.941763 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:42:34.943058 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 13 00:42:34.981782 /usr/lib/systemd/system-generators/torcx-generator[1170]: time="2025-09-13T00:42:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:42:34.982072 /usr/lib/systemd/system-generators/torcx-generator[1170]: time="2025-09-13T00:42:34Z" level=info msg="torcx already run" Sep 13 00:42:35.046500 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:42:35.046517 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:42:35.062931 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:42:35.116732 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:42:35.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.119892 systemd[1]: Starting audit-rules.service... Sep 13 00:42:35.121526 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:42:35.123245 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:42:35.125478 systemd[1]: Starting systemd-resolved.service... Sep 13 00:42:35.128868 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:42:35.130526 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:42:35.131829 systemd[1]: Finished clean-ca-certificates.service. 
Sep 13 00:42:35.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.134000 audit[1232]: SYSTEM_BOOT pid=1232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.137163 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:42:35.139029 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:42:35.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.141506 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:42:35.142702 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:42:35.142879 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:42:35.143940 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:42:35.146210 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:42:35.147861 systemd[1]: Starting modprobe@loop.service... Sep 13 00:42:35.149008 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:42:35.149111 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:42:35.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.152397 systemd[1]: Starting systemd-update-done.service... Sep 13 00:42:35.153333 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:42:35.153427 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:42:35.154469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:42:35.154596 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:42:35.155799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:42:35.155922 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:42:35.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.157199 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 13 00:42:35.157423 systemd[1]: Finished modprobe@loop.service. Sep 13 00:42:35.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.158413 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:42:35.158495 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:42:35.160190 systemd[1]: Finished systemd-update-done.service. Sep 13 00:42:35.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.161210 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:42:35.161383 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:42:35.162323 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:42:35.163921 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:42:35.166604 systemd[1]: Starting modprobe@loop.service... Sep 13 00:42:35.167788 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:42:35.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:35.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.167890 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:42:35.167965 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:42:35.168023 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:42:35.168944 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:42:35.169067 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:42:35.170164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:42:35.170273 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:42:35.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:35.173344 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:42:35.173471 systemd[1]: Finished modprobe@loop.service. 
Sep 13 00:42:35.173000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:42:35.173000 audit[1257]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec5634d70 a2=420 a3=0 items=0 ppid=1219 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:35.173000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:42:35.174411 augenrules[1257]: No rules Sep 13 00:42:35.174831 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:42:35.174909 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:42:35.177099 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:42:35.177307 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:42:35.178248 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:42:35.179935 systemd[1]: Starting modprobe@drm.service... Sep 13 00:42:35.181564 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:42:35.184651 systemd[1]: Starting modprobe@loop.service... Sep 13 00:42:35.185653 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:42:35.185754 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:42:35.186839 systemd[1]: Starting systemd-networkd-wait-online.service... 
Sep 13 00:42:35.188071 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:42:35.188170 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:42:35.189443 systemd[1]: Finished audit-rules.service.
Sep 13 00:42:35.190600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:42:35.190731 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:42:35.192082 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:42:35.192248 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:42:35.193733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:42:35.193895 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:42:35.195328 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:42:35.195601 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:42:35.196894 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:42:35.196970 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:42:35.198334 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:42:35.206742 systemd[1]: Started systemd-timesyncd.service.
Sep 13 00:42:35.207742 systemd-resolved[1228]: Positive Trust Anchors:
Sep 13 00:42:35.207754 systemd-resolved[1228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:42:35.207782 systemd-resolved[1228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:42:35.207900 systemd[1]: Reached target time-set.target.
Sep 13 00:42:35.208250 systemd-timesyncd[1230]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:42:35.208282 systemd-timesyncd[1230]: Initial clock synchronization to Sat 2025-09-13 00:42:35.255605 UTC.
Sep 13 00:42:35.215037 systemd-resolved[1228]: Defaulting to hostname 'linux'.
Sep 13 00:42:35.216350 systemd[1]: Started systemd-resolved.service.
Sep 13 00:42:35.217258 systemd[1]: Reached target network.target.
Sep 13 00:42:35.218033 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:42:35.218880 systemd[1]: Reached target sysinit.target.
Sep 13 00:42:35.219713 systemd[1]: Started motdgen.path.
Sep 13 00:42:35.220417 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:42:35.221611 systemd[1]: Started logrotate.timer.
Sep 13 00:42:35.222381 systemd[1]: Started mdadm.timer.
Sep 13 00:42:35.223110 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:42:35.223989 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:42:35.224017 systemd[1]: Reached target paths.target.
Sep 13 00:42:35.224811 systemd[1]: Reached target timers.target.
Sep 13 00:42:35.225785 systemd[1]: Listening on dbus.socket.
Sep 13 00:42:35.227537 systemd[1]: Starting docker.socket...
Sep 13 00:42:35.228998 systemd[1]: Listening on sshd.socket.
Sep 13 00:42:35.229805 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:42:35.230034 systemd[1]: Listening on docker.socket.
Sep 13 00:42:35.230802 systemd[1]: Reached target sockets.target.
Sep 13 00:42:35.231561 systemd[1]: Reached target basic.target.
Sep 13 00:42:35.232408 systemd[1]: System is tainted: cgroupsv1
Sep 13 00:42:35.232443 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:42:35.232460 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:42:35.233318 systemd[1]: Starting containerd.service...
Sep 13 00:42:35.234901 systemd[1]: Starting dbus.service...
Sep 13 00:42:35.236418 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:42:35.238026 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:42:35.238954 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:42:35.240658 jq[1281]: false
Sep 13 00:42:35.239875 systemd[1]: Starting motdgen.service...
Sep 13 00:42:35.242006 systemd[1]: Starting prepare-helm.service...
Sep 13 00:42:35.243778 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:42:35.245753 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:42:35.249203 extend-filesystems[1282]: Found loop1
Sep 13 00:42:35.250199 extend-filesystems[1282]: Found sr0
Sep 13 00:42:35.250199 extend-filesystems[1282]: Found vda
Sep 13 00:42:35.250199 extend-filesystems[1282]: Found vda1
Sep 13 00:42:35.250199 extend-filesystems[1282]: Found vda2
Sep 13 00:42:35.250199 extend-filesystems[1282]: Found vda3
Sep 13 00:42:35.256008 extend-filesystems[1282]: Found usr
Sep 13 00:42:35.256008 extend-filesystems[1282]: Found vda4
Sep 13 00:42:35.256008 extend-filesystems[1282]: Found vda6
Sep 13 00:42:35.256008 extend-filesystems[1282]: Found vda7
Sep 13 00:42:35.256008 extend-filesystems[1282]: Found vda9
Sep 13 00:42:35.256008 extend-filesystems[1282]: Checking size of /dev/vda9
Sep 13 00:42:35.252053 systemd[1]: Starting systemd-logind.service...
Sep 13 00:42:35.258463 dbus-daemon[1280]: [system] SELinux support is enabled
Sep 13 00:42:35.255261 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:42:35.284610 extend-filesystems[1282]: Resized partition /dev/vda9
Sep 13 00:42:35.286002 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:42:35.256022 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:42:35.286227 extend-filesystems[1312]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:42:35.287522 jq[1305]: true
Sep 13 00:42:35.257702 systemd[1]: Starting update-engine.service...
Sep 13 00:42:35.260873 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:42:35.288218 tar[1313]: linux-amd64/helm
Sep 13 00:42:35.263164 systemd[1]: Started dbus.service.
Sep 13 00:42:35.288557 jq[1314]: true
Sep 13 00:42:35.266190 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:42:35.269823 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:42:35.270096 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:42:35.270317 systemd[1]: Finished motdgen.service.
Sep 13 00:42:35.271932 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:42:35.272302 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:42:35.276103 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:42:35.276122 systemd[1]: Reached target system-config.target.
Sep 13 00:42:35.277334 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:42:35.277346 systemd[1]: Reached target user-config.target.
Sep 13 00:42:35.299277 update_engine[1304]: I0913 00:42:35.298950 1304 main.cc:92] Flatcar Update Engine starting
Sep 13 00:42:35.341465 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:42:35.300905 systemd[1]: Started update-engine.service.
Sep 13 00:42:35.341580 env[1315]: time="2025-09-13T00:42:35.309669636Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:42:35.341580 env[1315]: time="2025-09-13T00:42:35.330978718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:42:35.341580 env[1315]: time="2025-09-13T00:42:35.341550704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:35.341822 update_engine[1304]: I0913 00:42:35.300974 1304 update_check_scheduler.cc:74] Next update check in 6m2s
Sep 13 00:42:35.303726 systemd[1]: Started locksmithd.service.
Sep 13 00:42:35.342935 env[1315]: time="2025-09-13T00:42:35.342903541Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:42:35.342935 env[1315]: time="2025-09-13T00:42:35.342931594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:35.343162 env[1315]: time="2025-09-13T00:42:35.343127802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:42:35.343162 env[1315]: time="2025-09-13T00:42:35.343153680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:35.343219 env[1315]: time="2025-09-13T00:42:35.343164651Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:42:35.343219 env[1315]: time="2025-09-13T00:42:35.343174329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:35.343258 env[1315]: time="2025-09-13T00:42:35.343233350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:35.343679 env[1315]: time="2025-09-13T00:42:35.343664448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:35.343709 extend-filesystems[1312]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:42:35.343709 extend-filesystems[1312]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:42:35.343709 extend-filesystems[1312]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:42:35.355011 extend-filesystems[1282]: Resized filesystem in /dev/vda9
Sep 13 00:42:35.356019 bash[1339]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:42:35.356097 env[1315]: time="2025-09-13T00:42:35.343795283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:42:35.356097 env[1315]: time="2025-09-13T00:42:35.343808007Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:42:35.356097 env[1315]: time="2025-09-13T00:42:35.344034422Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:42:35.356097 env[1315]: time="2025-09-13T00:42:35.344046474Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:42:35.344406 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:42:35.344610 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:42:35.345722 systemd-logind[1299]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:42:35.345738 systemd-logind[1299]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:42:35.348799 systemd-logind[1299]: New seat seat0.
Sep 13 00:42:35.350415 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:42:35.351886 systemd[1]: Started systemd-logind.service.
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356660530Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356701317Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356713009Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356744347Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356757542Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356775296Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356786126Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356797587Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356809229Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356821252Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356832042Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356842622Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356929435Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:42:35.359609 env[1315]: time="2025-09-13T00:42:35.356990439Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357328072Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357352278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357363559Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357409154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357420866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357431326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357441785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357452195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357463827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357474016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357483534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357494284Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357583912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357600914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360008 env[1315]: time="2025-09-13T00:42:35.357611694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360373 env[1315]: time="2025-09-13T00:42:35.357635970Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:42:35.360373 env[1315]: time="2025-09-13T00:42:35.357649435Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 00:42:35.360373 env[1315]: time="2025-09-13T00:42:35.357658792Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:42:35.360373 env[1315]: time="2025-09-13T00:42:35.357692566Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 00:42:35.360373 env[1315]: time="2025-09-13T00:42:35.357729665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:42:35.360368 systemd[1]: Started containerd.service.
Sep 13 00:42:35.360539 env[1315]: time="2025-09-13T00:42:35.357914011Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:42:35.360539 env[1315]: time="2025-09-13T00:42:35.358111421Z" level=info msg="Connect containerd service"
Sep 13 00:42:35.360539 env[1315]: time="2025-09-13T00:42:35.359246029Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:42:35.360539 env[1315]: time="2025-09-13T00:42:35.359938237Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:42:35.360539 env[1315]: time="2025-09-13T00:42:35.360204196Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:42:35.360539 env[1315]: time="2025-09-13T00:42:35.360246686Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:42:35.362840 env[1315]: time="2025-09-13T00:42:35.360970553Z" level=info msg="Start subscribing containerd event"
Sep 13 00:42:35.362840 env[1315]: time="2025-09-13T00:42:35.361069639Z" level=info msg="Start recovering state"
Sep 13 00:42:35.362840 env[1315]: time="2025-09-13T00:42:35.361164868Z" level=info msg="Start event monitor"
Sep 13 00:42:35.362840 env[1315]: time="2025-09-13T00:42:35.361204251Z" level=info msg="Start snapshots syncer"
Sep 13 00:42:35.362840 env[1315]: time="2025-09-13T00:42:35.361213509Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:42:35.362840 env[1315]: time="2025-09-13T00:42:35.361220211Z" level=info msg="Start streaming server"
Sep 13 00:42:35.362840 env[1315]: time="2025-09-13T00:42:35.361688600Z" level=info msg="containerd successfully booted in 0.052637s"
Sep 13 00:42:35.373084 locksmithd[1341]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:42:35.674859 tar[1313]: linux-amd64/LICENSE
Sep 13 00:42:35.674859 tar[1313]: linux-amd64/README.md
Sep 13 00:42:35.678863 systemd[1]: Finished prepare-helm.service.
Sep 13 00:42:35.726207 sshd_keygen[1306]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:42:35.743482 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:42:35.745638 systemd[1]: Starting issuegen.service...
Sep 13 00:42:35.750304 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:42:35.750486 systemd[1]: Finished issuegen.service.
Sep 13 00:42:35.752372 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:42:35.756902 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:42:35.758867 systemd[1]: Started getty@tty1.service.
Sep 13 00:42:35.760565 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 00:42:35.761577 systemd[1]: Reached target getty.target.
Sep 13 00:42:35.920796 systemd-networkd[1082]: eth0: Gained IPv6LL
Sep 13 00:42:35.922499 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:42:35.923808 systemd[1]: Reached target network-online.target.
Sep 13 00:42:35.926017 systemd[1]: Starting kubelet.service...
Sep 13 00:42:36.586049 systemd[1]: Started kubelet.service.
Sep 13 00:42:36.587161 systemd[1]: Reached target multi-user.target.
Sep 13 00:42:36.589263 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:42:36.594758 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:42:36.594969 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:42:36.597495 systemd[1]: Startup finished in 5.005s (kernel) + 5.526s (userspace) = 10.532s.
Sep 13 00:42:36.968605 kubelet[1383]: E0913 00:42:36.968497 1383 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:42:36.970274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:42:36.970435 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:42:39.136560 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:42:39.137537 systemd[1]: Started sshd@0-10.0.0.27:22-10.0.0.1:50238.service.
Sep 13 00:42:39.172471 sshd[1393]: Accepted publickey for core from 10.0.0.1 port 50238 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:42:39.173865 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:39.182764 systemd-logind[1299]: New session 1 of user core.
Sep 13 00:42:39.183524 systemd[1]: Created slice user-500.slice.
Sep 13 00:42:39.184460 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:42:39.192035 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:42:39.193152 systemd[1]: Starting user@500.service...
Sep 13 00:42:39.197007 (systemd)[1397]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:39.290306 systemd[1397]: Queued start job for default target default.target.
Sep 13 00:42:39.290595 systemd[1397]: Reached target paths.target.
Sep 13 00:42:39.290615 systemd[1397]: Reached target sockets.target.
Sep 13 00:42:39.290648 systemd[1397]: Reached target timers.target.
Sep 13 00:42:39.290666 systemd[1397]: Reached target basic.target.
Sep 13 00:42:39.290710 systemd[1397]: Reached target default.target.
Sep 13 00:42:39.290736 systemd[1397]: Startup finished in 87ms.
Sep 13 00:42:39.290903 systemd[1]: Started user@500.service.
Sep 13 00:42:39.292161 systemd[1]: Started session-1.scope.
Sep 13 00:42:39.342363 systemd[1]: Started sshd@1-10.0.0.27:22-10.0.0.1:50250.service.
Sep 13 00:42:39.376690 sshd[1407]: Accepted publickey for core from 10.0.0.1 port 50250 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:42:39.377802 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:39.381453 systemd-logind[1299]: New session 2 of user core.
Sep 13 00:42:39.382185 systemd[1]: Started session-2.scope.
Sep 13 00:42:39.435146 sshd[1407]: pam_unix(sshd:session): session closed for user core
Sep 13 00:42:39.438317 systemd[1]: Started sshd@2-10.0.0.27:22-10.0.0.1:50252.service.
Sep 13 00:42:39.438836 systemd[1]: sshd@1-10.0.0.27:22-10.0.0.1:50250.service: Deactivated successfully.
Sep 13 00:42:39.440102 systemd-logind[1299]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:42:39.440163 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:42:39.441455 systemd-logind[1299]: Removed session 2.
Sep 13 00:42:39.472478 sshd[1413]: Accepted publickey for core from 10.0.0.1 port 50252 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:42:39.473788 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:39.477508 systemd-logind[1299]: New session 3 of user core.
Sep 13 00:42:39.478243 systemd[1]: Started session-3.scope.
Sep 13 00:42:39.527844 sshd[1413]: pam_unix(sshd:session): session closed for user core
Sep 13 00:42:39.529950 systemd[1]: Started sshd@3-10.0.0.27:22-10.0.0.1:50264.service.
Sep 13 00:42:39.530753 systemd[1]: sshd@2-10.0.0.27:22-10.0.0.1:50252.service: Deactivated successfully.
Sep 13 00:42:39.531465 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:42:39.531490 systemd-logind[1299]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:42:39.532320 systemd-logind[1299]: Removed session 3.
Sep 13 00:42:39.561733 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 50264 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:42:39.562645 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:39.565899 systemd-logind[1299]: New session 4 of user core.
Sep 13 00:42:39.566647 systemd[1]: Started session-4.scope.
Sep 13 00:42:39.617982 sshd[1419]: pam_unix(sshd:session): session closed for user core
Sep 13 00:42:39.620150 systemd[1]: Started sshd@4-10.0.0.27:22-10.0.0.1:50266.service.
Sep 13 00:42:39.620517 systemd[1]: sshd@3-10.0.0.27:22-10.0.0.1:50264.service: Deactivated successfully.
Sep 13 00:42:39.621439 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:42:39.621864 systemd-logind[1299]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:42:39.622684 systemd-logind[1299]: Removed session 4.
Sep 13 00:42:39.652374 sshd[1426]: Accepted publickey for core from 10.0.0.1 port 50266 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:42:39.653328 sshd[1426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:39.656472 systemd-logind[1299]: New session 5 of user core.
Sep 13 00:42:39.657121 systemd[1]: Started session-5.scope.
Sep 13 00:42:39.711525 sudo[1432]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 13 00:42:39.711733 sudo[1432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:42:39.717854 dbus-daemon[1280]: Ѝ*X(V: received setenforce notice (enforcing=1285780080)
Sep 13 00:42:39.720016 sudo[1432]: pam_unix(sudo:session): session closed for user root
Sep 13 00:42:39.721912 sshd[1426]: pam_unix(sshd:session): session closed for user core
Sep 13 00:42:39.724247 systemd[1]: Started sshd@5-10.0.0.27:22-10.0.0.1:50278.service.
Sep 13 00:42:39.724763 systemd[1]: sshd@4-10.0.0.27:22-10.0.0.1:50266.service: Deactivated successfully.
Sep 13 00:42:39.725379 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:42:39.726009 systemd-logind[1299]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:42:39.726816 systemd-logind[1299]: Removed session 5.
Sep 13 00:42:39.756458 sshd[1434]: Accepted publickey for core from 10.0.0.1 port 50278 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:42:39.757599 sshd[1434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:39.761116 systemd-logind[1299]: New session 6 of user core.
Sep 13 00:42:39.761881 systemd[1]: Started session-6.scope.
Sep 13 00:42:39.815465 sudo[1441]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 13 00:42:39.815668 sudo[1441]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:42:39.818160 sudo[1441]: pam_unix(sudo:session): session closed for user root
Sep 13 00:42:39.822085 sudo[1440]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 13 00:42:39.822277 sudo[1440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:42:39.829654 systemd[1]: Stopping audit-rules.service...
Sep 13 00:42:39.830000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Sep 13 00:42:39.830955 auditctl[1444]: No rules
Sep 13 00:42:39.831281 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:42:39.831499 systemd[1]: Stopped audit-rules.service.
Sep 13 00:42:39.832860 systemd[1]: Starting audit-rules.service...
Sep 13 00:42:39.833565 kernel: kauditd_printk_skb: 180 callbacks suppressed
Sep 13 00:42:39.833603 kernel: audit: type=1305 audit(1757724159.830:148): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Sep 13 00:42:39.833620 kernel: audit: type=1300 audit(1757724159.830:148): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff2af12aa0 a2=420 a3=0 items=0 ppid=1 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:42:39.830000 audit[1444]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff2af12aa0 a2=420 a3=0 items=0 ppid=1 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:42:39.838104 kernel: audit: type=1327 audit(1757724159.830:148): proctitle=2F7362696E2F617564697463746C002D44
Sep 13 00:42:39.830000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Sep 13 00:42:39.838895 kernel: audit: type=1131 audit(1757724159.830:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.850936 augenrules[1462]: No rules
Sep 13 00:42:39.851552 systemd[1]: Finished audit-rules.service.
Sep 13 00:42:39.852355 sudo[1440]: pam_unix(sudo:session): session closed for user root Sep 13 00:42:39.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.853537 sshd[1434]: pam_unix(sshd:session): session closed for user core Sep 13 00:42:39.851000 audit[1440]: USER_END pid=1440 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.858939 kernel: audit: type=1130 audit(1757724159.851:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.859002 kernel: audit: type=1106 audit(1757724159.851:151): pid=1440 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.859023 kernel: audit: type=1104 audit(1757724159.851:152): pid=1440 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.851000 audit[1440]: CRED_DISP pid=1440 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.859165 systemd[1]: Started sshd@6-10.0.0.27:22-10.0.0.1:50284.service. Sep 13 00:42:39.859551 systemd[1]: sshd@5-10.0.0.27:22-10.0.0.1:50278.service: Deactivated successfully. 
Sep 13 00:42:39.860451 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:42:39.861196 systemd-logind[1299]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:42:39.856000 audit[1434]: USER_END pid=1434 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:42:39.862200 systemd-logind[1299]: Removed session 6. Sep 13 00:42:39.866154 kernel: audit: type=1106 audit(1757724159.856:153): pid=1434 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:42:39.866187 kernel: audit: type=1104 audit(1757724159.856:154): pid=1434 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:42:39.856000 audit[1434]: CRED_DISP pid=1434 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:42:39.869462 kernel: audit: type=1130 audit(1757724159.858:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.27:22-10.0.0.1:50284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.27:22-10.0.0.1:50284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:42:39.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.27:22-10.0.0.1:50278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.896000 audit[1468]: USER_ACCT pid=1468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:42:39.896914 sshd[1468]: Accepted publickey for core from 10.0.0.1 port 50284 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:42:39.896000 audit[1468]: CRED_ACQ pid=1468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:42:39.896000 audit[1468]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff87b8fe40 a2=3 a3=0 items=0 ppid=1 pid=1468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:39.896000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:42:39.897768 sshd[1468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:42:39.901135 systemd-logind[1299]: New session 7 of user core. Sep 13 00:42:39.901803 systemd[1]: Started session-7.scope. 
Sep 13 00:42:39.904000 audit[1468]: USER_START pid=1468 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:42:39.905000 audit[1472]: CRED_ACQ pid=1472 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:42:39.951000 audit[1473]: USER_ACCT pid=1473 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.952845 sudo[1473]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:42:39.951000 audit[1473]: CRED_REFR pid=1473 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.953031 sudo[1473]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:42:39.952000 audit[1473]: USER_START pid=1473 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.975147 systemd[1]: Starting docker.service... 
Sep 13 00:42:40.030208 env[1486]: time="2025-09-13T00:42:40.030157843Z" level=info msg="Starting up" Sep 13 00:42:40.031505 env[1486]: time="2025-09-13T00:42:40.031477887Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:42:40.031505 env[1486]: time="2025-09-13T00:42:40.031494516Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:42:40.031568 env[1486]: time="2025-09-13T00:42:40.031510813Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:42:40.031568 env[1486]: time="2025-09-13T00:42:40.031520173Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:42:40.037362 env[1486]: time="2025-09-13T00:42:40.037342340Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:42:40.037362 env[1486]: time="2025-09-13T00:42:40.037357724Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:42:40.037427 env[1486]: time="2025-09-13T00:42:40.037368650Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:42:40.037427 env[1486]: time="2025-09-13T00:42:40.037376071Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:42:40.590154 env[1486]: time="2025-09-13T00:42:40.590095456Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 13 00:42:40.590154 env[1486]: time="2025-09-13T00:42:40.590126807Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 13 00:42:40.590367 env[1486]: time="2025-09-13T00:42:40.590324247Z" level=info msg="Loading containers: start." 
Sep 13 00:42:40.640000 audit[1520]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.640000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc2aa87ec0 a2=0 a3=7ffc2aa87eac items=0 ppid=1486 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.640000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 13 00:42:40.642000 audit[1522]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.642000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff81921120 a2=0 a3=7fff8192110c items=0 ppid=1486 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.642000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 13 00:42:40.644000 audit[1524]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.644000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd4c5c90a0 a2=0 a3=7ffd4c5c908c items=0 ppid=1486 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.644000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:42:40.645000 audit[1526]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.645000 audit[1526]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc0c480a60 a2=0 a3=7ffc0c480a4c items=0 ppid=1486 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.645000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:42:40.647000 audit[1528]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.647000 audit[1528]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe4810e7a0 a2=0 a3=7ffe4810e78c items=0 ppid=1486 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.647000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 13 00:42:40.663000 audit[1533]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.663000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc23ab1140 a2=0 a3=7ffc23ab112c items=0 ppid=1486 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.663000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 13 00:42:40.671000 audit[1535]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.671000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd2f6077d0 a2=0 a3=7ffd2f6077bc items=0 ppid=1486 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.671000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 13 00:42:40.673000 audit[1537]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.673000 audit[1537]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe45e6c970 a2=0 a3=7ffe45e6c95c items=0 ppid=1486 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.673000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 13 00:42:40.675000 audit[1539]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.675000 audit[1539]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffdab5a3b30 a2=0 a3=7ffdab5a3b1c items=0 ppid=1486 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.675000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:42:40.683000 audit[1543]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.683000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe6f903330 a2=0 a3=7ffe6f90331c items=0 ppid=1486 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.683000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:42:40.690000 audit[1544]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.690000 audit[1544]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffede217c0 a2=0 a3=7fffede217ac items=0 ppid=1486 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.690000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:42:40.699647 kernel: Initializing XFRM netlink socket Sep 13 00:42:40.804604 env[1486]: time="2025-09-13T00:42:40.804543009Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Sep 13 00:42:40.822000 audit[1552]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.822000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fffce1cf8c0 a2=0 a3=7fffce1cf8ac items=0 ppid=1486 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.822000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 13 00:42:40.833000 audit[1555]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.833000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc9bc44700 a2=0 a3=7ffc9bc446ec items=0 ppid=1486 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.833000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 13 00:42:40.835000 audit[1558]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.835000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffd643c800 a2=0 a3=7fffd643c7ec items=0 ppid=1486 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.835000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 13 00:42:40.837000 audit[1560]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.837000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffee459cd40 a2=0 a3=7ffee459cd2c items=0 ppid=1486 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.837000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 13 00:42:40.839000 audit[1562]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.839000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd6f45f880 a2=0 a3=7ffd6f45f86c items=0 ppid=1486 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.839000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 13 00:42:40.841000 audit[1564]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.841000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe8c5d29a0 a2=0 a3=7ffe8c5d298c items=0 ppid=1486 
pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.841000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 13 00:42:40.843000 audit[1566]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.843000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffcaf416fd0 a2=0 a3=7ffcaf416fbc items=0 ppid=1486 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.843000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 13 00:42:40.850000 audit[1569]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.850000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff107d5200 a2=0 a3=7fff107d51ec items=0 ppid=1486 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.850000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 13 00:42:40.852000 audit[1571]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.852000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff130c1ba0 a2=0 a3=7fff130c1b8c items=0 ppid=1486 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.852000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:42:40.855000 audit[1573]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.855000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff11c3f510 a2=0 a3=7fff11c3f4fc items=0 ppid=1486 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.855000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:42:40.856000 audit[1575]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.856000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc31498ba0 a2=0 a3=7ffc31498b8c items=0 ppid=1486 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.856000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 13 00:42:40.858897 systemd-networkd[1082]: docker0: Link UP Sep 13 00:42:40.866000 audit[1579]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.866000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd163a92f0 a2=0 a3=7ffd163a92dc items=0 ppid=1486 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.866000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:42:40.872000 audit[1580]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:40.872000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc34fad450 a2=0 a3=7ffc34fad43c items=0 ppid=1486 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.872000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:42:40.874105 env[1486]: time="2025-09-13T00:42:40.874053210Z" level=info msg="Loading containers: done." 
Sep 13 00:42:40.916844 env[1486]: time="2025-09-13T00:42:40.916760136Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:42:40.917038 env[1486]: time="2025-09-13T00:42:40.916969687Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 00:42:40.917072 env[1486]: time="2025-09-13T00:42:40.917065656Z" level=info msg="Daemon has completed initialization" Sep 13 00:42:40.938658 systemd[1]: Started docker.service. Sep 13 00:42:40.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:40.942525 env[1486]: time="2025-09-13T00:42:40.942464505Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:42:41.900210 env[1315]: time="2025-09-13T00:42:41.900150328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:42:42.568638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618580772.mount: Deactivated successfully. 
Sep 13 00:42:44.261442 env[1315]: time="2025-09-13T00:42:44.261370349Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:44.263455 env[1315]: time="2025-09-13T00:42:44.263401788Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:44.265278 env[1315]: time="2025-09-13T00:42:44.265251555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:44.267763 env[1315]: time="2025-09-13T00:42:44.267701916Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:44.268672 env[1315]: time="2025-09-13T00:42:44.268645327Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:42:44.269580 env[1315]: time="2025-09-13T00:42:44.269537282Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:42:45.781605 env[1315]: time="2025-09-13T00:42:45.781520206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:45.783883 env[1315]: time="2025-09-13T00:42:45.783834599Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:42:45.786136 env[1315]: time="2025-09-13T00:42:45.786096959Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:45.788177 env[1315]: time="2025-09-13T00:42:45.788126015Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:45.788907 env[1315]: time="2025-09-13T00:42:45.788868202Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:42:45.789541 env[1315]: time="2025-09-13T00:42:45.789505349Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:42:46.995748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:42:46.996001 systemd[1]: Stopped kubelet.service. Sep 13 00:42:46.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:46.997603 systemd[1]: Starting kubelet.service... Sep 13 00:42:46.998756 kernel: kauditd_printk_skb: 84 callbacks suppressed Sep 13 00:42:46.998814 kernel: audit: type=1130 audit(1757724166.995:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:46.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:42:47.006033 kernel: audit: type=1131 audit(1757724166.995:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:47.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:47.091409 systemd[1]: Started kubelet.service. Sep 13 00:42:47.095647 kernel: audit: type=1130 audit(1757724167.090:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:47.137411 kubelet[1625]: E0913 00:42:47.137357 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:42:47.140270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:42:47.140418 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:42:47.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:42:47.144655 kernel: audit: type=1131 audit(1757724167.140:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Sep 13 00:42:47.425040 env[1315]: time="2025-09-13T00:42:47.424899774Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:47.427402 env[1315]: time="2025-09-13T00:42:47.427355675Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:47.429362 env[1315]: time="2025-09-13T00:42:47.429328650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:47.431159 env[1315]: time="2025-09-13T00:42:47.431110198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:47.432129 env[1315]: time="2025-09-13T00:42:47.432076995Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:42:47.432672 env[1315]: time="2025-09-13T00:42:47.432647513Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:42:48.853640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822959478.mount: Deactivated successfully. 
Sep 13 00:42:49.821786 env[1315]: time="2025-09-13T00:42:49.821722770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:49.823574 env[1315]: time="2025-09-13T00:42:49.823542494Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:49.824944 env[1315]: time="2025-09-13T00:42:49.824914719Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:49.826135 env[1315]: time="2025-09-13T00:42:49.826099263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:49.826518 env[1315]: time="2025-09-13T00:42:49.826480815Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:42:49.827134 env[1315]: time="2025-09-13T00:42:49.827096506Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:42:50.270306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820790126.mount: Deactivated successfully. 
Sep 13 00:42:51.211008 env[1315]: time="2025-09-13T00:42:51.210942804Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:51.212883 env[1315]: time="2025-09-13T00:42:51.212847012Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:51.214904 env[1315]: time="2025-09-13T00:42:51.214855873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:51.216809 env[1315]: time="2025-09-13T00:42:51.216757121Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:51.217559 env[1315]: time="2025-09-13T00:42:51.217523117Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:42:51.218225 env[1315]: time="2025-09-13T00:42:51.218190778Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:42:51.643429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264257117.mount: Deactivated successfully. 
Sep 13 00:42:51.649430 env[1315]: time="2025-09-13T00:42:51.649382277Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:51.651451 env[1315]: time="2025-09-13T00:42:51.651400827Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:51.653325 env[1315]: time="2025-09-13T00:42:51.653285517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:51.657165 env[1315]: time="2025-09-13T00:42:51.657114884Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:51.657844 env[1315]: time="2025-09-13T00:42:51.657797349Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:42:51.658424 env[1315]: time="2025-09-13T00:42:51.658384489Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:42:52.349490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257026705.mount: Deactivated successfully. 
Sep 13 00:42:55.531109 env[1315]: time="2025-09-13T00:42:55.531041337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:55.533167 env[1315]: time="2025-09-13T00:42:55.533111423Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:55.534931 env[1315]: time="2025-09-13T00:42:55.534898373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:55.536598 env[1315]: time="2025-09-13T00:42:55.536567491Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:55.537419 env[1315]: time="2025-09-13T00:42:55.537383479Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:42:57.245452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:42:57.245642 systemd[1]: Stopped kubelet.service. Sep 13 00:42:57.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:57.246965 systemd[1]: Starting kubelet.service... Sep 13 00:42:57.252075 kernel: audit: type=1130 audit(1757724177.245:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:57.252220 kernel: audit: type=1131 audit(1757724177.245:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:57.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:57.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:57.338372 systemd[1]: Started kubelet.service. Sep 13 00:42:57.342652 kernel: audit: type=1130 audit(1757724177.337:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:57.372548 kubelet[1662]: E0913 00:42:57.372486 1662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:42:57.374493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:42:57.374644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:42:57.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Sep 13 00:42:57.378650 kernel: audit: type=1131 audit(1757724177.374:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:42:58.331973 systemd[1]: Stopped kubelet.service. Sep 13 00:42:58.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:58.333890 systemd[1]: Starting kubelet.service... Sep 13 00:42:58.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:58.338770 kernel: audit: type=1130 audit(1757724178.331:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:58.338821 kernel: audit: type=1131 audit(1757724178.331:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:58.358353 systemd[1]: Reloading. 
Sep 13 00:42:58.424980 /usr/lib/systemd/system-generators/torcx-generator[1700]: time="2025-09-13T00:42:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:42:58.425395 /usr/lib/systemd/system-generators/torcx-generator[1700]: time="2025-09-13T00:42:58Z" level=info msg="torcx already run" Sep 13 00:42:59.067763 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:42:59.067778 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:42:59.084462 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:42:59.156906 systemd[1]: Started kubelet.service. Sep 13 00:42:59.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:59.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:59.158388 systemd[1]: Stopping kubelet.service... Sep 13 00:42:59.158736 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:42:59.158955 systemd[1]: Stopped kubelet.service. Sep 13 00:42:59.160287 systemd[1]: Starting kubelet.service... 
Sep 13 00:42:59.163976 kernel: audit: type=1130 audit(1757724179.155:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:59.164023 kernel: audit: type=1131 audit(1757724179.157:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:59.245894 systemd[1]: Started kubelet.service. Sep 13 00:42:59.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:59.249665 kernel: audit: type=1130 audit(1757724179.244:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:59.341434 kubelet[1759]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:42:59.341434 kubelet[1759]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:42:59.341434 kubelet[1759]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:42:59.341434 kubelet[1759]: I0913 00:42:59.341413 1759 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:42:59.525710 kubelet[1759]: I0913 00:42:59.525673 1759 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:42:59.525710 kubelet[1759]: I0913 00:42:59.525695 1759 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:42:59.525909 kubelet[1759]: I0913 00:42:59.525895 1759 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:42:59.552456 kubelet[1759]: E0913 00:42:59.552418 1759 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:42:59.553202 kubelet[1759]: I0913 00:42:59.553171 1759 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:42:59.558122 kubelet[1759]: E0913 00:42:59.558093 1759 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:42:59.558122 kubelet[1759]: I0913 00:42:59.558113 1759 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:42:59.563060 kubelet[1759]: I0913 00:42:59.563032 1759 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:42:59.564004 kubelet[1759]: I0913 00:42:59.563981 1759 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:42:59.564141 kubelet[1759]: I0913 00:42:59.564104 1759 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:42:59.564316 kubelet[1759]: I0913 00:42:59.564135 1759 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Sep 13 00:42:59.564414 kubelet[1759]: I0913 00:42:59.564319 1759 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:42:59.564414 kubelet[1759]: I0913 00:42:59.564327 1759 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:42:59.564467 kubelet[1759]: I0913 00:42:59.564443 1759 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:42:59.572129 kubelet[1759]: I0913 00:42:59.572104 1759 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:42:59.572129 kubelet[1759]: I0913 00:42:59.572125 1759 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:42:59.572207 kubelet[1759]: I0913 00:42:59.572155 1759 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:42:59.572207 kubelet[1759]: I0913 00:42:59.572169 1759 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:42:59.656609 kubelet[1759]: I0913 00:42:59.656509 1759 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:42:59.656961 kubelet[1759]: I0913 00:42:59.656947 1759 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:42:59.660419 kubelet[1759]: W0913 00:42:59.660374 1759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Sep 13 00:42:59.660480 kubelet[1759]: E0913 00:42:59.660422 1759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" 
Sep 13 00:42:59.660749 kubelet[1759]: W0913 00:42:59.660723 1759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Sep 13 00:42:59.660791 kubelet[1759]: E0913 00:42:59.660755 1759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:42:59.661358 kubelet[1759]: W0913 00:42:59.661335 1759 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:42:59.663709 kubelet[1759]: I0913 00:42:59.663695 1759 server.go:1274] "Started kubelet" Sep 13 00:42:59.663993 kubelet[1759]: I0913 00:42:59.663894 1759 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:42:59.664041 kubelet[1759]: I0913 00:42:59.663988 1759 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:42:59.664301 kubelet[1759]: I0913 00:42:59.664288 1759 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:42:59.665109 kubelet[1759]: I0913 00:42:59.665086 1759 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:42:59.670000 audit[1759]: AVC avc: denied { mac_admin } for pid=1759 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:42:59.671098 kubelet[1759]: I0913 00:42:59.670984 1759 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration 
dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:42:59.671098 kubelet[1759]: I0913 00:42:59.671014 1759 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:42:59.671098 kubelet[1759]: I0913 00:42:59.671079 1759 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:42:59.671521 kubelet[1759]: I0913 00:42:59.671504 1759 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:42:59.671676 kubelet[1759]: I0913 00:42:59.671660 1759 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:42:59.672143 kubelet[1759]: I0913 00:42:59.672127 1759 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:42:59.672199 kubelet[1759]: I0913 00:42:59.672163 1759 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:42:59.670000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:42:59.670000 audit[1759]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c5d6e0 a1=c000c74dc8 a2=c000c5d6b0 a3=25 items=0 ppid=1 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.670000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:42:59.670000 audit[1759]: AVC avc: denied { mac_admin } for pid=1759 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:42:59.670000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:42:59.670000 audit[1759]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d5eaa0 a1=c000c74de0 a2=c000c5d770 a3=25 items=0 ppid=1 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.670000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:42:59.674642 kernel: audit: type=1400 audit(1757724179.670:203): avc: denied { mac_admin } for pid=1759 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:42:59.674816 kubelet[1759]: E0913 00:42:59.674766 1759 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="200ms" Sep 13 00:42:59.674914 kubelet[1759]: W0913 00:42:59.674872 1759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Sep 13 00:42:59.674944 kubelet[1759]: E0913 00:42:59.674914 1759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:42:59.674972 kubelet[1759]: E0913 00:42:59.674944 1759 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:59.675276 kubelet[1759]: I0913 00:42:59.675259 1759 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:42:59.675578 kubelet[1759]: I0913 00:42:59.675559 1759 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:42:59.678000 audit[1772]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1772 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:59.678000 audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcc54d5950 a2=0 a3=7ffcc54d593c items=0 ppid=1759 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:42:59.679000 audit[1773]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:59.679000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6a51e400 a2=0 a3=7fff6a51e3ec items=0 ppid=1759 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.679000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:42:59.682959 kubelet[1759]: E0913 00:42:59.682943 1759 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:42:59.683674 kubelet[1759]: I0913 00:42:59.683654 1759 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:42:59.684052 kubelet[1759]: E0913 00:42:59.683089 1759 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.27:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.27:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b0d41df7c2ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:42:59.663667951 +0000 UTC m=+0.413273787,LastTimestamp:2025-09-13 00:42:59.663667951 +0000 UTC m=+0.413273787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:42:59.683000 audit[1776]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:59.683000 audit[1776]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcf7da4230 a2=0 a3=7ffcf7da421c items=0 ppid=1759 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.683000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:42:59.685000 audit[1778]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1778 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:59.685000 audit[1778]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdea1ede20 a2=0 a3=7ffdea1ede0c items=0 ppid=1759 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:42:59.690000 audit[1781]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:59.690000 audit[1781]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc79ddc430 a2=0 a3=7ffc79ddc41c items=0 ppid=1759 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 13 00:42:59.691784 kubelet[1759]: I0913 00:42:59.691754 1759 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 13 00:42:59.691000 audit[1782]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:42:59.691000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffea6007780 a2=0 a3=7ffea600776c items=0 ppid=1759 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.691000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:42:59.692653 kubelet[1759]: I0913 00:42:59.692583 1759 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:42:59.692653 kubelet[1759]: I0913 00:42:59.692602 1759 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:42:59.692653 kubelet[1759]: I0913 00:42:59.692633 1759 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:42:59.692742 kubelet[1759]: E0913 00:42:59.692666 1759 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:42:59.692000 audit[1784]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:59.692000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbbe643a0 a2=0 a3=7ffdbbe6438c items=0 ppid=1759 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.692000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:42:59.693000 audit[1785]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:59.693000 audit[1785]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb5bb8630 a2=0 a3=7fffb5bb861c items=0 ppid=1759 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:42:59.694000 audit[1786]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:42:59.694000 audit[1786]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe11e32760 a2=0 a3=7ffe11e3274c items=0 ppid=1759 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.694000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:42:59.694000 audit[1787]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:42:59.694000 audit[1787]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd00d1e390 a2=0 a3=7ffd00d1e37c items=0 ppid=1759 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:42:59.694000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:42:59.695000 audit[1788]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:42:59.695000 audit[1788]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffcbb0531c0 a2=0 a3=7ffcbb0531ac items=0 ppid=1759 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:42:59.696000 audit[1789]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1789 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:42:59.696000 audit[1789]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd1a51fe70 a2=0 a3=7ffd1a51fe5c items=0 ppid=1759 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:59.696000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:42:59.698210 kubelet[1759]: W0913 00:42:59.698171 1759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Sep 13 00:42:59.698310 kubelet[1759]: E0913 00:42:59.698219 1759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:42:59.699312 kubelet[1759]: I0913 00:42:59.699300 1759 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:42:59.699312 kubelet[1759]: I0913 00:42:59.699310 1759 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:42:59.699397 kubelet[1759]: I0913 00:42:59.699324 1759 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:42:59.775944 kubelet[1759]: E0913 00:42:59.775922 1759 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:59.793257 kubelet[1759]: E0913 00:42:59.793211 1759 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:42:59.875978 kubelet[1759]: E0913 00:42:59.875935 1759 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="400ms" Sep 13 00:42:59.876041 kubelet[1759]: E0913 00:42:59.876011 1759 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:59.976539 kubelet[1759]: E0913 00:42:59.976461 1759 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:59.993695 kubelet[1759]: E0913 00:42:59.993660 1759 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:43:00.017702 kubelet[1759]: I0913 00:43:00.017664 1759 policy_none.go:49] "None policy: Start" Sep 13 00:43:00.018422 kubelet[1759]: I0913 00:43:00.018405 1759 memory_manager.go:170] 
"Starting memorymanager" policy="None" Sep 13 00:43:00.018477 kubelet[1759]: I0913 00:43:00.018428 1759 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:43:00.077587 kubelet[1759]: E0913 00:43:00.077529 1759 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:43:00.082756 kubelet[1759]: I0913 00:43:00.082723 1759 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:43:00.082000 audit[1759]: AVC avc: denied { mac_admin } for pid=1759 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:00.082000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:43:00.082000 audit[1759]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e49f50 a1=c000e4a798 a2=c000e49f20 a3=25 items=0 ppid=1 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:00.082000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:43:00.083051 kubelet[1759]: I0913 00:43:00.082804 1759 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:43:00.083051 kubelet[1759]: I0913 00:43:00.082935 1759 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:43:00.083051 kubelet[1759]: I0913 00:43:00.082955 1759 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:43:00.083253 kubelet[1759]: I0913 00:43:00.083232 1759 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:43:00.084305 kubelet[1759]: E0913 00:43:00.084287 1759 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:43:00.185413 kubelet[1759]: I0913 00:43:00.185387 1759 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:43:00.185766 kubelet[1759]: E0913 00:43:00.185733 1759 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Sep 13 00:43:00.276439 kubelet[1759]: E0913 00:43:00.276390 1759 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="800ms" Sep 13 00:43:00.387207 kubelet[1759]: I0913 00:43:00.387162 1759 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:43:00.387581 kubelet[1759]: E0913 00:43:00.387477 1759 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Sep 13 00:43:00.477651 kubelet[1759]: I0913 00:43:00.477567 1759 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:43:00.477651 kubelet[1759]: I0913 00:43:00.477617 1759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3898f0de5d0df6b739defba673683116-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3898f0de5d0df6b739defba673683116\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:00.477651 kubelet[1759]: I0913 00:43:00.477663 1759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3898f0de5d0df6b739defba673683116-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3898f0de5d0df6b739defba673683116\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:00.477907 kubelet[1759]: I0913 00:43:00.477688 1759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3898f0de5d0df6b739defba673683116-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3898f0de5d0df6b739defba673683116\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:00.477907 kubelet[1759]: I0913 00:43:00.477715 1759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:00.477907 kubelet[1759]: I0913 00:43:00.477747 1759 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:00.477907 kubelet[1759]: I0913 00:43:00.477784 1759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:00.477907 kubelet[1759]: I0913 00:43:00.477812 1759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:00.478112 kubelet[1759]: I0913 00:43:00.477840 1759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:00.484193 kubelet[1759]: W0913 00:43:00.484107 1759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Sep 13 00:43:00.484337 kubelet[1759]: E0913 00:43:00.484193 1759 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:43:00.501155 kubelet[1759]: W0913 00:43:00.501106 1759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Sep 13 00:43:00.501155 kubelet[1759]: E0913 00:43:00.501155 1759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:43:00.650663 kubelet[1759]: W0913 00:43:00.650545 1759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Sep 13 00:43:00.650663 kubelet[1759]: E0913 00:43:00.650594 1759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:43:00.699366 kubelet[1759]: E0913 00:43:00.699343 1759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:00.699966 env[1315]: 
time="2025-09-13T00:43:00.699926739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:00.701118 kubelet[1759]: E0913 00:43:00.701079 1759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:00.701811 env[1315]: time="2025-09-13T00:43:00.701711387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3898f0de5d0df6b739defba673683116,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:00.703319 kubelet[1759]: E0913 00:43:00.703271 1759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:00.703681 env[1315]: time="2025-09-13T00:43:00.703650491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:00.789001 kubelet[1759]: I0913 00:43:00.788976 1759 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:43:00.789231 kubelet[1759]: E0913 00:43:00.789207 1759 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Sep 13 00:43:01.076983 kubelet[1759]: E0913 00:43:01.076911 1759 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="1.6s" Sep 13 00:43:01.240518 kubelet[1759]: W0913 00:43:01.240444 1759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Sep 13 00:43:01.240518 kubelet[1759]: E0913 00:43:01.240514 1759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:43:01.540652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount230989078.mount: Deactivated successfully. Sep 13 00:43:01.546669 env[1315]: time="2025-09-13T00:43:01.546611267Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.550690 env[1315]: time="2025-09-13T00:43:01.550612857Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.551921 env[1315]: time="2025-09-13T00:43:01.551876380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.552880 env[1315]: time="2025-09-13T00:43:01.552827404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.555006 env[1315]: time="2025-09-13T00:43:01.554976410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.556898 env[1315]: time="2025-09-13T00:43:01.556878446Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.558653 env[1315]: time="2025-09-13T00:43:01.558609471Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.560065 env[1315]: time="2025-09-13T00:43:01.560037210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.561398 env[1315]: time="2025-09-13T00:43:01.561367808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.563132 env[1315]: time="2025-09-13T00:43:01.563100839Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.564455 env[1315]: time="2025-09-13T00:43:01.564425903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.566352 env[1315]: time="2025-09-13T00:43:01.566294207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:01.590737 kubelet[1759]: I0913 00:43:01.590685 1759 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:43:01.591110 env[1315]: 
time="2025-09-13T00:43:01.590778929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:01.591110 env[1315]: time="2025-09-13T00:43:01.590817023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:01.591110 env[1315]: time="2025-09-13T00:43:01.590828441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:01.591207 kubelet[1759]: E0913 00:43:01.591116 1759 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Sep 13 00:43:01.591581 env[1315]: time="2025-09-13T00:43:01.590957138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ccede823b186a613a77398cda6cf1c835c49319e168d76da23dba7f66823a2d pid=1800 runtime=io.containerd.runc.v2 Sep 13 00:43:01.646252 kubelet[1759]: E0913 00:43:01.646207 1759 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:43:01.657679 env[1315]: time="2025-09-13T00:43:01.657534801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:01.657679 env[1315]: time="2025-09-13T00:43:01.657577015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:01.657679 env[1315]: time="2025-09-13T00:43:01.657586347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:01.657894 env[1315]: time="2025-09-13T00:43:01.657738042Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ed71bfac16c02cbedca591b63ffae98aa0b703b1d87f2d9e3611474c40d3481 pid=1824 runtime=io.containerd.runc.v2 Sep 13 00:43:01.661322 env[1315]: time="2025-09-13T00:43:01.660882851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:01.661322 env[1315]: time="2025-09-13T00:43:01.660911210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:01.661322 env[1315]: time="2025-09-13T00:43:01.660920001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:01.663538 env[1315]: time="2025-09-13T00:43:01.662589936Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d10505277a2f139b9f054739ae91911836703ae2d8677e2cd860d0f7a829a307 pid=1838 runtime=io.containerd.runc.v2 Sep 13 00:43:01.847800 env[1315]: time="2025-09-13T00:43:01.846963006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d10505277a2f139b9f054739ae91911836703ae2d8677e2cd860d0f7a829a307\"" Sep 13 00:43:01.848771 env[1315]: time="2025-09-13T00:43:01.848739632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3898f0de5d0df6b739defba673683116,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ed71bfac16c02cbedca591b63ffae98aa0b703b1d87f2d9e3611474c40d3481\"" Sep 13 00:43:01.849754 kubelet[1759]: E0913 00:43:01.849515 1759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:01.853016 env[1315]: time="2025-09-13T00:43:01.852983039Z" level=info msg="CreateContainer within sandbox \"d10505277a2f139b9f054739ae91911836703ae2d8677e2cd860d0f7a829a307\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:43:01.853262 kubelet[1759]: E0913 00:43:01.853232 1759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:01.854706 env[1315]: time="2025-09-13T00:43:01.854669785Z" level=info msg="CreateContainer within sandbox \"6ed71bfac16c02cbedca591b63ffae98aa0b703b1d87f2d9e3611474c40d3481\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 
00:43:01.858103 env[1315]: time="2025-09-13T00:43:01.858065633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ccede823b186a613a77398cda6cf1c835c49319e168d76da23dba7f66823a2d\"" Sep 13 00:43:01.858519 kubelet[1759]: E0913 00:43:01.858493 1759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:01.859758 env[1315]: time="2025-09-13T00:43:01.859732089Z" level=info msg="CreateContainer within sandbox \"5ccede823b186a613a77398cda6cf1c835c49319e168d76da23dba7f66823a2d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:43:01.875661 env[1315]: time="2025-09-13T00:43:01.875595989Z" level=info msg="CreateContainer within sandbox \"d10505277a2f139b9f054739ae91911836703ae2d8677e2cd860d0f7a829a307\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a9156d35e6a1c8285c1a36d88e52ed82b99b1fcc9c1ead4e2fee93df6c6ec94f\"" Sep 13 00:43:01.876187 env[1315]: time="2025-09-13T00:43:01.876155317Z" level=info msg="StartContainer for \"a9156d35e6a1c8285c1a36d88e52ed82b99b1fcc9c1ead4e2fee93df6c6ec94f\"" Sep 13 00:43:01.879823 env[1315]: time="2025-09-13T00:43:01.879766466Z" level=info msg="CreateContainer within sandbox \"6ed71bfac16c02cbedca591b63ffae98aa0b703b1d87f2d9e3611474c40d3481\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a2bac81b4db8bdb656eb69e17c23e793ff3e1554382ae203b2d8422d6a68b7f\"" Sep 13 00:43:01.880142 env[1315]: time="2025-09-13T00:43:01.880119064Z" level=info msg="StartContainer for \"1a2bac81b4db8bdb656eb69e17c23e793ff3e1554382ae203b2d8422d6a68b7f\"" Sep 13 00:43:01.883664 env[1315]: time="2025-09-13T00:43:01.883603089Z" level=info msg="CreateContainer within sandbox 
\"5ccede823b186a613a77398cda6cf1c835c49319e168d76da23dba7f66823a2d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cb07e8d40ba56cca5ac30f4d0f5754adfea72a27a980da997bab4fa731f2a461\"" Sep 13 00:43:01.884060 env[1315]: time="2025-09-13T00:43:01.884037289Z" level=info msg="StartContainer for \"cb07e8d40ba56cca5ac30f4d0f5754adfea72a27a980da997bab4fa731f2a461\"" Sep 13 00:43:01.942415 env[1315]: time="2025-09-13T00:43:01.942371800Z" level=info msg="StartContainer for \"a9156d35e6a1c8285c1a36d88e52ed82b99b1fcc9c1ead4e2fee93df6c6ec94f\" returns successfully" Sep 13 00:43:01.950677 env[1315]: time="2025-09-13T00:43:01.949129151Z" level=info msg="StartContainer for \"cb07e8d40ba56cca5ac30f4d0f5754adfea72a27a980da997bab4fa731f2a461\" returns successfully" Sep 13 00:43:01.966237 env[1315]: time="2025-09-13T00:43:01.966147876Z" level=info msg="StartContainer for \"1a2bac81b4db8bdb656eb69e17c23e793ff3e1554382ae203b2d8422d6a68b7f\" returns successfully" Sep 13 00:43:02.727030 kubelet[1759]: E0913 00:43:02.726992 1759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:02.728512 kubelet[1759]: E0913 00:43:02.728488 1759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:02.729825 kubelet[1759]: E0913 00:43:02.729804 1759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:03.192649 kubelet[1759]: I0913 00:43:03.192597 1759 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:43:03.731778 kubelet[1759]: E0913 00:43:03.731721 1759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:03.821489 kubelet[1759]: E0913 00:43:03.821463 1759 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:43:03.904445 kubelet[1759]: I0913 00:43:03.904409 1759 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:43:04.647030 kubelet[1759]: I0913 00:43:04.646997 1759 apiserver.go:52] "Watching apiserver" Sep 13 00:43:04.672714 kubelet[1759]: I0913 00:43:04.672672 1759 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:43:05.790766 kubelet[1759]: E0913 00:43:05.790719 1759 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:06.183929 systemd[1]: Reloading. Sep 13 00:43:06.237267 /usr/lib/systemd/system-generators/torcx-generator[2051]: time="2025-09-13T00:43:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:43:06.237772 /usr/lib/systemd/system-generators/torcx-generator[2051]: time="2025-09-13T00:43:06Z" level=info msg="torcx already run" Sep 13 00:43:06.301441 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:43:06.301457 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Sep 13 00:43:06.317876 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:43:06.395723 systemd[1]: Stopping kubelet.service... Sep 13 00:43:06.418266 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:43:06.418530 systemd[1]: Stopped kubelet.service. Sep 13 00:43:06.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:06.419469 kernel: kauditd_printk_skb: 47 callbacks suppressed Sep 13 00:43:06.419535 kernel: audit: type=1131 audit(1757724186.416:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:06.420300 systemd[1]: Starting kubelet.service... Sep 13 00:43:06.533812 systemd[1]: Started kubelet.service. Sep 13 00:43:06.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:06.537694 kernel: audit: type=1130 audit(1757724186.532:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:06.582904 kubelet[2107]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:43:06.582904 kubelet[2107]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:43:06.582904 kubelet[2107]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:43:06.583613 kubelet[2107]: I0913 00:43:06.582957 2107 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:43:06.589016 kubelet[2107]: I0913 00:43:06.588965 2107 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:43:06.589016 kubelet[2107]: I0913 00:43:06.589003 2107 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:43:06.589342 kubelet[2107]: I0913 00:43:06.589311 2107 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:43:06.592333 kubelet[2107]: I0913 00:43:06.592304 2107 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:43:06.595876 kubelet[2107]: I0913 00:43:06.595792 2107 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:43:06.599732 kubelet[2107]: E0913 00:43:06.599695 2107 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:43:06.599732 kubelet[2107]: I0913 00:43:06.599729 2107 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 13 00:43:06.604656 kubelet[2107]: I0913 00:43:06.604634 2107 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:43:06.605140 kubelet[2107]: I0913 00:43:06.605112 2107 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:43:06.605270 kubelet[2107]: I0913 00:43:06.605238 2107 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:43:06.605464 kubelet[2107]: I0913 00:43:06.605270 2107 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerR
eservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:43:06.605572 kubelet[2107]: I0913 00:43:06.605469 2107 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:43:06.605572 kubelet[2107]: I0913 00:43:06.605481 2107 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:43:06.605572 kubelet[2107]: I0913 00:43:06.605512 2107 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:43:06.605732 kubelet[2107]: I0913 00:43:06.605643 2107 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:43:06.605732 kubelet[2107]: I0913 00:43:06.605657 2107 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:43:06.605732 kubelet[2107]: I0913 00:43:06.605686 2107 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:43:06.605732 kubelet[2107]: I0913 00:43:06.605697 2107 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:43:06.606750 kubelet[2107]: I0913 00:43:06.606720 2107 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:43:06.607227 kubelet[2107]: I0913 00:43:06.607204 2107 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:43:06.607721 kubelet[2107]: I0913 00:43:06.607692 2107 server.go:1274] "Started kubelet" Sep 13 00:43:06.617599 kernel: audit: type=1400 audit(1757724186.608:220): avc: denied { mac_admin } for pid=2107 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:06.617755 kernel: audit: type=1401 audit(1757724186.608:220): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:43:06.608000 audit[2107]: AVC avc: denied { mac_admin } for pid=2107 
comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:06.608000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:43:06.617864 kubelet[2107]: I0913 00:43:06.611910 2107 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:43:06.617864 kubelet[2107]: I0913 00:43:06.612308 2107 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:43:06.617864 kubelet[2107]: I0913 00:43:06.612370 2107 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:43:06.617864 kubelet[2107]: E0913 00:43:06.613108 2107 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:43:06.617864 kubelet[2107]: I0913 00:43:06.613568 2107 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:43:06.608000 audit[2107]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0002af6b0 a1=c000aae810 a2=c0002af680 a3=25 items=0 ppid=1 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:06.622648 kernel: audit: type=1300 audit(1757724186.608:220): arch=c000003e syscall=188 success=no exit=-22 a0=c0002af6b0 a1=c000aae810 a2=c0002af680 a3=25 items=0 ppid=1 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:06.608000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:43:06.626696 kernel: audit: type=1327 audit(1757724186.608:220): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:43:06.626735 kubelet[2107]: I0913 00:43:06.622681 2107 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:43:06.626735 kubelet[2107]: I0913 00:43:06.622742 2107 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:43:06.626735 kubelet[2107]: I0913 00:43:06.622777 2107 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:43:06.626735 kubelet[2107]: I0913 00:43:06.623117 2107 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:43:06.626735 kubelet[2107]: I0913 00:43:06.623219 2107 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:43:06.626735 kubelet[2107]: I0913 00:43:06.623407 2107 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:43:06.626735 kubelet[2107]: I0913 00:43:06.623632 2107 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:43:06.626735 kubelet[2107]: I0913 00:43:06.623703 2107 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:43:06.626735 kubelet[2107]: I0913 00:43:06.624404 2107 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:43:06.626735 kubelet[2107]: I0913 00:43:06.625150 2107 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:43:06.621000 audit[2107]: AVC avc: denied { mac_admin } for pid=2107 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:06.621000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:43:06.631962 kernel: audit: type=1400 audit(1757724186.621:221): avc: denied { mac_admin } for pid=2107 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:06.632021 kernel: audit: type=1401 audit(1757724186.621:221): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:43:06.632039 kernel: audit: type=1300 audit(1757724186.621:221): arch=c000003e syscall=188 success=no exit=-22 a0=c00040c020 a1=c00099c000 a2=c0002340c0 a3=25 items=0 ppid=1 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:06.621000 audit[2107]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00040c020 a1=c00099c000 a2=c0002340c0 a3=25 items=0 ppid=1 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:06.621000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:43:06.640382 kernel: audit: type=1327 audit(1757724186.621:221): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:43:06.645208 kubelet[2107]: I0913 00:43:06.645153 2107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:43:06.646152 kubelet[2107]: I0913 00:43:06.646130 2107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:43:06.646152 kubelet[2107]: I0913 00:43:06.646152 2107 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:43:06.646202 kubelet[2107]: I0913 00:43:06.646170 2107 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:43:06.646245 kubelet[2107]: E0913 00:43:06.646209 2107 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:43:06.670017 kubelet[2107]: I0913 00:43:06.669983 2107 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:43:06.670017 kubelet[2107]: I0913 00:43:06.670001 2107 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:43:06.670163 kubelet[2107]: I0913 00:43:06.670029 2107 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:43:06.670186 kubelet[2107]: I0913 00:43:06.670170 2107 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:43:06.670207 kubelet[2107]: I0913 00:43:06.670179 2107 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:43:06.670207 
kubelet[2107]: I0913 00:43:06.670195 2107 policy_none.go:49] "None policy: Start" Sep 13 00:43:06.675321 kubelet[2107]: I0913 00:43:06.674816 2107 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:43:06.675321 kubelet[2107]: I0913 00:43:06.674834 2107 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:43:06.675321 kubelet[2107]: I0913 00:43:06.675058 2107 state_mem.go:75] "Updated machine memory state" Sep 13 00:43:06.676507 kubelet[2107]: I0913 00:43:06.676487 2107 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:43:06.674000 audit[2107]: AVC avc: denied { mac_admin } for pid=2107 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:06.674000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:43:06.674000 audit[2107]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e809f0 a1=c001033398 a2=c000e809c0 a3=25 items=0 ppid=1 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:06.674000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:43:06.676826 kubelet[2107]: I0913 00:43:06.676576 2107 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:43:06.676826 kubelet[2107]: I0913 00:43:06.676808 2107 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:43:06.676906 kubelet[2107]: I0913 00:43:06.676822 2107 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:43:06.677120 kubelet[2107]: I0913 00:43:06.677091 2107 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:43:06.774508 kubelet[2107]: E0913 00:43:06.774433 2107 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:43:06.784465 kubelet[2107]: I0913 00:43:06.784312 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:43:06.808878 kubelet[2107]: I0913 00:43:06.808845 2107 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:43:06.809032 kubelet[2107]: I0913 00:43:06.808927 2107 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:43:06.825371 kubelet[2107]: I0913 00:43:06.825333 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:06.825371 kubelet[2107]: I0913 00:43:06.825367 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:06.825565 kubelet[2107]: I0913 00:43:06.825384 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:43:06.825565 kubelet[2107]: I0913 00:43:06.825403 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3898f0de5d0df6b739defba673683116-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3898f0de5d0df6b739defba673683116\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:06.825565 kubelet[2107]: I0913 00:43:06.825425 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3898f0de5d0df6b739defba673683116-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3898f0de5d0df6b739defba673683116\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:06.825565 kubelet[2107]: I0913 00:43:06.825443 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3898f0de5d0df6b739defba673683116-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3898f0de5d0df6b739defba673683116\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:06.825565 kubelet[2107]: I0913 00:43:06.825461 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:06.825692 kubelet[2107]: I0913 00:43:06.825480 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:06.825692 kubelet[2107]: I0913 00:43:06.825499 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:07.073440 kubelet[2107]: E0913 00:43:07.073313 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:07.073440 kubelet[2107]: E0913 00:43:07.073344 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:07.075647 kubelet[2107]: E0913 00:43:07.075609 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:07.611488 kubelet[2107]: I0913 00:43:07.611454 2107 apiserver.go:52] "Watching apiserver" Sep 13 00:43:07.624030 kubelet[2107]: I0913 00:43:07.624002 2107 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:43:07.656660 kubelet[2107]: E0913 00:43:07.656611 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:07.656660 kubelet[2107]: E0913 00:43:07.656638 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:07.656847 kubelet[2107]: E0913 00:43:07.656741 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:07.676072 kubelet[2107]: I0913 00:43:07.676015 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.675998795 podStartE2EDuration="1.675998795s" podCreationTimestamp="2025-09-13 00:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:07.675770806 +0000 UTC m=+1.137606563" watchObservedRunningTime="2025-09-13 00:43:07.675998795 +0000 UTC m=+1.137834532" Sep 13 00:43:07.682509 kubelet[2107]: I0913 00:43:07.681682 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.68167136 podStartE2EDuration="1.68167136s" podCreationTimestamp="2025-09-13 00:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:07.681636601 +0000 UTC m=+1.143472348" watchObservedRunningTime="2025-09-13 00:43:07.68167136 +0000 UTC m=+1.143507097" Sep 13 00:43:07.688495 kubelet[2107]: I0913 00:43:07.688456 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.6884451719999998 podStartE2EDuration="2.688445172s" podCreationTimestamp="2025-09-13 00:43:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:07.688289808 +0000 UTC m=+1.150125555" watchObservedRunningTime="2025-09-13 00:43:07.688445172 +0000 UTC m=+1.150280909" Sep 13 00:43:08.657281 kubelet[2107]: E0913 00:43:08.657249 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:08.657638 kubelet[2107]: E0913 00:43:08.657320 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:11.712336 kubelet[2107]: I0913 00:43:11.712300 2107 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:43:11.712726 env[1315]: time="2025-09-13T00:43:11.712589676Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:43:11.712895 kubelet[2107]: I0913 00:43:11.712864 2107 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:43:11.848991 kubelet[2107]: E0913 00:43:11.848963 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:12.664192 kubelet[2107]: E0913 00:43:12.664140 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:12.678411 kubelet[2107]: I0913 00:43:12.678383 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2def66a3-0336-48ce-bb54-e7ad15cfaf5c-var-lib-calico\") pod \"tigera-operator-58fc44c59b-25hlw\" (UID: \"2def66a3-0336-48ce-bb54-e7ad15cfaf5c\") " pod="tigera-operator/tigera-operator-58fc44c59b-25hlw" Sep 13 00:43:12.678532 kubelet[2107]: I0913 00:43:12.678416 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99h2v\" (UniqueName: \"kubernetes.io/projected/2def66a3-0336-48ce-bb54-e7ad15cfaf5c-kube-api-access-99h2v\") pod \"tigera-operator-58fc44c59b-25hlw\" (UID: \"2def66a3-0336-48ce-bb54-e7ad15cfaf5c\") " pod="tigera-operator/tigera-operator-58fc44c59b-25hlw" Sep 13 00:43:12.778653 kubelet[2107]: I0913 00:43:12.778539 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0ade6256-aa60-417c-9000-e667878b457e-kube-proxy\") pod \"kube-proxy-4fvtj\" (UID: \"0ade6256-aa60-417c-9000-e667878b457e\") " pod="kube-system/kube-proxy-4fvtj" Sep 13 00:43:12.779074 kubelet[2107]: I0913 00:43:12.778717 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4q5kz\" (UniqueName: \"kubernetes.io/projected/0ade6256-aa60-417c-9000-e667878b457e-kube-api-access-4q5kz\") pod \"kube-proxy-4fvtj\" (UID: \"0ade6256-aa60-417c-9000-e667878b457e\") " pod="kube-system/kube-proxy-4fvtj" Sep 13 00:43:12.779074 kubelet[2107]: I0913 00:43:12.778774 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ade6256-aa60-417c-9000-e667878b457e-xtables-lock\") pod \"kube-proxy-4fvtj\" (UID: \"0ade6256-aa60-417c-9000-e667878b457e\") " pod="kube-system/kube-proxy-4fvtj" Sep 13 00:43:12.779074 kubelet[2107]: I0913 00:43:12.778799 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ade6256-aa60-417c-9000-e667878b457e-lib-modules\") pod \"kube-proxy-4fvtj\" (UID: \"0ade6256-aa60-417c-9000-e667878b457e\") " pod="kube-system/kube-proxy-4fvtj" Sep 13 00:43:12.785377 kubelet[2107]: I0913 00:43:12.785326 2107 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:43:12.830901 env[1315]: time="2025-09-13T00:43:12.830824217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-25hlw,Uid:2def66a3-0336-48ce-bb54-e7ad15cfaf5c,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:43:12.955592 kubelet[2107]: E0913 00:43:12.955507 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:12.956089 env[1315]: time="2025-09-13T00:43:12.956053686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4fvtj,Uid:0ade6256-aa60-417c-9000-e667878b457e,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:13.075176 env[1315]: time="2025-09-13T00:43:13.075106312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:13.075176 env[1315]: time="2025-09-13T00:43:13.075143020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:13.075176 env[1315]: time="2025-09-13T00:43:13.075155046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:13.075425 env[1315]: time="2025-09-13T00:43:13.075320100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4da4e92145da732e56b536d075170ddefd707794f88e21926d61bc79ed2a6703 pid=2167 runtime=io.containerd.runc.v2 Sep 13 00:43:13.076715 env[1315]: time="2025-09-13T00:43:13.076659325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:13.076715 env[1315]: time="2025-09-13T00:43:13.076688578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:13.076812 env[1315]: time="2025-09-13T00:43:13.076697608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:13.076896 env[1315]: time="2025-09-13T00:43:13.076841547Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f28a8a23b11ae75c6fbaacaca96a2c806b76b4f754b8d49f357f0c54b56e6761 pid=2175 runtime=io.containerd.runc.v2 Sep 13 00:43:13.121199 env[1315]: time="2025-09-13T00:43:13.121148315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4fvtj,Uid:0ade6256-aa60-417c-9000-e667878b457e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f28a8a23b11ae75c6fbaacaca96a2c806b76b4f754b8d49f357f0c54b56e6761\"" Sep 13 00:43:13.122650 kubelet[2107]: E0913 00:43:13.122610 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:13.125792 env[1315]: time="2025-09-13T00:43:13.125683942Z" level=info msg="CreateContainer within sandbox \"f28a8a23b11ae75c6fbaacaca96a2c806b76b4f754b8d49f357f0c54b56e6761\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:43:13.137422 env[1315]: time="2025-09-13T00:43:13.137383679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-25hlw,Uid:2def66a3-0336-48ce-bb54-e7ad15cfaf5c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4da4e92145da732e56b536d075170ddefd707794f88e21926d61bc79ed2a6703\"" Sep 13 00:43:13.138953 env[1315]: time="2025-09-13T00:43:13.138932033Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:43:13.166974 env[1315]: time="2025-09-13T00:43:13.166916140Z" level=info msg="CreateContainer within sandbox \"f28a8a23b11ae75c6fbaacaca96a2c806b76b4f754b8d49f357f0c54b56e6761\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e24634872edb6f1d86d61c6a6b7cfd85c5c084d1548691ad2b506dbc81d630ff\"" Sep 13 00:43:13.167535 env[1315]: time="2025-09-13T00:43:13.167492347Z" level=info msg="StartContainer for \"e24634872edb6f1d86d61c6a6b7cfd85c5c084d1548691ad2b506dbc81d630ff\"" Sep 13 00:43:13.214695 env[1315]: time="2025-09-13T00:43:13.214564252Z" level=info msg="StartContainer for \"e24634872edb6f1d86d61c6a6b7cfd85c5c084d1548691ad2b506dbc81d630ff\" returns successfully" Sep 13 00:43:13.304000 audit[2310]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.312185 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:43:13.312274 kernel: audit: type=1325 audit(1757724193.304:223): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.312312 kernel: audit: type=1300 audit(1757724193.304:223): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe65d6ced0 a2=0 a3=7ffe65d6cebc items=0 ppid=2259 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.304000 audit[2310]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe65d6ced0 a2=0 a3=7ffe65d6cebc items=0 ppid=2259 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.304000 audit[2311]: NETFILTER_CFG table=mangle:39 family=10 entries=1 
op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.314628 kernel: audit: type=1325 audit(1757724193.304:224): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.304000 audit[2311]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc42d6e000 a2=0 a3=7ffc42d6dfec items=0 ppid=2259 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.304000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:43:13.321125 kernel: audit: type=1300 audit(1757724193.304:224): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc42d6e000 a2=0 a3=7ffc42d6dfec items=0 ppid=2259 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.321163 kernel: audit: type=1327 audit(1757724193.304:224): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:43:13.321179 kernel: audit: type=1327 audit(1757724193.304:223): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:43:13.304000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:43:13.323189 kernel: audit: type=1325 audit(1757724193.309:225): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.309000 audit[2313]: NETFILTER_CFG table=nat:40 family=10 entries=1 
op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.325369 kernel: audit: type=1300 audit(1757724193.309:225): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb239fee0 a2=0 a3=7fffb239fecc items=0 ppid=2259 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.309000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb239fee0 a2=0 a3=7fffb239fecc items=0 ppid=2259 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.329949 kernel: audit: type=1327 audit(1757724193.309:225): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:43:13.309000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:43:13.331964 kernel: audit: type=1325 audit(1757724193.311:226): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.311000 audit[2314]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.311000 audit[2314]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8a70a000 a2=0 a3=7ffe8a709fec items=0 ppid=2259 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.311000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:43:13.317000 audit[2315]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.317000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd514e0060 a2=0 a3=7ffd514e004c items=0 ppid=2259 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:43:13.318000 audit[2316]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.318000 audit[2316]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb330ad40 a2=0 a3=7fffb330ad2c items=0 ppid=2259 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.318000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:43:13.406000 audit[2317]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.406000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffef3645070 a2=0 a3=7ffef364505c items=0 ppid=2259 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 
00:43:13.406000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:43:13.408000 audit[2319]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2319 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.408000 audit[2319]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc35afcdd0 a2=0 a3=7ffc35afcdbc items=0 ppid=2259 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 13 00:43:13.411000 audit[2322]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.411000 audit[2322]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc5017af50 a2=0 a3=7ffc5017af3c items=0 ppid=2259 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.411000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 13 00:43:13.412000 audit[2323]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 
00:43:13.412000 audit[2323]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd90ea9020 a2=0 a3=7ffd90ea900c items=0 ppid=2259 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:43:13.414000 audit[2325]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.414000 audit[2325]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff2525950 a2=0 a3=7ffff252593c items=0 ppid=2259 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.414000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:43:13.415000 audit[2326]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.415000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5861cf90 a2=0 a3=7ffc5861cf7c items=0 ppid=2259 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 
00:43:13.416000 audit[2328]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.416000 audit[2328]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffe96c4450 a2=0 a3=7fffe96c443c items=0 ppid=2259 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:43:13.419000 audit[2331]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.419000 audit[2331]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcbac938f0 a2=0 a3=7ffcbac938dc items=0 ppid=2259 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 13 00:43:13.420000 audit[2332]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.420000 audit[2332]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffcf42530 a2=0 a3=7ffffcf4251c items=0 ppid=2259 pid=2332 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.420000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:43:13.422000 audit[2334]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.422000 audit[2334]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda6984870 a2=0 a3=7ffda698485c items=0 ppid=2259 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.422000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:43:13.422000 audit[2335]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.422000 audit[2335]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3b8d1fd0 a2=0 a3=7ffc3b8d1fbc items=0 ppid=2259 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.422000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:43:13.424000 audit[2337]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.424000 audit[2337]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf6020b80 a2=0 a3=7ffcf6020b6c items=0 ppid=2259 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:43:13.427000 audit[2340]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.427000 audit[2340]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe40768d20 a2=0 a3=7ffe40768d0c items=0 ppid=2259 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.427000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:43:13.430000 audit[2343]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2343 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.430000 audit[2343]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe82738bd0 a2=0 a3=7ffe82738bbc items=0 ppid=2259 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.430000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:43:13.431000 audit[2344]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.431000 audit[2344]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb53c90b0 a2=0 a3=7ffdb53c909c items=0 ppid=2259 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:43:13.432000 audit[2346]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.432000 audit[2346]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffea1244fe0 a2=0 a3=7ffea1244fcc items=0 ppid=2259 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:43:13.435000 audit[2349]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.435000 audit[2349]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7711bcb0 a2=0 a3=7ffe7711bc9c 
items=0 ppid=2259 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.435000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:43:13.436000 audit[2350]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.436000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff14928790 a2=0 a3=7fff1492877c items=0 ppid=2259 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:43:13.438000 audit[2352]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:43:13.438000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd8a202c70 a2=0 a3=7ffd8a202c5c items=0 ppid=2259 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.438000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:43:13.458000 audit[2358]: 
NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:13.458000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdf69a9ee0 a2=0 a3=7ffdf69a9ecc items=0 ppid=2259 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:13.467000 audit[2358]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:13.467000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdf69a9ee0 a2=0 a3=7ffdf69a9ecc items=0 ppid=2259 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:13.468000 audit[2363]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.468000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc58ff7790 a2=0 a3=7ffc58ff777c items=0 ppid=2259 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.468000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:43:13.470000 audit[2365]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2365 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.470000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff4d5649c0 a2=0 a3=7fff4d5649ac items=0 ppid=2259 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 13 00:43:13.473000 audit[2368]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.473000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff4d54e650 a2=0 a3=7fff4d54e63c items=0 ppid=2259 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.473000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 13 00:43:13.474000 audit[2369]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.474000 audit[2369]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc27987190 a2=0 a3=7ffc2798717c items=0 ppid=2259 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.474000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:43:13.476000 audit[2371]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.476000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff4b0f2760 a2=0 a3=7fff4b0f274c items=0 ppid=2259 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:43:13.477000 audit[2372]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.477000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd84ad2c50 a2=0 a3=7ffd84ad2c3c items=0 ppid=2259 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:43:13.479000 audit[2374]: 
NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.479000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd02c39ae0 a2=0 a3=7ffd02c39acc items=0 ppid=2259 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 13 00:43:13.481000 audit[2377]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.481000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe50316280 a2=0 a3=7ffe5031626c items=0 ppid=2259 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:43:13.482000 audit[2378]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.482000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb30a11c0 a2=0 a3=7ffeb30a11ac items=0 ppid=2259 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.482000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:43:13.484000 audit[2380]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.484000 audit[2380]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc107032c0 a2=0 a3=7ffc107032ac items=0 ppid=2259 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.484000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:43:13.485000 audit[2381]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.485000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd399df9d0 a2=0 a3=7ffd399df9bc items=0 ppid=2259 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.485000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:43:13.487000 audit[2383]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.487000 audit[2383]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffceb0bdbe0 a2=0 a3=7ffceb0bdbcc items=0 ppid=2259 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:43:13.489000 audit[2386]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.489000 audit[2386]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdff3ebe20 a2=0 a3=7ffdff3ebe0c items=0 ppid=2259 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:43:13.492000 audit[2389]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.492000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd50ad10d0 a2=0 a3=7ffd50ad10bc items=0 ppid=2259 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.492000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 13 00:43:13.493000 audit[2390]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.493000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc6fc14800 a2=0 a3=7ffc6fc147ec items=0 ppid=2259 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.493000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:43:13.495000 audit[2392]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.495000 audit[2392]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffdeb9a6310 a2=0 a3=7ffdeb9a62fc items=0 ppid=2259 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.495000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:43:13.497000 audit[2395]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.497000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 
a1=7ffdb35ce280 a2=0 a3=7ffdb35ce26c items=0 ppid=2259 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:43:13.498000 audit[2396]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.498000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc35107630 a2=0 a3=7ffc3510761c items=0 ppid=2259 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:43:13.500000 audit[2398]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.500000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdb1378090 a2=0 a3=7ffdb137807c items=0 ppid=2259 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.500000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:43:13.501000 audit[2399]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.501000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5c427970 a2=0 a3=7ffd5c42795c items=0 ppid=2259 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.501000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:43:13.502000 audit[2401]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2401 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.502000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffff73d45a0 a2=0 a3=7ffff73d458c items=0 ppid=2259 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.502000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:43:13.505000 audit[2404]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:43:13.505000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdde2f1c50 a2=0 a3=7ffdde2f1c3c items=0 ppid=2259 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:43:13.508000 audit[2406]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:43:13.508000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffedf361920 a2=0 a3=7ffedf36190c items=0 ppid=2259 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.508000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:13.508000 audit[2406]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:43:13.508000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffedf361920 a2=0 a3=7ffedf36190c items=0 ppid=2259 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:13.508000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:13.667087 kubelet[2107]: E0913 00:43:13.667039 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:13.668827 kubelet[2107]: E0913 00:43:13.668791 2107 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:13.676651 kubelet[2107]: I0913 00:43:13.676560 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4fvtj" podStartSLOduration=1.676543154 podStartE2EDuration="1.676543154s" podCreationTimestamp="2025-09-13 00:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:13.676366384 +0000 UTC m=+7.138202131" watchObservedRunningTime="2025-09-13 00:43:13.676543154 +0000 UTC m=+7.138378891" Sep 13 00:43:14.414953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992633355.mount: Deactivated successfully. Sep 13 00:43:15.747262 kubelet[2107]: E0913 00:43:15.747168 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:16.054103 env[1315]: time="2025-09-13T00:43:16.053940785Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:16.055908 env[1315]: time="2025-09-13T00:43:16.055853700Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:16.057630 env[1315]: time="2025-09-13T00:43:16.057576296Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:16.059278 env[1315]: time="2025-09-13T00:43:16.059253196Z" level=info msg="ImageCreate event 
&ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:16.059831 env[1315]: time="2025-09-13T00:43:16.059806908Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:43:16.061701 env[1315]: time="2025-09-13T00:43:16.061678536Z" level=info msg="CreateContainer within sandbox \"4da4e92145da732e56b536d075170ddefd707794f88e21926d61bc79ed2a6703\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:43:16.073841 env[1315]: time="2025-09-13T00:43:16.073786061Z" level=info msg="CreateContainer within sandbox \"4da4e92145da732e56b536d075170ddefd707794f88e21926d61bc79ed2a6703\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bda6fcd12570271e1b175865df54327ac10979f64c2c489ffae02f81f8770e07\"" Sep 13 00:43:16.074297 env[1315]: time="2025-09-13T00:43:16.074265116Z" level=info msg="StartContainer for \"bda6fcd12570271e1b175865df54327ac10979f64c2c489ffae02f81f8770e07\"" Sep 13 00:43:16.310333 env[1315]: time="2025-09-13T00:43:16.310186295Z" level=info msg="StartContainer for \"bda6fcd12570271e1b175865df54327ac10979f64c2c489ffae02f81f8770e07\" returns successfully" Sep 13 00:43:16.674359 kubelet[2107]: E0913 00:43:16.674258 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:16.746184 kubelet[2107]: I0913 00:43:16.746120 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-25hlw" podStartSLOduration=1.823917331 podStartE2EDuration="4.746101268s" podCreationTimestamp="2025-09-13 00:43:12 +0000 UTC" firstStartedPulling="2025-09-13 00:43:13.138414381 +0000 
UTC m=+6.600250119" lastFinishedPulling="2025-09-13 00:43:16.060598319 +0000 UTC m=+9.522434056" observedRunningTime="2025-09-13 00:43:16.745945851 +0000 UTC m=+10.207781588" watchObservedRunningTime="2025-09-13 00:43:16.746101268 +0000 UTC m=+10.207937005" Sep 13 00:43:17.760125 kubelet[2107]: E0913 00:43:17.760087 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:20.078047 update_engine[1304]: I0913 00:43:20.077991 1304 update_attempter.cc:509] Updating boot flags... Sep 13 00:43:22.425451 sudo[1473]: pam_unix(sudo:session): session closed for user root Sep 13 00:43:22.431987 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 13 00:43:22.432049 kernel: audit: type=1106 audit(1757724202.424:274): pid=1473 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:43:22.424000 audit[1473]: USER_END pid=1473 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:43:22.435782 kernel: audit: type=1104 audit(1757724202.429:275): pid=1473 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:43:22.429000 audit[1473]: CRED_DISP pid=1473 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 13 00:43:22.433278 sshd[1468]: pam_unix(sshd:session): session closed for user core Sep 13 00:43:22.435000 audit[1468]: USER_END pid=1468 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:22.437986 systemd[1]: sshd@6-10.0.0.27:22-10.0.0.1:50284.service: Deactivated successfully. Sep 13 00:43:22.442670 kernel: audit: type=1106 audit(1757724202.435:276): pid=1468 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:22.438856 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:43:22.438874 systemd-logind[1299]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:43:22.439689 systemd-logind[1299]: Removed session 7. Sep 13 00:43:22.435000 audit[1468]: CRED_DISP pid=1468 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:22.450698 kernel: audit: type=1104 audit(1757724202.435:277): pid=1468 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:22.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.27:22-10.0.0.1:50284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:43:22.455668 kernel: audit: type=1131 audit(1757724202.436:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.27:22-10.0.0.1:50284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:22.583919 kernel: audit: type=1325 audit(1757724202.574:279): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:22.584038 kernel: audit: type=1300 audit(1757724202.574:279): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdd2805e50 a2=0 a3=7ffdd2805e3c items=0 ppid=2259 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:22.574000 audit[2514]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:22.574000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdd2805e50 a2=0 a3=7ffdd2805e3c items=0 ppid=2259 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:22.586844 kernel: audit: type=1327 audit(1757724202.574:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:22.574000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:22.586000 audit[2514]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:22.591646 kernel: audit: type=1325 
audit(1757724202.586:280): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:22.586000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd2805e50 a2=0 a3=0 items=0 ppid=2259 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:22.600890 kernel: audit: type=1300 audit(1757724202.586:280): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd2805e50 a2=0 a3=0 items=0 ppid=2259 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:22.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:22.598000 audit[2516]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:22.598000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffea80a2080 a2=0 a3=7ffea80a206c items=0 ppid=2259 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:22.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:22.601000 audit[2516]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:22.601000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 
a1=7ffea80a2080 a2=0 a3=0 items=0 ppid=2259 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:22.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:24.451000 audit[2518]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:24.451000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffde9ed94a0 a2=0 a3=7ffde9ed948c items=0 ppid=2259 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:24.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:24.460000 audit[2518]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:24.460000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde9ed94a0 a2=0 a3=0 items=0 ppid=2259 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:24.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:24.636000 audit[2520]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:24.636000 audit[2520]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe5bc65e00 a2=0 a3=7ffe5bc65dec items=0 ppid=2259 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:24.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:24.641000 audit[2520]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:24.641000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe5bc65e00 a2=0 a3=0 items=0 ppid=2259 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:24.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:24.959921 kubelet[2107]: I0913 00:43:24.959862 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd0c172d-5001-498f-9d9b-b5ac5229abb5-typha-certs\") pod \"calico-typha-647b575c7c-s2g6n\" (UID: \"dd0c172d-5001-498f-9d9b-b5ac5229abb5\") " pod="calico-system/calico-typha-647b575c7c-s2g6n" Sep 13 00:43:24.959921 kubelet[2107]: I0913 00:43:24.959908 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd0c172d-5001-498f-9d9b-b5ac5229abb5-tigera-ca-bundle\") pod \"calico-typha-647b575c7c-s2g6n\" (UID: \"dd0c172d-5001-498f-9d9b-b5ac5229abb5\") " pod="calico-system/calico-typha-647b575c7c-s2g6n" Sep 13 00:43:24.959921 
kubelet[2107]: I0913 00:43:24.959931 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp7c7\" (UniqueName: \"kubernetes.io/projected/dd0c172d-5001-498f-9d9b-b5ac5229abb5-kube-api-access-hp7c7\") pod \"calico-typha-647b575c7c-s2g6n\" (UID: \"dd0c172d-5001-498f-9d9b-b5ac5229abb5\") " pod="calico-system/calico-typha-647b575c7c-s2g6n" Sep 13 00:43:25.110978 kubelet[2107]: E0913 00:43:25.110941 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:25.111481 env[1315]: time="2025-09-13T00:43:25.111440916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647b575c7c-s2g6n,Uid:dd0c172d-5001-498f-9d9b-b5ac5229abb5,Namespace:calico-system,Attempt:0,}" Sep 13 00:43:25.130467 env[1315]: time="2025-09-13T00:43:25.130372228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:25.130633 env[1315]: time="2025-09-13T00:43:25.130446607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:25.130633 env[1315]: time="2025-09-13T00:43:25.130460775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:25.130729 env[1315]: time="2025-09-13T00:43:25.130617588Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/15196a94e455303238972687bf14069d5604c86d07b885dbe0fc2264af64616d pid=2531 runtime=io.containerd.runc.v2 Sep 13 00:43:25.201318 env[1315]: time="2025-09-13T00:43:25.200552223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647b575c7c-s2g6n,Uid:dd0c172d-5001-498f-9d9b-b5ac5229abb5,Namespace:calico-system,Attempt:0,} returns sandbox id \"15196a94e455303238972687bf14069d5604c86d07b885dbe0fc2264af64616d\"" Sep 13 00:43:25.201463 kubelet[2107]: E0913 00:43:25.201195 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:25.202200 env[1315]: time="2025-09-13T00:43:25.202152906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:43:25.262455 kubelet[2107]: I0913 00:43:25.262409 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/23d54753-7a2c-4d55-a149-f5fc16838524-cni-bin-dir\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.262455 kubelet[2107]: I0913 00:43:25.262448 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/23d54753-7a2c-4d55-a149-f5fc16838524-cni-net-dir\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.262690 kubelet[2107]: I0913 00:43:25.262468 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/23d54753-7a2c-4d55-a149-f5fc16838524-flexvol-driver-host\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.262690 kubelet[2107]: I0913 00:43:25.262486 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23d54753-7a2c-4d55-a149-f5fc16838524-tigera-ca-bundle\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.262690 kubelet[2107]: I0913 00:43:25.262501 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23d54753-7a2c-4d55-a149-f5fc16838524-var-lib-calico\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.262690 kubelet[2107]: I0913 00:43:25.262513 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23d54753-7a2c-4d55-a149-f5fc16838524-lib-modules\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.262690 kubelet[2107]: I0913 00:43:25.262528 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/23d54753-7a2c-4d55-a149-f5fc16838524-policysync\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.262852 kubelet[2107]: I0913 00:43:25.262546 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/23d54753-7a2c-4d55-a149-f5fc16838524-cni-log-dir\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.262852 kubelet[2107]: I0913 00:43:25.262559 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkrvx\" (UniqueName: \"kubernetes.io/projected/23d54753-7a2c-4d55-a149-f5fc16838524-kube-api-access-bkrvx\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.262852 kubelet[2107]: I0913 00:43:25.262572 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/23d54753-7a2c-4d55-a149-f5fc16838524-var-run-calico\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.262852 kubelet[2107]: I0913 00:43:25.262586 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/23d54753-7a2c-4d55-a149-f5fc16838524-node-certs\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.262852 kubelet[2107]: I0913 00:43:25.262601 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23d54753-7a2c-4d55-a149-f5fc16838524-xtables-lock\") pod \"calico-node-lssz6\" (UID: \"23d54753-7a2c-4d55-a149-f5fc16838524\") " pod="calico-system/calico-node-lssz6" Sep 13 00:43:25.364080 kubelet[2107]: E0913 00:43:25.364031 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.364080 kubelet[2107]: W0913 00:43:25.364068 2107 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.364251 kubelet[2107]: E0913 00:43:25.364094 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.364323 kubelet[2107]: E0913 00:43:25.364274 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.364323 kubelet[2107]: W0913 00:43:25.364291 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.364323 kubelet[2107]: E0913 00:43:25.364304 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.364570 kubelet[2107]: E0913 00:43:25.364525 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.364570 kubelet[2107]: W0913 00:43:25.364542 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.364570 kubelet[2107]: E0913 00:43:25.364555 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.364823 kubelet[2107]: E0913 00:43:25.364800 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.364823 kubelet[2107]: W0913 00:43:25.364814 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.364823 kubelet[2107]: E0913 00:43:25.364832 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.365095 kubelet[2107]: E0913 00:43:25.364998 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.365095 kubelet[2107]: W0913 00:43:25.365006 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.365095 kubelet[2107]: E0913 00:43:25.365019 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.365209 kubelet[2107]: E0913 00:43:25.365190 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.365209 kubelet[2107]: W0913 00:43:25.365199 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.365283 kubelet[2107]: E0913 00:43:25.365213 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.365573 kubelet[2107]: E0913 00:43:25.365551 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.365573 kubelet[2107]: W0913 00:43:25.365567 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.365716 kubelet[2107]: E0913 00:43:25.365692 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.365903 kubelet[2107]: E0913 00:43:25.365861 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.365903 kubelet[2107]: W0913 00:43:25.365882 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.366012 kubelet[2107]: E0913 00:43:25.365908 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.366215 kubelet[2107]: E0913 00:43:25.366198 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.366215 kubelet[2107]: W0913 00:43:25.366213 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.366328 kubelet[2107]: E0913 00:43:25.366227 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.366476 kubelet[2107]: E0913 00:43:25.366460 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.366566 kubelet[2107]: W0913 00:43:25.366543 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.366647 kubelet[2107]: E0913 00:43:25.366567 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.366791 kubelet[2107]: E0913 00:43:25.366775 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.366791 kubelet[2107]: W0913 00:43:25.366790 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.366898 kubelet[2107]: E0913 00:43:25.366802 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.370744 kubelet[2107]: E0913 00:43:25.370725 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.370744 kubelet[2107]: W0913 00:43:25.370740 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.370850 kubelet[2107]: E0913 00:43:25.370752 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.395991 kubelet[2107]: E0913 00:43:25.395945 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twb76" podUID="0495105a-b1c5-41d8-b0c9-fcfac0de6125" Sep 13 00:43:25.462919 env[1315]: time="2025-09-13T00:43:25.462871834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lssz6,Uid:23d54753-7a2c-4d55-a149-f5fc16838524,Namespace:calico-system,Attempt:0,}" Sep 13 00:43:25.464295 kubelet[2107]: E0913 00:43:25.464263 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.464418 kubelet[2107]: W0913 00:43:25.464369 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.464418 kubelet[2107]: E0913 00:43:25.464400 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.464704 kubelet[2107]: E0913 00:43:25.464682 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.464704 kubelet[2107]: W0913 00:43:25.464701 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.464792 kubelet[2107]: E0913 00:43:25.464722 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.464939 kubelet[2107]: E0913 00:43:25.464922 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.464939 kubelet[2107]: W0913 00:43:25.464935 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.465019 kubelet[2107]: E0913 00:43:25.464946 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.465177 kubelet[2107]: E0913 00:43:25.465161 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.465177 kubelet[2107]: W0913 00:43:25.465172 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.465257 kubelet[2107]: E0913 00:43:25.465181 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.465391 kubelet[2107]: E0913 00:43:25.465367 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.465391 kubelet[2107]: W0913 00:43:25.465380 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.465391 kubelet[2107]: E0913 00:43:25.465390 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.465540 kubelet[2107]: E0913 00:43:25.465529 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.465540 kubelet[2107]: W0913 00:43:25.465536 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.465635 kubelet[2107]: E0913 00:43:25.465543 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.465814 kubelet[2107]: E0913 00:43:25.465803 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.465814 kubelet[2107]: W0913 00:43:25.465813 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.465884 kubelet[2107]: E0913 00:43:25.465822 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.465999 kubelet[2107]: E0913 00:43:25.465984 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.465999 kubelet[2107]: W0913 00:43:25.465994 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.465999 kubelet[2107]: E0913 00:43:25.466002 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.466166 kubelet[2107]: E0913 00:43:25.466155 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.466166 kubelet[2107]: W0913 00:43:25.466162 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.466166 kubelet[2107]: E0913 00:43:25.466169 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.466307 kubelet[2107]: E0913 00:43:25.466295 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.466307 kubelet[2107]: W0913 00:43:25.466303 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.466307 kubelet[2107]: E0913 00:43:25.466309 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.466472 kubelet[2107]: E0913 00:43:25.466463 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.466472 kubelet[2107]: W0913 00:43:25.466470 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.466522 kubelet[2107]: E0913 00:43:25.466477 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.466651 kubelet[2107]: E0913 00:43:25.466640 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.466691 kubelet[2107]: W0913 00:43:25.466650 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.466691 kubelet[2107]: E0913 00:43:25.466658 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.466827 kubelet[2107]: E0913 00:43:25.466816 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.466827 kubelet[2107]: W0913 00:43:25.466824 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.466900 kubelet[2107]: E0913 00:43:25.466830 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.466972 kubelet[2107]: E0913 00:43:25.466962 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.466998 kubelet[2107]: W0913 00:43:25.466969 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.466998 kubelet[2107]: E0913 00:43:25.466984 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.467130 kubelet[2107]: E0913 00:43:25.467122 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.467157 kubelet[2107]: W0913 00:43:25.467129 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.467157 kubelet[2107]: E0913 00:43:25.467145 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.467303 kubelet[2107]: E0913 00:43:25.467287 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.467303 kubelet[2107]: W0913 00:43:25.467295 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.467303 kubelet[2107]: E0913 00:43:25.467303 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.467440 kubelet[2107]: E0913 00:43:25.467424 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.467440 kubelet[2107]: W0913 00:43:25.467434 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.467440 kubelet[2107]: E0913 00:43:25.467442 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.467573 kubelet[2107]: E0913 00:43:25.467561 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.467573 kubelet[2107]: W0913 00:43:25.467569 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.467664 kubelet[2107]: E0913 00:43:25.467576 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.467703 kubelet[2107]: E0913 00:43:25.467690 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.467703 kubelet[2107]: W0913 00:43:25.467697 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.467703 kubelet[2107]: E0913 00:43:25.467704 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.467811 kubelet[2107]: E0913 00:43:25.467798 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.467811 kubelet[2107]: W0913 00:43:25.467806 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.467863 kubelet[2107]: E0913 00:43:25.467812 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.486206 env[1315]: time="2025-09-13T00:43:25.485996525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:25.486206 env[1315]: time="2025-09-13T00:43:25.486032377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:25.486206 env[1315]: time="2025-09-13T00:43:25.486043830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:25.486364 env[1315]: time="2025-09-13T00:43:25.486251024Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8b62aa41671bdb4a73d6c90f4965150b3827d90c69a1cba23440a5f78935535 pid=2616 runtime=io.containerd.runc.v2 Sep 13 00:43:25.514202 env[1315]: time="2025-09-13T00:43:25.514103353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lssz6,Uid:23d54753-7a2c-4d55-a149-f5fc16838524,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8b62aa41671bdb4a73d6c90f4965150b3827d90c69a1cba23440a5f78935535\"" Sep 13 00:43:25.565424 kubelet[2107]: E0913 00:43:25.565392 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.565424 kubelet[2107]: W0913 00:43:25.565414 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.565424 kubelet[2107]: E0913 00:43:25.565434 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.565602 kubelet[2107]: I0913 00:43:25.565463 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0495105a-b1c5-41d8-b0c9-fcfac0de6125-kubelet-dir\") pod \"csi-node-driver-twb76\" (UID: \"0495105a-b1c5-41d8-b0c9-fcfac0de6125\") " pod="calico-system/csi-node-driver-twb76" Sep 13 00:43:25.565671 kubelet[2107]: E0913 00:43:25.565651 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.565671 kubelet[2107]: W0913 00:43:25.565659 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.565719 kubelet[2107]: E0913 00:43:25.565672 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.565719 kubelet[2107]: I0913 00:43:25.565685 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0495105a-b1c5-41d8-b0c9-fcfac0de6125-varrun\") pod \"csi-node-driver-twb76\" (UID: \"0495105a-b1c5-41d8-b0c9-fcfac0de6125\") " pod="calico-system/csi-node-driver-twb76" Sep 13 00:43:25.565883 kubelet[2107]: E0913 00:43:25.565866 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.565883 kubelet[2107]: W0913 00:43:25.565877 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.565983 kubelet[2107]: E0913 00:43:25.565888 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.565983 kubelet[2107]: I0913 00:43:25.565902 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0495105a-b1c5-41d8-b0c9-fcfac0de6125-registration-dir\") pod \"csi-node-driver-twb76\" (UID: \"0495105a-b1c5-41d8-b0c9-fcfac0de6125\") " pod="calico-system/csi-node-driver-twb76" Sep 13 00:43:25.566174 kubelet[2107]: E0913 00:43:25.566147 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.566222 kubelet[2107]: W0913 00:43:25.566172 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.566222 kubelet[2107]: E0913 00:43:25.566198 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.566333 kubelet[2107]: E0913 00:43:25.566322 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.566333 kubelet[2107]: W0913 00:43:25.566331 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.566381 kubelet[2107]: E0913 00:43:25.566341 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:25.566477 kubelet[2107]: E0913 00:43:25.566465 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.566477 kubelet[2107]: W0913 00:43:25.566474 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.566544 kubelet[2107]: E0913 00:43:25.566485 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:25.566631 kubelet[2107]: E0913 00:43:25.566609 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:25.566631 kubelet[2107]: W0913 00:43:25.566616 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:25.566684 kubelet[2107]: E0913 00:43:25.566637 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 00:43:25.566763 kubelet[2107]: E0913 00:43:25.566754 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:25.566787 kubelet[2107]: W0913 00:43:25.566762 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:25.566787 kubelet[2107]: E0913 00:43:25.566772 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:25.566833 kubelet[2107]: I0913 00:43:25.566796 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p6h5\" (UniqueName: \"kubernetes.io/projected/0495105a-b1c5-41d8-b0c9-fcfac0de6125-kube-api-access-4p6h5\") pod \"csi-node-driver-twb76\" (UID: \"0495105a-b1c5-41d8-b0c9-fcfac0de6125\") " pod="calico-system/csi-node-driver-twb76"
Sep 13 00:43:25.567023 kubelet[2107]: I0913 00:43:25.566971 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0495105a-b1c5-41d8-b0c9-fcfac0de6125-socket-dir\") pod \"csi-node-driver-twb76\" (UID: \"0495105a-b1c5-41d8-b0c9-fcfac0de6125\") " pod="calico-system/csi-node-driver-twb76"
Sep 13 00:43:25.651000 audit[2666]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:43:25.651000 audit[2666]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffed4fa8a50 a2=0 a3=7ffed4fa8a3c items=0 ppid=2259 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:43:25.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:43:25.660000 audit[2666]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:43:25.660000 audit[2666]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed4fa8a50 a2=0 a3=0 items=0 ppid=2259 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:43:25.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:43:26.068247 systemd[1]: run-containerd-runc-k8s.io-15196a94e455303238972687bf14069d5604c86d07b885dbe0fc2264af64616d-runc.dGN8LB.mount: Deactivated successfully.
Sep 13 00:43:26.647098 kubelet[2107]: E0913 00:43:26.647026 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twb76" podUID="0495105a-b1c5-41d8-b0c9-fcfac0de6125"
Sep 13 00:43:27.057976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247184918.mount: Deactivated successfully.
Sep 13 00:43:27.911966 env[1315]: time="2025-09-13T00:43:27.911892699Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:27.913699 env[1315]: time="2025-09-13T00:43:27.913665048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:27.915363 env[1315]: time="2025-09-13T00:43:27.915311247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:27.916899 env[1315]: time="2025-09-13T00:43:27.916876153Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:27.917396 env[1315]: time="2025-09-13T00:43:27.917365194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:43:27.918495 env[1315]: time="2025-09-13T00:43:27.918463154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:43:27.932407 env[1315]: time="2025-09-13T00:43:27.932370094Z" level=info msg="CreateContainer within sandbox \"15196a94e455303238972687bf14069d5604c86d07b885dbe0fc2264af64616d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:43:27.945929 env[1315]: time="2025-09-13T00:43:27.945890446Z" level=info msg="CreateContainer within sandbox \"15196a94e455303238972687bf14069d5604c86d07b885dbe0fc2264af64616d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4e78452dc512588aa7d98ffebc329582754e69ea4d640a54943c9722048522b5\""
Sep 13 00:43:27.946277 env[1315]: time="2025-09-13T00:43:27.946252545Z" level=info msg="StartContainer for \"4e78452dc512588aa7d98ffebc329582754e69ea4d640a54943c9722048522b5\""
Sep 13 00:43:27.999029 env[1315]: time="2025-09-13T00:43:27.998971632Z" level=info msg="StartContainer for \"4e78452dc512588aa7d98ffebc329582754e69ea4d640a54943c9722048522b5\" returns successfully"
Sep 13 00:43:28.647023 kubelet[2107]: E0913 00:43:28.646974 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twb76" podUID="0495105a-b1c5-41d8-b0c9-fcfac0de6125"
Sep 13 00:43:28.694841 kubelet[2107]: E0913 00:43:28.694794 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:28.704086 kubelet[2107]: I0913 00:43:28.704033 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-647b575c7c-s2g6n" podStartSLOduration=1.987673799 podStartE2EDuration="4.704016585s" podCreationTimestamp="2025-09-13 00:43:24 +0000 UTC" firstStartedPulling="2025-09-13 00:43:25.201907775 +0000 UTC m=+18.663743512" lastFinishedPulling="2025-09-13 00:43:27.918250561 +0000 UTC m=+21.380086298" observedRunningTime="2025-09-13 00:43:28.703555051 +0000 UTC m=+22.165390798" watchObservedRunningTime="2025-09-13 00:43:28.704016585 +0000 UTC m=+22.165852322"
Sep 13 00:43:28.787175 kubelet[2107]: E0913 00:43:28.787128 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:28.787175 kubelet[2107]: W0913 00:43:28.787158 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:28.787369 kubelet[2107]: E0913 00:43:28.787184 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:28.789566 kubelet[2107]: E0913 00:43:28.789555 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:28.789566 kubelet[2107]: W0913 00:43:28.789563 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:28.789614 kubelet[2107]: E0913 00:43:28.789570 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:28.789716 kubelet[2107]: E0913 00:43:28.789708 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.789747 kubelet[2107]: W0913 00:43:28.789715 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.789747 kubelet[2107]: E0913 00:43:28.789722 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:28.789881 kubelet[2107]: E0913 00:43:28.789871 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.789881 kubelet[2107]: W0913 00:43:28.789879 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.789936 kubelet[2107]: E0913 00:43:28.789886 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:28.790083 kubelet[2107]: E0913 00:43:28.790071 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.790083 kubelet[2107]: W0913 00:43:28.790079 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.790083 kubelet[2107]: E0913 00:43:28.790085 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:28.790279 kubelet[2107]: E0913 00:43:28.790267 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.790279 kubelet[2107]: W0913 00:43:28.790275 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.790279 kubelet[2107]: E0913 00:43:28.790287 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:28.790497 kubelet[2107]: E0913 00:43:28.790479 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.790497 kubelet[2107]: W0913 00:43:28.790492 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.790568 kubelet[2107]: E0913 00:43:28.790509 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:28.790673 kubelet[2107]: E0913 00:43:28.790661 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.790673 kubelet[2107]: W0913 00:43:28.790670 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.790728 kubelet[2107]: E0913 00:43:28.790682 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:28.790855 kubelet[2107]: E0913 00:43:28.790842 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.790855 kubelet[2107]: W0913 00:43:28.790854 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.790918 kubelet[2107]: E0913 00:43:28.790866 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:28.791038 kubelet[2107]: E0913 00:43:28.791028 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.791038 kubelet[2107]: W0913 00:43:28.791036 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.791085 kubelet[2107]: E0913 00:43:28.791048 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:28.791199 kubelet[2107]: E0913 00:43:28.791190 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.791224 kubelet[2107]: W0913 00:43:28.791198 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.791224 kubelet[2107]: E0913 00:43:28.791209 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:28.791376 kubelet[2107]: E0913 00:43:28.791366 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.791405 kubelet[2107]: W0913 00:43:28.791376 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.791405 kubelet[2107]: E0913 00:43:28.791387 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:28.791567 kubelet[2107]: E0913 00:43:28.791559 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.791567 kubelet[2107]: W0913 00:43:28.791566 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.791615 kubelet[2107]: E0913 00:43:28.791595 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:28.791727 kubelet[2107]: E0913 00:43:28.791717 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.791727 kubelet[2107]: W0913 00:43:28.791726 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.791776 kubelet[2107]: E0913 00:43:28.791750 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:28.791871 kubelet[2107]: E0913 00:43:28.791858 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.791871 kubelet[2107]: W0913 00:43:28.791866 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.791941 kubelet[2107]: E0913 00:43:28.791877 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:28.792058 kubelet[2107]: E0913 00:43:28.792047 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.792058 kubelet[2107]: W0913 00:43:28.792054 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.792116 kubelet[2107]: E0913 00:43:28.792065 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:28.792340 kubelet[2107]: E0913 00:43:28.792317 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.792340 kubelet[2107]: W0913 00:43:28.792336 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.792424 kubelet[2107]: E0913 00:43:28.792368 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:28.792540 kubelet[2107]: E0913 00:43:28.792525 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.792540 kubelet[2107]: W0913 00:43:28.792535 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.792610 kubelet[2107]: E0913 00:43:28.792547 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:28.792736 kubelet[2107]: E0913 00:43:28.792724 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.792736 kubelet[2107]: W0913 00:43:28.792733 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.792804 kubelet[2107]: E0913 00:43:28.792746 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:28.792951 kubelet[2107]: E0913 00:43:28.792937 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.792951 kubelet[2107]: W0913 00:43:28.792947 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.793020 kubelet[2107]: E0913 00:43:28.792959 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:28.793114 kubelet[2107]: E0913 00:43:28.793102 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.793114 kubelet[2107]: W0913 00:43:28.793109 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.793176 kubelet[2107]: E0913 00:43:28.793121 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:28.793276 kubelet[2107]: E0913 00:43:28.793261 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:28.793276 kubelet[2107]: W0913 00:43:28.793271 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:28.793358 kubelet[2107]: E0913 00:43:28.793280 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:29.695441 kubelet[2107]: I0913 00:43:29.695395 2107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:43:29.695857 kubelet[2107]: E0913 00:43:29.695836 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:29.697081 kubelet[2107]: E0913 00:43:29.697061 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.697081 kubelet[2107]: W0913 00:43:29.697078 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.697216 kubelet[2107]: E0913 00:43:29.697094 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:29.697349 kubelet[2107]: E0913 00:43:29.697329 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.697349 kubelet[2107]: W0913 00:43:29.697342 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.697471 kubelet[2107]: E0913 00:43:29.697353 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:29.697731 kubelet[2107]: E0913 00:43:29.697690 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.697731 kubelet[2107]: W0913 00:43:29.697704 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.697731 kubelet[2107]: E0913 00:43:29.697715 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:29.698841 kubelet[2107]: E0913 00:43:29.698821 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.698841 kubelet[2107]: W0913 00:43:29.698836 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.698948 kubelet[2107]: E0913 00:43:29.698848 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:29.699103 kubelet[2107]: E0913 00:43:29.699081 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.699103 kubelet[2107]: W0913 00:43:29.699097 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.699208 kubelet[2107]: E0913 00:43:29.699107 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:29.699399 kubelet[2107]: E0913 00:43:29.699378 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.699473 kubelet[2107]: W0913 00:43:29.699401 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.699473 kubelet[2107]: E0913 00:43:29.699441 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:29.699717 kubelet[2107]: E0913 00:43:29.699702 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.699775 kubelet[2107]: W0913 00:43:29.699729 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.699775 kubelet[2107]: E0913 00:43:29.699742 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:29.699981 kubelet[2107]: E0913 00:43:29.699951 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.699981 kubelet[2107]: W0913 00:43:29.699962 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.699981 kubelet[2107]: E0913 00:43:29.699979 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:29.700228 kubelet[2107]: E0913 00:43:29.700202 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.700228 kubelet[2107]: W0913 00:43:29.700214 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.700228 kubelet[2107]: E0913 00:43:29.700224 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:29.700508 kubelet[2107]: E0913 00:43:29.700389 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.700508 kubelet[2107]: W0913 00:43:29.700399 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.700508 kubelet[2107]: E0913 00:43:29.700419 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:29.700612 kubelet[2107]: E0913 00:43:29.700554 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.700612 kubelet[2107]: W0913 00:43:29.700564 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.700612 kubelet[2107]: E0913 00:43:29.700573 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:29.700795 kubelet[2107]: E0913 00:43:29.700773 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.700795 kubelet[2107]: W0913 00:43:29.700788 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.700906 kubelet[2107]: E0913 00:43:29.700804 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:29.701006 kubelet[2107]: E0913 00:43:29.700972 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.701006 kubelet[2107]: W0913 00:43:29.700984 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.701006 kubelet[2107]: E0913 00:43:29.700997 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:29.701235 kubelet[2107]: E0913 00:43:29.701221 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.701289 kubelet[2107]: W0913 00:43:29.701235 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.701289 kubelet[2107]: E0913 00:43:29.701249 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:29.701456 kubelet[2107]: E0913 00:43:29.701443 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.701524 kubelet[2107]: W0913 00:43:29.701456 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.701524 kubelet[2107]: E0913 00:43:29.701470 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:29.746786 env[1315]: time="2025-09-13T00:43:29.746708464Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:29.749045 env[1315]: time="2025-09-13T00:43:29.748875147Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:29.750493 env[1315]: time="2025-09-13T00:43:29.750448650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:29.752099 env[1315]: time="2025-09-13T00:43:29.752072832Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:29.752483 env[1315]: time="2025-09-13T00:43:29.752442251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image 
reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:43:29.754814 env[1315]: time="2025-09-13T00:43:29.754782497Z" level=info msg="CreateContainer within sandbox \"f8b62aa41671bdb4a73d6c90f4965150b3827d90c69a1cba23440a5f78935535\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:43:29.770167 env[1315]: time="2025-09-13T00:43:29.770120313Z" level=info msg="CreateContainer within sandbox \"f8b62aa41671bdb4a73d6c90f4965150b3827d90c69a1cba23440a5f78935535\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8f5407059f3444bfefca4b143bd33a4e94727ac9db97d2ab6fa397d864c7ee43\"" Sep 13 00:43:29.770681 env[1315]: time="2025-09-13T00:43:29.770639567Z" level=info msg="StartContainer for \"8f5407059f3444bfefca4b143bd33a4e94727ac9db97d2ab6fa397d864c7ee43\"" Sep 13 00:43:29.798173 kubelet[2107]: E0913 00:43:29.798120 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.798173 kubelet[2107]: W0913 00:43:29.798162 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.798373 kubelet[2107]: E0913 00:43:29.798188 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:29.798709 kubelet[2107]: E0913 00:43:29.798692 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.798709 kubelet[2107]: W0913 00:43:29.798708 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.798811 kubelet[2107]: E0913 00:43:29.798779 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:29.799223 kubelet[2107]: E0913 00:43:29.799204 2107 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:29.799283 kubelet[2107]: W0913 00:43:29.799256 2107 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:29.799283 kubelet[2107]: E0913 00:43:29.799275 2107 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [FlexVolume "nodeagent~uds" error sequence above (driver-call.go:262 unmarshal failure, driver-call.go:149 executable file not found in $PATH, plugins.go:691 plugin probe error) repeated through Sep 13 00:43:29.811229] Sep 13 00:43:29.827222 env[1315]: time="2025-09-13T00:43:29.827178965Z" level=info msg="StartContainer for \"8f5407059f3444bfefca4b143bd33a4e94727ac9db97d2ab6fa397d864c7ee43\" returns successfully" Sep 13 00:43:29.924047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f5407059f3444bfefca4b143bd33a4e94727ac9db97d2ab6fa397d864c7ee43-rootfs.mount: Deactivated successfully. 
Sep 13 00:43:30.146525 env[1315]: time="2025-09-13T00:43:30.146477055Z" level=info msg="shim disconnected" id=8f5407059f3444bfefca4b143bd33a4e94727ac9db97d2ab6fa397d864c7ee43 Sep 13 00:43:30.146726 env[1315]: time="2025-09-13T00:43:30.146527695Z" level=warning msg="cleaning up after shim disconnected" id=8f5407059f3444bfefca4b143bd33a4e94727ac9db97d2ab6fa397d864c7ee43 namespace=k8s.io Sep 13 00:43:30.146726 env[1315]: time="2025-09-13T00:43:30.146542364Z" level=info msg="cleaning up dead shim" Sep 13 00:43:30.152510 env[1315]: time="2025-09-13T00:43:30.152483585Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:43:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2854 runtime=io.containerd.runc.v2\n" Sep 13 00:43:30.647522 kubelet[2107]: E0913 00:43:30.647484 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twb76" podUID="0495105a-b1c5-41d8-b0c9-fcfac0de6125" Sep 13 00:43:30.698342 env[1315]: time="2025-09-13T00:43:30.698295957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:43:33.250922 kubelet[2107]: E0913 00:43:33.250869 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twb76" podUID="0495105a-b1c5-41d8-b0c9-fcfac0de6125" Sep 13 00:43:34.646732 kubelet[2107]: E0913 00:43:34.646656 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twb76" 
podUID="0495105a-b1c5-41d8-b0c9-fcfac0de6125" Sep 13 00:43:34.775342 env[1315]: time="2025-09-13T00:43:34.775294404Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:34.777693 env[1315]: time="2025-09-13T00:43:34.777643073Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:34.778773 env[1315]: time="2025-09-13T00:43:34.778750899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:34.780107 env[1315]: time="2025-09-13T00:43:34.780056298Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:34.780486 env[1315]: time="2025-09-13T00:43:34.780444933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:43:34.788851 env[1315]: time="2025-09-13T00:43:34.788823934Z" level=info msg="CreateContainer within sandbox \"f8b62aa41671bdb4a73d6c90f4965150b3827d90c69a1cba23440a5f78935535\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:43:34.803252 env[1315]: time="2025-09-13T00:43:34.803194340Z" level=info msg="CreateContainer within sandbox \"f8b62aa41671bdb4a73d6c90f4965150b3827d90c69a1cba23440a5f78935535\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dfa87b5426019dc3de9fb4c900db280010003a46855dd6c6c68037a5b4d1a345\"" Sep 13 00:43:34.804579 env[1315]: 
time="2025-09-13T00:43:34.804528586Z" level=info msg="StartContainer for \"dfa87b5426019dc3de9fb4c900db280010003a46855dd6c6c68037a5b4d1a345\"" Sep 13 00:43:34.846787 env[1315]: time="2025-09-13T00:43:34.846737146Z" level=info msg="StartContainer for \"dfa87b5426019dc3de9fb4c900db280010003a46855dd6c6c68037a5b4d1a345\" returns successfully" Sep 13 00:43:36.593785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfa87b5426019dc3de9fb4c900db280010003a46855dd6c6c68037a5b4d1a345-rootfs.mount: Deactivated successfully. Sep 13 00:43:36.595519 env[1315]: time="2025-09-13T00:43:36.595481480Z" level=info msg="shim disconnected" id=dfa87b5426019dc3de9fb4c900db280010003a46855dd6c6c68037a5b4d1a345 Sep 13 00:43:36.596094 env[1315]: time="2025-09-13T00:43:36.595522318Z" level=warning msg="cleaning up after shim disconnected" id=dfa87b5426019dc3de9fb4c900db280010003a46855dd6c6c68037a5b4d1a345 namespace=k8s.io Sep 13 00:43:36.596094 env[1315]: time="2025-09-13T00:43:36.595530705Z" level=info msg="cleaning up dead shim" Sep 13 00:43:36.601140 env[1315]: time="2025-09-13T00:43:36.601081907Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:43:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2924 runtime=io.containerd.runc.v2\n" Sep 13 00:43:36.642315 kubelet[2107]: I0913 00:43:36.642294 2107 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:43:36.650974 env[1315]: time="2025-09-13T00:43:36.650921882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twb76,Uid:0495105a-b1c5-41d8-b0c9-fcfac0de6125,Namespace:calico-system,Attempt:0,}" Sep 13 00:43:36.719320 env[1315]: time="2025-09-13T00:43:36.718126470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:43:36.761439 env[1315]: time="2025-09-13T00:43:36.761354257Z" level=error msg="Failed to destroy network for sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:36.761928 env[1315]: time="2025-09-13T00:43:36.761894563Z" level=error msg="encountered an error cleaning up failed sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:36.762003 env[1315]: time="2025-09-13T00:43:36.761972254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twb76,Uid:0495105a-b1c5-41d8-b0c9-fcfac0de6125,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:36.762286 kubelet[2107]: E0913 00:43:36.762257 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:36.762411 kubelet[2107]: E0913 00:43:36.762393 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-twb76" Sep 13 00:43:36.762499 kubelet[2107]: E0913 00:43:36.762478 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-twb76" Sep 13 00:43:36.762668 kubelet[2107]: E0913 00:43:36.762610 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-twb76_calico-system(0495105a-b1c5-41d8-b0c9-fcfac0de6125)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-twb76_calico-system(0495105a-b1c5-41d8-b0c9-fcfac0de6125)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-twb76" podUID="0495105a-b1c5-41d8-b0c9-fcfac0de6125" Sep 13 00:43:36.763697 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515-shm.mount: Deactivated successfully. 
Sep 13 00:43:36.852214 kubelet[2107]: I0913 00:43:36.852112 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c47h\" (UniqueName: \"kubernetes.io/projected/4f125e3b-0620-4e01-917a-be90d6600e62-kube-api-access-8c47h\") pod \"coredns-7c65d6cfc9-gpbn2\" (UID: \"4f125e3b-0620-4e01-917a-be90d6600e62\") " pod="kube-system/coredns-7c65d6cfc9-gpbn2" Sep 13 00:43:36.852403 kubelet[2107]: I0913 00:43:36.852385 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-whisker-ca-bundle\") pod \"whisker-55dd58ff54-4mn4x\" (UID: \"f306ba4a-0aec-4abe-961c-ead3aa08c8ea\") " pod="calico-system/whisker-55dd58ff54-4mn4x" Sep 13 00:43:36.852496 kubelet[2107]: I0913 00:43:36.852479 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq2pj\" (UniqueName: \"kubernetes.io/projected/c30c5f78-8868-438b-9ac1-1ddc434b02ca-kube-api-access-lq2pj\") pod \"calico-apiserver-69684bbb9f-kh9fl\" (UID: \"c30c5f78-8868-438b-9ac1-1ddc434b02ca\") " pod="calico-apiserver/calico-apiserver-69684bbb9f-kh9fl" Sep 13 00:43:36.852640 kubelet[2107]: I0913 00:43:36.852595 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/631473a4-cdd6-4d56-8d52-071b771820a5-tigera-ca-bundle\") pod \"calico-kube-controllers-66999fb6b9-hjj2v\" (UID: \"631473a4-cdd6-4d56-8d52-071b771820a5\") " pod="calico-system/calico-kube-controllers-66999fb6b9-hjj2v" Sep 13 00:43:36.852640 kubelet[2107]: I0913 00:43:36.852641 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzv9c\" (UniqueName: \"kubernetes.io/projected/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-kube-api-access-xzv9c\") pod 
\"whisker-55dd58ff54-4mn4x\" (UID: \"f306ba4a-0aec-4abe-961c-ead3aa08c8ea\") " pod="calico-system/whisker-55dd58ff54-4mn4x" Sep 13 00:43:36.852827 kubelet[2107]: I0913 00:43:36.852657 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61948f79-c287-454b-a5e0-ba4e09c53ab6-config\") pod \"goldmane-7988f88666-44vsn\" (UID: \"61948f79-c287-454b-a5e0-ba4e09c53ab6\") " pod="calico-system/goldmane-7988f88666-44vsn" Sep 13 00:43:36.852827 kubelet[2107]: I0913 00:43:36.852673 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61948f79-c287-454b-a5e0-ba4e09c53ab6-goldmane-ca-bundle\") pod \"goldmane-7988f88666-44vsn\" (UID: \"61948f79-c287-454b-a5e0-ba4e09c53ab6\") " pod="calico-system/goldmane-7988f88666-44vsn" Sep 13 00:43:36.852827 kubelet[2107]: I0913 00:43:36.852696 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/61948f79-c287-454b-a5e0-ba4e09c53ab6-goldmane-key-pair\") pod \"goldmane-7988f88666-44vsn\" (UID: \"61948f79-c287-454b-a5e0-ba4e09c53ab6\") " pod="calico-system/goldmane-7988f88666-44vsn" Sep 13 00:43:36.852827 kubelet[2107]: I0913 00:43:36.852720 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-whisker-backend-key-pair\") pod \"whisker-55dd58ff54-4mn4x\" (UID: \"f306ba4a-0aec-4abe-961c-ead3aa08c8ea\") " pod="calico-system/whisker-55dd58ff54-4mn4x" Sep 13 00:43:36.852827 kubelet[2107]: I0913 00:43:36.852732 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wndm\" (UniqueName: 
\"kubernetes.io/projected/61948f79-c287-454b-a5e0-ba4e09c53ab6-kube-api-access-9wndm\") pod \"goldmane-7988f88666-44vsn\" (UID: \"61948f79-c287-454b-a5e0-ba4e09c53ab6\") " pod="calico-system/goldmane-7988f88666-44vsn" Sep 13 00:43:36.852954 kubelet[2107]: I0913 00:43:36.852748 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57c2caed-82b7-4c8e-9591-b6e3e76966bb-config-volume\") pod \"coredns-7c65d6cfc9-kp2bx\" (UID: \"57c2caed-82b7-4c8e-9591-b6e3e76966bb\") " pod="kube-system/coredns-7c65d6cfc9-kp2bx" Sep 13 00:43:36.852954 kubelet[2107]: I0913 00:43:36.852771 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2mzn\" (UniqueName: \"kubernetes.io/projected/445a4a34-e91d-44ab-8fb9-191df697ef64-kube-api-access-s2mzn\") pod \"calico-apiserver-69684bbb9f-79hgx\" (UID: \"445a4a34-e91d-44ab-8fb9-191df697ef64\") " pod="calico-apiserver/calico-apiserver-69684bbb9f-79hgx" Sep 13 00:43:36.852954 kubelet[2107]: I0913 00:43:36.852796 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2wkg\" (UniqueName: \"kubernetes.io/projected/57c2caed-82b7-4c8e-9591-b6e3e76966bb-kube-api-access-p2wkg\") pod \"coredns-7c65d6cfc9-kp2bx\" (UID: \"57c2caed-82b7-4c8e-9591-b6e3e76966bb\") " pod="kube-system/coredns-7c65d6cfc9-kp2bx" Sep 13 00:43:36.852954 kubelet[2107]: I0913 00:43:36.852813 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx54l\" (UniqueName: \"kubernetes.io/projected/631473a4-cdd6-4d56-8d52-071b771820a5-kube-api-access-xx54l\") pod \"calico-kube-controllers-66999fb6b9-hjj2v\" (UID: \"631473a4-cdd6-4d56-8d52-071b771820a5\") " pod="calico-system/calico-kube-controllers-66999fb6b9-hjj2v" Sep 13 00:43:36.852954 kubelet[2107]: I0913 00:43:36.852833 2107 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f125e3b-0620-4e01-917a-be90d6600e62-config-volume\") pod \"coredns-7c65d6cfc9-gpbn2\" (UID: \"4f125e3b-0620-4e01-917a-be90d6600e62\") " pod="kube-system/coredns-7c65d6cfc9-gpbn2" Sep 13 00:43:36.853078 kubelet[2107]: I0913 00:43:36.852847 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/445a4a34-e91d-44ab-8fb9-191df697ef64-calico-apiserver-certs\") pod \"calico-apiserver-69684bbb9f-79hgx\" (UID: \"445a4a34-e91d-44ab-8fb9-191df697ef64\") " pod="calico-apiserver/calico-apiserver-69684bbb9f-79hgx" Sep 13 00:43:36.853078 kubelet[2107]: I0913 00:43:36.852862 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c30c5f78-8868-438b-9ac1-1ddc434b02ca-calico-apiserver-certs\") pod \"calico-apiserver-69684bbb9f-kh9fl\" (UID: \"c30c5f78-8868-438b-9ac1-1ddc434b02ca\") " pod="calico-apiserver/calico-apiserver-69684bbb9f-kh9fl" Sep 13 00:43:36.975994 env[1315]: time="2025-09-13T00:43:36.975944796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66999fb6b9-hjj2v,Uid:631473a4-cdd6-4d56-8d52-071b771820a5,Namespace:calico-system,Attempt:0,}" Sep 13 00:43:36.986430 env[1315]: time="2025-09-13T00:43:36.986403643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55dd58ff54-4mn4x,Uid:f306ba4a-0aec-4abe-961c-ead3aa08c8ea,Namespace:calico-system,Attempt:0,}" Sep 13 00:43:36.986768 env[1315]: time="2025-09-13T00:43:36.986721990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69684bbb9f-79hgx,Uid:445a4a34-e91d-44ab-8fb9-191df697ef64,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:43:36.987315 env[1315]: 
time="2025-09-13T00:43:36.987282737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69684bbb9f-kh9fl,Uid:c30c5f78-8868-438b-9ac1-1ddc434b02ca,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:43:36.988677 env[1315]: time="2025-09-13T00:43:36.988653040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-44vsn,Uid:61948f79-c287-454b-a5e0-ba4e09c53ab6,Namespace:calico-system,Attempt:0,}" Sep 13 00:43:36.992920 kubelet[2107]: E0913 00:43:36.992895 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:36.994429 env[1315]: time="2025-09-13T00:43:36.993281615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kp2bx,Uid:57c2caed-82b7-4c8e-9591-b6e3e76966bb,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:36.994838 kubelet[2107]: E0913 00:43:36.994718 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:36.996992 env[1315]: time="2025-09-13T00:43:36.996879874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gpbn2,Uid:4f125e3b-0620-4e01-917a-be90d6600e62,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:37.037987 env[1315]: time="2025-09-13T00:43:37.037933669Z" level=error msg="Failed to destroy network for sandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.038259 env[1315]: time="2025-09-13T00:43:37.038231815Z" level=error msg="encountered an error cleaning up failed sandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\", marking sandbox state 
as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.038453 env[1315]: time="2025-09-13T00:43:37.038413916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66999fb6b9-hjj2v,Uid:631473a4-cdd6-4d56-8d52-071b771820a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.039021 kubelet[2107]: E0913 00:43:37.038661 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.039021 kubelet[2107]: E0913 00:43:37.038737 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66999fb6b9-hjj2v" Sep 13 00:43:37.039021 kubelet[2107]: E0913 00:43:37.038763 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66999fb6b9-hjj2v" Sep 13 00:43:37.039177 kubelet[2107]: E0913 00:43:37.038807 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66999fb6b9-hjj2v_calico-system(631473a4-cdd6-4d56-8d52-071b771820a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66999fb6b9-hjj2v_calico-system(631473a4-cdd6-4d56-8d52-071b771820a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66999fb6b9-hjj2v" podUID="631473a4-cdd6-4d56-8d52-071b771820a5" Sep 13 00:43:37.101848 env[1315]: time="2025-09-13T00:43:37.101795076Z" level=error msg="Failed to destroy network for sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.102398 env[1315]: time="2025-09-13T00:43:37.102325230Z" level=error msg="encountered an error cleaning up failed sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.102512 env[1315]: time="2025-09-13T00:43:37.102483976Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55dd58ff54-4mn4x,Uid:f306ba4a-0aec-4abe-961c-ead3aa08c8ea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.103180 kubelet[2107]: E0913 00:43:37.102846 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.103180 kubelet[2107]: E0913 00:43:37.102907 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55dd58ff54-4mn4x" Sep 13 00:43:37.103180 kubelet[2107]: E0913 00:43:37.102932 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55dd58ff54-4mn4x" Sep 13 00:43:37.103327 kubelet[2107]: E0913 00:43:37.102968 2107 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"whisker-55dd58ff54-4mn4x_calico-system(f306ba4a-0aec-4abe-961c-ead3aa08c8ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55dd58ff54-4mn4x_calico-system(f306ba4a-0aec-4abe-961c-ead3aa08c8ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55dd58ff54-4mn4x" podUID="f306ba4a-0aec-4abe-961c-ead3aa08c8ea" Sep 13 00:43:37.121840 env[1315]: time="2025-09-13T00:43:37.121769577Z" level=error msg="Failed to destroy network for sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.122329 env[1315]: time="2025-09-13T00:43:37.122299181Z" level=error msg="encountered an error cleaning up failed sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.122471 env[1315]: time="2025-09-13T00:43:37.122434332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-44vsn,Uid:61948f79-c287-454b-a5e0-ba4e09c53ab6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.122930 kubelet[2107]: E0913 00:43:37.122875 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.123010 kubelet[2107]: E0913 00:43:37.122952 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-44vsn" Sep 13 00:43:37.123010 kubelet[2107]: E0913 00:43:37.122973 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-44vsn" Sep 13 00:43:37.123100 kubelet[2107]: E0913 00:43:37.123023 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-44vsn_calico-system(61948f79-c287-454b-a5e0-ba4e09c53ab6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-44vsn_calico-system(61948f79-c287-454b-a5e0-ba4e09c53ab6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-44vsn" podUID="61948f79-c287-454b-a5e0-ba4e09c53ab6" Sep 13 00:43:37.124813 env[1315]: time="2025-09-13T00:43:37.124787291Z" level=error msg="Failed to destroy network for sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.125193 env[1315]: time="2025-09-13T00:43:37.125162385Z" level=error msg="encountered an error cleaning up failed sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.125528 env[1315]: time="2025-09-13T00:43:37.125498495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kp2bx,Uid:57c2caed-82b7-4c8e-9591-b6e3e76966bb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.125919 kubelet[2107]: E0913 00:43:37.125746 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.125919 kubelet[2107]: E0913 00:43:37.125820 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kp2bx" Sep 13 00:43:37.125919 kubelet[2107]: E0913 00:43:37.125839 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kp2bx" Sep 13 00:43:37.126022 kubelet[2107]: E0913 00:43:37.125876 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kp2bx_kube-system(57c2caed-82b7-4c8e-9591-b6e3e76966bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kp2bx_kube-system(57c2caed-82b7-4c8e-9591-b6e3e76966bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kp2bx" podUID="57c2caed-82b7-4c8e-9591-b6e3e76966bb" Sep 13 00:43:37.134813 env[1315]: time="2025-09-13T00:43:37.134737365Z" 
level=error msg="Failed to destroy network for sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.135789 env[1315]: time="2025-09-13T00:43:37.135154501Z" level=error msg="encountered an error cleaning up failed sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.135789 env[1315]: time="2025-09-13T00:43:37.135198074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gpbn2,Uid:4f125e3b-0620-4e01-917a-be90d6600e62,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.136250 kubelet[2107]: E0913 00:43:37.136217 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.136332 kubelet[2107]: E0913 00:43:37.136270 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gpbn2" Sep 13 00:43:37.136332 kubelet[2107]: E0913 00:43:37.136292 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gpbn2" Sep 13 00:43:37.136394 kubelet[2107]: E0913 00:43:37.136334 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gpbn2_kube-system(4f125e3b-0620-4e01-917a-be90d6600e62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gpbn2_kube-system(4f125e3b-0620-4e01-917a-be90d6600e62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gpbn2" podUID="4f125e3b-0620-4e01-917a-be90d6600e62" Sep 13 00:43:37.140833 env[1315]: time="2025-09-13T00:43:37.140778773Z" level=error msg="Failed to destroy network for sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.141436 env[1315]: time="2025-09-13T00:43:37.141403440Z" level=error msg="encountered an 
error cleaning up failed sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.141702 env[1315]: time="2025-09-13T00:43:37.141674033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69684bbb9f-79hgx,Uid:445a4a34-e91d-44ab-8fb9-191df697ef64,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.142047 kubelet[2107]: E0913 00:43:37.142022 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.142126 kubelet[2107]: E0913 00:43:37.142060 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69684bbb9f-79hgx" Sep 13 00:43:37.142126 kubelet[2107]: E0913 00:43:37.142074 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69684bbb9f-79hgx" Sep 13 00:43:37.142126 kubelet[2107]: E0913 00:43:37.142107 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69684bbb9f-79hgx_calico-apiserver(445a4a34-e91d-44ab-8fb9-191df697ef64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69684bbb9f-79hgx_calico-apiserver(445a4a34-e91d-44ab-8fb9-191df697ef64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69684bbb9f-79hgx" podUID="445a4a34-e91d-44ab-8fb9-191df697ef64" Sep 13 00:43:37.148782 env[1315]: time="2025-09-13T00:43:37.148714302Z" level=error msg="Failed to destroy network for sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.149069 env[1315]: time="2025-09-13T00:43:37.149031105Z" level=error msg="encountered an error cleaning up failed sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 13 00:43:37.149102 env[1315]: time="2025-09-13T00:43:37.149077243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69684bbb9f-kh9fl,Uid:c30c5f78-8868-438b-9ac1-1ddc434b02ca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.149325 kubelet[2107]: E0913 00:43:37.149282 2107 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.149383 kubelet[2107]: E0913 00:43:37.149337 2107 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69684bbb9f-kh9fl" Sep 13 00:43:37.149383 kubelet[2107]: E0913 00:43:37.149355 2107 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-69684bbb9f-kh9fl" Sep 13 00:43:37.149437 kubelet[2107]: E0913 00:43:37.149396 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69684bbb9f-kh9fl_calico-apiserver(c30c5f78-8868-438b-9ac1-1ddc434b02ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69684bbb9f-kh9fl_calico-apiserver(c30c5f78-8868-438b-9ac1-1ddc434b02ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69684bbb9f-kh9fl" podUID="c30c5f78-8868-438b-9ac1-1ddc434b02ca" Sep 13 00:43:37.719135 kubelet[2107]: I0913 00:43:37.719093 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:43:37.720094 kubelet[2107]: I0913 00:43:37.719804 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Sep 13 00:43:37.720195 env[1315]: time="2025-09-13T00:43:37.719783425Z" level=info msg="StopPodSandbox for \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\"" Sep 13 00:43:37.720690 env[1315]: time="2025-09-13T00:43:37.720236110Z" level=info msg="StopPodSandbox for \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\"" Sep 13 00:43:37.721217 kubelet[2107]: I0913 00:43:37.721174 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:43:37.721656 env[1315]: time="2025-09-13T00:43:37.721615868Z" level=info msg="StopPodSandbox 
for \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\"" Sep 13 00:43:37.728217 kubelet[2107]: I0913 00:43:37.728173 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:43:37.730070 env[1315]: time="2025-09-13T00:43:37.730026987Z" level=info msg="StopPodSandbox for \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\"" Sep 13 00:43:37.731393 kubelet[2107]: I0913 00:43:37.731366 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:43:37.732441 env[1315]: time="2025-09-13T00:43:37.732407949Z" level=info msg="StopPodSandbox for \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\"" Sep 13 00:43:37.733181 kubelet[2107]: I0913 00:43:37.733128 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:43:37.733610 env[1315]: time="2025-09-13T00:43:37.733571068Z" level=info msg="StopPodSandbox for \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\"" Sep 13 00:43:37.734525 kubelet[2107]: I0913 00:43:37.734485 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:43:37.734988 env[1315]: time="2025-09-13T00:43:37.734963379Z" level=info msg="StopPodSandbox for \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\"" Sep 13 00:43:37.738375 kubelet[2107]: I0913 00:43:37.738344 2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:43:37.738999 env[1315]: time="2025-09-13T00:43:37.738951679Z" level=info msg="StopPodSandbox for 
\"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\"" Sep 13 00:43:37.750358 env[1315]: time="2025-09-13T00:43:37.750301799Z" level=error msg="StopPodSandbox for \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\" failed" error="failed to destroy network for sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.750563 kubelet[2107]: E0913 00:43:37.750529 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:43:37.750663 kubelet[2107]: E0913 00:43:37.750585 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4"} Sep 13 00:43:37.750694 kubelet[2107]: E0913 00:43:37.750666 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57c2caed-82b7-4c8e-9591-b6e3e76966bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:43:37.750755 kubelet[2107]: E0913 00:43:37.750694 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"57c2caed-82b7-4c8e-9591-b6e3e76966bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kp2bx" podUID="57c2caed-82b7-4c8e-9591-b6e3e76966bb" Sep 13 00:43:37.767505 env[1315]: time="2025-09-13T00:43:37.767435911Z" level=error msg="StopPodSandbox for \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\" failed" error="failed to destroy network for sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.767941 kubelet[2107]: E0913 00:43:37.767890 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Sep 13 00:43:37.768042 kubelet[2107]: E0913 00:43:37.767952 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f"} Sep 13 00:43:37.768042 kubelet[2107]: E0913 00:43:37.767990 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c30c5f78-8868-438b-9ac1-1ddc434b02ca\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:43:37.768042 kubelet[2107]: E0913 00:43:37.768012 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c30c5f78-8868-438b-9ac1-1ddc434b02ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69684bbb9f-kh9fl" podUID="c30c5f78-8868-438b-9ac1-1ddc434b02ca" Sep 13 00:43:37.774588 env[1315]: time="2025-09-13T00:43:37.774503141Z" level=error msg="StopPodSandbox for \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\" failed" error="failed to destroy network for sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.774813 kubelet[2107]: E0913 00:43:37.774764 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:43:37.774890 
kubelet[2107]: E0913 00:43:37.774826 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05"} Sep 13 00:43:37.774890 kubelet[2107]: E0913 00:43:37.774858 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f125e3b-0620-4e01-917a-be90d6600e62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:43:37.774890 kubelet[2107]: E0913 00:43:37.774879 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f125e3b-0620-4e01-917a-be90d6600e62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gpbn2" podUID="4f125e3b-0620-4e01-917a-be90d6600e62" Sep 13 00:43:37.776995 env[1315]: time="2025-09-13T00:43:37.776940353Z" level=error msg="StopPodSandbox for \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\" failed" error="failed to destroy network for sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.777194 kubelet[2107]: E0913 00:43:37.777161 2107 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:43:37.777260 kubelet[2107]: E0913 00:43:37.777202 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95"} Sep 13 00:43:37.777260 kubelet[2107]: E0913 00:43:37.777234 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"445a4a34-e91d-44ab-8fb9-191df697ef64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:43:37.777342 kubelet[2107]: E0913 00:43:37.777257 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"445a4a34-e91d-44ab-8fb9-191df697ef64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69684bbb9f-79hgx" podUID="445a4a34-e91d-44ab-8fb9-191df697ef64" Sep 13 00:43:37.786210 env[1315]: time="2025-09-13T00:43:37.786143653Z" level=error msg="StopPodSandbox for 
\"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\" failed" error="failed to destroy network for sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.786685 kubelet[2107]: E0913 00:43:37.786647 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:43:37.786788 kubelet[2107]: E0913 00:43:37.786698 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb"} Sep 13 00:43:37.786788 kubelet[2107]: E0913 00:43:37.786731 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61948f79-c287-454b-a5e0-ba4e09c53ab6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:43:37.786788 kubelet[2107]: E0913 00:43:37.786750 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61948f79-c287-454b-a5e0-ba4e09c53ab6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-44vsn" podUID="61948f79-c287-454b-a5e0-ba4e09c53ab6" Sep 13 00:43:37.792293 env[1315]: time="2025-09-13T00:43:37.792232042Z" level=error msg="StopPodSandbox for \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\" failed" error="failed to destroy network for sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.792491 kubelet[2107]: E0913 00:43:37.792455 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:43:37.792545 kubelet[2107]: E0913 00:43:37.792506 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e"} Sep 13 00:43:37.792545 kubelet[2107]: E0913 00:43:37.792538 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f306ba4a-0aec-4abe-961c-ead3aa08c8ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:43:37.792673 kubelet[2107]: E0913 00:43:37.792558 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f306ba4a-0aec-4abe-961c-ead3aa08c8ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55dd58ff54-4mn4x" podUID="f306ba4a-0aec-4abe-961c-ead3aa08c8ea" Sep 13 00:43:37.801299 env[1315]: time="2025-09-13T00:43:37.801240005Z" level=error msg="StopPodSandbox for \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\" failed" error="failed to destroy network for sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.801509 kubelet[2107]: E0913 00:43:37.801468 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:43:37.801582 kubelet[2107]: E0913 00:43:37.801521 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515"} Sep 13 00:43:37.801582 kubelet[2107]: E0913 00:43:37.801552 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0495105a-b1c5-41d8-b0c9-fcfac0de6125\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:43:37.801708 kubelet[2107]: E0913 00:43:37.801571 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0495105a-b1c5-41d8-b0c9-fcfac0de6125\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-twb76" podUID="0495105a-b1c5-41d8-b0c9-fcfac0de6125" Sep 13 00:43:37.801972 env[1315]: time="2025-09-13T00:43:37.801930850Z" level=error msg="StopPodSandbox for \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\" failed" error="failed to destroy network for sandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:43:37.802049 kubelet[2107]: E0913 00:43:37.802022 2107 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:43:37.802049 kubelet[2107]: E0913 00:43:37.802046 2107 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0"} Sep 13 00:43:37.802140 kubelet[2107]: E0913 00:43:37.802063 2107 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"631473a4-cdd6-4d56-8d52-071b771820a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:43:37.802140 kubelet[2107]: E0913 00:43:37.802079 2107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"631473a4-cdd6-4d56-8d52-071b771820a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66999fb6b9-hjj2v" podUID="631473a4-cdd6-4d56-8d52-071b771820a5" Sep 13 00:43:43.165678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893872588.mount: Deactivated successfully. 
Sep 13 00:43:44.128739 env[1315]: time="2025-09-13T00:43:44.128680911Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:44.130928 env[1315]: time="2025-09-13T00:43:44.130897562Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:44.132300 env[1315]: time="2025-09-13T00:43:44.132273604Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:44.133748 env[1315]: time="2025-09-13T00:43:44.133718792Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:44.134098 env[1315]: time="2025-09-13T00:43:44.134067065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:43:44.140961 env[1315]: time="2025-09-13T00:43:44.140920340Z" level=info msg="CreateContainer within sandbox \"f8b62aa41671bdb4a73d6c90f4965150b3827d90c69a1cba23440a5f78935535\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:43:44.159736 env[1315]: time="2025-09-13T00:43:44.159692676Z" level=info msg="CreateContainer within sandbox \"f8b62aa41671bdb4a73d6c90f4965150b3827d90c69a1cba23440a5f78935535\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0a7482681e8e07a83e0b7a5da838b66b3b4f29fb90cbfc7b715a90545ef7b765\"" Sep 13 00:43:44.160153 env[1315]: time="2025-09-13T00:43:44.160093403Z" level=info msg="StartContainer for 
\"0a7482681e8e07a83e0b7a5da838b66b3b4f29fb90cbfc7b715a90545ef7b765\"" Sep 13 00:43:44.209938 env[1315]: time="2025-09-13T00:43:44.209868225Z" level=info msg="StartContainer for \"0a7482681e8e07a83e0b7a5da838b66b3b4f29fb90cbfc7b715a90545ef7b765\" returns successfully" Sep 13 00:43:44.284204 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:43:44.284319 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:43:44.452694 env[1315]: time="2025-09-13T00:43:44.452559952Z" level=info msg="StopPodSandbox for \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\"" Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.503 [INFO][3429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.504 [INFO][3429] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" iface="eth0" netns="/var/run/netns/cni-62d6de98-c2bd-66b8-5cc7-ae55ed94fc24" Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.504 [INFO][3429] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" iface="eth0" netns="/var/run/netns/cni-62d6de98-c2bd-66b8-5cc7-ae55ed94fc24" Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.504 [INFO][3429] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" iface="eth0" netns="/var/run/netns/cni-62d6de98-c2bd-66b8-5cc7-ae55ed94fc24" Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.504 [INFO][3429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.504 [INFO][3429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.563 [INFO][3437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" HandleID="k8s-pod-network.80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Workload="localhost-k8s-whisker--55dd58ff54--4mn4x-eth0" Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.563 [INFO][3437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.564 [INFO][3437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.569 [WARNING][3437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" HandleID="k8s-pod-network.80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Workload="localhost-k8s-whisker--55dd58ff54--4mn4x-eth0" Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.569 [INFO][3437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" HandleID="k8s-pod-network.80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Workload="localhost-k8s-whisker--55dd58ff54--4mn4x-eth0" Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.570 [INFO][3437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:44.572931 env[1315]: 2025-09-13 00:43:44.571 [INFO][3429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:43:44.573977 env[1315]: time="2025-09-13T00:43:44.573065269Z" level=info msg="TearDown network for sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\" successfully" Sep 13 00:43:44.573977 env[1315]: time="2025-09-13T00:43:44.573093194Z" level=info msg="StopPodSandbox for \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\" returns successfully" Sep 13 00:43:44.575415 systemd[1]: run-netns-cni\x2d62d6de98\x2dc2bd\x2d66b8\x2d5cc7\x2dae55ed94fc24.mount: Deactivated successfully. 
Sep 13 00:43:44.694589 kubelet[2107]: I0913 00:43:44.694528 2107 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzv9c\" (UniqueName: \"kubernetes.io/projected/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-kube-api-access-xzv9c\") pod \"f306ba4a-0aec-4abe-961c-ead3aa08c8ea\" (UID: \"f306ba4a-0aec-4abe-961c-ead3aa08c8ea\") " Sep 13 00:43:44.694589 kubelet[2107]: I0913 00:43:44.694567 2107 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-whisker-ca-bundle\") pod \"f306ba4a-0aec-4abe-961c-ead3aa08c8ea\" (UID: \"f306ba4a-0aec-4abe-961c-ead3aa08c8ea\") " Sep 13 00:43:44.695090 kubelet[2107]: I0913 00:43:44.694601 2107 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-whisker-backend-key-pair\") pod \"f306ba4a-0aec-4abe-961c-ead3aa08c8ea\" (UID: \"f306ba4a-0aec-4abe-961c-ead3aa08c8ea\") " Sep 13 00:43:44.695090 kubelet[2107]: I0913 00:43:44.695002 2107 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f306ba4a-0aec-4abe-961c-ead3aa08c8ea" (UID: "f306ba4a-0aec-4abe-961c-ead3aa08c8ea"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:43:44.700054 kubelet[2107]: I0913 00:43:44.700014 2107 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-kube-api-access-xzv9c" (OuterVolumeSpecName: "kube-api-access-xzv9c") pod "f306ba4a-0aec-4abe-961c-ead3aa08c8ea" (UID: "f306ba4a-0aec-4abe-961c-ead3aa08c8ea"). InnerVolumeSpecName "kube-api-access-xzv9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:43:44.700159 kubelet[2107]: I0913 00:43:44.700118 2107 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f306ba4a-0aec-4abe-961c-ead3aa08c8ea" (UID: "f306ba4a-0aec-4abe-961c-ead3aa08c8ea"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:43:44.700517 systemd[1]: var-lib-kubelet-pods-f306ba4a\x2d0aec\x2d4abe\x2d961c\x2dead3aa08c8ea-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:43:44.700689 systemd[1]: var-lib-kubelet-pods-f306ba4a\x2d0aec\x2d4abe\x2d961c\x2dead3aa08c8ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxzv9c.mount: Deactivated successfully. Sep 13 00:43:44.790481 kubelet[2107]: I0913 00:43:44.790429 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lssz6" podStartSLOduration=1.170757622 podStartE2EDuration="19.790410293s" podCreationTimestamp="2025-09-13 00:43:25 +0000 UTC" firstStartedPulling="2025-09-13 00:43:25.515169887 +0000 UTC m=+18.977005624" lastFinishedPulling="2025-09-13 00:43:44.134822558 +0000 UTC m=+37.596658295" observedRunningTime="2025-09-13 00:43:44.790088911 +0000 UTC m=+38.251924669" watchObservedRunningTime="2025-09-13 00:43:44.790410293 +0000 UTC m=+38.252246030" Sep 13 00:43:44.795646 kubelet[2107]: I0913 00:43:44.795568 2107 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzv9c\" (UniqueName: \"kubernetes.io/projected/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-kube-api-access-xzv9c\") on node \"localhost\" DevicePath \"\"" Sep 13 00:43:44.795646 kubelet[2107]: I0913 00:43:44.795593 2107 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:43:44.795646 kubelet[2107]: I0913 00:43:44.795601 2107 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f306ba4a-0aec-4abe-961c-ead3aa08c8ea-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:43:44.896812 kubelet[2107]: I0913 00:43:44.896739 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pbr6\" (UniqueName: \"kubernetes.io/projected/81aaece5-676c-462a-b669-d1c235589a55-kube-api-access-2pbr6\") pod \"whisker-8ddb88cdf-l2pj6\" (UID: \"81aaece5-676c-462a-b669-d1c235589a55\") " pod="calico-system/whisker-8ddb88cdf-l2pj6" Sep 13 00:43:44.897010 kubelet[2107]: I0913 00:43:44.896828 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81aaece5-676c-462a-b669-d1c235589a55-whisker-backend-key-pair\") pod \"whisker-8ddb88cdf-l2pj6\" (UID: \"81aaece5-676c-462a-b669-d1c235589a55\") " pod="calico-system/whisker-8ddb88cdf-l2pj6" Sep 13 00:43:44.897010 kubelet[2107]: I0913 00:43:44.896846 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81aaece5-676c-462a-b669-d1c235589a55-whisker-ca-bundle\") pod \"whisker-8ddb88cdf-l2pj6\" (UID: \"81aaece5-676c-462a-b669-d1c235589a55\") " pod="calico-system/whisker-8ddb88cdf-l2pj6" Sep 13 00:43:45.094436 env[1315]: time="2025-09-13T00:43:45.094053679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8ddb88cdf-l2pj6,Uid:81aaece5-676c-462a-b669-d1c235589a55,Namespace:calico-system,Attempt:0,}" Sep 13 00:43:45.522138 systemd-networkd[1082]: calibb2a1fc737e: Link UP Sep 13 00:43:45.524587 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:43:45.524683 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibb2a1fc737e: link becomes ready Sep 13 00:43:45.524808 systemd-networkd[1082]: calibb2a1fc737e: Gained carrier Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.457 [INFO][3482] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.466 [INFO][3482] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0 whisker-8ddb88cdf- calico-system 81aaece5-676c-462a-b669-d1c235589a55 890 0 2025-09-13 00:43:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8ddb88cdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8ddb88cdf-l2pj6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibb2a1fc737e [] [] }} ContainerID="4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" Namespace="calico-system" Pod="whisker-8ddb88cdf-l2pj6" WorkloadEndpoint="localhost-k8s-whisker--8ddb88cdf--l2pj6-" Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.466 [INFO][3482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" Namespace="calico-system" Pod="whisker-8ddb88cdf-l2pj6" WorkloadEndpoint="localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0" Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.488 [INFO][3496] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" HandleID="k8s-pod-network.4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" Workload="localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0" Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.488 [INFO][3496] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" HandleID="k8s-pod-network.4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" Workload="localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003afe70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8ddb88cdf-l2pj6", "timestamp":"2025-09-13 00:43:45.48853225 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.488 [INFO][3496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.488 [INFO][3496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.488 [INFO][3496] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.494 [INFO][3496] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" host="localhost" Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.498 [INFO][3496] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.501 [INFO][3496] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.502 [INFO][3496] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.504 [INFO][3496] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 
13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.504 [INFO][3496] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" host="localhost" Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.505 [INFO][3496] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539 Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.510 [INFO][3496] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" host="localhost" Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.513 [INFO][3496] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" host="localhost" Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.513 [INFO][3496] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" host="localhost" Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.513 [INFO][3496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:43:45.537324 env[1315]: 2025-09-13 00:43:45.513 [INFO][3496] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" HandleID="k8s-pod-network.4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" Workload="localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0" Sep 13 00:43:45.538232 env[1315]: 2025-09-13 00:43:45.515 [INFO][3482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" Namespace="calico-system" Pod="whisker-8ddb88cdf-l2pj6" WorkloadEndpoint="localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0", GenerateName:"whisker-8ddb88cdf-", Namespace:"calico-system", SelfLink:"", UID:"81aaece5-676c-462a-b669-d1c235589a55", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8ddb88cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8ddb88cdf-l2pj6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibb2a1fc737e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:45.538232 env[1315]: 2025-09-13 00:43:45.515 [INFO][3482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" Namespace="calico-system" Pod="whisker-8ddb88cdf-l2pj6" WorkloadEndpoint="localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0" Sep 13 00:43:45.538232 env[1315]: 2025-09-13 00:43:45.515 [INFO][3482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb2a1fc737e ContainerID="4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" Namespace="calico-system" Pod="whisker-8ddb88cdf-l2pj6" WorkloadEndpoint="localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0" Sep 13 00:43:45.538232 env[1315]: 2025-09-13 00:43:45.526 [INFO][3482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" Namespace="calico-system" Pod="whisker-8ddb88cdf-l2pj6" WorkloadEndpoint="localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0" Sep 13 00:43:45.538232 env[1315]: 2025-09-13 00:43:45.526 [INFO][3482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" Namespace="calico-system" Pod="whisker-8ddb88cdf-l2pj6" WorkloadEndpoint="localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0", GenerateName:"whisker-8ddb88cdf-", Namespace:"calico-system", SelfLink:"", UID:"81aaece5-676c-462a-b669-d1c235589a55", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8ddb88cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539", Pod:"whisker-8ddb88cdf-l2pj6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibb2a1fc737e", MAC:"62:31:84:84:2f:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:45.538232 env[1315]: 2025-09-13 00:43:45.535 [INFO][3482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539" Namespace="calico-system" Pod="whisker-8ddb88cdf-l2pj6" WorkloadEndpoint="localhost-k8s-whisker--8ddb88cdf--l2pj6-eth0" Sep 13 00:43:45.549011 env[1315]: time="2025-09-13T00:43:45.548946145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:45.549011 env[1315]: time="2025-09-13T00:43:45.549015681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:45.549218 env[1315]: time="2025-09-13T00:43:45.549037724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:45.549218 env[1315]: time="2025-09-13T00:43:45.549185224Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539 pid=3518 runtime=io.containerd.runc.v2 Sep 13 00:43:45.609389 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:43:45.609000 audit[3598]: AVC avc: denied { write } for pid=3598 comm="tee" name="fd" dev="proc" ino=25866 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:43:45.615039 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 13 00:43:45.615167 kernel: audit: type=1400 audit(1757724225.609:289): avc: denied { write } for pid=3598 comm="tee" name="fd" dev="proc" ino=25866 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:43:45.624574 kernel: audit: type=1300 audit(1757724225.609:289): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeaccca7e9 a2=241 a3=1b6 items=1 ppid=3557 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:45.609000 audit[3598]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeaccca7e9 a2=241 a3=1b6 items=1 ppid=3557 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:45.609000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 13 00:43:45.629325 kernel: audit: type=1307 audit(1757724225.609:289): cwd="/etc/service/enabled/felix/log" Sep 13 00:43:45.629361 kernel: audit: type=1302 
audit(1757724225.609:289): item=0 name="/dev/fd/63" inode=26804 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:43:45.609000 audit: PATH item=0 name="/dev/fd/63" inode=26804 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:43:45.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:43:45.632082 kernel: audit: type=1327 audit(1757724225.609:289): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:43:45.611000 audit[3615]: AVC avc: denied { write } for pid=3615 comm="tee" name="fd" dev="proc" ino=25870 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:43:45.641819 kernel: audit: type=1400 audit(1757724225.611:290): avc: denied { write } for pid=3615 comm="tee" name="fd" dev="proc" ino=25870 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:43:45.641872 kernel: audit: type=1300 audit(1757724225.611:290): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc28b947eb a2=241 a3=1b6 items=1 ppid=3545 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:45.611000 audit[3615]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc28b947eb a2=241 a3=1b6 items=1 ppid=3545 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:45.611000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 13 00:43:45.643632 kernel: audit: type=1307 audit(1757724225.611:290): cwd="/etc/service/enabled/cni/log" Sep 13 00:43:45.653068 kernel: audit: type=1302 audit(1757724225.611:290): item=0 name="/dev/fd/63" inode=26808 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:43:45.654373 kernel: audit: type=1327 audit(1757724225.611:290): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:43:45.611000 audit: PATH item=0 name="/dev/fd/63" inode=26808 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:43:45.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:43:45.623000 audit[3594]: AVC avc: denied { write } for pid=3594 comm="tee" name="fd" dev="proc" ino=24878 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:43:45.623000 audit[3594]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff9e8d67d9 a2=241 a3=1b6 items=1 ppid=3548 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:45.623000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 13 00:43:45.623000 audit: PATH item=0 name="/dev/fd/63" inode=26803 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:43:45.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:43:45.636000 audit[3622]: AVC avc: denied { write } for pid=3622 comm="tee" name="fd" dev="proc" ino=25879 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:43:45.636000 audit[3622]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd575317ea a2=241 a3=1b6 items=1 ppid=3552 pid=3622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:45.636000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 13 00:43:45.636000 audit: PATH item=0 name="/dev/fd/63" inode=25876 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:43:45.636000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:43:45.636000 audit[3604]: AVC avc: denied { write } for pid=3604 comm="tee" name="fd" dev="proc" ino=25883 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:43:45.636000 audit[3604]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc661077e9 a2=241 a3=1b6 items=1 ppid=3559 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:45.636000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 13 00:43:45.636000 audit: PATH item=0 name="/dev/fd/63" inode=24874 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:43:45.636000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:43:45.643000 audit[3620]: AVC avc: denied { write } for pid=3620 comm="tee" name="fd" dev="proc" ino=26836 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:43:45.643000 audit[3620]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd2dcf57da a2=241 a3=1b6 items=1 ppid=3553 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:45.643000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 00:43:45.643000 audit: PATH item=0 name="/dev/fd/63" inode=24884 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:43:45.643000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:43:45.656547 env[1315]: time="2025-09-13T00:43:45.656502641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8ddb88cdf-l2pj6,Uid:81aaece5-676c-462a-b669-d1c235589a55,Namespace:calico-system,Attempt:0,} returns sandbox id \"4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539\"" Sep 13 00:43:45.658508 env[1315]: time="2025-09-13T00:43:45.658470050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:43:45.659979 systemd[1]: Started sshd@7-10.0.0.27:22-10.0.0.1:60500.service. 
Sep 13 00:43:45.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.27:22-10.0.0.1:60500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:45.667000 audit[3638]: AVC avc: denied { write } for pid=3638 comm="tee" name="fd" dev="proc" ino=24097 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:43:45.667000 audit[3638]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffde42d77e9 a2=241 a3=1b6 items=1 ppid=3569 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:45.667000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 13 00:43:45.667000 audit: PATH item=0 name="/dev/fd/63" inode=25885 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:43:45.667000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:43:45.914000 audit[3633]: USER_ACCT pid=3633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:45.915022 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 60500 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:43:45.915000 audit[3633]: CRED_ACQ pid=3633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Sep 13 00:43:45.915000 audit[3633]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1313cd70 a2=3 a3=0 items=0 ppid=1 pid=3633 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:45.915000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:43:45.916331 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:43:45.921341 systemd-logind[1299]: New session 8 of user core. Sep 13 00:43:45.922045 systemd[1]: Started session-8.scope. Sep 13 00:43:45.926000 audit[3633]: USER_START pid=3633 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:45.928000 audit[3671]: CRED_ACQ pid=3671 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:46.039853 sshd[3633]: pam_unix(sshd:session): session closed for user core Sep 13 00:43:46.040000 audit[3633]: USER_END pid=3633 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:46.040000 audit[3633]: CRED_DISP pid=3633 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:46.042059 systemd[1]: 
sshd@7-10.0.0.27:22-10.0.0.1:60500.service: Deactivated successfully. Sep 13 00:43:46.042835 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:43:46.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.27:22-10.0.0.1:60500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:46.043865 systemd-logind[1299]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:43:46.044592 systemd-logind[1299]: Removed session 8. Sep 13 00:43:46.168942 systemd[1]: run-containerd-runc-k8s.io-4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539-runc.JbgscV.mount: Deactivated successfully. Sep 13 00:43:46.649371 kubelet[2107]: I0913 00:43:46.649325 2107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f306ba4a-0aec-4abe-961c-ead3aa08c8ea" path="/var/lib/kubelet/pods/f306ba4a-0aec-4abe-961c-ead3aa08c8ea/volumes" Sep 13 00:43:47.152752 systemd-networkd[1082]: calibb2a1fc737e: Gained IPv6LL Sep 13 00:43:47.414640 env[1315]: time="2025-09-13T00:43:47.414488170Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:47.416418 env[1315]: time="2025-09-13T00:43:47.416382716Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:47.417936 env[1315]: time="2025-09-13T00:43:47.417901618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:47.419325 env[1315]: time="2025-09-13T00:43:47.419280575Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:47.419730 env[1315]: time="2025-09-13T00:43:47.419696318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:43:47.421512 env[1315]: time="2025-09-13T00:43:47.421485178Z" level=info msg="CreateContainer within sandbox \"4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:43:47.433715 env[1315]: time="2025-09-13T00:43:47.433666047Z" level=info msg="CreateContainer within sandbox \"4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3a09e7d52a68e1c8e3b2bd0762328a1931ed9ab0d479fb542762bd4c2f645faf\"" Sep 13 00:43:47.434377 env[1315]: time="2025-09-13T00:43:47.434346498Z" level=info msg="StartContainer for \"3a09e7d52a68e1c8e3b2bd0762328a1931ed9ab0d479fb542762bd4c2f645faf\"" Sep 13 00:43:47.484452 env[1315]: time="2025-09-13T00:43:47.484392314Z" level=info msg="StartContainer for \"3a09e7d52a68e1c8e3b2bd0762328a1931ed9ab0d479fb542762bd4c2f645faf\" returns successfully" Sep 13 00:43:47.485952 env[1315]: time="2025-09-13T00:43:47.485801129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:43:49.647741 env[1315]: time="2025-09-13T00:43:49.647677648Z" level=info msg="StopPodSandbox for \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\"" Sep 13 00:43:49.648463 env[1315]: time="2025-09-13T00:43:49.647712877Z" level=info msg="StopPodSandbox for \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\"" Sep 13 00:43:49.648562 env[1315]: time="2025-09-13T00:43:49.647743677Z" level=info 
msg="StopPodSandbox for \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\"" Sep 13 00:43:49.649502 env[1315]: time="2025-09-13T00:43:49.647843492Z" level=info msg="StopPodSandbox for \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\"" Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.730 [INFO][3829] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.730 [INFO][3829] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" iface="eth0" netns="/var/run/netns/cni-77499786-cceb-d9ce-ebf8-15cd03eda1f6" Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.731 [INFO][3829] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" iface="eth0" netns="/var/run/netns/cni-77499786-cceb-d9ce-ebf8-15cd03eda1f6" Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.731 [INFO][3829] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" iface="eth0" netns="/var/run/netns/cni-77499786-cceb-d9ce-ebf8-15cd03eda1f6" Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.731 [INFO][3829] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.731 [INFO][3829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.755 [INFO][3875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" HandleID="k8s-pod-network.6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.755 [INFO][3875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.755 [INFO][3875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.763 [WARNING][3875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" HandleID="k8s-pod-network.6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.763 [INFO][3875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" HandleID="k8s-pod-network.6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.767 [INFO][3875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:49.770278 env[1315]: 2025-09-13 00:43:49.769 [INFO][3829] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:43:49.770991 env[1315]: time="2025-09-13T00:43:49.770960275Z" level=info msg="TearDown network for sandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\" successfully" Sep 13 00:43:49.771076 env[1315]: time="2025-09-13T00:43:49.771055532Z" level=info msg="StopPodSandbox for \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\" returns successfully" Sep 13 00:43:49.771986 env[1315]: time="2025-09-13T00:43:49.771958424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66999fb6b9-hjj2v,Uid:631473a4-cdd6-4d56-8d52-071b771820a5,Namespace:calico-system,Attempt:1,}" Sep 13 00:43:49.774227 systemd[1]: run-netns-cni\x2d77499786\x2dcceb\x2dd9ce\x2debf8\x2d15cd03eda1f6.mount: Deactivated successfully. 
Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.745 [INFO][3845] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.745 [INFO][3845] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" iface="eth0" netns="/var/run/netns/cni-346a9aed-878a-ff76-bf1b-b1d482729794" Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.745 [INFO][3845] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" iface="eth0" netns="/var/run/netns/cni-346a9aed-878a-ff76-bf1b-b1d482729794" Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.745 [INFO][3845] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" iface="eth0" netns="/var/run/netns/cni-346a9aed-878a-ff76-bf1b-b1d482729794" Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.745 [INFO][3845] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.745 [INFO][3845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.781 [INFO][3884] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" HandleID="k8s-pod-network.37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.781 [INFO][3884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.781 [INFO][3884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.814 [WARNING][3884] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" HandleID="k8s-pod-network.37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.814 [INFO][3884] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" HandleID="k8s-pod-network.37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.817 [INFO][3884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:49.821320 env[1315]: 2025-09-13 00:43:49.819 [INFO][3845] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:43:49.821791 env[1315]: time="2025-09-13T00:43:49.821529967Z" level=info msg="TearDown network for sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\" successfully" Sep 13 00:43:49.821791 env[1315]: time="2025-09-13T00:43:49.821565617Z" level=info msg="StopPodSandbox for \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\" returns successfully" Sep 13 00:43:49.822088 kubelet[2107]: E0913 00:43:49.822055 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:49.822921 env[1315]: time="2025-09-13T00:43:49.822862628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gpbn2,Uid:4f125e3b-0620-4e01-917a-be90d6600e62,Namespace:kube-system,Attempt:1,}" Sep 13 00:43:49.824698 systemd[1]: run-netns-cni\x2d346a9aed\x2d878a\x2dff76\x2dbf1b\x2db1d482729794.mount: Deactivated successfully. Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.712 [INFO][3828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.712 [INFO][3828] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" iface="eth0" netns="/var/run/netns/cni-186ef449-77ee-44f5-3173-65ae429b6590" Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.714 [INFO][3828] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" iface="eth0" netns="/var/run/netns/cni-186ef449-77ee-44f5-3173-65ae429b6590" Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.715 [INFO][3828] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" iface="eth0" netns="/var/run/netns/cni-186ef449-77ee-44f5-3173-65ae429b6590" Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.715 [INFO][3828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.715 [INFO][3828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.815 [INFO][3867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" HandleID="k8s-pod-network.6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.815 [INFO][3867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.818 [INFO][3867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.826 [WARNING][3867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" HandleID="k8s-pod-network.6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.826 [INFO][3867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" HandleID="k8s-pod-network.6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.828 [INFO][3867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:49.831333 env[1315]: 2025-09-13 00:43:49.830 [INFO][3828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:43:49.832800 env[1315]: time="2025-09-13T00:43:49.832749077Z" level=info msg="TearDown network for sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\" successfully" Sep 13 00:43:49.832855 env[1315]: time="2025-09-13T00:43:49.832800918Z" level=info msg="StopPodSandbox for \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\" returns successfully" Sep 13 00:43:49.840636 systemd[1]: run-netns-cni\x2d186ef449\x2d77ee\x2d44f5\x2d3173\x2d65ae429b6590.mount: Deactivated successfully. 
Sep 13 00:43:49.847581 env[1315]: time="2025-09-13T00:43:49.847542894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-44vsn,Uid:61948f79-c287-454b-a5e0-ba4e09c53ab6,Namespace:calico-system,Attempt:1,}" Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.834 [INFO][3858] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.835 [INFO][3858] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" iface="eth0" netns="/var/run/netns/cni-befe8d12-e2ec-6c33-003a-0150dc7605ea" Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.835 [INFO][3858] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" iface="eth0" netns="/var/run/netns/cni-befe8d12-e2ec-6c33-003a-0150dc7605ea" Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.835 [INFO][3858] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" iface="eth0" netns="/var/run/netns/cni-befe8d12-e2ec-6c33-003a-0150dc7605ea" Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.835 [INFO][3858] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.835 [INFO][3858] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.864 [INFO][3898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" HandleID="k8s-pod-network.3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.864 [INFO][3898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.864 [INFO][3898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.871 [WARNING][3898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" HandleID="k8s-pod-network.3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.871 [INFO][3898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" HandleID="k8s-pod-network.3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.873 [INFO][3898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:49.876001 env[1315]: 2025-09-13 00:43:49.874 [INFO][3858] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Sep 13 00:43:49.876764 env[1315]: time="2025-09-13T00:43:49.876730207Z" level=info msg="TearDown network for sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\" successfully" Sep 13 00:43:49.876857 env[1315]: time="2025-09-13T00:43:49.876832658Z" level=info msg="StopPodSandbox for \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\" returns successfully" Sep 13 00:43:49.877720 env[1315]: time="2025-09-13T00:43:49.877698528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69684bbb9f-kh9fl,Uid:c30c5f78-8868-438b-9ac1-1ddc434b02ca,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:43:50.260039 systemd-networkd[1082]: calid2ddacd25e5: Link UP Sep 13 00:43:50.293792 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:43:50.297913 systemd-networkd[1082]: calid2ddacd25e5: Gained carrier Sep 13 00:43:50.299227 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid2ddacd25e5: link becomes ready Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:49.895 [INFO][3917] 
cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:49.903 [INFO][3917] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0 coredns-7c65d6cfc9- kube-system 4f125e3b-0620-4e01-917a-be90d6600e62 964 0 2025-09-13 00:43:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-gpbn2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2ddacd25e5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpbn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gpbn2-" Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:49.903 [INFO][3917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpbn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:49.948 [INFO][3959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" HandleID="k8s-pod-network.8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:49.948 [INFO][3959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" HandleID="k8s-pod-network.8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000130790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-gpbn2", "timestamp":"2025-09-13 00:43:49.948660996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:49.948 [INFO][3959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:49.948 [INFO][3959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:49.948 [INFO][3959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:49.959 [INFO][3959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" host="localhost" Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:49.988 [INFO][3959] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:50.012 [INFO][3959] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:50.021 [INFO][3959] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:50.033 [INFO][3959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:50.033 [INFO][3959] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" host="localhost" Sep 13 
00:43:50.455489 env[1315]: 2025-09-13 00:43:50.035 [INFO][3959] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261 Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:50.058 [INFO][3959] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" host="localhost" Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:50.256 [INFO][3959] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" host="localhost" Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:50.256 [INFO][3959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" host="localhost" Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:50.256 [INFO][3959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:43:50.455489 env[1315]: 2025-09-13 00:43:50.256 [INFO][3959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" HandleID="k8s-pod-network.8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:43:50.485925 env[1315]: 2025-09-13 00:43:50.258 [INFO][3917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpbn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4f125e3b-0620-4e01-917a-be90d6600e62", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-gpbn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2ddacd25e5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:50.485925 env[1315]: 2025-09-13 00:43:50.259 [INFO][3917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpbn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:43:50.485925 env[1315]: 2025-09-13 00:43:50.259 [INFO][3917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2ddacd25e5 ContainerID="8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpbn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:43:50.485925 env[1315]: 2025-09-13 00:43:50.298 [INFO][3917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpbn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:43:50.485925 env[1315]: 2025-09-13 00:43:50.299 [INFO][3917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpbn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4f125e3b-0620-4e01-917a-be90d6600e62", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261", Pod:"coredns-7c65d6cfc9-gpbn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2ddacd25e5", MAC:"02:86:08:0c:73:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:50.485925 env[1315]: 2025-09-13 00:43:50.453 [INFO][3917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpbn2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:43:50.626703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2452031305.mount: Deactivated successfully. Sep 13 00:43:50.626835 systemd[1]: run-netns-cni\x2dbefe8d12\x2de2ec\x2d6c33\x2d003a\x2d0150dc7605ea.mount: Deactivated successfully. Sep 13 00:43:50.649266 systemd-networkd[1082]: cali70e7a0bcd28: Link UP Sep 13 00:43:50.649395 systemd-networkd[1082]: cali70e7a0bcd28: Gained carrier Sep 13 00:43:50.649681 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali70e7a0bcd28: link becomes ready Sep 13 00:43:50.690679 env[1315]: time="2025-09-13T00:43:50.690605602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:50.690679 env[1315]: time="2025-09-13T00:43:50.690652072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:50.690679 env[1315]: time="2025-09-13T00:43:50.690661421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:50.691085 env[1315]: time="2025-09-13T00:43:50.690825430Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261 pid=4038 runtime=io.containerd.runc.v2 Sep 13 00:43:50.708290 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:43:50.728333 env[1315]: time="2025-09-13T00:43:50.728288201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gpbn2,Uid:4f125e3b-0620-4e01-917a-be90d6600e62,Namespace:kube-system,Attempt:1,} returns sandbox id \"8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261\"" Sep 13 00:43:50.729995 kubelet[2107]: E0913 00:43:50.729138 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:50.730963 env[1315]: time="2025-09-13T00:43:50.730927136Z" level=info msg="CreateContainer within sandbox \"8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:49.883 [INFO][3904] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:49.896 [INFO][3904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0 calico-kube-controllers-66999fb6b9- calico-system 631473a4-cdd6-4d56-8d52-071b771820a5 965 0 2025-09-13 00:43:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66999fb6b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66999fb6b9-hjj2v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali70e7a0bcd28 [] [] }} ContainerID="c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" Namespace="calico-system" Pod="calico-kube-controllers-66999fb6b9-hjj2v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:49.896 [INFO][3904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" Namespace="calico-system" Pod="calico-kube-controllers-66999fb6b9-hjj2v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:49.949 [INFO][3966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" HandleID="k8s-pod-network.c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:49.950 [INFO][3966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" HandleID="k8s-pod-network.c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000251010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66999fb6b9-hjj2v", "timestamp":"2025-09-13 00:43:49.949931697 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:49.950 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.256 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.256 [INFO][3966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.294 [INFO][3966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" host="localhost" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.297 [INFO][3966] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.451 [INFO][3966] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.455 [INFO][3966] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.457 [INFO][3966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.457 [INFO][3966] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" host="localhost" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.458 [INFO][3966] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179 Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.493 [INFO][3966] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" host="localhost" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.643 [INFO][3966] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" host="localhost" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.643 [INFO][3966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" host="localhost" Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.643 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:50.743251 env[1315]: 2025-09-13 00:43:50.643 [INFO][3966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" HandleID="k8s-pod-network.c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:43:50.744081 env[1315]: 2025-09-13 00:43:50.645 [INFO][3904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" Namespace="calico-system" Pod="calico-kube-controllers-66999fb6b9-hjj2v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0", GenerateName:"calico-kube-controllers-66999fb6b9-", Namespace:"calico-system", SelfLink:"", UID:"631473a4-cdd6-4d56-8d52-071b771820a5", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 25, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66999fb6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66999fb6b9-hjj2v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70e7a0bcd28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:50.744081 env[1315]: 2025-09-13 00:43:50.645 [INFO][3904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" Namespace="calico-system" Pod="calico-kube-controllers-66999fb6b9-hjj2v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:43:50.744081 env[1315]: 2025-09-13 00:43:50.645 [INFO][3904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70e7a0bcd28 ContainerID="c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" Namespace="calico-system" Pod="calico-kube-controllers-66999fb6b9-hjj2v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:43:50.744081 env[1315]: 2025-09-13 00:43:50.649 [INFO][3904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" Namespace="calico-system" Pod="calico-kube-controllers-66999fb6b9-hjj2v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:43:50.744081 env[1315]: 2025-09-13 00:43:50.649 [INFO][3904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" Namespace="calico-system" Pod="calico-kube-controllers-66999fb6b9-hjj2v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0", GenerateName:"calico-kube-controllers-66999fb6b9-", Namespace:"calico-system", SelfLink:"", UID:"631473a4-cdd6-4d56-8d52-071b771820a5", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66999fb6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179", Pod:"calico-kube-controllers-66999fb6b9-hjj2v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70e7a0bcd28", MAC:"6e:48:ab:37:b4:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:50.744081 env[1315]: 2025-09-13 00:43:50.741 [INFO][3904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179" Namespace="calico-system" Pod="calico-kube-controllers-66999fb6b9-hjj2v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:43:50.770360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount804844812.mount: Deactivated successfully. Sep 13 00:43:50.771696 env[1315]: time="2025-09-13T00:43:50.771599836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:50.771696 env[1315]: time="2025-09-13T00:43:50.771661646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:50.771696 env[1315]: time="2025-09-13T00:43:50.771672348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:50.771989 env[1315]: time="2025-09-13T00:43:50.771950479Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179 pid=4087 runtime=io.containerd.runc.v2 Sep 13 00:43:50.780829 systemd-networkd[1082]: cali2efd7d0fd31: Link UP Sep 13 00:43:50.783124 env[1315]: time="2025-09-13T00:43:50.782722935Z" level=info msg="CreateContainer within sandbox \"8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15c05a04407d191a2ff4b16a651d0ed113e4bc6f457e46acc91e7ae8ec1f8f5b\"" Sep 13 00:43:50.783941 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2efd7d0fd31: link becomes ready Sep 13 00:43:50.783318 systemd-networkd[1082]: cali2efd7d0fd31: Gained carrier Sep 13 00:43:50.784066 env[1315]: time="2025-09-13T00:43:50.783372271Z" level=info msg="StartContainer for \"15c05a04407d191a2ff4b16a651d0ed113e4bc6f457e46acc91e7ae8ec1f8f5b\"" Sep 13 00:43:50.786673 env[1315]: time="2025-09-13T00:43:50.786614604Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:50.795859 env[1315]: time="2025-09-13T00:43:50.795820144Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:49.907 [INFO][3933] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:49.919 [INFO][3933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--44vsn-eth0 
goldmane-7988f88666- calico-system 61948f79-c287-454b-a5e0-ba4e09c53ab6 963 0 2025-09-13 00:43:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-44vsn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2efd7d0fd31 [] [] }} ContainerID="561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" Namespace="calico-system" Pod="goldmane-7988f88666-44vsn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--44vsn-" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:49.919 [INFO][3933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" Namespace="calico-system" Pod="goldmane-7988f88666-44vsn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.028 [INFO][3975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" HandleID="k8s-pod-network.561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.029 [INFO][3975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" HandleID="k8s-pod-network.561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001234d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-44vsn", "timestamp":"2025-09-13 00:43:50.028890215 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.029 [INFO][3975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.644 [INFO][3975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.644 [INFO][3975] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.747 [INFO][3975] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" host="localhost" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.752 [INFO][3975] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.757 [INFO][3975] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.759 [INFO][3975] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.761 [INFO][3975] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.762 [INFO][3975] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" host="localhost" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.763 [INFO][3975] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.767 [INFO][3975] ipam/ipam.go 1243: Writing block in 
order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" host="localhost" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.773 [INFO][3975] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" host="localhost" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.773 [INFO][3975] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" host="localhost" Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.773 [INFO][3975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:50.797423 env[1315]: 2025-09-13 00:43:50.773 [INFO][3975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" HandleID="k8s-pod-network.561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:43:50.798037 env[1315]: 2025-09-13 00:43:50.777 [INFO][3933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" Namespace="calico-system" Pod="goldmane-7988f88666-44vsn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--44vsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--44vsn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"61948f79-c287-454b-a5e0-ba4e09c53ab6", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-44vsn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2efd7d0fd31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:50.798037 env[1315]: 2025-09-13 00:43:50.777 [INFO][3933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" Namespace="calico-system" Pod="goldmane-7988f88666-44vsn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:43:50.798037 env[1315]: 2025-09-13 00:43:50.777 [INFO][3933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2efd7d0fd31 ContainerID="561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" Namespace="calico-system" Pod="goldmane-7988f88666-44vsn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:43:50.798037 env[1315]: 2025-09-13 00:43:50.784 [INFO][3933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" Namespace="calico-system" Pod="goldmane-7988f88666-44vsn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:43:50.798037 env[1315]: 2025-09-13 00:43:50.784 
[INFO][3933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" Namespace="calico-system" Pod="goldmane-7988f88666-44vsn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--44vsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--44vsn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"61948f79-c287-454b-a5e0-ba4e09c53ab6", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b", Pod:"goldmane-7988f88666-44vsn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2efd7d0fd31", MAC:"de:77:5d:1b:64:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:50.798037 env[1315]: 2025-09-13 00:43:50.795 [INFO][3933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b" 
Namespace="calico-system" Pod="goldmane-7988f88666-44vsn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:43:50.807087 env[1315]: time="2025-09-13T00:43:50.807045383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:50.810354 env[1315]: time="2025-09-13T00:43:50.809664891Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:50.810466 env[1315]: time="2025-09-13T00:43:50.810424782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:43:50.813846 env[1315]: time="2025-09-13T00:43:50.813815013Z" level=info msg="CreateContainer within sandbox \"4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:43:50.819229 env[1315]: time="2025-09-13T00:43:50.819163802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:50.819484 env[1315]: time="2025-09-13T00:43:50.819343563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:50.819484 env[1315]: time="2025-09-13T00:43:50.819373972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:50.819831 env[1315]: time="2025-09-13T00:43:50.819790444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b pid=4152 runtime=io.containerd.runc.v2 Sep 13 00:43:50.821408 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:43:50.828999 env[1315]: time="2025-09-13T00:43:50.828951999Z" level=info msg="CreateContainer within sandbox \"4272f8dcc1cfbedb35ae664f0dfe18fd200f781d693f73afbb2cc416b3a6b539\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f7449a29cff118d2d9e512cfa9c2f4f18e9a51c3f117433202c688d93ea23c70\"" Sep 13 00:43:50.829996 env[1315]: time="2025-09-13T00:43:50.829970666Z" level=info msg="StartContainer for \"f7449a29cff118d2d9e512cfa9c2f4f18e9a51c3f117433202c688d93ea23c70\"" Sep 13 00:43:50.858903 env[1315]: time="2025-09-13T00:43:50.857119224Z" level=info msg="StartContainer for \"15c05a04407d191a2ff4b16a651d0ed113e4bc6f457e46acc91e7ae8ec1f8f5b\" returns successfully" Sep 13 00:43:50.863121 env[1315]: time="2025-09-13T00:43:50.863010091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66999fb6b9-hjj2v,Uid:631473a4-cdd6-4d56-8d52-071b771820a5,Namespace:calico-system,Attempt:1,} returns sandbox id \"c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179\"" Sep 13 00:43:50.865501 env[1315]: time="2025-09-13T00:43:50.865454377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:43:50.870066 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:43:50.905454 systemd-networkd[1082]: calif1a9bc4b29d: Link UP Sep 13 00:43:50.912951 systemd-networkd[1082]: calif1a9bc4b29d: Gained carrier Sep 13 
00:43:50.913728 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif1a9bc4b29d: link becomes ready Sep 13 00:43:50.920675 env[1315]: time="2025-09-13T00:43:50.920569819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-44vsn,Uid:61948f79-c287-454b-a5e0-ba4e09c53ab6,Namespace:calico-system,Attempt:1,} returns sandbox id \"561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b\"" Sep 13 00:43:50.921811 env[1315]: time="2025-09-13T00:43:50.921791751Z" level=info msg="StartContainer for \"f7449a29cff118d2d9e512cfa9c2f4f18e9a51c3f117433202c688d93ea23c70\" returns successfully" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:49.946 [INFO][3946] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:49.961 [INFO][3946] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0 calico-apiserver-69684bbb9f- calico-apiserver c30c5f78-8868-438b-9ac1-1ddc434b02ca 968 0 2025-09-13 00:43:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69684bbb9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69684bbb9f-kh9fl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif1a9bc4b29d [] [] }} ContainerID="9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-kh9fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:49.961 [INFO][3946] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" Namespace="calico-apiserver" 
Pod="calico-apiserver-69684bbb9f-kh9fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.048 [INFO][3987] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" HandleID="k8s-pod-network.9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.048 [INFO][3987] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" HandleID="k8s-pod-network.9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e75f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69684bbb9f-kh9fl", "timestamp":"2025-09-13 00:43:50.048537696 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.048 [INFO][3987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.773 [INFO][3987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.774 [INFO][3987] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.848 [INFO][3987] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" host="localhost" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.862 [INFO][3987] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.869 [INFO][3987] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.871 [INFO][3987] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.874 [INFO][3987] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.874 [INFO][3987] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" host="localhost" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.876 [INFO][3987] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1 Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.881 [INFO][3987] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" host="localhost" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.890 [INFO][3987] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" host="localhost" Sep 13 
00:43:50.930931 env[1315]: 2025-09-13 00:43:50.890 [INFO][3987] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" host="localhost" Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.890 [INFO][3987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:50.930931 env[1315]: 2025-09-13 00:43:50.890 [INFO][3987] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" HandleID="k8s-pod-network.9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:43:50.931596 env[1315]: 2025-09-13 00:43:50.899 [INFO][3946] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-kh9fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0", GenerateName:"calico-apiserver-69684bbb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c30c5f78-8868-438b-9ac1-1ddc434b02ca", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69684bbb9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69684bbb9f-kh9fl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1a9bc4b29d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:50.931596 env[1315]: 2025-09-13 00:43:50.899 [INFO][3946] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-kh9fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:43:50.931596 env[1315]: 2025-09-13 00:43:50.899 [INFO][3946] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1a9bc4b29d ContainerID="9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-kh9fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:43:50.931596 env[1315]: 2025-09-13 00:43:50.914 [INFO][3946] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-kh9fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:43:50.931596 env[1315]: 2025-09-13 00:43:50.915 [INFO][3946] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" Namespace="calico-apiserver" 
Pod="calico-apiserver-69684bbb9f-kh9fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0", GenerateName:"calico-apiserver-69684bbb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c30c5f78-8868-438b-9ac1-1ddc434b02ca", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69684bbb9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1", Pod:"calico-apiserver-69684bbb9f-kh9fl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1a9bc4b29d", MAC:"72:fe:e4:86:ba:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:50.931596 env[1315]: 2025-09-13 00:43:50.929 [INFO][3946] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-kh9fl" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:43:50.949646 env[1315]: time="2025-09-13T00:43:50.949433872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:50.949646 env[1315]: time="2025-09-13T00:43:50.949487466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:50.949646 env[1315]: time="2025-09-13T00:43:50.949498238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:50.949854 env[1315]: time="2025-09-13T00:43:50.949714239Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1 pid=4257 runtime=io.containerd.runc.v2 Sep 13 00:43:50.971021 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:43:50.992589 env[1315]: time="2025-09-13T00:43:50.992551910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69684bbb9f-kh9fl,Uid:c30c5f78-8868-438b-9ac1-1ddc434b02ca,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1\"" Sep 13 00:43:51.045672 kernel: kauditd_printk_skb: 36 callbacks suppressed Sep 13 00:43:51.045776 kernel: audit: type=1130 audit(1757724231.042:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.27:22-10.0.0.1:34810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:43:51.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.27:22-10.0.0.1:34810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:51.042837 systemd[1]: Started sshd@8-10.0.0.27:22-10.0.0.1:34810.service. Sep 13 00:43:51.079000 audit[4292]: USER_ACCT pid=4292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.080240 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 34810 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:43:51.083000 audit[4292]: CRED_ACQ pid=4292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.084328 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:43:51.087486 kernel: audit: type=1101 audit(1757724231.079:306): pid=4292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.087535 kernel: audit: type=1103 audit(1757724231.083:307): pid=4292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.087555 kernel: audit: type=1006 audit(1757724231.083:308): pid=4292 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 
ses=9 res=1 Sep 13 00:43:51.088334 systemd-logind[1299]: New session 9 of user core. Sep 13 00:43:51.088424 systemd[1]: Started session-9.scope. Sep 13 00:43:51.089714 kernel: audit: type=1300 audit(1757724231.083:308): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe464590a0 a2=3 a3=0 items=0 ppid=1 pid=4292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:51.083000 audit[4292]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe464590a0 a2=3 a3=0 items=0 ppid=1 pid=4292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:51.093523 kernel: audit: type=1327 audit(1757724231.083:308): proctitle=737368643A20636F7265205B707269765D Sep 13 00:43:51.083000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:43:51.092000 audit[4292]: USER_START pid=4292 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.099029 kernel: audit: type=1105 audit(1757724231.092:309): pid=4292 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.099090 kernel: audit: type=1103 audit(1757724231.093:310): pid=4295 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.093000 
audit[4295]: CRED_ACQ pid=4295 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.198901 sshd[4292]: pam_unix(sshd:session): session closed for user core Sep 13 00:43:51.199000 audit[4292]: USER_END pid=4292 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.201565 systemd[1]: sshd@8-10.0.0.27:22-10.0.0.1:34810.service: Deactivated successfully. Sep 13 00:43:51.202748 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:43:51.203181 systemd-logind[1299]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:43:51.204068 systemd-logind[1299]: Removed session 9. Sep 13 00:43:51.199000 audit[4292]: CRED_DISP pid=4292 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.208929 kernel: audit: type=1106 audit(1757724231.199:311): pid=4292 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.208979 kernel: audit: type=1104 audit(1757724231.199:312): pid=4292 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:51.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.27:22-10.0.0.1:34810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:51.647252 env[1315]: time="2025-09-13T00:43:51.647202424Z" level=info msg="StopPodSandbox for \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\"" Sep 13 00:43:51.772134 kubelet[2107]: E0913 00:43:51.772100 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:52.016863 systemd-networkd[1082]: cali2efd7d0fd31: Gained IPv6LL Sep 13 00:43:52.017121 systemd-networkd[1082]: calid2ddacd25e5: Gained IPv6LL Sep 13 00:43:52.139996 kubelet[2107]: I0913 00:43:52.139935 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8ddb88cdf-l2pj6" podStartSLOduration=2.98546382 podStartE2EDuration="8.139913252s" podCreationTimestamp="2025-09-13 00:43:44 +0000 UTC" firstStartedPulling="2025-09-13 00:43:45.658055326 +0000 UTC m=+39.119891063" lastFinishedPulling="2025-09-13 00:43:50.812504758 +0000 UTC m=+44.274340495" observedRunningTime="2025-09-13 00:43:52.005747221 +0000 UTC m=+45.467582958" watchObservedRunningTime="2025-09-13 00:43:52.139913252 +0000 UTC m=+45.601748979" Sep 13 00:43:52.154000 audit[4356]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:52.154000 audit[4356]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffd091d8e0 a2=0 a3=7fffd091d8cc items=0 ppid=2259 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:52.154000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:52.158000 audit[4356]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:52.158000 audit[4356]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffd091d8e0 a2=0 a3=7fffd091d8cc items=0 ppid=2259 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:52.158000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.140 [INFO][4339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.140 [INFO][4339] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" iface="eth0" netns="/var/run/netns/cni-a187026a-76b0-5859-5113-482c40922748" Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.140 [INFO][4339] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" iface="eth0" netns="/var/run/netns/cni-a187026a-76b0-5859-5113-482c40922748" Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.141 [INFO][4339] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" iface="eth0" netns="/var/run/netns/cni-a187026a-76b0-5859-5113-482c40922748" Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.141 [INFO][4339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.141 [INFO][4339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.164 [INFO][4349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" HandleID="k8s-pod-network.4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.164 [INFO][4349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.164 [INFO][4349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.169 [WARNING][4349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" HandleID="k8s-pod-network.4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.169 [INFO][4349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" HandleID="k8s-pod-network.4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.170 [INFO][4349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:52.173255 env[1315]: 2025-09-13 00:43:52.171 [INFO][4339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:43:52.173000 audit[4359]: NETFILTER_CFG table=filter:101 family=2 entries=17 op=nft_register_rule pid=4359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:52.173000 audit[4359]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc99d0c630 a2=0 a3=7ffc99d0c61c items=0 ppid=2259 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:52.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:52.176969 systemd[1]: run-netns-cni\x2da187026a\x2d76b0\x2d5859\x2d5113\x2d482c40922748.mount: Deactivated successfully. 
Sep 13 00:43:52.178208 env[1315]: time="2025-09-13T00:43:52.178152497Z" level=info msg="TearDown network for sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\" successfully" Sep 13 00:43:52.178208 env[1315]: time="2025-09-13T00:43:52.178205029Z" level=info msg="StopPodSandbox for \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\" returns successfully" Sep 13 00:43:52.179190 env[1315]: time="2025-09-13T00:43:52.179154787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69684bbb9f-79hgx,Uid:445a4a34-e91d-44ab-8fb9-191df697ef64,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:43:52.178000 audit[4359]: NETFILTER_CFG table=nat:102 family=2 entries=35 op=nft_register_chain pid=4359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:52.178000 audit[4359]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc99d0c630 a2=0 a3=7ffc99d0c61c items=0 ppid=2259 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:52.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:52.282393 systemd-networkd[1082]: cali230135788b8: Link UP Sep 13 00:43:52.284794 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:43:52.284917 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali230135788b8: link becomes ready Sep 13 00:43:52.285236 systemd-networkd[1082]: cali230135788b8: Gained carrier Sep 13 00:43:52.296378 kubelet[2107]: I0913 00:43:52.296277 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gpbn2" podStartSLOduration=40.296257913 podStartE2EDuration="40.296257913s" podCreationTimestamp="2025-09-13 00:43:12 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:52.162462824 +0000 UTC m=+45.624298561" watchObservedRunningTime="2025-09-13 00:43:52.296257913 +0000 UTC m=+45.758093650" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.207 [INFO][4361] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.215 [INFO][4361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0 calico-apiserver-69684bbb9f- calico-apiserver 445a4a34-e91d-44ab-8fb9-191df697ef64 1006 0 2025-09-13 00:43:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69684bbb9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69684bbb9f-79hgx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali230135788b8 [] [] }} ContainerID="d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-79hgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.215 [INFO][4361] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-79hgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.235 [INFO][4376] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" 
HandleID="k8s-pod-network.d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.235 [INFO][4376] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" HandleID="k8s-pod-network.d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df070), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69684bbb9f-79hgx", "timestamp":"2025-09-13 00:43:52.235129405 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.235 [INFO][4376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.235 [INFO][4376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.235 [INFO][4376] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.242 [INFO][4376] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" host="localhost" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.263 [INFO][4376] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.266 [INFO][4376] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.268 [INFO][4376] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.270 [INFO][4376] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.270 [INFO][4376] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" host="localhost" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.271 [INFO][4376] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.274 [INFO][4376] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" host="localhost" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.279 [INFO][4376] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" host="localhost" Sep 13 
00:43:52.298559 env[1315]: 2025-09-13 00:43:52.279 [INFO][4376] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" host="localhost" Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.279 [INFO][4376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:52.298559 env[1315]: 2025-09-13 00:43:52.279 [INFO][4376] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" HandleID="k8s-pod-network.d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:43:52.299143 env[1315]: 2025-09-13 00:43:52.281 [INFO][4361] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-79hgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0", GenerateName:"calico-apiserver-69684bbb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"445a4a34-e91d-44ab-8fb9-191df697ef64", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69684bbb9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69684bbb9f-79hgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali230135788b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:52.299143 env[1315]: 2025-09-13 00:43:52.281 [INFO][4361] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-79hgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:43:52.299143 env[1315]: 2025-09-13 00:43:52.281 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali230135788b8 ContainerID="d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-79hgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:43:52.299143 env[1315]: 2025-09-13 00:43:52.285 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-79hgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:43:52.299143 env[1315]: 2025-09-13 00:43:52.285 [INFO][4361] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" Namespace="calico-apiserver" 
Pod="calico-apiserver-69684bbb9f-79hgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0", GenerateName:"calico-apiserver-69684bbb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"445a4a34-e91d-44ab-8fb9-191df697ef64", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69684bbb9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b", Pod:"calico-apiserver-69684bbb9f-79hgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali230135788b8", MAC:"32:a1:bf:79:ad:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:52.299143 env[1315]: 2025-09-13 00:43:52.296 [INFO][4361] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b" Namespace="calico-apiserver" Pod="calico-apiserver-69684bbb9f-79hgx" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:43:52.309611 env[1315]: time="2025-09-13T00:43:52.309520552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:52.309611 env[1315]: time="2025-09-13T00:43:52.309574047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:52.309611 env[1315]: time="2025-09-13T00:43:52.309586221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:52.310096 env[1315]: time="2025-09-13T00:43:52.310050965Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b pid=4398 runtime=io.containerd.runc.v2 Sep 13 00:43:52.334150 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:43:52.336759 systemd-networkd[1082]: cali70e7a0bcd28: Gained IPv6LL Sep 13 00:43:52.366773 env[1315]: time="2025-09-13T00:43:52.366723992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69684bbb9f-79hgx,Uid:445a4a34-e91d-44ab-8fb9-191df697ef64,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b\"" Sep 13 00:43:52.464796 systemd-networkd[1082]: calif1a9bc4b29d: Gained IPv6LL Sep 13 00:43:52.536769 kubelet[2107]: I0913 00:43:52.536647 2107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:43:52.537054 kubelet[2107]: E0913 00:43:52.537018 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 
00:43:52.651193 env[1315]: time="2025-09-13T00:43:52.651148850Z" level=info msg="StopPodSandbox for \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\"" Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.688 [INFO][4466] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.688 [INFO][4466] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" iface="eth0" netns="/var/run/netns/cni-fb8c58c6-30d7-1ca9-a9c7-2b584c6c320d" Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.688 [INFO][4466] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" iface="eth0" netns="/var/run/netns/cni-fb8c58c6-30d7-1ca9-a9c7-2b584c6c320d" Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.688 [INFO][4466] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" iface="eth0" netns="/var/run/netns/cni-fb8c58c6-30d7-1ca9-a9c7-2b584c6c320d" Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.688 [INFO][4466] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.688 [INFO][4466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.704 [INFO][4475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" HandleID="k8s-pod-network.b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.704 [INFO][4475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.704 [INFO][4475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.711 [WARNING][4475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" HandleID="k8s-pod-network.b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.711 [INFO][4475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" HandleID="k8s-pod-network.b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.712 [INFO][4475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:52.715591 env[1315]: 2025-09-13 00:43:52.714 [INFO][4466] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:43:52.716047 env[1315]: time="2025-09-13T00:43:52.715758472Z" level=info msg="TearDown network for sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\" successfully" Sep 13 00:43:52.716047 env[1315]: time="2025-09-13T00:43:52.715789512Z" level=info msg="StopPodSandbox for \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\" returns successfully" Sep 13 00:43:52.716098 kubelet[2107]: E0913 00:43:52.716072 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:52.716690 env[1315]: time="2025-09-13T00:43:52.716667932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kp2bx,Uid:57c2caed-82b7-4c8e-9591-b6e3e76966bb,Namespace:kube-system,Attempt:1,}" Sep 13 00:43:52.718347 systemd[1]: run-netns-cni\x2dfb8c58c6\x2d30d7\x2d1ca9\x2da9c7\x2d2b584c6c320d.mount: Deactivated successfully. 
Sep 13 00:43:52.777770 kubelet[2107]: E0913 00:43:52.777438 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:52.778257 kubelet[2107]: E0913 00:43:52.778001 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:52.806282 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calief1f525c8c7: link becomes ready Sep 13 00:43:52.804005 systemd-networkd[1082]: calief1f525c8c7: Link UP Sep 13 00:43:52.806684 systemd-networkd[1082]: calief1f525c8c7: Gained carrier Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.745 [INFO][4483] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.756 [INFO][4483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0 coredns-7c65d6cfc9- kube-system 57c2caed-82b7-4c8e-9591-b6e3e76966bb 1032 0 2025-09-13 00:43:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-kp2bx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calief1f525c8c7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kp2bx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kp2bx-" Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.756 [INFO][4483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-kp2bx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.777 [INFO][4499] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" HandleID="k8s-pod-network.d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.777 [INFO][4499] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" HandleID="k8s-pod-network.d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001383f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-kp2bx", "timestamp":"2025-09-13 00:43:52.777060156 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.777 [INFO][4499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.777 [INFO][4499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.777 [INFO][4499] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.782 [INFO][4499] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" host="localhost" Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.785 [INFO][4499] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.788 [INFO][4499] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.790 [INFO][4499] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.791 [INFO][4499] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.791 [INFO][4499] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" host="localhost" Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.792 [INFO][4499] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.796 [INFO][4499] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" host="localhost" Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.800 [INFO][4499] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" host="localhost" Sep 13 
00:43:52.819017 env[1315]: 2025-09-13 00:43:52.800 [INFO][4499] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" host="localhost" Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.801 [INFO][4499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:52.819017 env[1315]: 2025-09-13 00:43:52.801 [INFO][4499] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" HandleID="k8s-pod-network.d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:43:52.819631 env[1315]: 2025-09-13 00:43:52.802 [INFO][4483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kp2bx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"57c2caed-82b7-4c8e-9591-b6e3e76966bb", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-kp2bx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief1f525c8c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:52.819631 env[1315]: 2025-09-13 00:43:52.802 [INFO][4483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kp2bx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:43:52.819631 env[1315]: 2025-09-13 00:43:52.802 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief1f525c8c7 ContainerID="d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kp2bx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:43:52.819631 env[1315]: 2025-09-13 00:43:52.805 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kp2bx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:43:52.819631 env[1315]: 2025-09-13 00:43:52.807 [INFO][4483] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kp2bx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"57c2caed-82b7-4c8e-9591-b6e3e76966bb", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d", Pod:"coredns-7c65d6cfc9-kp2bx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief1f525c8c7", MAC:"8e:f1:fa:95:7b:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:52.819631 env[1315]: 2025-09-13 00:43:52.817 [INFO][4483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kp2bx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:43:52.827664 env[1315]: time="2025-09-13T00:43:52.827587254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:52.827664 env[1315]: time="2025-09-13T00:43:52.827646909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:52.827752 env[1315]: time="2025-09-13T00:43:52.827660897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:52.827890 env[1315]: time="2025-09-13T00:43:52.827834905Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d pid=4520 runtime=io.containerd.runc.v2 Sep 13 00:43:52.852981 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:43:52.871594 env[1315]: time="2025-09-13T00:43:52.871543373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kp2bx,Uid:57c2caed-82b7-4c8e-9591-b6e3e76966bb,Namespace:kube-system,Attempt:1,} returns sandbox id \"d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d\"" Sep 13 00:43:52.872446 kubelet[2107]: E0913 00:43:52.872087 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 13 00:43:52.877274 env[1315]: time="2025-09-13T00:43:52.877233917Z" level=info msg="CreateContainer within sandbox \"d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:43:52.890125 env[1315]: time="2025-09-13T00:43:52.890084886Z" level=info msg="CreateContainer within sandbox \"d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1262fcf18a4a30bee6b436a552df9cb3cec296ee45c00222994288d955c5fb94\"" Sep 13 00:43:52.890559 env[1315]: time="2025-09-13T00:43:52.890512287Z" level=info msg="StartContainer for \"1262fcf18a4a30bee6b436a552df9cb3cec296ee45c00222994288d955c5fb94\"" Sep 13 00:43:52.928887 env[1315]: time="2025-09-13T00:43:52.928852959Z" level=info msg="StartContainer for \"1262fcf18a4a30bee6b436a552df9cb3cec296ee45c00222994288d955c5fb94\" returns successfully" Sep 13 00:43:53.203000 audit[4592]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=4592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:53.203000 audit[4592]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fffac67df60 a2=0 a3=7fffac67df4c items=0 ppid=2259 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:53.208000 audit[4592]: NETFILTER_CFG table=nat:104 family=2 entries=27 op=nft_register_chain pid=4592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:53.208000 audit[4592]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fffac67df60 a2=0 a3=7fffac67df4c items=0 ppid=2259 pid=4592 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.208000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:53.285000 audit[4610]: AVC avc: denied { bpf } for pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.285000 audit[4610]: AVC avc: denied { bpf } for pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.285000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.285000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.285000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.285000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.285000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.285000 audit[4610]: AVC avc: denied { bpf } for pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.285000 audit[4610]: AVC avc: denied { bpf } for pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.285000 audit: BPF prog-id=10 op=LOAD Sep 13 00:43:53.285000 audit[4610]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc1fdb5b60 a2=98 a3=1fffffffffffffff items=0 ppid=4590 pid=4610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.285000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:43:53.286000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { bpf } for pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { bpf } for pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { bpf } for pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { bpf } for pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit: BPF prog-id=11 op=LOAD Sep 13 00:43:53.286000 audit[4610]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc1fdb5a40 a2=94 a3=3 items=0 ppid=4590 pid=4610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.286000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:43:53.286000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { bpf } for pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { bpf } for 
pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { bpf } for pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { bpf } for pid=4610 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit: BPF prog-id=12 op=LOAD Sep 13 00:43:53.286000 audit[4610]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc1fdb5a80 a2=94 a3=7ffc1fdb5c60 items=0 ppid=4590 pid=4610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.286000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:43:53.286000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:43:53.286000 audit[4610]: AVC avc: denied { perfmon } for pid=4610 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.286000 audit[4610]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc1fdb5b50 a2=50 a3=a000000085 items=0 ppid=4590 pid=4610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.286000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:43:53.287000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.287000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.287000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.287000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.287000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.287000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.287000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.287000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.287000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.287000 audit: BPF prog-id=13 op=LOAD Sep 13 00:43:53.287000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc59bc0b0 a2=98 a3=3 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.287000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.287000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { 
bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit: BPF prog-id=14 op=LOAD Sep 13 00:43:53.288000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffc59bbea0 a2=94 a3=54428f items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.288000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.288000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit[4611]: AVC avc: denied { bpf } 
for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.288000 audit: BPF prog-id=15 op=LOAD Sep 13 00:43:53.288000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffc59bbed0 a2=94 a3=2 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.288000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.288000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:43:53.405000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.405000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.405000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.405000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.405000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.405000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.405000 
audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.405000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.405000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.405000 audit: BPF prog-id=16 op=LOAD Sep 13 00:43:53.405000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffc59bbd90 a2=94 a3=1 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.405000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.406000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:43:53.407000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.407000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffc59bbe60 a2=50 a3=7fffc59bbf40 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.407000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.419000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:43:53.419000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc59bbda0 a2=28 a3=0 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.419000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.419000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.419000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffc59bbdd0 a2=28 a3=0 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.419000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.419000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.419000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffc59bbce0 a2=28 a3=0 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.419000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.420000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.420000 audit[4611]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc59bbdf0 a2=28 a3=0 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.420000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.420000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.420000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc59bbdd0 a2=28 a3=0 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.420000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.420000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.420000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc59bbdc0 a2=28 a3=0 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.420000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.421000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.421000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc59bbdf0 a2=28 a3=0 
items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.421000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.421000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffc59bbdd0 a2=28 a3=0 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.421000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.421000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffc59bbdf0 a2=28 a3=0 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.422000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.422000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffc59bbdc0 a2=28 a3=0 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.422000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.422000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc59bbe30 a2=28 a3=0 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.422000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.422000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffc59bbbe0 a2=50 a3=1 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.423000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.423000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.423000 audit[4611]: AVC avc: denied { perfmon } 
for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.423000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.423000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.423000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.423000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.423000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.423000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.423000 audit: BPF prog-id=17 op=LOAD Sep 13 00:43:53.423000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc59bbbe0 a2=94 a3=5 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.423000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.423000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:43:53.424000 
audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffc59bbc90 a2=50 a3=1 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.424000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffc59bbdb0 a2=4 a3=38 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.424000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.424000 audit[4611]: AVC avc: denied { confidentiality } for pid=4611 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:43:53.424000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffc59bbe00 a2=94 a3=6 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:43:53.424000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.425000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.425000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.425000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.425000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.425000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.425000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.425000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.425000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.425000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.425000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.425000 audit[4611]: AVC avc: denied { confidentiality } for pid=4611 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:43:53.425000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffc59bb5b0 a2=94 a3=88 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.426000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.426000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.426000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.426000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.426000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.426000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.426000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.426000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.426000 audit[4611]: AVC avc: denied { perfmon } for pid=4611 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.426000 audit[4611]: AVC avc: denied { bpf } for pid=4611 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.426000 audit[4611]: AVC avc: denied { confidentiality } for pid=4611 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:43:53.426000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffc59bb5b0 a2=94 a3=88 items=0 ppid=4590 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit: BPF prog-id=18 op=LOAD Sep 13 00:43:53.444000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc0509c270 a2=98 a3=1999999999999999 items=0 ppid=4590 
pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.444000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:43:53.444000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit: BPF prog-id=19 op=LOAD Sep 13 00:43:53.444000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc0509c150 a2=94 a3=ffff items=0 ppid=4590 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.444000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:43:53.444000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { perfmon } for pid=4633 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit[4633]: AVC avc: denied { bpf } for pid=4633 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.444000 audit: BPF prog-id=20 op=LOAD Sep 13 00:43:53.444000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc0509c190 a2=94 a3=7ffc0509c370 items=0 ppid=4590 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.444000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:43:53.444000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:43:53.648008 env[1315]: 
time="2025-09-13T00:43:53.647961737Z" level=info msg="StopPodSandbox for \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\"" Sep 13 00:43:53.780463 kubelet[2107]: E0913 00:43:53.780438 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:53.780957 kubelet[2107]: E0913 00:43:53.780536 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:53.837108 systemd-networkd[1082]: vxlan.calico: Link UP Sep 13 00:43:53.837115 systemd-networkd[1082]: vxlan.calico: Gained carrier Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit: BPF prog-id=21 op=LOAD Sep 13 00:43:53.853000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe370d6a00 a2=98 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.853000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.853000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit: BPF prog-id=22 op=LOAD Sep 13 00:43:53.853000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe370d6810 a2=94 a3=54428f items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.853000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.853000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:43:53.853000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.853000 audit: BPF prog-id=23 op=LOAD Sep 13 00:43:53.853000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe370d6840 a2=94 a3=2 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.853000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe370d6710 a2=28 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=no 
exit=-22 a0=12 a1=7ffe370d6740 a2=28 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe370d6650 a2=28 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe370d6760 a2=28 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe370d6740 a2=28 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe370d6730 a2=28 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe370d6760 a2=28 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe370d6740 a2=28 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe370d6760 a2=28 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe370d6730 a2=28 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe370d67a0 a2=28 a3=0 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.854000 audit: BPF prog-id=24 op=LOAD Sep 13 00:43:53.854000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe370d6610 a2=94 a3=0 
items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.854000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe370d6600 a2=50 a3=2800 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.855000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe370d6600 a2=50 a3=2800 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.855000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for 
pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit: BPF prog-id=25 op=LOAD Sep 13 00:43:53.855000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe370d5e20 a2=94 a3=2 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.855000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.855000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { perfmon } for pid=4678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit[4678]: AVC avc: denied { bpf } for pid=4678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.855000 audit: BPF prog-id=26 op=LOAD Sep 13 00:43:53.855000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe370d5f20 a2=94 a3=30 items=0 ppid=4590 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.855000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit: BPF prog-id=27 op=LOAD Sep 13 00:43:53.858000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa3bd3790 a2=98 a3=0 items=0 ppid=4590 pid=4682 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.858000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.858000 audit: BPF prog-id=27 op=UNLOAD Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 
audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit: BPF prog-id=28 op=LOAD Sep 13 00:43:53.858000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffa3bd3580 a2=94 a3=54428f items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.858000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.858000 audit: BPF prog-id=28 op=UNLOAD Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for 
pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.858000 audit: BPF prog-id=29 op=LOAD Sep 13 00:43:53.858000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffa3bd35b0 a2=94 a3=2 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.858000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.858000 audit: BPF prog-id=29 op=UNLOAD Sep 13 00:43:53.964000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.964000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.964000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.964000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.964000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.964000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.964000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.964000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.964000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.964000 audit: BPF prog-id=30 op=LOAD Sep 13 00:43:53.964000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffa3bd3470 a2=94 a3=1 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Sep 13 00:43:53.964000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.964000 audit: BPF prog-id=30 op=UNLOAD Sep 13 00:43:53.964000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.964000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffa3bd3540 a2=50 a3=7fffa3bd3620 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.964000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa3bd3480 a2=28 a3=0 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa3bd34b0 a2=28 a3=0 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa3bd33c0 a2=28 a3=0 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa3bd34d0 a2=28 a3=0 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa3bd34b0 a2=28 a3=0 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa3bd34a0 a2=28 a3=0 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa3bd34d0 a2=28 a3=0 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa3bd34b0 a2=28 a3=0 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa3bd34d0 a2=28 a3=0 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa3bd34a0 a2=28 a3=0 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa3bd3510 a2=28 a3=0 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffa3bd32c0 a2=50 a3=1 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit: BPF prog-id=31 op=LOAD Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffa3bd32c0 a2=94 a3=5 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit: BPF prog-id=31 op=UNLOAD Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffa3bd3370 a2=50 a3=1 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffa3bd3490 a2=4 a3=38 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: 
denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.972000 audit[4682]: AVC avc: denied { confidentiality } for pid=4682 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:43:53.972000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffa3bd34e0 a2=94 a3=6 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.972000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { confidentiality } for pid=4682 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:43:53.973000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffa3bd2c90 a2=94 a3=88 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { perfmon } for pid=4682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { confidentiality } for pid=4682 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:43:53.973000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffa3bd2c90 a2=94 a3=88 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffa3bd46c0 a2=10 a3=208 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffa3bd4560 a2=10 a3=3 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffa3bd4500 a2=10 a3=3 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:53.973000 audit[4682]: AVC avc: denied { bpf } for pid=4682 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:43:53.973000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffa3bd4500 a2=10 a3=7 items=0 ppid=4590 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:53.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:43:54.090000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:43:54.193718 systemd-networkd[1082]: cali230135788b8: Gained IPv6LL Sep 13 00:43:54.198000 audit[4727]: NETFILTER_CFG table=mangle:105 family=2 entries=16 op=nft_register_chain pid=4727 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:43:54.198000 audit[4727]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff8dbdd2d0 a2=0 a3=7fff8dbdd2bc items=0 ppid=4590 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:54.198000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:43:54.204000 audit[4729]: NETFILTER_CFG table=nat:106 family=2 entries=15 op=nft_register_chain pid=4729 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:43:54.204000 audit[4729]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe61ae0950 a2=0 a3=7ffe61ae093c items=0 ppid=4590 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:54.204000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:43:54.206000 audit[4726]: NETFILTER_CFG table=raw:107 family=2 entries=21 op=nft_register_chain pid=4726 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:43:54.206000 audit[4726]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd793ea150 a2=0 a3=7ffd793ea13c items=0 ppid=4590 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:54.206000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:43:54.212000 audit[4731]: NETFILTER_CFG table=filter:108 family=2 entries=293 op=nft_register_chain pid=4731 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:43:54.212000 audit[4731]: SYSCALL arch=c000003e syscall=46 success=yes exit=173940 a0=3 a1=7ffd204b6410 a2=0 a3=7ffd204b63fc items=0 ppid=4590 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:54.212000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:43:54.247556 kubelet[2107]: I0913 00:43:54.247079 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kp2bx" podStartSLOduration=42.247060136 
podStartE2EDuration="42.247060136s" podCreationTimestamp="2025-09-13 00:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:54.246928289 +0000 UTC m=+47.708764036" watchObservedRunningTime="2025-09-13 00:43:54.247060136 +0000 UTC m=+47.708895873" Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:53.923 [INFO][4658] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:53.923 [INFO][4658] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" iface="eth0" netns="/var/run/netns/cni-bff8fd2d-6adf-6cb6-b7be-b9f297323c0a" Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:53.923 [INFO][4658] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" iface="eth0" netns="/var/run/netns/cni-bff8fd2d-6adf-6cb6-b7be-b9f297323c0a" Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:53.923 [INFO][4658] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" iface="eth0" netns="/var/run/netns/cni-bff8fd2d-6adf-6cb6-b7be-b9f297323c0a" Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:53.923 [INFO][4658] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:53.923 [INFO][4658] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:53.947 [INFO][4691] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" HandleID="k8s-pod-network.9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Workload="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:53.947 [INFO][4691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:53.947 [INFO][4691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:54.243 [WARNING][4691] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" HandleID="k8s-pod-network.9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Workload="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:54.243 [INFO][4691] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" HandleID="k8s-pod-network.9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Workload="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:54.245 [INFO][4691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:54.249288 env[1315]: 2025-09-13 00:43:54.247 [INFO][4658] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:43:54.252028 systemd[1]: run-netns-cni\x2dbff8fd2d\x2d6adf\x2d6cb6\x2db7be\x2db9f297323c0a.mount: Deactivated successfully. 
Sep 13 00:43:54.252723 env[1315]: time="2025-09-13T00:43:54.252673709Z" level=info msg="TearDown network for sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\" successfully" Sep 13 00:43:54.252723 env[1315]: time="2025-09-13T00:43:54.252708757Z" level=info msg="StopPodSandbox for \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\" returns successfully" Sep 13 00:43:54.253373 env[1315]: time="2025-09-13T00:43:54.253351395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twb76,Uid:0495105a-b1c5-41d8-b0c9-fcfac0de6125,Namespace:calico-system,Attempt:1,}" Sep 13 00:43:54.432000 audit[4743]: NETFILTER_CFG table=filter:109 family=2 entries=12 op=nft_register_rule pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:54.432000 audit[4743]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd0bad1dd0 a2=0 a3=7ffd0bad1dbc items=0 ppid=2259 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:54.432000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:54.440000 audit[4743]: NETFILTER_CFG table=nat:110 family=2 entries=46 op=nft_register_rule pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:54.440000 audit[4743]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffd0bad1dd0 a2=0 a3=7ffd0bad1dbc items=0 ppid=2259 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:54.440000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:54.577750 systemd-networkd[1082]: calief1f525c8c7: Gained IPv6LL Sep 13 00:43:54.637123 systemd-networkd[1082]: cali2fcb2ed7d62: Link UP Sep 13 00:43:54.638681 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2fcb2ed7d62: link becomes ready Sep 13 00:43:54.638821 systemd-networkd[1082]: cali2fcb2ed7d62: Gained carrier Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.567 [INFO][4759] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--twb76-eth0 csi-node-driver- calico-system 0495105a-b1c5-41d8-b0c9-fcfac0de6125 1046 0 2025-09-13 00:43:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-twb76 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2fcb2ed7d62 [] [] }} ContainerID="4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" Namespace="calico-system" Pod="csi-node-driver-twb76" WorkloadEndpoint="localhost-k8s-csi--node--driver--twb76-" Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.567 [INFO][4759] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" Namespace="calico-system" Pod="csi-node-driver-twb76" WorkloadEndpoint="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.600 [INFO][4774] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" 
HandleID="k8s-pod-network.4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" Workload="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.600 [INFO][4774] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" HandleID="k8s-pod-network.4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" Workload="localhost-k8s-csi--node--driver--twb76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000130b60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-twb76", "timestamp":"2025-09-13 00:43:54.599986723 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.600 [INFO][4774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.600 [INFO][4774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.600 [INFO][4774] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.607 [INFO][4774] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" host="localhost" Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.612 [INFO][4774] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.616 [INFO][4774] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.617 [INFO][4774] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.619 [INFO][4774] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.619 [INFO][4774] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" host="localhost" Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.621 [INFO][4774] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.625 [INFO][4774] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" host="localhost" Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.632 [INFO][4774] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" host="localhost" Sep 13 
00:43:54.652686 env[1315]: 2025-09-13 00:43:54.632 [INFO][4774] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" host="localhost" Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.632 [INFO][4774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:43:54.652686 env[1315]: 2025-09-13 00:43:54.632 [INFO][4774] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" HandleID="k8s-pod-network.4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" Workload="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:43:54.653493 env[1315]: 2025-09-13 00:43:54.634 [INFO][4759] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" Namespace="calico-system" Pod="csi-node-driver-twb76" WorkloadEndpoint="localhost-k8s-csi--node--driver--twb76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--twb76-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0495105a-b1c5-41d8-b0c9-fcfac0de6125", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-twb76", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fcb2ed7d62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:54.653493 env[1315]: 2025-09-13 00:43:54.635 [INFO][4759] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" Namespace="calico-system" Pod="csi-node-driver-twb76" WorkloadEndpoint="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:43:54.653493 env[1315]: 2025-09-13 00:43:54.635 [INFO][4759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fcb2ed7d62 ContainerID="4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" Namespace="calico-system" Pod="csi-node-driver-twb76" WorkloadEndpoint="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:43:54.653493 env[1315]: 2025-09-13 00:43:54.637 [INFO][4759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" Namespace="calico-system" Pod="csi-node-driver-twb76" WorkloadEndpoint="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:43:54.653493 env[1315]: 2025-09-13 00:43:54.638 [INFO][4759] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" Namespace="calico-system" Pod="csi-node-driver-twb76" WorkloadEndpoint="localhost-k8s-csi--node--driver--twb76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--twb76-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0495105a-b1c5-41d8-b0c9-fcfac0de6125", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef", Pod:"csi-node-driver-twb76", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fcb2ed7d62", MAC:"da:58:56:49:7b:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:43:54.653493 env[1315]: 2025-09-13 00:43:54.651 [INFO][4759] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef" Namespace="calico-system" Pod="csi-node-driver-twb76" WorkloadEndpoint="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:43:54.672000 audit[4790]: NETFILTER_CFG table=filter:111 family=2 entries=40 op=nft_register_chain pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 
13 00:43:54.672000 audit[4790]: SYSCALL arch=c000003e syscall=46 success=yes exit=20784 a0=3 a1=7fffda413740 a2=0 a3=7fffda41372c items=0 ppid=4590 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:54.672000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:43:54.675950 env[1315]: time="2025-09-13T00:43:54.675890755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:54.675950 env[1315]: time="2025-09-13T00:43:54.675925783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:54.676182 env[1315]: time="2025-09-13T00:43:54.675935552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:54.676407 env[1315]: time="2025-09-13T00:43:54.676357511Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef pid=4796 runtime=io.containerd.runc.v2 Sep 13 00:43:54.700590 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:43:54.709689 env[1315]: time="2025-09-13T00:43:54.709640318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twb76,Uid:0495105a-b1c5-41d8-b0c9-fcfac0de6125,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef\"" Sep 13 00:43:54.784254 kubelet[2107]: E0913 00:43:54.784200 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:55.506000 audit[4831]: NETFILTER_CFG table=filter:112 family=2 entries=12 op=nft_register_rule pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:55.506000 audit[4831]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc51459a40 a2=0 a3=7ffc51459a2c items=0 ppid=2259 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:55.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:55.516000 audit[4831]: NETFILTER_CFG table=nat:113 family=2 entries=58 op=nft_register_chain pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:43:55.516000 audit[4831]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=20628 a0=3 a1=7ffc51459a40 a2=0 a3=7ffc51459a2c items=0 ppid=2259 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:55.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:43:55.590582 env[1315]: time="2025-09-13T00:43:55.590515174Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:55.592638 env[1315]: time="2025-09-13T00:43:55.592579540Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:55.594180 env[1315]: time="2025-09-13T00:43:55.594156531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:55.595538 env[1315]: time="2025-09-13T00:43:55.595488837Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:55.596015 env[1315]: time="2025-09-13T00:43:55.595974679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:43:55.597083 env[1315]: time="2025-09-13T00:43:55.597056950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:43:55.612252 env[1315]: 
time="2025-09-13T00:43:55.612212461Z" level=info msg="CreateContainer within sandbox \"c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:43:55.624185 env[1315]: time="2025-09-13T00:43:55.624136500Z" level=info msg="CreateContainer within sandbox \"c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4a2fcb54dbd4d0cb40c9ee9d55ac3a66c8ff485de77cd72b660b3d8c2db30a01\"" Sep 13 00:43:55.624701 env[1315]: time="2025-09-13T00:43:55.624615660Z" level=info msg="StartContainer for \"4a2fcb54dbd4d0cb40c9ee9d55ac3a66c8ff485de77cd72b660b3d8c2db30a01\"" Sep 13 00:43:55.685116 env[1315]: time="2025-09-13T00:43:55.685061711Z" level=info msg="StartContainer for \"4a2fcb54dbd4d0cb40c9ee9d55ac3a66c8ff485de77cd72b660b3d8c2db30a01\" returns successfully" Sep 13 00:43:55.788275 kubelet[2107]: E0913 00:43:55.788036 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:55.799946 kubelet[2107]: I0913 00:43:55.799764 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66999fb6b9-hjj2v" podStartSLOduration=26.067642717 podStartE2EDuration="30.799743272s" podCreationTimestamp="2025-09-13 00:43:25 +0000 UTC" firstStartedPulling="2025-09-13 00:43:50.864790733 +0000 UTC m=+44.326626470" lastFinishedPulling="2025-09-13 00:43:55.596891288 +0000 UTC m=+49.058727025" observedRunningTime="2025-09-13 00:43:55.798699927 +0000 UTC m=+49.260535675" watchObservedRunningTime="2025-09-13 00:43:55.799743272 +0000 UTC m=+49.261579009" Sep 13 00:43:55.856788 systemd-networkd[1082]: vxlan.calico: Gained IPv6LL Sep 13 00:43:56.203058 systemd[1]: Started sshd@9-10.0.0.27:22-10.0.0.1:34816.service. 
Sep 13 00:43:56.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.27:22-10.0.0.1:34816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:56.204894 kernel: kauditd_printk_skb: 556 callbacks suppressed Sep 13 00:43:56.204957 kernel: audit: type=1130 audit(1757724236.201:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.27:22-10.0.0.1:34816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:56.236000 audit[4885]: USER_ACCT pid=4885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.238386 sshd[4885]: Accepted publickey for core from 10.0.0.1 port 34816 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:43:56.240000 audit[4885]: CRED_ACQ pid=4885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.242584 sshd[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:43:56.246023 systemd-logind[1299]: New session 10 of user core. 
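The Calico IPAM entries earlier in this log claim 192.168.88.136/26 for csi-node-driver-twb76 out of the host's affinity block 192.168.88.128/26. That containment can be sanity-checked with Python's stdlib `ipaddress` module (an illustrative check, not part of the log):

```python
import ipaddress

# Values copied from the ipam/ipam.go log entries above.
block = ipaddress.ip_network("192.168.88.128/26")
assigned = ipaddress.ip_address("192.168.88.136")

# The assigned pod IP must fall inside the host's affinity block.
assert assigned in block

# A /26 affinity block gives this host 64 addresses to hand out.
assert block.num_addresses == 64
```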
Sep 13 00:43:56.246454 kernel: audit: type=1101 audit(1757724236.236:428): pid=4885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.246483 kernel: audit: type=1103 audit(1757724236.240:429): pid=4885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.246504 kernel: audit: type=1006 audit(1757724236.240:430): pid=4885 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 13 00:43:56.247011 systemd[1]: Started session-10.scope. Sep 13 00:43:56.240000 audit[4885]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb898b2f0 a2=3 a3=0 items=0 ppid=1 pid=4885 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:56.254045 kernel: audit: type=1300 audit(1757724236.240:430): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb898b2f0 a2=3 a3=0 items=0 ppid=1 pid=4885 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:43:56.254110 kernel: audit: type=1327 audit(1757724236.240:430): proctitle=737368643A20636F7265205B707269765D Sep 13 00:43:56.240000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:43:56.249000 audit[4885]: USER_START pid=4885 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.259271 kernel: audit: type=1105 audit(1757724236.249:431): pid=4885 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.259312 kernel: audit: type=1103 audit(1757724236.250:432): pid=4888 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.250000 audit[4888]: CRED_ACQ pid=4888 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.418561 sshd[4885]: pam_unix(sshd:session): session closed for user core Sep 13 00:43:56.417000 audit[4885]: USER_END pid=4885 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.420522 systemd[1]: sshd@9-10.0.0.27:22-10.0.0.1:34816.service: Deactivated successfully. Sep 13 00:43:56.421330 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:43:56.417000 audit[4885]: CRED_DISP pid=4885 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.424306 systemd-logind[1299]: Session 10 logged out. Waiting for processes to exit. 
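The audit PROCTITLE records throughout this log hex-encode the process command line, with NUL bytes separating arguments. A small helper (illustrative, not from the log) makes them readable:

```python
def decode_proctitle(hex_str: str) -> str:
    """Audit PROCTITLE values are hex-encoded argv with NUL separators."""
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

# proctitle value from the sshd audit records above
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]

# proctitle value from the iptables-restore audit records above
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```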
Sep 13 00:43:56.425133 systemd-logind[1299]: Removed session 10. Sep 13 00:43:56.428014 kernel: audit: type=1106 audit(1757724236.417:433): pid=4885 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.428077 kernel: audit: type=1104 audit(1757724236.417:434): pid=4885 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:43:56.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.27:22-10.0.0.1:34816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:43:56.432806 systemd-networkd[1082]: cali2fcb2ed7d62: Gained IPv6LL Sep 13 00:43:56.797291 kubelet[2107]: E0913 00:43:56.796948 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:56.817883 systemd[1]: run-containerd-runc-k8s.io-4a2fcb54dbd4d0cb40c9ee9d55ac3a66c8ff485de77cd72b660b3d8c2db30a01-runc.oXERty.mount: Deactivated successfully. Sep 13 00:43:58.829430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount636263149.mount: Deactivated successfully. Sep 13 00:44:01.421477 systemd[1]: Started sshd@10-10.0.0.27:22-10.0.0.1:48214.service. 
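The recurring kubelet dns.go:153 "Nameserver limits exceeded" warning indicates the host's resolv.conf lists more nameservers than kubelet will pass through to pods; it keeps the first three (matching the glibc resolver's limit) and omits the rest, which matches the applied line "1.1.1.1 1.0.0.1 8.8.8.8" in the records above. A quick illustrative check of what would be applied from a given resolv.conf:

```python
MAXNS = 3  # resolver limit; kubelet truncates the nameserver list to this many

def applied_nameservers(resolv_conf: str) -> list[str]:
    """Return the nameservers kubelet would actually apply to pods."""
    servers = [line.split()[1]
               for line in resolv_conf.splitlines()
               if line.startswith("nameserver") and len(line.split()) > 1]
    return servers[:MAXNS]

# A resolv.conf with four entries reproduces the warning's applied line:
conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\nnameserver 8.8.4.4\n")
print(applied_nameservers(conf))
# -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```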
Sep 13 00:44:01.424664 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:44:01.424775 kernel: audit: type=1130 audit(1757724241.420:436): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.27:22-10.0.0.1:48214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:01.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.27:22-10.0.0.1:48214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:01.463000 audit[4928]: USER_ACCT pid=4928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.464175 sshd[4928]: Accepted publickey for core from 10.0.0.1 port 48214 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:44:01.465426 sshd[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:01.464000 audit[4928]: CRED_ACQ pid=4928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.468989 systemd-logind[1299]: New session 11 of user core. Sep 13 00:44:01.469915 systemd[1]: Started session-11.scope. 
Sep 13 00:44:01.471186 kernel: audit: type=1101 audit(1757724241.463:437): pid=4928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.471238 kernel: audit: type=1103 audit(1757724241.464:438): pid=4928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.473568 kernel: audit: type=1006 audit(1757724241.464:439): pid=4928 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Sep 13 00:44:01.464000 audit[4928]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa907cd20 a2=3 a3=0 items=0 ppid=1 pid=4928 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:01.464000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:01.478870 kernel: audit: type=1300 audit(1757724241.464:439): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa907cd20 a2=3 a3=0 items=0 ppid=1 pid=4928 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:01.478903 kernel: audit: type=1327 audit(1757724241.464:439): proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:01.478921 kernel: audit: type=1105 audit(1757724241.474:440): pid=4928 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.474000 audit[4928]: USER_START pid=4928 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.482849 kernel: audit: type=1103 audit(1757724241.475:441): pid=4931 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.475000 audit[4931]: CRED_ACQ pid=4931 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.718911 sshd[4928]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:01.721372 systemd[1]: Started sshd@11-10.0.0.27:22-10.0.0.1:48228.service. 
Sep 13 00:44:01.719000 audit[4928]: USER_END pid=4928 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.738669 kernel: audit: type=1106 audit(1757724241.719:442): pid=4928 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.738711 kernel: audit: type=1104 audit(1757724241.719:443): pid=4928 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.719000 audit[4928]: CRED_DISP pid=4928 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.27:22-10.0.0.1:48228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:01.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.27:22-10.0.0.1:48214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:01.724993 systemd[1]: sshd@10-10.0.0.27:22-10.0.0.1:48214.service: Deactivated successfully. Sep 13 00:44:01.726039 systemd[1]: session-11.scope: Deactivated successfully. 
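Each SSH session in this log leaves the same audit trail: USER_ACCT, CRED_ACQ, and USER_START on login, then USER_END and CRED_DISP on logout (plus systemd SERVICE_START/SERVICE_STOP for the per-connection unit). A regex pull of the record types from lines shaped like these (an illustrative sketch over an abbreviated fragment) recovers that sequence:

```python
import re

# Fragment shaped like the session-11 audit records above (abbreviated).
log = """\
audit[4928]: USER_ACCT pid=4928 ... acct="core"
audit[4928]: CRED_ACQ pid=4928 ... acct="core"
audit[4928]: USER_START pid=4928 ... acct="core"
audit[4928]: USER_END pid=4928 ... acct="core"
audit[4928]: CRED_DISP pid=4928 ... acct="core"
"""

# Record type is the token right after the "audit[pid]:" prefix.
types = re.findall(r"audit\[\d+\]: ([A-Z_]+) ", log)
print(types)
# -> ['USER_ACCT', 'CRED_ACQ', 'USER_START', 'USER_END', 'CRED_DISP']
```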
Sep 13 00:44:01.726312 systemd-logind[1299]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:44:01.726984 systemd-logind[1299]: Removed session 11. Sep 13 00:44:01.748381 env[1315]: time="2025-09-13T00:44:01.748335532Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:01.757000 audit[4942]: USER_ACCT pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.758372 sshd[4942]: Accepted publickey for core from 10.0.0.1 port 48228 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:44:01.758000 audit[4942]: CRED_ACQ pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.758000 audit[4942]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd90597640 a2=3 a3=0 items=0 ppid=1 pid=4942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:01.758000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:01.759460 sshd[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:01.762664 systemd-logind[1299]: New session 12 of user core. Sep 13 00:44:01.763377 systemd[1]: Started session-12.scope. 
Sep 13 00:44:01.767000 audit[4942]: USER_START pid=4942 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.768000 audit[4947]: CRED_ACQ pid=4947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:01.828405 env[1315]: time="2025-09-13T00:44:01.828349223Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:01.872050 env[1315]: time="2025-09-13T00:44:01.872007304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:01.913773 env[1315]: time="2025-09-13T00:44:01.913728125Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:01.914358 env[1315]: time="2025-09-13T00:44:01.914330809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:44:01.916598 env[1315]: time="2025-09-13T00:44:01.916564312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:44:01.917894 env[1315]: time="2025-09-13T00:44:01.917869292Z" level=info msg="CreateContainer within sandbox 
\"561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:44:02.072872 systemd[1]: Started sshd@12-10.0.0.27:22-10.0.0.1:48236.service. Sep 13 00:44:02.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.27:22-10.0.0.1:48236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:02.221801 sshd[4942]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:02.222000 audit[4942]: USER_END pid=4942 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:02.222000 audit[4942]: CRED_DISP pid=4942 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:02.227812 systemd-logind[1299]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:44:02.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.27:22-10.0.0.1:48228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:02.228533 systemd[1]: sshd@11-10.0.0.27:22-10.0.0.1:48228.service: Deactivated successfully. Sep 13 00:44:02.229586 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:44:02.231302 systemd-logind[1299]: Removed session 12. 
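The pod_startup_latency_tracker entry for calico-kube-controllers earlier in this log decomposes cleanly: podStartE2EDuration appears to be observedRunningTime minus podCreationTimestamp, and podStartSLOduration that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing the arithmetic with second offsets copied from the log (an illustrative reading of those fields, not kubelet code):

```python
# Seconds within minute 00:43, copied from the tracker entry above.
created            = 25.0          # podCreationTimestamp 00:43:25
first_started_pull = 50.864790733  # firstStartedPulling
last_finished_pull = 55.596891288  # lastFinishedPulling
observed_running   = 55.799743272  # observedRunningTime

e2e  = observed_running - created                  # total start-to-running time
pull = last_finished_pull - first_started_pull     # image-pull window
slo  = e2e - pull                                  # E2E excluding image pull

# Matches podStartE2EDuration=30.799743272s and podStartSLOduration=26.067642717
assert round(e2e, 9) == 30.799743272
assert round(slo, 9) == 26.067642717
```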
Sep 13 00:44:02.464000 audit[4954]: USER_ACCT pid=4954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:02.465852 sshd[4954]: Accepted publickey for core from 10.0.0.1 port 48236 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:44:02.466000 audit[4954]: CRED_ACQ pid=4954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:02.466000 audit[4954]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0ad7e8d0 a2=3 a3=0 items=0 ppid=1 pid=4954 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:02.466000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:02.476000 audit[4954]: USER_START pid=4954 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:02.477000 audit[4959]: CRED_ACQ pid=4959 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:02.471505 systemd-logind[1299]: New session 13 of user core. Sep 13 00:44:02.467347 sshd[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:02.472569 systemd[1]: Started session-13.scope. 
Sep 13 00:44:02.529839 env[1315]: time="2025-09-13T00:44:02.529777524Z" level=info msg="CreateContainer within sandbox \"561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"dd6871c8cb00dd1a4e6771a94a22cebbb2f393a9f9e6705b0d131d936ecd2f6a\"" Sep 13 00:44:02.530563 env[1315]: time="2025-09-13T00:44:02.530535627Z" level=info msg="StartContainer for \"dd6871c8cb00dd1a4e6771a94a22cebbb2f393a9f9e6705b0d131d936ecd2f6a\"" Sep 13 00:44:02.699489 sshd[4954]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:02.699000 audit[4954]: USER_END pid=4954 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:02.699000 audit[4954]: CRED_DISP pid=4954 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:02.701439 systemd[1]: sshd@12-10.0.0.27:22-10.0.0.1:48236.service: Deactivated successfully. Sep 13 00:44:02.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.27:22-10.0.0.1:48236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:02.702323 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:44:02.703212 systemd-logind[1299]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:44:02.704021 systemd-logind[1299]: Removed session 13. 
Sep 13 00:44:02.851096 env[1315]: time="2025-09-13T00:44:02.851026262Z" level=info msg="StartContainer for \"dd6871c8cb00dd1a4e6771a94a22cebbb2f393a9f9e6705b0d131d936ecd2f6a\" returns successfully" Sep 13 00:44:02.934243 kubelet[2107]: I0913 00:44:02.934167 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-44vsn" podStartSLOduration=27.94079414 podStartE2EDuration="38.93414862s" podCreationTimestamp="2025-09-13 00:43:24 +0000 UTC" firstStartedPulling="2025-09-13 00:43:50.922460505 +0000 UTC m=+44.384296242" lastFinishedPulling="2025-09-13 00:44:01.915814985 +0000 UTC m=+55.377650722" observedRunningTime="2025-09-13 00:44:02.933337935 +0000 UTC m=+56.395173692" watchObservedRunningTime="2025-09-13 00:44:02.93414862 +0000 UTC m=+56.395984377" Sep 13 00:44:02.949000 audit[5030]: NETFILTER_CFG table=filter:114 family=2 entries=12 op=nft_register_rule pid=5030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:44:02.949000 audit[5030]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffeea85b840 a2=0 a3=7ffeea85b82c items=0 ppid=2259 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:02.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:44:02.953000 audit[5030]: NETFILTER_CFG table=nat:115 family=2 entries=22 op=nft_register_rule pid=5030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:44:02.953000 audit[5030]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffeea85b840 a2=0 a3=7ffeea85b82c items=0 ppid=2259 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:02.953000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:44:06.619760 env[1315]: time="2025-09-13T00:44:06.619515119Z" level=info msg="StopPodSandbox for \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\"" Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.663 [WARNING][5070] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4f125e3b-0620-4e01-917a-be90d6600e62", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261", Pod:"coredns-7c65d6cfc9-gpbn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2ddacd25e5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.664 [INFO][5070] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.664 [INFO][5070] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" iface="eth0" netns="" Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.664 [INFO][5070] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.664 [INFO][5070] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.686 [INFO][5081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" HandleID="k8s-pod-network.37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.687 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.687 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.692 [WARNING][5081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" HandleID="k8s-pod-network.37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.692 [INFO][5081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" HandleID="k8s-pod-network.37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.693 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:06.697724 env[1315]: 2025-09-13 00:44:06.696 [INFO][5070] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:44:06.698215 env[1315]: time="2025-09-13T00:44:06.697740330Z" level=info msg="TearDown network for sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\" successfully" Sep 13 00:44:06.698215 env[1315]: time="2025-09-13T00:44:06.697763956Z" level=info msg="StopPodSandbox for \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\" returns successfully" Sep 13 00:44:06.700008 env[1315]: time="2025-09-13T00:44:06.699964501Z" level=info msg="RemovePodSandbox for \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\"" Sep 13 00:44:06.700061 env[1315]: time="2025-09-13T00:44:06.700013205Z" level=info msg="Forcibly stopping sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\"" Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.729 [WARNING][5099] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4f125e3b-0620-4e01-917a-be90d6600e62", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e504ee5f56556b857c71961a1f8df310acd706cadf7786b003f089111290261", Pod:"coredns-7c65d6cfc9-gpbn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2ddacd25e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.729 [INFO][5099] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.729 [INFO][5099] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" iface="eth0" netns="" Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.729 [INFO][5099] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.729 [INFO][5099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.747 [INFO][5109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" HandleID="k8s-pod-network.37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.747 [INFO][5109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.747 [INFO][5109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.752 [WARNING][5109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" HandleID="k8s-pod-network.37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.752 [INFO][5109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" HandleID="k8s-pod-network.37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Workload="localhost-k8s-coredns--7c65d6cfc9--gpbn2-eth0" Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.753 [INFO][5109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:06.765715 env[1315]: 2025-09-13 00:44:06.764 [INFO][5099] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05" Sep 13 00:44:06.766571 env[1315]: time="2025-09-13T00:44:06.765746334Z" level=info msg="TearDown network for sandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\" successfully" Sep 13 00:44:06.769048 env[1315]: time="2025-09-13T00:44:06.769016618Z" level=info msg="RemovePodSandbox \"37a0346cc1d8c74fbd52b05a603b728c49ab98b727854f6bfe95eb514ddd7b05\" returns successfully" Sep 13 00:44:06.769605 env[1315]: time="2025-09-13T00:44:06.769566036Z" level=info msg="StopPodSandbox for \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\"" Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.797 [WARNING][5129] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0", GenerateName:"calico-kube-controllers-66999fb6b9-", Namespace:"calico-system", SelfLink:"", UID:"631473a4-cdd6-4d56-8d52-071b771820a5", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66999fb6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179", Pod:"calico-kube-controllers-66999fb6b9-hjj2v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70e7a0bcd28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.797 [INFO][5129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.797 [INFO][5129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" iface="eth0" netns="" Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.797 [INFO][5129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.797 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.823 [INFO][5138] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" HandleID="k8s-pod-network.6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.823 [INFO][5138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.823 [INFO][5138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.829 [WARNING][5138] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" HandleID="k8s-pod-network.6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.829 [INFO][5138] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" HandleID="k8s-pod-network.6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.830 [INFO][5138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:06.834410 env[1315]: 2025-09-13 00:44:06.832 [INFO][5129] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:44:06.834902 env[1315]: time="2025-09-13T00:44:06.834439901Z" level=info msg="TearDown network for sandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\" successfully" Sep 13 00:44:06.834902 env[1315]: time="2025-09-13T00:44:06.834469498Z" level=info msg="StopPodSandbox for \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\" returns successfully" Sep 13 00:44:06.835166 env[1315]: time="2025-09-13T00:44:06.835132825Z" level=info msg="RemovePodSandbox for \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\"" Sep 13 00:44:06.835240 env[1315]: time="2025-09-13T00:44:06.835184594Z" level=info msg="Forcibly stopping sandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\"" Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.865 [WARNING][5155] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0", GenerateName:"calico-kube-controllers-66999fb6b9-", Namespace:"calico-system", SelfLink:"", UID:"631473a4-cdd6-4d56-8d52-071b771820a5", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66999fb6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6eb55db595c656931f11af4b0fa57bcd646a1b3e3d4c82add770e668eca6179", Pod:"calico-kube-controllers-66999fb6b9-hjj2v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70e7a0bcd28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.865 [INFO][5155] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.866 [INFO][5155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" iface="eth0" netns="" Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.866 [INFO][5155] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.866 [INFO][5155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.886 [INFO][5164] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" HandleID="k8s-pod-network.6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.886 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.886 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.892 [WARNING][5164] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" HandleID="k8s-pod-network.6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.892 [INFO][5164] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" HandleID="k8s-pod-network.6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Workload="localhost-k8s-calico--kube--controllers--66999fb6b9--hjj2v-eth0" Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.893 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:06.897076 env[1315]: 2025-09-13 00:44:06.895 [INFO][5155] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0" Sep 13 00:44:06.897076 env[1315]: time="2025-09-13T00:44:06.897034219Z" level=info msg="TearDown network for sandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\" successfully" Sep 13 00:44:07.034140 env[1315]: time="2025-09-13T00:44:07.034088723Z" level=info msg="RemovePodSandbox \"6f8af9ef01050e8de6f5d8e4e38e387bfa904a2b6963da43692034c48d3327b0\" returns successfully" Sep 13 00:44:07.034515 env[1315]: time="2025-09-13T00:44:07.034482029Z" level=info msg="StopPodSandbox for \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\"" Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.080 [WARNING][5223] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0", GenerateName:"calico-apiserver-69684bbb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"445a4a34-e91d-44ab-8fb9-191df697ef64", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69684bbb9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b", Pod:"calico-apiserver-69684bbb9f-79hgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali230135788b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.081 [INFO][5223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.081 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" iface="eth0" netns="" Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.081 [INFO][5223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.081 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.097 [INFO][5231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" HandleID="k8s-pod-network.4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.097 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.097 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.102 [WARNING][5231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" HandleID="k8s-pod-network.4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.102 [INFO][5231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" HandleID="k8s-pod-network.4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.104 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:07.107094 env[1315]: 2025-09-13 00:44:07.105 [INFO][5223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:44:07.107658 env[1315]: time="2025-09-13T00:44:07.107599744Z" level=info msg="TearDown network for sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\" successfully" Sep 13 00:44:07.107658 env[1315]: time="2025-09-13T00:44:07.107648869Z" level=info msg="StopPodSandbox for \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\" returns successfully" Sep 13 00:44:07.108193 env[1315]: time="2025-09-13T00:44:07.108146936Z" level=info msg="RemovePodSandbox for \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\"" Sep 13 00:44:07.108247 env[1315]: time="2025-09-13T00:44:07.108201120Z" level=info msg="Forcibly stopping sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\"" Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.136 [WARNING][5249] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0", GenerateName:"calico-apiserver-69684bbb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"445a4a34-e91d-44ab-8fb9-191df697ef64", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69684bbb9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b", Pod:"calico-apiserver-69684bbb9f-79hgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali230135788b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.136 [INFO][5249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.136 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" iface="eth0" netns="" Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.136 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.136 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.154 [INFO][5257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" HandleID="k8s-pod-network.4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.155 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.155 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.160 [WARNING][5257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" HandleID="k8s-pod-network.4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.160 [INFO][5257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" HandleID="k8s-pod-network.4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Workload="localhost-k8s-calico--apiserver--69684bbb9f--79hgx-eth0" Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.161 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:07.164154 env[1315]: 2025-09-13 00:44:07.162 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95" Sep 13 00:44:07.164154 env[1315]: time="2025-09-13T00:44:07.164132969Z" level=info msg="TearDown network for sandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\" successfully" Sep 13 00:44:07.167437 env[1315]: time="2025-09-13T00:44:07.167409211Z" level=info msg="RemovePodSandbox \"4d20579a436f4e1e312808c06a26467dacfa0666793b9117f9c0a7d2403aca95\" returns successfully" Sep 13 00:44:07.167897 env[1315]: time="2025-09-13T00:44:07.167870979Z" level=info msg="StopPodSandbox for \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\"" Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.196 [WARNING][5274] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"57c2caed-82b7-4c8e-9591-b6e3e76966bb", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d", Pod:"coredns-7c65d6cfc9-kp2bx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief1f525c8c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.197 [INFO][5274] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.197 [INFO][5274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" iface="eth0" netns="" Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.197 [INFO][5274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.197 [INFO][5274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.223 [INFO][5283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" HandleID="k8s-pod-network.b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.223 [INFO][5283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.223 [INFO][5283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.228 [WARNING][5283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" HandleID="k8s-pod-network.b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.232 [INFO][5283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" HandleID="k8s-pod-network.b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.233 [INFO][5283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:07.239856 env[1315]: 2025-09-13 00:44:07.237 [INFO][5274] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:44:07.240430 env[1315]: time="2025-09-13T00:44:07.239887716Z" level=info msg="TearDown network for sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\" successfully" Sep 13 00:44:07.240430 env[1315]: time="2025-09-13T00:44:07.239919837Z" level=info msg="StopPodSandbox for \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\" returns successfully" Sep 13 00:44:07.240568 env[1315]: time="2025-09-13T00:44:07.240516705Z" level=info msg="RemovePodSandbox for \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\"" Sep 13 00:44:07.240606 env[1315]: time="2025-09-13T00:44:07.240570760Z" level=info msg="Forcibly stopping sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\"" Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.271 [WARNING][5301] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"57c2caed-82b7-4c8e-9591-b6e3e76966bb", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d082515f1f3da3c6cd25970825a8d503075f1c90516bbdb168b120699110162d", Pod:"coredns-7c65d6cfc9-kp2bx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief1f525c8c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.271 [INFO][5301] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.271 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" iface="eth0" netns="" Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.271 [INFO][5301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.271 [INFO][5301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.292 [INFO][5310] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" HandleID="k8s-pod-network.b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.292 [INFO][5310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.292 [INFO][5310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.298 [WARNING][5310] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" HandleID="k8s-pod-network.b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.298 [INFO][5310] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" HandleID="k8s-pod-network.b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Workload="localhost-k8s-coredns--7c65d6cfc9--kp2bx-eth0" Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.299 [INFO][5310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:07.302292 env[1315]: 2025-09-13 00:44:07.300 [INFO][5301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4" Sep 13 00:44:07.303019 env[1315]: time="2025-09-13T00:44:07.302320388Z" level=info msg="TearDown network for sandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\" successfully" Sep 13 00:44:07.703274 systemd[1]: Started sshd@13-10.0.0.27:22-10.0.0.1:48244.service. Sep 13 00:44:07.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.27:22-10.0.0.1:48244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:07.704743 kernel: kauditd_printk_skb: 29 callbacks suppressed Sep 13 00:44:07.704818 kernel: audit: type=1130 audit(1757724247.702:465): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.27:22-10.0.0.1:48244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:44:07.704875 env[1315]: time="2025-09-13T00:44:07.704822629Z" level=info msg="RemovePodSandbox \"b38896aa8c55c4399364f43802bbdcecb408cacf1ee16ea915b04f94514ccba4\" returns successfully" Sep 13 00:44:07.705816 env[1315]: time="2025-09-13T00:44:07.705776133Z" level=info msg="StopPodSandbox for \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\"" Sep 13 00:44:07.720156 env[1315]: time="2025-09-13T00:44:07.720109371Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:07.723047 env[1315]: time="2025-09-13T00:44:07.723019559Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:07.724874 env[1315]: time="2025-09-13T00:44:07.724829049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:07.726287 env[1315]: time="2025-09-13T00:44:07.726262747Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:07.726726 env[1315]: time="2025-09-13T00:44:07.726695810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:44:07.729117 env[1315]: time="2025-09-13T00:44:07.729084955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:44:07.732233 env[1315]: time="2025-09-13T00:44:07.732200357Z" level=info msg="CreateContainer within sandbox 
\"9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:44:07.762145 kernel: audit: type=1101 audit(1757724247.745:466): pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:07.762245 kernel: audit: type=1103 audit(1757724247.749:467): pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:07.762267 kernel: audit: type=1006 audit(1757724247.749:468): pid=5318 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Sep 13 00:44:07.762282 kernel: audit: type=1300 audit(1757724247.749:468): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca20aa390 a2=3 a3=0 items=0 ppid=1 pid=5318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:07.762300 kernel: audit: type=1327 audit(1757724247.749:468): proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:07.745000 audit[5318]: USER_ACCT pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:07.749000 audit[5318]: CRED_ACQ pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 
00:44:07.749000 audit[5318]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca20aa390 a2=3 a3=0 items=0 ppid=1 pid=5318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:07.749000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:07.762563 env[1315]: time="2025-09-13T00:44:07.754937289Z" level=info msg="CreateContainer within sandbox \"9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b62d6e02b9e3f2a20471be522ad27ece9affafa0dda32a78424d2371e25c759d\"" Sep 13 00:44:07.762563 env[1315]: time="2025-09-13T00:44:07.755715746Z" level=info msg="StartContainer for \"b62d6e02b9e3f2a20471be522ad27ece9affafa0dda32a78424d2371e25c759d\"" Sep 13 00:44:07.750460 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:07.762868 sshd[5318]: Accepted publickey for core from 10.0.0.1 port 48244 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:44:07.745637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3102343804.mount: Deactivated successfully. Sep 13 00:44:07.767390 systemd[1]: Started session-14.scope. Sep 13 00:44:07.767837 systemd-logind[1299]: New session 14 of user core. 
Sep 13 00:44:07.772000 audit[5318]: USER_START pid=5318 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:07.777656 kernel: audit: type=1105 audit(1757724247.772:469): pid=5318 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:07.777000 audit[5364]: CRED_ACQ pid=5364 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:07.781648 kernel: audit: type=1103 audit(1757724247.777:470): pid=5364 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.738 [WARNING][5329] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" WorkloadEndpoint="localhost-k8s-whisker--55dd58ff54--4mn4x-eth0" Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.738 [INFO][5329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.738 [INFO][5329] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" iface="eth0" netns="" Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.738 [INFO][5329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.738 [INFO][5329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.783 [INFO][5339] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" HandleID="k8s-pod-network.80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Workload="localhost-k8s-whisker--55dd58ff54--4mn4x-eth0" Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.783 [INFO][5339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.784 [INFO][5339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.789 [WARNING][5339] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" HandleID="k8s-pod-network.80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Workload="localhost-k8s-whisker--55dd58ff54--4mn4x-eth0" Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.789 [INFO][5339] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" HandleID="k8s-pod-network.80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Workload="localhost-k8s-whisker--55dd58ff54--4mn4x-eth0" Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.790 [INFO][5339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:44:07.794164 env[1315]: 2025-09-13 00:44:07.792 [INFO][5329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:44:07.794640 env[1315]: time="2025-09-13T00:44:07.794188525Z" level=info msg="TearDown network for sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\" successfully" Sep 13 00:44:07.794640 env[1315]: time="2025-09-13T00:44:07.794228522Z" level=info msg="StopPodSandbox for \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\" returns successfully" Sep 13 00:44:07.794865 env[1315]: time="2025-09-13T00:44:07.794826863Z" level=info msg="RemovePodSandbox for \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\"" Sep 13 00:44:07.794898 env[1315]: time="2025-09-13T00:44:07.794867200Z" level=info msg="Forcibly stopping sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\"" Sep 13 00:44:07.818286 env[1315]: time="2025-09-13T00:44:07.818215808Z" level=info msg="StartContainer for \"b62d6e02b9e3f2a20471be522ad27ece9affafa0dda32a78424d2371e25c759d\" returns successfully" Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.827 [WARNING][5383] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" WorkloadEndpoint="localhost-k8s-whisker--55dd58ff54--4mn4x-eth0" Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.827 [INFO][5383] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.827 [INFO][5383] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" iface="eth0" netns="" Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.827 [INFO][5383] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.827 [INFO][5383] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.849 [INFO][5406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" HandleID="k8s-pod-network.80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Workload="localhost-k8s-whisker--55dd58ff54--4mn4x-eth0" Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.849 [INFO][5406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.849 [INFO][5406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.857 [WARNING][5406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" HandleID="k8s-pod-network.80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Workload="localhost-k8s-whisker--55dd58ff54--4mn4x-eth0" Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.857 [INFO][5406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" HandleID="k8s-pod-network.80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Workload="localhost-k8s-whisker--55dd58ff54--4mn4x-eth0" Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.859 [INFO][5406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:44:07.863011 env[1315]: 2025-09-13 00:44:07.860 [INFO][5383] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e" Sep 13 00:44:07.863011 env[1315]: time="2025-09-13T00:44:07.861817482Z" level=info msg="TearDown network for sandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\" successfully" Sep 13 00:44:07.866373 env[1315]: time="2025-09-13T00:44:07.866348437Z" level=info msg="RemovePodSandbox \"80ba0696ddc4d723822f0720983032edcd76086f5548e9672d2972e5197a777e\" returns successfully" Sep 13 00:44:07.866830 env[1315]: time="2025-09-13T00:44:07.866804414Z" level=info msg="StopPodSandbox for \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\"" Sep 13 00:44:07.896949 kubelet[2107]: I0913 00:44:07.896449 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69684bbb9f-kh9fl" podStartSLOduration=29.161923408 podStartE2EDuration="45.896431028s" podCreationTimestamp="2025-09-13 00:43:22 +0000 UTC" firstStartedPulling="2025-09-13 00:43:50.993708364 +0000 UTC m=+44.455544101" lastFinishedPulling="2025-09-13 00:44:07.728215983 +0000 UTC m=+61.190051721" observedRunningTime="2025-09-13 00:44:07.896194071 +0000 UTC m=+61.358029818" watchObservedRunningTime="2025-09-13 00:44:07.896431028 +0000 UTC m=+61.358266765" Sep 13 00:44:07.909000 audit[5441]: NETFILTER_CFG table=filter:116 family=2 entries=12 op=nft_register_rule pid=5441 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:44:07.917391 kernel: audit: type=1325 audit(1757724247.909:471): table=filter:116 family=2 entries=12 op=nft_register_rule pid=5441 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:44:07.917471 kernel: audit: type=1300 audit(1757724247.909:471): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd21b2b550 a2=0 a3=7ffd21b2b53c items=0 ppid=2259 pid=5441 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:07.909000 audit[5441]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd21b2b550 a2=0 a3=7ffd21b2b53c items=0 ppid=2259 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:07.909000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:44:07.919000 audit[5441]: NETFILTER_CFG table=nat:117 family=2 entries=22 op=nft_register_rule pid=5441 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:44:07.919000 audit[5441]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd21b2b550 a2=0 a3=7ffd21b2b53c items=0 ppid=2259 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:07.919000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.921 [WARNING][5429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--44vsn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"61948f79-c287-454b-a5e0-ba4e09c53ab6", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b", Pod:"goldmane-7988f88666-44vsn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2efd7d0fd31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.921 [INFO][5429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.921 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" iface="eth0" netns="" Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.921 [INFO][5429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.921 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.943 [INFO][5443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" HandleID="k8s-pod-network.6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.943 [INFO][5443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.943 [INFO][5443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.956 [WARNING][5443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" HandleID="k8s-pod-network.6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.956 [INFO][5443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" HandleID="k8s-pod-network.6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.958 [INFO][5443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:44:07.961255 env[1315]: 2025-09-13 00:44:07.959 [INFO][5429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:44:07.961255 env[1315]: time="2025-09-13T00:44:07.961208970Z" level=info msg="TearDown network for sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\" successfully" Sep 13 00:44:07.961255 env[1315]: time="2025-09-13T00:44:07.961242505Z" level=info msg="StopPodSandbox for \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\" returns successfully" Sep 13 00:44:07.961867 env[1315]: time="2025-09-13T00:44:07.961660800Z" level=info msg="RemovePodSandbox for \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\"" Sep 13 00:44:07.961867 env[1315]: time="2025-09-13T00:44:07.961685307Z" level=info msg="Forcibly stopping sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\"" Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.000 [WARNING][5460] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--44vsn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"61948f79-c287-454b-a5e0-ba4e09c53ab6", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"561dde4c38af0b57a08b093d117b0ff04785b945ffd33fe52115366faaa7b76b", Pod:"goldmane-7988f88666-44vsn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2efd7d0fd31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.000 [INFO][5460] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.000 [INFO][5460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" iface="eth0" netns="" Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.000 [INFO][5460] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.000 [INFO][5460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.020 [INFO][5469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" HandleID="k8s-pod-network.6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.020 [INFO][5469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.020 [INFO][5469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.025 [WARNING][5469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" HandleID="k8s-pod-network.6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.025 [INFO][5469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" HandleID="k8s-pod-network.6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Workload="localhost-k8s-goldmane--7988f88666--44vsn-eth0" Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.026 [INFO][5469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:44:08.030652 env[1315]: 2025-09-13 00:44:08.028 [INFO][5460] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb" Sep 13 00:44:08.031162 env[1315]: time="2025-09-13T00:44:08.031114892Z" level=info msg="TearDown network for sandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\" successfully" Sep 13 00:44:08.083075 env[1315]: time="2025-09-13T00:44:08.083016640Z" level=info msg="RemovePodSandbox \"6f418d61a26cf41292c09e70a7eec3b9dfaea2963570d41ab3c9d940146daaeb\" returns successfully" Sep 13 00:44:08.083479 env[1315]: time="2025-09-13T00:44:08.083452727Z" level=info msg="StopPodSandbox for \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\"" Sep 13 00:44:08.105279 sshd[5318]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:08.105000 audit[5318]: USER_END pid=5318 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:08.105000 audit[5318]: CRED_DISP pid=5318 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:08.107977 systemd-logind[1299]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:44:08.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.27:22-10.0.0.1:48244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:08.109205 systemd[1]: sshd@13-10.0.0.27:22-10.0.0.1:48244.service: Deactivated successfully. Sep 13 00:44:08.109920 systemd[1]: session-14.scope: Deactivated successfully. 
Sep 13 00:44:08.111096 systemd-logind[1299]: Removed session 14. Sep 13 00:44:08.113648 env[1315]: time="2025-09-13T00:44:08.113498522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:08.116192 env[1315]: time="2025-09-13T00:44:08.116159998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:08.117881 env[1315]: time="2025-09-13T00:44:08.117850447Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:08.119470 env[1315]: time="2025-09-13T00:44:08.119438610Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:44:08.120271 env[1315]: time="2025-09-13T00:44:08.120239009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:44:08.122993 env[1315]: time="2025-09-13T00:44:08.122953797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:44:08.123799 env[1315]: time="2025-09-13T00:44:08.123729319Z" level=info msg="CreateContainer within sandbox \"d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:44:08.137123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402747964.mount: Deactivated successfully. 
Sep 13 00:44:08.143122 env[1315]: time="2025-09-13T00:44:08.143088141Z" level=info msg="CreateContainer within sandbox \"d4099ae6fa900876cfa5145a92db50cdf9aae85ef22cec21f743f7340380f16b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"37739cc1bed10d9c3dc75d3c97614513557854d0cb28adfd632ab56f077b7f3a\"" Sep 13 00:44:08.144369 env[1315]: time="2025-09-13T00:44:08.144291403Z" level=info msg="StartContainer for \"37739cc1bed10d9c3dc75d3c97614513557854d0cb28adfd632ab56f077b7f3a\"" Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.135 [WARNING][5487] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--twb76-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0495105a-b1c5-41d8-b0c9-fcfac0de6125", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef", Pod:"csi-node-driver-twb76", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fcb2ed7d62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.135 [INFO][5487] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.135 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" iface="eth0" netns="" Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.135 [INFO][5487] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.136 [INFO][5487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.177 [INFO][5498] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" HandleID="k8s-pod-network.9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Workload="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.177 [INFO][5498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.177 [INFO][5498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.185 [WARNING][5498] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" HandleID="k8s-pod-network.9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Workload="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.185 [INFO][5498] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" HandleID="k8s-pod-network.9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Workload="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.187 [INFO][5498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:08.192918 env[1315]: 2025-09-13 00:44:08.189 [INFO][5487] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:44:08.193400 env[1315]: time="2025-09-13T00:44:08.192965797Z" level=info msg="TearDown network for sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\" successfully" Sep 13 00:44:08.193400 env[1315]: time="2025-09-13T00:44:08.193019160Z" level=info msg="StopPodSandbox for \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\" returns successfully" Sep 13 00:44:08.193641 env[1315]: time="2025-09-13T00:44:08.193573335Z" level=info msg="RemovePodSandbox for \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\"" Sep 13 00:44:08.193698 env[1315]: time="2025-09-13T00:44:08.193655072Z" level=info msg="Forcibly stopping sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\"" Sep 13 00:44:08.247808 env[1315]: time="2025-09-13T00:44:08.247761838Z" level=info msg="StartContainer for \"37739cc1bed10d9c3dc75d3c97614513557854d0cb28adfd632ab56f077b7f3a\" returns successfully" Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.244 [WARNING][5543] cni-plugin/k8s.go 604: CNI_CONTAINERID does 
not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--twb76-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0495105a-b1c5-41d8-b0c9-fcfac0de6125", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef", Pod:"csi-node-driver-twb76", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fcb2ed7d62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.244 [INFO][5543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.244 [INFO][5543] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" iface="eth0" netns="" Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.244 [INFO][5543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.244 [INFO][5543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.264 [INFO][5562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" HandleID="k8s-pod-network.9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Workload="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.265 [INFO][5562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.265 [INFO][5562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.271 [WARNING][5562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" HandleID="k8s-pod-network.9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Workload="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.271 [INFO][5562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" HandleID="k8s-pod-network.9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Workload="localhost-k8s-csi--node--driver--twb76-eth0" Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.272 [INFO][5562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:08.280223 env[1315]: 2025-09-13 00:44:08.276 [INFO][5543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515" Sep 13 00:44:08.280223 env[1315]: time="2025-09-13T00:44:08.280230911Z" level=info msg="TearDown network for sandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\" successfully" Sep 13 00:44:08.457099 env[1315]: time="2025-09-13T00:44:08.457045390Z" level=info msg="RemovePodSandbox \"9ad4cc30129ec45337eb8dd178bced50a408bbd580f4279262424b788b07e515\" returns successfully" Sep 13 00:44:08.457854 env[1315]: time="2025-09-13T00:44:08.457814779Z" level=info msg="StopPodSandbox for \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\"" Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.510 [WARNING][5586] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0", GenerateName:"calico-apiserver-69684bbb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c30c5f78-8868-438b-9ac1-1ddc434b02ca", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69684bbb9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1", Pod:"calico-apiserver-69684bbb9f-kh9fl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1a9bc4b29d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.510 [INFO][5586] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.510 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" iface="eth0" netns="" Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.510 [INFO][5586] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.510 [INFO][5586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.529 [INFO][5595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" HandleID="k8s-pod-network.3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.529 [INFO][5595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.529 [INFO][5595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.597 [WARNING][5595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" HandleID="k8s-pod-network.3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.598 [INFO][5595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" HandleID="k8s-pod-network.3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0" Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.599 [INFO][5595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:08.604179 env[1315]: 2025-09-13 00:44:08.601 [INFO][5586] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Sep 13 00:44:08.604179 env[1315]: time="2025-09-13T00:44:08.604139950Z" level=info msg="TearDown network for sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\" successfully" Sep 13 00:44:08.604179 env[1315]: time="2025-09-13T00:44:08.604183323Z" level=info msg="StopPodSandbox for \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\" returns successfully" Sep 13 00:44:08.604967 env[1315]: time="2025-09-13T00:44:08.604932213Z" level=info msg="RemovePodSandbox for \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\"" Sep 13 00:44:08.605160 env[1315]: time="2025-09-13T00:44:08.605052735Z" level=info msg="Forcibly stopping sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\"" Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.641 [WARNING][5612] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0", GenerateName:"calico-apiserver-69684bbb9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c30c5f78-8868-438b-9ac1-1ddc434b02ca", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69684bbb9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9aac8e57d32903fd6e4a3d2410fa0fbfa603670a394ad33d94d0a2aa257792b1", Pod:"calico-apiserver-69684bbb9f-kh9fl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1a9bc4b29d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.641 [INFO][5612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.641 [INFO][5612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" iface="eth0" netns=""
Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.641 [INFO][5612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f"
Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.641 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f"
Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.662 [INFO][5622] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" HandleID="k8s-pod-network.3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0"
Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.662 [INFO][5622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.662 [INFO][5622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.800 [WARNING][5622] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" HandleID="k8s-pod-network.3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0"
Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.800 [INFO][5622] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" HandleID="k8s-pod-network.3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f" Workload="localhost-k8s-calico--apiserver--69684bbb9f--kh9fl-eth0"
Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.802 [INFO][5622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:44:08.806411 env[1315]: 2025-09-13 00:44:08.804 [INFO][5612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f"
Sep 13 00:44:08.807064 env[1315]: time="2025-09-13T00:44:08.806427654Z" level=info msg="TearDown network for sandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\" successfully"
Sep 13 00:44:08.810954 env[1315]: time="2025-09-13T00:44:08.810894240Z" level=info msg="RemovePodSandbox \"3d657dc4a4ad81052dd7a681b0563729882bd15c2f3afda81c2283c5af08235f\" returns successfully"
Sep 13 00:44:08.906000 audit[5630]: NETFILTER_CFG table=filter:118 family=2 entries=12 op=nft_register_rule pid=5630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:44:08.906000 audit[5630]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd5cc52060 a2=0 a3=7ffd5cc5204c items=0 ppid=2259 pid=5630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:08.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:44:08.911000 audit[5630]: NETFILTER_CFG table=nat:119 family=2 entries=22 op=nft_register_rule pid=5630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:44:08.911000 audit[5630]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd5cc52060 a2=0 a3=7ffd5cc5204c items=0 ppid=2259 pid=5630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:08.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:44:09.375961 kubelet[2107]: I0913 00:44:09.375888 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69684bbb9f-79hgx" podStartSLOduration=31.622436629 podStartE2EDuration="47.37586796s" podCreationTimestamp="2025-09-13 00:43:22 +0000 UTC" firstStartedPulling="2025-09-13 00:43:52.367781349 +0000 UTC m=+45.829617086" lastFinishedPulling="2025-09-13 00:44:08.12121269 +0000 UTC m=+61.583048417" observedRunningTime="2025-09-13 00:44:08.89349626 +0000 UTC m=+62.355331987" watchObservedRunningTime="2025-09-13 00:44:09.37586796 +0000 UTC m=+62.837703697"
Sep 13 00:44:09.564000 audit[5632]: NETFILTER_CFG table=filter:120 family=2 entries=11 op=nft_register_rule pid=5632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:44:09.564000 audit[5632]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffce8e97980 a2=0 a3=7ffce8e9796c items=0 ppid=2259 pid=5632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:09.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:44:09.569000 audit[5632]: NETFILTER_CFG table=nat:121 family=2 entries=29 op=nft_register_chain pid=5632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:44:09.569000 audit[5632]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffce8e97980 a2=0 a3=7ffce8e9796c items=0 ppid=2259 pid=5632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:09.569000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:44:09.885788 kubelet[2107]: I0913 00:44:09.885755 2107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:44:10.458187 env[1315]: time="2025-09-13T00:44:10.458136491Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:44:10.460188 env[1315]: time="2025-09-13T00:44:10.460153964Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:44:10.461819 env[1315]: time="2025-09-13T00:44:10.461768615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:44:10.463106 env[1315]: time="2025-09-13T00:44:10.463081916Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:44:10.463603 env[1315]: time="2025-09-13T00:44:10.463571726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 13 00:44:10.465803 env[1315]: time="2025-09-13T00:44:10.465735000Z" level=info msg="CreateContainer within sandbox \"4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 13 00:44:10.478564 env[1315]: time="2025-09-13T00:44:10.478525379Z" level=info msg="CreateContainer within sandbox \"4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c0a1a60eb499c5ca831b6db817620887e883505dc9b0ce7c635a10af37f9f1ed\""
Sep 13 00:44:10.479002 env[1315]: time="2025-09-13T00:44:10.478980682Z" level=info msg="StartContainer for \"c0a1a60eb499c5ca831b6db817620887e883505dc9b0ce7c635a10af37f9f1ed\""
Sep 13 00:44:10.528875 env[1315]: time="2025-09-13T00:44:10.528822259Z" level=info msg="StartContainer for \"c0a1a60eb499c5ca831b6db817620887e883505dc9b0ce7c635a10af37f9f1ed\" returns successfully"
Sep 13 00:44:10.536737 env[1315]: time="2025-09-13T00:44:10.533413355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 13 00:44:12.210415 env[1315]: time="2025-09-13T00:44:12.210332648Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:44:12.212522 env[1315]: time="2025-09-13T00:44:12.212475227Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:44:12.214102 env[1315]: time="2025-09-13T00:44:12.214062291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:44:12.215811 env[1315]: time="2025-09-13T00:44:12.215764285Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:44:12.216199 env[1315]: time="2025-09-13T00:44:12.216161206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 13 00:44:12.218423 env[1315]: time="2025-09-13T00:44:12.218376666Z" level=info msg="CreateContainer within sandbox \"4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 00:44:12.231818 env[1315]: time="2025-09-13T00:44:12.231744173Z" level=info msg="CreateContainer within sandbox \"4c8a14076b5ff3197559f71b189d4eec1a72ab464cec2f7da33013d4984cecef\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"03c6d3cafaf7fa062a2fe8c463a52473dc554872ee4c555efe50d96ed2b1f843\""
Sep 13 00:44:12.232787 env[1315]: time="2025-09-13T00:44:12.232761786Z" level=info msg="StartContainer for \"03c6d3cafaf7fa062a2fe8c463a52473dc554872ee4c555efe50d96ed2b1f843\""
Sep 13 00:44:12.298022 env[1315]: time="2025-09-13T00:44:12.297951666Z" level=info msg="StartContainer for \"03c6d3cafaf7fa062a2fe8c463a52473dc554872ee4c555efe50d96ed2b1f843\" returns successfully"
Sep 13 00:44:12.734307 kubelet[2107]: I0913 00:44:12.734251 2107 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 00:44:12.734307 kubelet[2107]: I0913 00:44:12.734297 2107 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 00:44:13.107780 systemd[1]: Started sshd@14-10.0.0.27:22-10.0.0.1:35956.service.
Sep 13 00:44:13.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.27:22-10.0.0.1:35956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:13.108817 kernel: kauditd_printk_skb: 19 callbacks suppressed
Sep 13 00:44:13.108874 kernel: audit: type=1130 audit(1757724253.106:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.27:22-10.0.0.1:35956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:13.145000 audit[5735]: USER_ACCT pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.146942 sshd[5735]: Accepted publickey for core from 10.0.0.1 port 35956 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:44:13.149317 sshd[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:13.147000 audit[5735]: CRED_ACQ pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.155318 kernel: audit: type=1101 audit(1757724253.145:481): pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.155428 kernel: audit: type=1103 audit(1757724253.147:482): pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.155458 kernel: audit: type=1006 audit(1757724253.147:483): pid=5735 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
Sep 13 00:44:13.154675 systemd-logind[1299]: New session 15 of user core.
Sep 13 00:44:13.155138 systemd[1]: Started session-15.scope.
Sep 13 00:44:13.147000 audit[5735]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefdec2190 a2=3 a3=0 items=0 ppid=1 pid=5735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:13.160184 kernel: audit: type=1300 audit(1757724253.147:483): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefdec2190 a2=3 a3=0 items=0 ppid=1 pid=5735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:13.147000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:13.161547 kernel: audit: type=1327 audit(1757724253.147:483): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:13.160000 audit[5735]: USER_START pid=5735 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.161000 audit[5738]: CRED_ACQ pid=5738 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.168968 kernel: audit: type=1105 audit(1757724253.160:484): pid=5735 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.169036 kernel: audit: type=1103 audit(1757724253.161:485): pid=5738 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.385139 sshd[5735]: pam_unix(sshd:session): session closed for user core
Sep 13 00:44:13.384000 audit[5735]: USER_END pid=5735 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.387853 systemd[1]: sshd@14-10.0.0.27:22-10.0.0.1:35956.service: Deactivated successfully.
Sep 13 00:44:13.388980 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:44:13.389324 systemd-logind[1299]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:44:13.390190 systemd-logind[1299]: Removed session 15.
Sep 13 00:44:13.384000 audit[5735]: CRED_DISP pid=5735 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.395520 kernel: audit: type=1106 audit(1757724253.384:486): pid=5735 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.395585 kernel: audit: type=1104 audit(1757724253.384:487): pid=5735 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:13.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.27:22-10.0.0.1:35956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:16.058495 kubelet[2107]: I0913 00:44:16.058435 2107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:44:16.076857 kubelet[2107]: I0913 00:44:16.076797 2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-twb76" podStartSLOduration=33.57044714 podStartE2EDuration="51.076777366s" podCreationTimestamp="2025-09-13 00:43:25 +0000 UTC" firstStartedPulling="2025-09-13 00:43:54.710704466 +0000 UTC m=+48.172540203" lastFinishedPulling="2025-09-13 00:44:12.217034692 +0000 UTC m=+65.678870429" observedRunningTime="2025-09-13 00:44:12.909495665 +0000 UTC m=+66.371331412" watchObservedRunningTime="2025-09-13 00:44:16.076777366 +0000 UTC m=+69.538613093"
Sep 13 00:44:16.092000 audit[5760]: NETFILTER_CFG table=filter:122 family=2 entries=10 op=nft_register_rule pid=5760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:44:16.092000 audit[5760]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fffbdb3a4e0 a2=0 a3=7fffbdb3a4cc items=0 ppid=2259 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:16.092000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:44:16.099000 audit[5760]: NETFILTER_CFG table=nat:123 family=2 entries=36 op=nft_register_chain pid=5760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:44:16.099000 audit[5760]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7fffbdb3a4e0 a2=0 a3=7fffbdb3a4cc items=0 ppid=2259 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:16.099000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:44:16.647294 kubelet[2107]: E0913 00:44:16.647247 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:18.388218 systemd[1]: Started sshd@15-10.0.0.27:22-10.0.0.1:35962.service.
Sep 13 00:44:18.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.27:22-10.0.0.1:35962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:18.389326 kernel: kauditd_printk_skb: 7 callbacks suppressed
Sep 13 00:44:18.389365 kernel: audit: type=1130 audit(1757724258.386:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.27:22-10.0.0.1:35962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:18.422000 audit[5761]: USER_ACCT pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.424737 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 35962 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:44:18.427048 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:18.425000 audit[5761]: CRED_ACQ pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.430457 systemd-logind[1299]: New session 16 of user core.
Sep 13 00:44:18.431445 systemd[1]: Started session-16.scope.
Sep 13 00:44:18.432606 kernel: audit: type=1101 audit(1757724258.422:492): pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.432673 kernel: audit: type=1103 audit(1757724258.425:493): pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.432694 kernel: audit: type=1006 audit(1757724258.425:494): pid=5761 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Sep 13 00:44:18.425000 audit[5761]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea73c7970 a2=3 a3=0 items=0 ppid=1 pid=5761 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:18.439508 kernel: audit: type=1300 audit(1757724258.425:494): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea73c7970 a2=3 a3=0 items=0 ppid=1 pid=5761 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:18.439673 kernel: audit: type=1327 audit(1757724258.425:494): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:18.425000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:18.441017 kernel: audit: type=1105 audit(1757724258.434:495): pid=5761 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.434000 audit[5761]: USER_START pid=5761 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.445072 kernel: audit: type=1103 audit(1757724258.435:496): pid=5764 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.435000 audit[5764]: CRED_ACQ pid=5764 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.557893 sshd[5761]: pam_unix(sshd:session): session closed for user core
Sep 13 00:44:18.557000 audit[5761]: USER_END pid=5761 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.560850 systemd[1]: sshd@15-10.0.0.27:22-10.0.0.1:35962.service: Deactivated successfully.
Sep 13 00:44:18.561974 systemd-logind[1299]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:44:18.561977 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:44:18.562755 systemd-logind[1299]: Removed session 16.
Sep 13 00:44:18.557000 audit[5761]: CRED_DISP pid=5761 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.567461 kernel: audit: type=1106 audit(1757724258.557:497): pid=5761 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.567529 kernel: audit: type=1104 audit(1757724258.557:498): pid=5761 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:18.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.27:22-10.0.0.1:35962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:18.648223 kubelet[2107]: E0913 00:44:18.648085 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:23.561222 systemd[1]: Started sshd@16-10.0.0.27:22-10.0.0.1:38094.service.
Sep 13 00:44:23.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.27:22-10.0.0.1:38094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:23.562669 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:44:23.562733 kernel: audit: type=1130 audit(1757724263.560:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.27:22-10.0.0.1:38094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:23.600000 audit[5816]: USER_ACCT pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.600942 sshd[5816]: Accepted publickey for core from 10.0.0.1 port 38094 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:44:23.602221 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:23.600000 audit[5816]: CRED_ACQ pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.606242 systemd-logind[1299]: New session 17 of user core.
Sep 13 00:44:23.606980 systemd[1]: Started session-17.scope.
Sep 13 00:44:23.609647 kernel: audit: type=1101 audit(1757724263.600:501): pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.609711 kernel: audit: type=1103 audit(1757724263.600:502): pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.609734 kernel: audit: type=1006 audit(1757724263.600:503): pid=5816 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Sep 13 00:44:23.600000 audit[5816]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8741a980 a2=3 a3=0 items=0 ppid=1 pid=5816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:23.616490 kernel: audit: type=1300 audit(1757724263.600:503): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8741a980 a2=3 a3=0 items=0 ppid=1 pid=5816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:23.616553 kernel: audit: type=1327 audit(1757724263.600:503): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:23.600000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:23.617883 kernel: audit: type=1105 audit(1757724263.611:504): pid=5816 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.611000 audit[5816]: USER_START pid=5816 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.622652 kernel: audit: type=1103 audit(1757724263.611:505): pid=5819 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.611000 audit[5819]: CRED_ACQ pid=5819 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.775073 sshd[5816]: pam_unix(sshd:session): session closed for user core
Sep 13 00:44:23.775000 audit[5816]: USER_END pid=5816 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.777681 systemd[1]: Started sshd@17-10.0.0.27:22-10.0.0.1:38100.service.
Sep 13 00:44:23.778084 systemd[1]: sshd@16-10.0.0.27:22-10.0.0.1:38094.service: Deactivated successfully.
Sep 13 00:44:23.779417 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:44:23.779449 systemd-logind[1299]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:44:23.780412 systemd-logind[1299]: Removed session 17.
Sep 13 00:44:23.775000 audit[5816]: CRED_DISP pid=5816 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.786255 kernel: audit: type=1106 audit(1757724263.775:506): pid=5816 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.786368 kernel: audit: type=1104 audit(1757724263.775:507): pid=5816 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.27:22-10.0.0.1:38100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:23.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.27:22-10.0.0.1:38094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:23.811000 audit[5828]: USER_ACCT pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.812432 sshd[5828]: Accepted publickey for core from 10.0.0.1 port 38100 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:44:23.812000 audit[5828]: CRED_ACQ pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.812000 audit[5828]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc90961a00 a2=3 a3=0 items=0 ppid=1 pid=5828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:23.812000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:23.813912 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:23.817475 systemd-logind[1299]: New session 18 of user core.
Sep 13 00:44:23.818312 systemd[1]: Started session-18.scope.
Sep 13 00:44:23.821000 audit[5828]: USER_START pid=5828 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:23.823000 audit[5833]: CRED_ACQ pid=5833 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:24.555940 sshd[5828]: pam_unix(sshd:session): session closed for user core
Sep 13 00:44:24.558680 systemd[1]: Started sshd@18-10.0.0.27:22-10.0.0.1:38112.service.
Sep 13 00:44:24.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.27:22-10.0.0.1:38112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:24.559000 audit[5828]: USER_END pid=5828 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:24.559000 audit[5828]: CRED_DISP pid=5828 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:24.561931 systemd[1]: sshd@17-10.0.0.27:22-10.0.0.1:38100.service: Deactivated successfully.
Sep 13 00:44:24.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.27:22-10.0.0.1:38100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:24.562811 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:44:24.562877 systemd-logind[1299]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:44:24.565083 systemd-logind[1299]: Removed session 18.
Sep 13 00:44:24.600000 audit[5840]: USER_ACCT pid=5840 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:24.601827 sshd[5840]: Accepted publickey for core from 10.0.0.1 port 38112 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:44:24.602000 audit[5840]: CRED_ACQ pid=5840 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:24.602000 audit[5840]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffadea2690 a2=3 a3=0 items=0 ppid=1 pid=5840 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:24.602000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:24.603130 sshd[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:24.607216 systemd-logind[1299]: New session 19 of user core.
Sep 13 00:44:24.608128 systemd[1]: Started session-19.scope.
Sep 13 00:44:24.613000 audit[5840]: USER_START pid=5840 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:24.614000 audit[5845]: CRED_ACQ pid=5845 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:25.646978 kubelet[2107]: E0913 00:44:25.646932 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:44:26.245000 audit[5857]: NETFILTER_CFG table=filter:124 family=2 entries=22 op=nft_register_rule pid=5857 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:44:26.245000 audit[5857]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffcace2f1a0 a2=0 a3=7ffcace2f18c items=0 ppid=2259 pid=5857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:26.245000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:44:26.255000 audit[5857]: NETFILTER_CFG table=nat:125 family=2 entries=24 op=nft_register_rule pid=5857 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:44:26.255000 audit[5857]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffcace2f1a0 a2=0 a3=0 items=0 ppid=2259 pid=5857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:26.255000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:44:26.268609 systemd[1]: Started sshd@19-10.0.0.27:22-10.0.0.1:38120.service. Sep 13 00:44:26.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.27:22-10.0.0.1:38120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:26.271875 sshd[5840]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:26.272000 audit[5840]: USER_END pid=5840 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:26.272000 audit[5840]: CRED_DISP pid=5840 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:26.275473 systemd-logind[1299]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:44:26.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.27:22-10.0.0.1:38112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:26.277997 systemd[1]: sshd@18-10.0.0.27:22-10.0.0.1:38112.service: Deactivated successfully. Sep 13 00:44:26.279033 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:44:26.280670 systemd-logind[1299]: Removed session 19. 
Sep 13 00:44:26.282000 audit[5863]: NETFILTER_CFG table=filter:126 family=2 entries=34 op=nft_register_rule pid=5863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:44:26.282000 audit[5863]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffff5877090 a2=0 a3=7ffff587707c items=0 ppid=2259 pid=5863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:26.282000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:44:26.290000 audit[5863]: NETFILTER_CFG table=nat:127 family=2 entries=24 op=nft_register_rule pid=5863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:44:26.290000 audit[5863]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffff5877090 a2=0 a3=0 items=0 ppid=2259 pid=5863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:26.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:44:26.308000 audit[5859]: USER_ACCT pid=5859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:26.309818 sshd[5859]: Accepted publickey for core from 10.0.0.1 port 38120 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:44:26.309000 audit[5859]: CRED_ACQ pid=5859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:26.309000 audit[5859]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5b536600 a2=3 a3=0 items=0 ppid=1 pid=5859 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:26.309000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:26.310600 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:26.315363 systemd[1]: Started session-20.scope. Sep 13 00:44:26.315678 systemd-logind[1299]: New session 20 of user core. Sep 13 00:44:26.319000 audit[5859]: USER_START pid=5859 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:26.321000 audit[5865]: CRED_ACQ pid=5865 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:26.708763 kubelet[2107]: E0913 00:44:26.708645 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:44:27.029781 sshd[5859]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:27.032378 systemd[1]: Started sshd@20-10.0.0.27:22-10.0.0.1:38122.service. 
Sep 13 00:44:27.030000 audit[5859]: USER_END pid=5859 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:27.031000 audit[5859]: CRED_DISP pid=5859 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:27.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.27:22-10.0.0.1:38122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:27.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.27:22-10.0.0.1:38120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:27.033319 systemd[1]: sshd@19-10.0.0.27:22-10.0.0.1:38120.service: Deactivated successfully. Sep 13 00:44:27.034403 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:44:27.035051 systemd-logind[1299]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:44:27.036067 systemd-logind[1299]: Removed session 20. 
Sep 13 00:44:27.078000 audit[5873]: USER_ACCT pid=5873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:27.079444 sshd[5873]: Accepted publickey for core from 10.0.0.1 port 38122 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:44:27.079000 audit[5873]: CRED_ACQ pid=5873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:27.079000 audit[5873]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3f7fe900 a2=3 a3=0 items=0 ppid=1 pid=5873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:27.079000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:27.081530 sshd[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:27.086942 systemd-logind[1299]: New session 21 of user core. Sep 13 00:44:27.087820 systemd[1]: Started session-21.scope. 
Sep 13 00:44:27.092000 audit[5873]: USER_START pid=5873 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:27.093000 audit[5878]: CRED_ACQ pid=5878 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:27.292247 sshd[5873]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:27.292000 audit[5873]: USER_END pid=5873 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:27.292000 audit[5873]: CRED_DISP pid=5873 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:27.294807 systemd[1]: sshd@20-10.0.0.27:22-10.0.0.1:38122.service: Deactivated successfully. Sep 13 00:44:27.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.27:22-10.0.0.1:38122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:27.295811 systemd-logind[1299]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:44:27.295846 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:44:27.296733 systemd-logind[1299]: Removed session 21. 
Sep 13 00:44:32.300725 kernel: kauditd_printk_skb: 57 callbacks suppressed Sep 13 00:44:32.300840 kernel: audit: type=1130 audit(1757724272.295:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.27:22-10.0.0.1:42116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:32.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.27:22-10.0.0.1:42116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:32.295844 systemd[1]: Started sshd@21-10.0.0.27:22-10.0.0.1:42116.service. Sep 13 00:44:32.330000 audit[5890]: USER_ACCT pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:32.331412 sshd[5890]: Accepted publickey for core from 10.0.0.1 port 42116 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:44:32.332473 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:32.331000 audit[5890]: CRED_ACQ pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:32.336711 systemd-logind[1299]: New session 22 of user core. Sep 13 00:44:32.337681 systemd[1]: Started session-22.scope. 
Sep 13 00:44:32.340013 kernel: audit: type=1101 audit(1757724272.330:550): pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:32.340073 kernel: audit: type=1103 audit(1757724272.331:551): pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:32.342703 kernel: audit: type=1006 audit(1757724272.331:552): pid=5890 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Sep 13 00:44:32.331000 audit[5890]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9a7d5e20 a2=3 a3=0 items=0 ppid=1 pid=5890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:32.346986 kernel: audit: type=1300 audit(1757724272.331:552): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9a7d5e20 a2=3 a3=0 items=0 ppid=1 pid=5890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:32.347031 kernel: audit: type=1327 audit(1757724272.331:552): proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:32.331000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:32.348452 kernel: audit: type=1105 audit(1757724272.341:553): pid=5890 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:32.341000 audit[5890]: USER_START pid=5890 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:32.352911 kernel: audit: type=1103 audit(1757724272.342:554): pid=5893 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:32.342000 audit[5893]: CRED_ACQ pid=5893 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:32.442027 sshd[5890]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:32.442000 audit[5890]: USER_END pid=5890 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:32.497259 kernel: audit: type=1106 audit(1757724272.442:555): pid=5890 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:32.497311 kernel: audit: type=1104 audit(1757724272.442:556): pid=5890 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 
00:44:32.442000 audit[5890]: CRED_DISP pid=5890 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:32.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.27:22-10.0.0.1:42116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:32.444082 systemd[1]: sshd@21-10.0.0.27:22-10.0.0.1:42116.service: Deactivated successfully. Sep 13 00:44:32.444973 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:44:32.450117 systemd-logind[1299]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:44:32.450741 systemd-logind[1299]: Removed session 22. Sep 13 00:44:37.096000 audit[5954]: NETFILTER_CFG table=filter:128 family=2 entries=33 op=nft_register_rule pid=5954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:44:37.096000 audit[5954]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffc0bbb0670 a2=0 a3=7ffc0bbb065c items=0 ppid=2259 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:37.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:44:37.102000 audit[5954]: NETFILTER_CFG table=nat:129 family=2 entries=31 op=nft_register_chain pid=5954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:44:37.102000 audit[5954]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffc0bbb0670 a2=0 a3=7ffc0bbb065c items=0 ppid=2259 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:37.102000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:44:37.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.27:22-10.0.0.1:42126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:37.445921 systemd[1]: Started sshd@22-10.0.0.27:22-10.0.0.1:42126.service. Sep 13 00:44:37.447332 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:44:37.447379 kernel: audit: type=1130 audit(1757724277.445:560): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.27:22-10.0.0.1:42126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:44:37.480000 audit[5955]: USER_ACCT pid=5955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:37.481224 sshd[5955]: Accepted publickey for core from 10.0.0.1 port 42126 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:44:37.482831 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:37.481000 audit[5955]: CRED_ACQ pid=5955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:37.486712 systemd-logind[1299]: New session 23 of user core. Sep 13 00:44:37.487419 systemd[1]: Started session-23.scope. 
Sep 13 00:44:37.488212 kernel: audit: type=1101 audit(1757724277.480:561): pid=5955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:37.488268 kernel: audit: type=1103 audit(1757724277.481:562): pid=5955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:37.488305 kernel: audit: type=1006 audit(1757724277.481:563): pid=5955 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Sep 13 00:44:37.490683 kernel: audit: type=1300 audit(1757724277.481:563): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf7ceacf0 a2=3 a3=0 items=0 ppid=1 pid=5955 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:37.481000 audit[5955]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf7ceacf0 a2=3 a3=0 items=0 ppid=1 pid=5955 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:44:37.481000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:37.495612 kernel: audit: type=1327 audit(1757724277.481:563): proctitle=737368643A20636F7265205B707269765D Sep 13 00:44:37.495687 kernel: audit: type=1105 audit(1757724277.492:564): pid=5955 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:37.492000 audit[5955]: USER_START pid=5955 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:37.493000 audit[5958]: CRED_ACQ pid=5958 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:37.503021 kernel: audit: type=1103 audit(1757724277.493:565): pid=5958 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:37.610759 sshd[5955]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:37.611000 audit[5955]: USER_END pid=5955 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 13 00:44:37.612988 systemd[1]: sshd@22-10.0.0.27:22-10.0.0.1:42126.service: Deactivated successfully. Sep 13 00:44:37.613799 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:44:37.614832 systemd-logind[1299]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:44:37.615602 systemd-logind[1299]: Removed session 23. 
Sep 13 00:44:37.611000 audit[5955]: CRED_DISP pid=5955 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:37.619539 kernel: audit: type=1106 audit(1757724277.611:566): pid=5955 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:37.619588 kernel: audit: type=1104 audit(1757724277.611:567): pid=5955 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:37.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.27:22-10.0.0.1:42126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:38.622000 audit[5971]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:44:38.622000 audit[5971]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fffbd52c100 a2=0 a3=7fffbd52c0ec items=0 ppid=2259 pid=5971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:38.622000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:44:38.631000 audit[5971]: NETFILTER_CFG table=nat:131 family=2 entries=110 op=nft_register_chain pid=5971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:44:38.631000 audit[5971]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fffbd52c100 a2=0 a3=7fffbd52c0ec items=0 ppid=2259 pid=5971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:38.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:44:40.885475 systemd[1]: run-containerd-runc-k8s.io-0a7482681e8e07a83e0b7a5da838b66b3b4f29fb90cbfc7b715a90545ef7b765-runc.z4cRqi.mount: Deactivated successfully.
Sep 13 00:44:42.613848 systemd[1]: Started sshd@23-10.0.0.27:22-10.0.0.1:40626.service.
Sep 13 00:44:42.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.27:22-10.0.0.1:40626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:42.614896 kernel: kauditd_printk_skb: 7 callbacks suppressed
Sep 13 00:44:42.615017 kernel: audit: type=1130 audit(1757724282.612:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.27:22-10.0.0.1:40626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:42.646000 audit[5996]: USER_ACCT pid=5996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.649328 sshd[5996]: Accepted publickey for core from 10.0.0.1 port 40626 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:44:42.652300 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:42.650000 audit[5996]: CRED_ACQ pid=5996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.656410 kernel: audit: type=1101 audit(1757724282.646:572): pid=5996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.656554 kernel: audit: type=1103 audit(1757724282.650:573): pid=5996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.656588 kernel: audit: type=1006 audit(1757724282.650:574): pid=5996 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Sep 13 00:44:42.656766 systemd-logind[1299]: New session 24 of user core.
Sep 13 00:44:42.657945 systemd[1]: Started session-24.scope.
Sep 13 00:44:42.663538 kernel: audit: type=1300 audit(1757724282.650:574): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9203b8c0 a2=3 a3=0 items=0 ppid=1 pid=5996 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:42.650000 audit[5996]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9203b8c0 a2=3 a3=0 items=0 ppid=1 pid=5996 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:42.650000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:42.661000 audit[5996]: USER_START pid=5996 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.669946 kernel: audit: type=1327 audit(1757724282.650:574): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:42.670006 kernel: audit: type=1105 audit(1757724282.661:575): pid=5996 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.670031 kernel: audit: type=1103 audit(1757724282.662:576): pid=5999 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.662000 audit[5999]: CRED_ACQ pid=5999 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.778819 sshd[5996]: pam_unix(sshd:session): session closed for user core
Sep 13 00:44:42.778000 audit[5996]: USER_END pid=5996 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.781512 systemd[1]: sshd@23-10.0.0.27:22-10.0.0.1:40626.service: Deactivated successfully.
Sep 13 00:44:42.782844 systemd-logind[1299]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:44:42.782866 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:44:42.783928 systemd-logind[1299]: Removed session 24.
Sep 13 00:44:42.778000 audit[5996]: CRED_DISP pid=5996 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.787816 kernel: audit: type=1106 audit(1757724282.778:577): pid=5996 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.787868 kernel: audit: type=1104 audit(1757724282.778:578): pid=5996 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:42.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.27:22-10.0.0.1:40626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:47.782367 systemd[1]: Started sshd@24-10.0.0.27:22-10.0.0.1:40638.service.
Sep 13 00:44:47.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.27:22-10.0.0.1:40638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:47.783972 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:44:47.784026 kernel: audit: type=1130 audit(1757724287.780:580): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.27:22-10.0.0.1:40638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:47.819000 audit[6012]: USER_ACCT pid=6012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.821011 sshd[6012]: Accepted publickey for core from 10.0.0.1 port 40638 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:44:47.823252 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:47.821000 audit[6012]: CRED_ACQ pid=6012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.829935 kernel: audit: type=1101 audit(1757724287.819:581): pid=6012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.833985 kernel: audit: type=1103 audit(1757724287.821:582): pid=6012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.834023 kernel: audit: type=1006 audit(1757724287.821:583): pid=6012 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Sep 13 00:44:47.834041 kernel: audit: type=1300 audit(1757724287.821:583): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6ff8d170 a2=3 a3=0 items=0 ppid=1 pid=6012 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:47.821000 audit[6012]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6ff8d170 a2=3 a3=0 items=0 ppid=1 pid=6012 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:47.827847 systemd-logind[1299]: New session 25 of user core.
Sep 13 00:44:47.828330 systemd[1]: Started session-25.scope.
Sep 13 00:44:47.821000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:47.835870 kernel: audit: type=1327 audit(1757724287.821:583): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:47.835987 kernel: audit: type=1105 audit(1757724287.831:584): pid=6012 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.831000 audit[6012]: USER_START pid=6012 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.832000 audit[6015]: CRED_ACQ pid=6015 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.843114 kernel: audit: type=1103 audit(1757724287.832:585): pid=6015 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.986717 sshd[6012]: pam_unix(sshd:session): session closed for user core
Sep 13 00:44:47.986000 audit[6012]: USER_END pid=6012 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.990353 systemd-logind[1299]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:44:47.986000 audit[6012]: CRED_DISP pid=6012 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.991895 systemd[1]: sshd@24-10.0.0.27:22-10.0.0.1:40638.service: Deactivated successfully.
Sep 13 00:44:47.992889 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:44:47.994800 systemd-logind[1299]: Removed session 25.
Sep 13 00:44:47.995486 kernel: audit: type=1106 audit(1757724287.986:586): pid=6012 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.995565 kernel: audit: type=1104 audit(1757724287.986:587): pid=6012 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:47.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.27:22-10.0.0.1:40638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:52.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.27:22-10.0.0.1:45734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:52.973405 systemd[1]: Started sshd@25-10.0.0.27:22-10.0.0.1:45734.service.
Sep 13 00:44:52.974805 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:44:52.974943 kernel: audit: type=1130 audit(1757724292.971:589): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.27:22-10.0.0.1:45734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:44:53.013000 audit[6026]: USER_ACCT pid=6026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.015254 sshd[6026]: Accepted publickey for core from 10.0.0.1 port 45734 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:44:53.017000 audit[6026]: CRED_ACQ pid=6026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.019461 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:53.022563 kernel: audit: type=1101 audit(1757724293.013:590): pid=6026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.022637 kernel: audit: type=1103 audit(1757724293.017:591): pid=6026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.022655 kernel: audit: type=1006 audit(1757724293.017:592): pid=6026 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Sep 13 00:44:53.023206 systemd-logind[1299]: New session 26 of user core.
Sep 13 00:44:53.023922 systemd[1]: Started session-26.scope.
Sep 13 00:44:53.017000 audit[6026]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd898cf6d0 a2=3 a3=0 items=0 ppid=1 pid=6026 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:53.028759 kernel: audit: type=1300 audit(1757724293.017:592): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd898cf6d0 a2=3 a3=0 items=0 ppid=1 pid=6026 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:44:53.028822 kernel: audit: type=1327 audit(1757724293.017:592): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:53.017000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:44:53.026000 audit[6026]: USER_START pid=6026 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.034149 kernel: audit: type=1105 audit(1757724293.026:593): pid=6026 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.034197 kernel: audit: type=1103 audit(1757724293.028:594): pid=6029 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.028000 audit[6029]: CRED_ACQ pid=6029 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.183345 sshd[6026]: pam_unix(sshd:session): session closed for user core
Sep 13 00:44:53.182000 audit[6026]: USER_END pid=6026 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.185702 systemd[1]: sshd@25-10.0.0.27:22-10.0.0.1:45734.service: Deactivated successfully.
Sep 13 00:44:53.186454 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:44:53.187391 systemd-logind[1299]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:44:53.188206 systemd-logind[1299]: Removed session 26.
Sep 13 00:44:53.182000 audit[6026]: CRED_DISP pid=6026 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.201199 kernel: audit: type=1106 audit(1757724293.182:595): pid=6026 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.201278 kernel: audit: type=1104 audit(1757724293.182:596): pid=6026 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Sep 13 00:44:53.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.27:22-10.0.0.1:45734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
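Note: the PROCTITLE and type=1327 records above carry the process command line hex-encoded, with NUL bytes separating the argv elements (so it survives the audit log's space-delimited key=value format). A minimal decoder sketch (not part of the log; the function name is illustrative):

```python
def decode_proctitle(hexstr: str) -> str:
    """Decode an audit PROCTITLE value: the hex string is the raw
    argv buffer, with NUL separators between arguments."""
    raw = bytes.fromhex(hexstr)
    return raw.decode("utf-8", errors="replace").replace("\x00", " ")

# The two proctitle values that appear in the log above:
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```

The same decoding is what `ausearch -i` performs when rendering these records.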